Welcome and Opening by EE Times Editors Sally Ward-Foxton and Nitin Dahad
By Lian Jye Su, Principal Analyst at ABI Research
Edge AI promises an enhanced user experience through minimal latency and reduced privacy risk. As more machine learning workloads shift to the edge, edge AI has become a critical change agent driving digital transformation. This session will survey the current state of edge AI across key verticals; the prominent hardware, software, and service players; and the major trends in the industry. With more intelligent devices being deployed, hardware manufacturers and software vendors are looking for innovative solutions that avoid heavy investment and long development cycles. The session offers a closer look at methods that accelerate edge AI design, development, and deployment, such as new hardware architectures, integration with cloud services, and an increased focus on edge AIOps.
By Ron Martino, Executive Vice President and General Manager, Edge Processing, NXP Semiconductors
Advancements in compute capabilities have removed many of the barriers to AI at the edge. Separate chips to handle the large computational demands of machine learning (ML) are no longer needed; instead, it is becoming common for energy-efficient ML-capable cores or dedicated accelerators to be integrated into edge processing SoCs. But more is required for AI at the edge to be widely deployed. Device manufacturers see opportunities to use AI in their products, but not all of them have experts to handle complex ML development tasks. To demystify AI, user-friendly development tools are needed. Devices manufactured by different companies must also be interoperable so they can communicate and make informed decisions, which means rules, guidelines, and protocols are needed for a collaborative network to flourish. How can manufacturers navigate these challenges? How can they maximize the potential of AI?
By Anoop Saha, Head of Strategy and Business Development, Siemens EDA Digital Implementation Products, Siemens Digital Industries Software
AI has ushered in a new golden age in semiconductor development. While Moore's law is still running at full steam, with performance doubling every two years, data center compute requirements are now growing at an even faster pace, doubling every four months. In addition, machine learning is becoming disaggregated, and there is an increasing need for more processing at the edge. This leads to two contrasting developments in chips purpose-built for AI: on one side, the emergence of large chips tailored for training in the data center that can improve performance by orders of magnitude; on the other, a nascent boom in edge chips that enable energy-efficient machine learning. This brings us to the era of domain-specific chip development: silicon customization for domain-specific architectures, languages, and networks. To sustain innovation in AI, we need not only these new chips but also new ways of building them; incremental improvements in power, performance, and area are not enough. Customization requires enabling technologies that bring hardware and software together, as well as agile methodologies for an iterative design process. In this talk, we will cover some of those enabling technologies, from high-level synthesis to hardware/software co-design. We will go over the past, present, and future of the silicon design process, the disruptions, and the latest technological trends. The talk will focus on silicon for the edge and the design choices that help deploy powerful machine learning algorithms on tiny, low-cost, energy-efficient hardware.
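To put the two doubling rates in perspective, a back-of-the-envelope comparison (using the rates quoted above) shows how quickly demand outruns process scaling:

```latex
% Process-driven performance doubles every 24 months; data center
% compute demand doubles every 4 months (rates as quoted above).
\[
\frac{\text{demand}(t)}{\text{supply}(t)}
  = \frac{2^{\,t/4}}{2^{\,t/24}} = 2^{\,5t/24}
  \qquad (t \text{ in months})
\]
% Over two years (t = 24), demand grows 64x while transistor scaling
% delivers only 2x: a 32x gap that domain-specific silicon must close.
```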
By Manny Singh, Principal Product Marketing Manager, Renesas
Renesas has developed DRP-AI (Dynamically Reconfigurable Processor for AI), an AI accelerator that delivers high-speed AI inference at low power while providing the flexibility required by embedded devices. DRP-AI enables endpoint products with real-time vision AI processing, while its high power efficiency reduces product size and BOM cost. In this session, we will demonstrate DRP-AI's power efficiency, real-time AI performance, flexibility, and scalability, and provide benchmark results.
By Roger Silloway, Sales Director, Efinix
The need for edge AI processing continues to grow by leaps and bounds. In this presentation we will share details of the Efinix Quantum architecture and how our approach enables Efinix FPGAs to solve the long-standing challenges of low power and small form factor while meeting volume pricing needs. Efinix solutions provide maximum flexibility to address last-minute changes in requirements without wasting time and money on NREs. We will discuss ease of development using traditional, open-source, standard embedded processor design flows, with the ability to customize our device to your specific needs, right through the life of your product.
Panel Discussion with Industry Experts, moderated by EE Times Editor Sally Ward-Foxton
The surge in powerful chips for AI acceleration at the edge is allowing workloads that were previously sent to the cloud to be handled locally in “edge boxes” and endpoint devices. Why are applications cutting off cloud connectivity and relying on these edge boxes? What kinds of AI workloads can be handled efficiently by edge compute today, and which types of workload will remain in the cloud?
Panelists are:
Lian Jye Su, ABI Research
Gowri Chindalore, NXP
Mark Oliver, Efinix
Manny Singh, Renesas
Welcome and Opening by EE Times Editors Sally Ward-Foxton and Nitin Dahad
By Marian Verhelst, Associate Professor at KU Leuven and Scientific Director at Imec
Sensors are embedded ever more ubiquitously into our environment, yet it is impossible to funnel all of this sensory data into the cloud. It must instead be processed locally in so-called extreme edge nodes, which is a challenge for devices with only limited processing, memory, and energy resources. This talk will give an overview of different strategies to enable and exploit local AI processing in extreme edge devices, ranging from low-power circuit and processor design techniques to efficient algorithmic mapping. As the resulting design and mapping space is vast, we end by presenting ZigZag, a design space exploration tool tuned to neural network processors.
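To illustrate what design space exploration means here, below is a toy sketch in the spirit of such tools; it is not ZigZag's actual API, and the dimensions, energy costs, and buffer size are all assumptions. It exhaustively scores loop tilings of a small matrix multiply by a rough memory-energy model:

```python
# Illustrative only: a toy design-space exploration loop. NOT ZigZag's API;
# all constants below are assumptions chosen for the example.
from itertools import product

M, N, K = 64, 64, 64            # layer dimensions (assumed)
E_LOCAL, E_DRAM = 1.0, 100.0    # assumed energy per access (arbitrary units)
BUFFER_WORDS = 4096             # assumed on-chip buffer capacity

def estimate_energy(tm, tn, tk):
    """Rough cost model: each tile is fetched from DRAM once, then reused
    from the local buffer for every MAC that touches it."""
    tiles = (M // tm) * (N // tn) * (K // tk)
    dram = tiles * (tm * tk + tk * tn + tm * tn)   # per-tile DRAM traffic
    local = M * N * K * 3                          # per-MAC operand traffic
    return dram * E_DRAM + local * E_LOCAL

candidates = [
    (tm, tn, tk)
    for tm, tn, tk in product([8, 16, 32], repeat=3)
    if tm * tk + tk * tn + tm * tn <= BUFFER_WORDS  # tiles must fit on chip
]
best = min(candidates, key=lambda t: estimate_energy(*t))
print("best tiling (tm, tn, tk):", best)
```

Real tools explore far richer spaces (memory hierarchies, spatial unrolling, dataflows), but the structure is the same: enumerate legal mappings, score each with an analytical cost model, and keep the cheapest.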
By Zach Shelby, Co-Founder and CEO, Edge Impulse
Embedded machine learning is reshaping the way products are built, used, and maintained, ushering in a new generation of engineers who leverage data to design new experiences. In his talk, Zach Shelby will delve deeper into this concept of data-driven engineering and how industries such as manufacturing, logistics, and wearables are ripe for disruption, uncovering real-world applications from predictive maintenance and asset tracking to human and animal sensing. Hear how companies like Edge Impulse are equipping developers with easy-to-use, end-to-end tools that enable ML-powered devices to extract insights right where the data is born.
By Nilam Ruparelia, Segment Leader AI & 5G, Microchip
AI/ML is a young discipline, and its practical implementation has been fraught with challenges. Nevertheless, interest in and demand for smart IoT devices using AI have been very high. This brief presentation outlines some of the challenges engineers face and how they are being addressed by industry leaders like Microchip. We will discuss machine learning implementations based on MCUs and on FPGAs, including neural networks and adaptive algorithms.
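One concrete step that most MCU deployment flows share is shrinking model weights to 8-bit integers. The sketch below shows generic post-training affine quantization; it is an illustration of the idea, not any vendor's specific toolchain:

```python
# A minimal sketch of post-training 8-bit affine quantization, the kind of
# step MCU deployment flows rely on (generic illustration, not Microchip's
# or any other vendor's toolchain).
import numpy as np

def quantize(x, num_bits=8):
    qmin, qmax = 0, 2**num_bits - 1
    scale = (x.max() - x.min()) / (qmax - qmin)        # real units per step
    zero_point = int(round(qmin - x.min() / scale))    # integer mapping of 0.0
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return scale * (q.astype(np.int32) - zero_point)

weights = np.random.randn(128, 64).astype(np.float32)  # toy layer weights
q, s, zp = quantize(weights)
err = np.abs(weights - dequantize(q, s, zp)).max()
print(f"4x smaller than float32, max reconstruction error: {err:.4f}")
```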
By Nirupam Kulkarni, Senior Product Marketing Manager, eInfochips – an Arrow company
AI and machine learning are transforming industry value chains as they move out of prototype/proof-of-concept purgatory and into widespread, platform-led adoption at the edge and in the cloud. An increasing number of AI workloads for perception, control, and insights are being integrated with enterprise systems for managing data (sensor feeds, customer interactions, and enterprise processes) and technology operations (cloud, CI/CD, and service management). Taking AI/ML pipelines from algorithm training on domain-specific knowledge, through inference optimization for a lean edge, to seamless production rollout at scale requires strong technical expertise. In this session, eInfochips will showcase key enablers and best practices for building AI solutions at scale. We will also outline one such customer journey involving computer-vision-based AI/ML solution development, deployment, and optimization.
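A small example of what "rolling out seamlessly and at scale" implies in practice is an accuracy gate between the optimization and deployment stages. The sketch below is a generic illustration with assumed names and thresholds, not eInfochips' workflow:

```python
# A minimal sketch of an accuracy gate in an edge MLOps pipeline: an
# optimized (e.g., quantized or pruned) candidate model is promoted to
# production only if it stays within tolerance of the reference model.
# Function name and thresholds are illustrative assumptions.
def promote_if_acceptable(reference_acc: float, candidate_acc: float,
                          max_drop: float = 0.01) -> bool:
    """Gate fleet rollout on accuracy regression from edge optimization."""
    if reference_acc - candidate_acc <= max_drop:
        print("promote: within tolerance, roll out to device fleet")
        return True
    print("reject: re-tune quantization/pruning and revalidate")
    return False

promote_if_acceptable(reference_acc=0.912, candidate_acc=0.907)
```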
By Richard Oxland, Product Manager, Tessent Embedded Analytics and Gajinder Panesar, Siemens Fellow
Gajinder Panesar and Richard Oxland describe some of the challenges of creating highly complex manycore AI/ML implementations and show how system-level visibility into SoC functionality helps turn complexity into a competitive advantage in development, in validation, and throughout the deployed lifetime of the device. The presentation describes how Tessent Embedded Analytics can deliver such system-level visibility. Embedded Analytics is a suite of silicon IP, software tools, and libraries that provides a platform for monitoring and functional analytics. We refer to specific examples, many based on real customer experiences, to illustrate the value of this approach.
Panel Discussion with Industry Experts, moderated by EE Times Editor Sally Ward-Foxton
Developments in algorithms, tools, and hardware are bringing AI/ML to smaller devices than ever before. The panel will discuss what is possible with microcontroller-level devices today and consider how these new capabilities will affect the rollout of the IoT. Is algorithm development the main factor squeezing performance out of AIoT devices, or is hardware evolving just as fast? What are the pros and cons of specialist tinyML hardware versus widely available microcontrollers? And is it possible to roll out the AIoT without huge numbers of data science PhDs in the embedded hardware development space?
Panelists are:
Zach Shelby, Edge Impulse
Anoop Saha, Siemens Digital Industries Software
Kurt Busch, Syntiant
Yann LeFaou, Microchip
Welcome and Opening by EE Times Editors Sally Ward-Foxton and Nitin Dahad
By David Kanter, Executive Director, MLCommons
As the industry drives toward more capable ML, workloads are rapidly evolving and the need for performance is nearly unlimited. Through software/hardware co-design, performance as measured by MLPerf™ has vastly outstripped the pace of Moore's law. This talk will explore the design space and identify challenges and opportunities.
By Dr. Mukesh V. Khare, Vice President, IBM Research
Mukesh will begin with an overview of IBM Research, including its unique semiconductor R&D ecosystem in Albany, NY. IBM's Albany research lab is also home to the AI Hardware Center, a partnership between New York State, IBM, and partner companies that is accelerating the development of both digital and analog AI hardware and software. Mukesh will discuss advances in reduced-precision computation that enable dramatic improvements in both training and inference without sacrificing accuracy, and the foundational work in analog AI and in-memory computation aimed at delivering a 1000x improvement in AI compute performance by 2030.
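The core idea behind analog in-memory computation is that a resistive crossbar performs a matrix-vector multiply physically, via Ohm's and Kirchhoff's laws (column current I_j = sum_i G_ij V_i). A minimal numerical sketch of that idea, with an assumed noise level standing in for analog non-ideality (this is a concept illustration, not IBM's hardware):

```python
# Toy model of analog in-memory compute: weights programmed as crossbar
# conductances, inputs applied as row voltages, outputs read as column
# currents. The 2% conductance noise is an assumed non-ideality.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 8))          # weights to program as conductances
x = rng.standard_normal(16)               # input activations as row voltages

G = W + rng.normal(scale=0.02, size=W.shape)   # programmed (noisy) conductances
y_analog = x @ G                               # column currents = MVM result
y_exact = x @ W

print("max deviation from exact MVM:", np.abs(y_analog - y_exact).max())
```

The multiply-accumulate happens in the array itself, with no weight movement, which is where the large projected efficiency gains come from.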
By Paul Yeaman, Senior Director, Applications Engineering, Vicor
High-performance processors require higher steady-state and peak currents with dramatically increasing slew rates, while operating at lower voltages with an increasing number of high-speed I/Os. This trend is accelerating and continually challenges power system designers to deliver adequate power to the processor core with low loss in the power delivery network (PDN). Conventional approaches using multiphase buck regulators are becoming severely strained, making a new approach necessary to keep pace. Vicor's Factorized Power Architecture (FPA™) is a departure from the common multiphase methods and uniquely addresses each of the challenges facing voltage regulator development for new processor technologies. FPA also enables lateral power delivery (LPD) and vertical power delivery (VPD) PCB deployment options. The VPD solution reduces losses by up to 95% and eliminates bottlenecks by freeing up 100% of the processor perimeter.
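Why low-voltage, high-current rails stress the PDN follows directly from Ohm's law; a worked example with assumed numbers:

```latex
% At constant power P, current scales as I = P/V, so resistive loss in the
% delivery path grows quadratically as the rail voltage drops:
\[
P_{\text{loss}} = I^{2} R_{\text{PDN}} = \left(\frac{P}{V}\right)^{2} R_{\text{PDN}}
\]
% Example (assumed numbers): a 500 W core at 1 V draws 500 A, so even a
% 0.2 milliohm lateral path dissipates 500^2 * 0.0002 = 50 W. Delivering
% power vertically under the die shortens the path and shrinks R_PDN,
% which is where the large loss reductions come from.
```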
By Bob Beachler, Vice President of Product, Untether AI
The demand for AI inference acceleration is exploding, driven by the increasing sophistication of the networks and the sheer number being deployed. Traditional von Neumann architectures have not been able to scale efficiently to these increased compute demands. Untether AI has developed a novel approach to solving this problem, at-memory compute, which enables AI inference workloads to run faster, cooler, and more cost-effectively. This talk will detail how at-memory compute enables large-scale inference deployments across a multiplicity of differing neural network architectures.
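The energy argument against shuttling operands across a von Neumann memory hierarchy can be made with commonly cited 45 nm estimates (Horowitz, ISSCC 2014), used here as rough orders of magnitude rather than figures for any specific product:

```latex
% Representative per-operation energies (45 nm, order of magnitude only):
\[
E_{\text{8-bit add}} \approx 0.03\,\text{pJ},\qquad
E_{\text{32-bit SRAM read}} \approx 5\,\text{pJ},\qquad
E_{\text{32-bit DRAM read}} \approx 640\,\text{pJ}
\]
% Fetching an operand from DRAM can cost ~10^4 times the arithmetic
% performed on it, so placing compute next to the memory array attacks
% the dominant energy term rather than the arithmetic itself.
```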
By Rob Telson, Vice President of World Wide Sales, BrainChip
BrainChip’s Akida neural processing unit brings intelligent AI to the edge with ease. Sensors at the edge require real-time computation, and meeting both ultra-low-power and latency requirements with traditional machine learning is extremely difficult when it comes to empowering smart edge sensors. For the next generation of intelligent AI at the edge (not the edge of the cloud!), BrainChip’s Akida NPU leverages advanced neuromorphic computing as the engine. Akida addresses critical problems such as privacy, security, latency, and low-power operation, with key features such as one-shot learning and on-device computation with no dependency on the cloud. BrainChip is delivering on next-generation demands by achieving efficient, effective, and easy AI functionality. In this session, you will learn how to apply efficient AI in smart transportation edge devices by implementing Akida IP in your SoC or as stand-alone silicon.
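To make "one-shot learning" concrete, one generic formulation enrolls a new class from a single example embedding and classifies by nearest class centroid. The sketch below illustrates that concept only; it is not BrainChip's Akida implementation, and all names are assumptions:

```python
# Generic one-shot learning sketch: store one embedding per new class,
# classify by nearest class centroid. Concept illustration only; NOT
# the Akida NPU's actual mechanism.
import numpy as np

class OneShotClassifier:
    def __init__(self):
        self.centroids = {}                      # label -> stored embedding

    def learn(self, label, embedding):
        """Enroll a class from a single example, entirely on-device."""
        self.centroids[label] = np.asarray(embedding, dtype=np.float32)

    def predict(self, embedding):
        e = np.asarray(embedding, dtype=np.float32)
        return min(self.centroids,
                   key=lambda lbl: np.linalg.norm(self.centroids[lbl] - e))

clf = OneShotClassifier()
clf.learn("pedestrian", np.random.randn(64))    # one example each
clf.learn("cyclist", np.random.randn(64))
print(clf.predict(np.random.randn(64)))
```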
Panel Discussion with Industry Experts, moderated by EE Times Editor Sally Ward-Foxton
Due to advances in cutting-edge algorithms and models, data center AI workloads are something of a moving target. How does one design chips and systems to keep up with this rapid pace of evolution? How much can be done in software, how much with clever architecture decisions, and which applications need full hardware programmability? How important are flexibility and future-proofing versus pure performance? And will there be more than one eventual winner in this space?
Panelists are:
David Kanter, MLCommons
Mukesh Khare, IBM Research
Alex Grbic, Untether AI
Kevin Krewell, Tirias Research