| # | Day | Session | Attendance | Speakers / Notes | Description |
|---|---|---|---|---|---|
| 1 | 1 | Welcome to AI Hardware Summit 2021 from Synopsys & Opening Remarks from Conference Director | 113 | | |
| 2 | 1 | AI SYSTEMS IN INDUSTRY KEYNOTE: The Intersection of AI Hardware and Software in Large Scale ML Systems at eBay | Unknown | | |
| 3 | 1 | PRESENTATION: AI is Consuming Software - and All the Power on the Planet: How At-Memory Computation Will Solve the Impending AI Energy Crisis | Unknown | Bob Beachler presenting | |
| 4 | 1 | PANEL: Designing AI Super-Chips at the Speed of Memory | Unknown | | |
| 5 | 1 | WORKSHOP 1: How Cerebras Does It: Building the Largest Chip Ever Made, and Delivering Unprecedented Deep Learning Acceleration | 104 | Held at the same time as workshops 2 and 3 | |
| 6 | 1 | WORKSHOP 2: Running state-of-the-art models on SambaNova Systems | 72 | Held at the same time as workshops 1 and 3 | |
| 7 | 1 | WORKSHOP 3: Enabling Scalable On-Device & Edge AI using Cadence Tensilica IP | 57 | Held at the same time as workshops 1 and 2 | |
| 8 | | | | | |
| 9 | 2 | Welcome to AI Hardware Summit & Chairperson's Opening Remarks | 60 | | |
| 10 | 2 | FEATURED KEYNOTE: Builders of the Imaginary: From Artificial Intelligence to Artificial Architects in the Era of SysMoore | 105 | | As we are propelled into an era of exponential, cloud-to-edge intelligence, it's clear that emerging AI architectures will be born of a techonomic pull from a wide range of traditional and emerging market verticals. Synopsys Founder and Co-CEO Aart de Geus will showcase new innovations set to empower architects and transform both architecture and artificial intelligence. |
| 11 | 2 | PANEL: System-Level AI Acceleration: Addressing System Bottlenecks and Exploring Heterogeneity and Design in AI Systems | Unknown | Featuring Norman Jouppi (Distinguished Engineer, Google), Michael Gschwind (Engineering Leader, Facebook AI), et al. | Achieving the blistering speed-ups promised by AI accelerators requires optimization across all the components of a system. This panel of systems experts will cover fine-grain vs. coarse-grain heterogeneity; the role of memory, networking, and storage; system design for trustworthy AI; and trends in the specialization of AI systems for training, inference, reinforcement learning, vision, NLP, and more. |
| 12 | | | | | |
| 13 | 2 | PRESENTATION: Building Out Large Scale AI Infrastructure at Google | Unknown | Featuring Norman Jouppi (Distinguished Engineer, Google). Attendance not recorded, but I imagine this would be a popular one. Number of questions asked by audience: 8 | Moderator: Paolo Faraboschi - VP & HPE Fellow, Director, AI Research Lab, HP Labs |
| 14 | 2 | PRESENTATION: Co-designing AI HW/SW at Scale for Recommendation Systems | Unknown | Ft. Dheevatsa Mudigere (Research Scientist, Tech Lead, Facebook). Questions asked / comments made by audience: 7 | |
| 15 | 2 | PRESENTATION: Pushing the Frontiers of NLP forward at NVIDIA | Unknown | Questions/comments: 4 | |
| 16 | | | | | |
| 17 | 3 | KEYNOTE: AI and the Data Centric Era | 97 | Ft. Lip-Bu Tan, CEO, Cadence Design Systems & Chairman, Walden International | |
| 18 | 3 | PANEL: Call to Action: Ensuring Sustainability in AI Systems | Unknown | Ft. Carole-Jean Wu (Research Scientist, Facebook), David Patterson (Distinguished Engineer, Google Brain), David Kanter (Executive Director, MLPerf). Questions/comments: 8 | |
| 19 | 3 | PRESENTATION: Scaling AI at the Edge to Meet the Needs of Tomorrow with Cadence Tensilica | 71 | | In this talk, Sanjive will present the on-device AI IP requirements for intelligent sensor, IoT audio/vision, mobile, and automotive/ADAS markets. The presentation will cover the full range of Cadence's Tensilica on-device AI solutions, from low-cost voice-activated consumer devices to high-throughput autonomous vehicle perception. These IPs are widely deployed in high-volume AI-enabled end products such as smart speakers, mobile phones, surveillance cameras, and automotive subsystems. We'll show how the wide portfolio of Cadence's low-power programmable DSPs and AI engines in on-device AI IP meets the needs of each specific market. Finally, we'll outline our on-device AI software tools and support for a wide range of software frameworks and broad markets. |
| 20 | 3 | PRESENTATION: Implementing a Power-Efficient Scalable AI Inference Platform | 82 | | Edge AI applications demand high-quality, low-latency processing to enable fast decision making and deterministic execution; at the same time, edge devices have stringent size, thermal, and cost constraints. Based on its groundbreaking Analog Compute Engine, Mythic has architected its Analog Matrix Processor for both performance and power efficiency. In this session, you will learn how Mythic has implemented a power-efficient AI inference platform that features an analog compute-in-memory processor with unique power management techniques and a fully featured software toolkit that enables seamless support for state-of-the-art DNN models. |
| 21 | 3 | PRESENTATION: Mapping the AI Acceleration Landscape | 97 | Ft. Daniel Wu, Head of AI & Machine Learning, Commercial Banking, JPMorgan Chase & Co. | Localized optimizations in hardware, software, and algorithms have all enabled the rapid development of AI we have witnessed in the recent decade. This talk will provide a holistic view of the next steps to accelerate AI development and deployment across industries. To further accelerate and democratize AI innovation, it is helpful to take a holistic view that covers all components of the system, its operation, and the societal and environmental impacts. |
| 22 | 3 | PRESENTATION: AI Inferencing from Cloud to Edge: The Qualcomm Cloud AI 100 Scales from 70 TOPS to Peta-OPS and Beyond While Maintaining Industry-Leading Power Efficiency | Unknown | Mike Vildibil, VP & GM, Cloud Edge AI, Qualcomm | The Cloud AI 100 accelerator offers leadership-class performance and power efficiency across many applications, ranging from datacenter to edge deployment. In this talk we will discuss Qualcomm's comprehensive offering of commercial hardware and software tools for integration and deployment in production settings. The presentation will dive into Foxconn's Gloria AI Edge Box, our new joint announcement, featuring a turnkey commercial device powered by Qualcomm Cloud AI 100 with Snapdragon, running on Linux with 5G. |
| 23 | 3 | Memory Developments to Tackle Challenges in AI | 61 | Tien Shiah (Senior Manager, Samsung), Sarah Peach (Senior Director, Memory Marketing, Samsung Semiconductor Inc.) | AI models are getting exponentially larger and more complex, creating memory and compute bottlenecks for AI performance. Samsung will explore some of the new developments in processing-in-memory and computational storage to address these challenges and supercharge your AI applications. |
| 24 | 3 | PANEL: From App to Silicon: Personalizing AI Hardware | 67 | Moderator: Karl Freund (Founder & Principal Analyst, Cambrian AI); Stelios Diamantidis (Senior Director, Artificial Intelligence Solutions, Synopsys); Steve Oberlin (CTO, Tesla, NVIDIA) | AI model complexity is doubling every few months. With trillion+ parameter models already in hand, is AI hardware development keeping up with the intricacies of new AI applications? What would it take for system innovators to build and deploy custom silicon solutions in weeks instead of the current 18 months or longer? Our expert panel will look at the fascinating journey from cloud graphs to silicon and debate the future of application-specific cognitive systems. |
| 25 | | | | | |
| 26 | 4 | Roundtable 1: Scalable On-Device to Edge AI for Pervasive Intelligence | Unknown | Adam Abed (Product Marketing Director, Cadence); Suhas Mitra (Product Marketing Director, Cadence) | This roundtable will cover the full range of Cadence's Tensilica on-device AI solutions, from low-cost voice-activated consumer devices to high-throughput autonomous vehicle perception. These IPs are widely deployed in high-volume AI-enabled end products such as smart speakers, mobile phones, surveillance cameras, and automotive subsystems. We'll show how the wide portfolio of Cadence's low-power programmable DSPs and AI engines in on-device AI IP meets the needs of each specific market. |
| 27 | 4 | Roundtable 3: Advanced microelectronic technologies driving the AI revolution | 88 | Romano Hoofman (Program Director, imec); Hsien-Hsin Sean Lee (Area Research Lead, Facebook AI Research); Lucas Tsai (Director, Market Development & Emerging Business Management, TSMC); Menno Lindwer (VP IP & Silicon, Managing Director, GrAI Matter Labs) | |
| 28 | 4 | Roundtable: AI Models of the Future: A discussion with Cerebras (Virtual) | 40 | | |
| 29 | 4 | Roundtable 1: Is Reinforcement Learning the Key to Success for AI Chips? | 65 | | |
| 30 | 4 | Roundtable 2: Accelerating Recommender Models with Intel® Deep Learning Boost | 53 | | Recommender models (RMs) have a significant monetary impact across cloud service providers, predicting the preferences of the end user. This talk discusses the various RM families used across industry, their salient differences from other types of models, and some of the optimization challenges. Intel® Deep Learning Boost (on Xeon® processors) is presented, including how it is enabled in the popular frameworks (TensorFlow, PyTorch, etc.) and how it is applied to RMs. |
| 31 | | | | | |
| 32 | 4 | Roundtable 2: Optimizing AI/ML Processing for 5G and networking applications | Unknown | Multiple speakers from Marvell Technology (and one from Nokia) | Join our distinguished panel to explore how in-line AI/ML acceleration implemented in Data Processing Units (DPUs) is addressing the unique challenges of automating 5G network operations in areas such as radio performance optimization, as well as networking security and threat detection. |
| 33 | | | | | |
| 34 | 4 | Roundtable 1: High-Bandwidth, Low-Latency Interconnect for AI System Architectures | 72 | Matthew Burns (Technical Marketing Manager, Samtec) | Emerging AI training and inference system architectures require higher bandwidth and lower latency. System density, small form factors, scalability, and configurability remain key design criteria. Whether for AI-focused SoMs, AI accelerators, or complex distributed compute architectures, Samtec's innovative connectivity solutions optimize the entire signal channel at 112 Gbps PAM4 data rates. In this roundtable, technical experts from Samtec will detail the high-speed board-to-board connectors, high-speed cable assemblies, and precision RF solutions enabling high bandwidth and low latency in real-world AI applications. |
| 35 | 4 | Roundtable 2: Learning on the Edge | 90 | Anil and Rob. Held at the same time as Roundtable 1 (note that it was possible to enrol in both sessions, but questions weren't recorded for Roundtable 1, so it's hard to gauge how engaged the audience was) | BrainChip's Akida addresses the demand for ultra-low power and incremental learning on the edge, inspired by the biology of human brain processing. This technology is key to the future of intelligent AI at the edge. Advanced neuromorphic computing delivers a pathway to new technologies driving the ecosystem, and solves problems in machine learning such as privacy, latency, and reliance on the cloud, all on the edge. |