Renesas

    Excellent blog, dated 22/09/2021, by Dr. Sailesh Chittipeddi (Executive Vice President and General Manager of the IoT and Infrastructure Business Unit), who thinks edge and endpoints are the future.

    https://www.embeddedcomputing.com/technology/iot/edge-computing/where-edge-and-endpoint-ai-meet-the-cloud

    Where Edge and Endpoint AI Meet the Cloud

    By Dr. Sailesh Chittipeddi

    EXECUTIVE VICE PRESIDENT AND GENERAL MANAGER OF THE IOT AND INFRASTRUCTURE BUSINESS UNIT


    RENESAS ELECTRONICS CORPORATION

    September 22, 2021


    The COVID-19 pandemic created new health and safety requirements that transformed how people interact with each other and their direct environments. The skyrocketing demand for touch-free experiences has in turn accelerated the move toward AI-powered systems, voice-based control, and other contactless user interfaces – pushing intelligence closer and closer to the endpoint.

    One of the most important trends in the electronics industry today is the incorporation of AI into embedded devices, particularly AI interpreting sensor data such as images, and machine learning for alternative user interfaces such as voice.

    Embedded Artificial Intelligence of Things (AIoT) is the key to unlocking the seamless, hands-free experience that will help keep users safe in a post-COVID environment. Consider the possibilities: smart shopping carts that allow you to scan your goods as you drop them in your cart and use mobile payments to bypass the checkout counter, or intelligent video conferencing systems that automatically recognize and switch focus between different speakers during meetings to provide a more ‘in-person’ experience for remote teams.

    Why is now the time for an embedded AIoT breakthrough?

    AIoT is Moving Out

    Initially, AI sat up in the cloud, where it took advantage of computational power, memory, and storage scalability levels that the edge and endpoint just could not match. However, more and more, we are seeing not only machine learning training algorithms move out toward the edge of the network, but also a shift from deep learning training to deep learning inference.

    Where “training” typically sits in the network core, “inference” now lives at the endpoint, where developers can access AI analytics in real time and then optimize device performance, rather than sifting through the device-to-cloud-to-device loop.

    Today, most of the inference process runs at the CPU level. However, this is shifting to chip architectures that integrate more AI acceleration on chip. Efficient AI inference demands efficient endpoints that can infer, pre-process, and filter data in real time. Embedding AI at the chip level, integrating neural processing and hardware accelerators, and pairing embedded-AI chips with special-purpose processors designed specifically for deep learning offer developers a trifecta of the performance, bandwidth, and real-time responsiveness needed for next-generation connected systems.
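
    The “infer, pre-process, and filter in real time” pattern the blog describes can be sketched in a few lines. What follows is a minimal, illustrative Python loop, not Renesas code: read_sensor, infer, and send_event are hypothetical stand-ins for an ADC read, an on-chip accelerator call, and a cloud uplink. The point is simply that most data is handled on the device, and only filtered events cross to the cloud.

    import random
    import time

    def read_sensor() -> list[float]:
        # Placeholder for reading an ADC / camera / microphone on the endpoint.
        return [random.random() for _ in range(8)]

    def preprocess(raw: list[float]) -> list[float]:
        # Simple peak normalisation; real firmware might filter or window here.
        peak = max(raw) or 1.0
        return [x / peak for x in raw]

    def infer(features: list[float]) -> float:
        # Placeholder for an on-chip NPU / accelerator call; returns a confidence.
        return sum(features) / len(features)

    def send_event(score: float) -> None:
        # Only events leave the device, avoiding the device-to-cloud round trip.
        print(f"event -> cloud: score={score:.2f}")

    THRESHOLD = 0.65  # arbitrary demo value

    for _ in range(100):
        score = infer(preprocess(read_sensor()))
        if score > THRESHOLD:   # filter: most readings never leave the device
            send_event(score)
        time.sleep(0.01)        # real loop rates are application-specific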

    https://hotcopper.com.au/data/attachments/3611/3611679-5f98727c559e25b704357dd9ac690e2c.jpg


    An AIoT Future: At Home and the Workplace

    In addition, a convergence of advancements around AI accelerators, adaptive and predictive control, and hardware and software for voice and vision opens up new user interface capabilities for a wide range of smart devices.

    For example, voice activation is quickly becoming the preferred user interface for always-on connected systems in both industrial and consumer markets. We have seen the accessibility advantages that voice-control-based systems offer for users navigating visual or other physical disabilities, using spoken commands to activate and achieve tasks. With the rising demand for touchless control as a health and safety countermeasure in shared spaces like kitchens, workspaces, and factory floors, voice recognition – combined with a variety of wireless connectivity options – will bring seamless, non-contact experiences into the home and workspace.

    Multimodal architectures offer another path for AIoT. Using multiple input information streams improves safety and ease of use for AI-based systems. For example, a voice + vision processing combination is particularly well suited for hands-free AI-based vision systems. Voice recognition activates object and facial recognition for critical vision-based tasks in applications like smart surveillance or hands-free video conferencing systems. Vision AI recognition then jumps in to track operator behavior, control operations, or manage error or risk detection.
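
    To make that voice + vision hand-off concrete: a low-power wake-word detector gates the heavier vision model, so the expensive inference runs only once voice has activated it. Below is a hedged Python sketch of that gating; every function in it (heard_wake_word, detect_person, handle_command) is an illustrative placeholder, not a real SDK call.

    from dataclasses import dataclass

    @dataclass
    class Frame:
        pixels: bytes  # placeholder for camera data

    def heard_wake_word(audio_chunk: bytes) -> bool:
        # Stand-in for an always-on, low-power keyword spotter.
        return audio_chunk.startswith(b"WAKE")

    def detect_person(frame: Frame) -> bool:
        # Stand-in for the heavier vision model, run only after voice arms it.
        return len(frame.pixels) > 0

    def handle_command(frame: Frame) -> None:
        print("vision confirmed operator; executing hands-free command")

    def multimodal_step(audio_chunk: bytes, frame: Frame) -> None:
        # Voice gates vision: the costly model runs only after the wake word,
        # keeping the endpoint inside its power and compute budget.
        if heard_wake_word(audio_chunk) and detect_person(frame):
            handle_command(frame)

    multimodal_step(b"WAKE word heard", Frame(pixels=b"\x01\x02"))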

    On factory and warehouse floors, multimodal AI powers collaborative robots – or CoBots – as part of the technology grouping serving as the five senses that allow CoBots to safely perform tasks side-by-side with their human counterparts. Voice + gesture recognition allows the two groups to communicate in their shared workspace.

    What’s on the Horizon?

    According to IDC Research, there will be 55 billion connected devices worldwide generating 73 zettabytes of data by 2025, and edge AI chips are set to outpace cloud AI chips as deep learning inference continues to relocate out to the edge and device endpoints. This integrated AI will be the foundation that powers a complex combination of “sense” technologies to create smart applications with more natural, “human-like” communication and interaction.
