
2021 BRN Discussion, page-11409

    Hi FF,

    A little news about any Tata/Akida hookup would be most welcome, but we are left to read the tea leaves.

    I had a look at Tata Elxsi's webpage:

    https://www.tataelxsi.com/news-and-events/enabling-edge-ai-through-future-ready-software-development-kit

    For ADAS, they talk about a mix of GPU and ASIC, a bit like Anil's reference to hybrid computing, drawing the obvious distinction between real-time actions and heavy-duty background data processing, but they end up with a promising prediction for edge learning, with ASIC (AKIDA!?) performing most of the learning.

    Edge AI is here to stay!

    Artificial intelligence (AI) is powering many of the real-world applications we see in our daily lives. AI, once seen as an emerging technology, has now successfully penetrated every industry (B2B and B2C): banking, logistics, healthcare, defence, manufacturing, retail, automotive, and consumer electronics. Smart speakers such as Echo and Google Nest are one example of Edge AI solutions in the consumer electronics sector. AI technology is powerful, and humankind has set its eye on harnessing its potential to the fullest. Intelligence brought to the device can be very useful and creative.

    The key requirements that need to be factored into an Edge AI architecture are bandwidth, latency, privacy, security, and power consumption. While envisioning an Edge AI solution, these requirements need to be weighed carefully to decide which features can be traded off while the solution remains effective.

    AI enables machines to perform cognitive functions such as perceiving, reasoning, and learning, similar to humans but much faster and more accurately. AI implementation is broadly classified into two phases: learning and inference. Learning allows the machine to parse large existing datasets and learn by recognising patterns. Because of its compute-intensive nature, learning typically happens in the cloud, which provides access to vast storage capacity and AI-accelerated hardware for faster processing. The inference phase involves making decisions on new data based on what was learned from the already processed data. For ADAS, inference has to be fast and on the edge device, whereas learning, which is compute-intensive, can happen in the cloud.
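    To make that learning/inference split concrete, here is a minimal sketch under assumed names and synthetic data (no BrainChip/Akida or Tata Elxsi API implied): a tiny classifier is "trained in the cloud", only its weights are shipped, and the "edge device" does nothing but cheap per-sample inference.

```python
# Hypothetical illustration of the train-in-cloud / infer-at-edge split.
import json
import numpy as np

rng = np.random.default_rng(0)

# --- "Cloud" side: compute-intensive learning over a large dataset ----------
X = rng.normal(size=(10_000, 8))              # synthetic training features
w_true = rng.normal(size=8)
y = (X @ w_true > 0).astype(float)            # synthetic labels

w = np.zeros(8)
for _ in range(200):                          # simple logistic-regression gradient steps
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    w -= 0.1 * X.T @ (p - y) / len(y)

model_artifact = json.dumps({"weights": w.tolist()})   # what gets deployed to the edge

# --- "Edge" side: low-latency inference only ---------------------------------
w_edge = np.array(json.loads(model_artifact)["weights"])

def infer(sample: np.ndarray) -> int:
    """Single-sample decision: the only work the edge device has to do."""
    return int(sample @ w_edge > 0)

print(infer(rng.normal(size=8)))
```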

    The self-driving vehicle receives and generates a large amount of heterogeneous data, which can be used for better predictability and reliability. Prediction of vehicle health can be done in the cloud, with an AI algorithm churning through all that data and making sense of it. Pedestrian detection, on the other hand, is time-critical because it also has to actuate the automated braking system, so near-zero latency is expected. ADAS functionalities need to be processed in real time. In this scenario, the self-driving vehicle needs to be treated as an edge device for its ADAS functionalities. This is indeed possible, with the available high-end GPUs assigned to safety-critical tasks while other vehicle telemetry data that is not time-critical is processed in the cloud.
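    A toy sketch of that routing decision, with made-up names (not any vendor's ADAS stack): workloads flagged as latency-critical are handled on-device, everything else is queued for the cloud.

```python
# Illustrative only: route vehicle workloads by latency criticality.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Task:
    name: str
    latency_critical: bool
    payload: dict

@dataclass
class VehicleRouter:
    on_device: Callable[[Task], None]
    cloud_queue: List[Task] = field(default_factory=list)

    def submit(self, task: Task) -> None:
        if task.latency_critical:
            self.on_device(task)           # e.g. pedestrian detection / braking
        else:
            self.cloud_queue.append(task)  # e.g. telemetry for health analytics

router = VehicleRouter(on_device=lambda t: print(f"edge: handling {t.name}"))
router.submit(Task("pedestrian_detection", True, {"frame_id": 42}))
router.submit(Task("battery_telemetry", False, {"soc": 0.81}))
print(f"queued for cloud: {[t.name for t in router.cloud_queue]}")
```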

    In the financial services space, on the other hand, financial data needs to be secured and always available, so power consumption would be high. However, an AI-based inference such as a pre-approved loan being sanctioned and sent to the end-consumer's mobile is not a time-critical event, so some latency is acceptable. All computing for financial services can be done in the cloud on high-end infrastructure with AI-accelerated hardware platforms and security solutions, and still be cost-effective. So, based on the end application, there are trade-offs that are valuable and serve the end purpose.

    Edge AI market overview

    Globally, the AI chipset market is expected to be valued at USD 7.6 billion in 2020 and to reach USD 57.8 billion by 2026, a CAGR of 40.1% over the forecast period. Implementation of AI is the current trend in chip technology, and it's going to stay that way. Many leading semiconductor companies and venture capitalists see it as the right technology front for investment.
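    For what it's worth, the quoted figures roughly hang together; a quick check of the compound-growth arithmetic (round numbers assumed):

```python
# Sanity check: USD 7.6 bn in 2020 compounding at 40.1% per year to 2026.
start, cagr, years = 7.6, 0.401, 2026 - 2020
print(round(start * (1 + cagr) ** years, 1))   # ~57.5, close to the quoted USD 57.8 bn
```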

    Changing dynamics in terms of hardware consideration for learning and inference

    The Edge AI hardware market is segmented into CPU, GPU, ASIC, and FPGA. ASICs enable high processing capability with low power consumption, making them well suited to edge devices in many applications. GPUs, in contrast, can be overpriced for an edge solution, but for the autonomous vehicle they are the right fit for handling image manipulation at lightning speed. Deep learning frameworks are still evolving, which makes designing custom hardware hard; reconfigurable FPGAs give device manufacturers close-to-ASIC performance and can be reprogrammed to serve changing needs.

    It is estimated that by 2025 around 70% of Edge AI inference will be implemented on ASICs (up from roughly 30%), with about 20% on GPUs. Training, which was split roughly equally between CPU and GPU in 2017, is estimated to move to about 70% on ASIC and 20% on FPGA by 2025. Edge devices are essentially embedded products with resource constraints, so Edge AI implementation needs to be thought of as an application-specific use case. AI-based applications for edge devices include intelligent robots, autonomous vehicles, and smart home appliances, among others. The primary applications that run over Edge AI relate to image/video, sound and speech, natural language processing, device control systems, and high-volume computing.
 