BRN 2.44% 20.0¢ brainchip holdings ltd

2024 BrainChip Discussion, page-7366


    End-to-End Edge Neuromorphic Object Detection System

    DA Silva, A Shymyrbay, K Smagulova… - 2024 IEEE 6th …, 2024 - ieeexplore.ieee.org
    … on edge, based on a DVXplorer camera and the neuromorphic Brainchip’s Akida chip, to be suitable for edge … We would like also to thank Edge Impulse and Brainchip companies for …


    End-to-End Edge Neuromorphic Object Detection System

    D. A. Silva¹, A. Shymyrbay¹, K. Smagulova¹, A. Elsheikh², M. E. Fouda³,† and A. M. Eltawil¹
    1. Department of ECE, CEMSE Division, King Abdullah University of Science and Technology, Thuwal 23955, Saudi Arabia

    2. Department of Mathematics and Engineering Physics, Faculty of Engineering, Cairo University, Giza 12613, Egypt

    3. Rain Neuromorphics, Inc., San Francisco, CA 94110, USA

    Email: [email protected]

    Abstract—Neuromorphic accelerators are emerging as a potential solution to the growing power demands of Artificial Intelligence (AI) applications. Spiking Neural Networks (SNNs), which are bio-inspired architectures, are being considered as a way to address this issue. Neuromorphic cameras, which operate on a similar principle, have also been developed, offering low power consumption, microsecond latency, and robustness in various lighting conditions. This work presents a full neuromorphic system for Computer Vision, from the camera to the processing hardware, with a focus on object detection. The system was evaluated on a compiled real-world dataset and a new synthetic dataset generated from existing videos, and it demonstrated good performance in both cases. The system was able to make accurate predictions while consuming 66 mW, with a sparsity of 83% and a time response of 138 ms.

    Index Terms—SNN, DVS, Neuromorphic, Akida, YOLO

    In this work, we implemented an end-to-end fully neuromorphic object detector using an event-based DVXplorer Lite sensor by iniVation [14] and the event-based Akida processor by BrainChip [15]. The detector is based on an AkidaNet [16] backbone, designed by BrainChip and optimized for the Akida chip, and a YOLOv2 [17] detection head, whose structure suits the Akida constraints. The main purpose of this work is to show the potential of a full-spiking system by creating an object detector with a small memory footprint, low power consumption, and real-time response. The contributions of this work are as follows: (1) implementation of the first end-to-end Akida-based full-spiking system for object detection in the literature (to the best of our knowledge, the only prior work implementing object detection on an actual spiking platform was deployed on a SpiNNaker 3 [12]); (2) generation of a new synthetic event-based dataset based on recordings of traffic junctions; and (3) evaluation of the system's performance, power consumption, and time response, showing that it can perform real-time detections with low power consumption.
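    The DVXplorer camera emits asynchronous per-pixel events rather than frames, so before a frame-based detection head such as YOLOv2 can consume them, events are typically binned into a 2-D count map over a fixed time window. A minimal sketch of that accumulation step, assuming a simple (x, y, timestamp, polarity) event tuple and an illustrative window size; this is not the paper's actual preprocessing code:

    ```python
    # Hypothetical sketch: bin DVS events into a 2-D count frame.
    # The event tuple layout and 10 ms window are illustrative assumptions,
    # not the preprocessing used in the paper.

    def events_to_frame(events, width, height, t_start, window_us):
        """Accumulate (x, y, t_us, polarity) events falling inside
        [t_start, t_start + window_us) into a height x width count frame
        (polarity is ignored here for simplicity)."""
        frame = [[0] * width for _ in range(height)]
        t_end = t_start + window_us
        for x, y, t_us, _pol in events:
            if t_start <= t_us < t_end and 0 <= x < width and 0 <= y < height:
                frame[y][x] += 1
        return frame

    # Toy example: three events, the last one outside the 10 ms window.
    events = [(1, 0, 100, 1), (1, 0, 200, 0), (2, 1, 20_000, 1)]
    frame = events_to_frame(events, width=4, height=2, t_start=0, window_us=10_000)
    # frame[0][1] == 2 (two events at the same pixel); the late event is dropped
    ```

    Real pipelines (e.g. iniVation's own tooling) offer richer accumulators with decay and polarity channels, but the counting idea is the same.
    
    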

    For object detection with SNNs, there is a lack of measurements reported on actual spiking hardware; the only results published so far come from Attention RPN-SNN [12]. In that work, a SpiNNaker 3 neuromorphic chip was deployed for detection tasks, with a reported total processing time of 35.3 s and power of 600 mW.
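    Taking both sets of reported figures at face value, the gap between the two on-chip results is striking. A quick back-of-envelope comparison (the ratios are our arithmetic, not numbers stated in the paper):

    ```python
    # Back-of-envelope comparison of the two on-chip object-detection results
    # quoted above (Akida vs. SpiNNaker 3). Input figures are as reported;
    # the derived ratios are our own rounding.
    akida_latency_s, akida_power_w = 0.13888, 0.066
    spinn_latency_s, spinn_power_w = 35.3, 0.600

    speedup = spinn_latency_s / akida_latency_s   # ~254x faster per inference
    power_ratio = spinn_power_w / akida_power_w   # ~9.1x lower power draw
    energy_ratio = speedup * power_ratio          # ~2300x less energy per inference

    print(f"{speedup:.0f}x faster, {power_ratio:.1f}x lower power, "
          f"~{energy_ratio:.0f}x less energy per inference")
    ```
    
    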

    VI. CONCLUSION AND FUTURE WORK

    This work presented a low-power, real-time full-spiking neuromorphic system for object detection based on iniVation's DVXplorer Lite event-based camera and BrainChip's Akida AKD1000 spiking platform. The system was evaluated on three different datasets, comprising real-world and synthetic samples. The final mapped model achieved an mAP of 28.58 on the GEN1 dataset, equivalent to 54% of a more complex state-of-the-art model, and reached 89% of the best reported detection performance on the single-class PEDRo dataset with 17× fewer parameters. A power consumption of 66 mW and a latency of 138.88 ms were reported, making the system suitable for real-time edge applications.
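    For context, the reported power and latency also pin down the energy budget per detection, and the "54% of state-of-the-art" figure implies a rough absolute mAP for the larger reference model. These derived values are our own arithmetic on the paper's numbers, not figures the paper states:

    ```python
    # Derived figures from the reported 66 mW power and 138.88 ms latency,
    # plus the absolute GEN1 mAP implied by "54% of the state-of-the-art".
    # All outputs here are our estimates, not numbers from the paper.
    power_w, latency_s = 0.066, 0.13888

    energy_mj = power_w * latency_s * 1e3   # ~9.2 mJ per detection
    throughput_fps = 1.0 / latency_s        # ~7.2 inferences per second

    gen1_map = 28.58
    implied_sota_map = gen1_map / 0.54      # ~52.9 mAP for the larger model

    print(f"{energy_mj:.1f} mJ/inference, {throughput_fps:.1f} fps, "
          f"implied SOTA mAP ~{implied_sota_map:.1f}")
    ```
    
    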

    In future work, further models are expected to be adapted to the Akida platform, including more recent releases of the YOLO family. We also plan to evaluate those models in real-world scenarios rather than recordings, and to acquire more data to test this setup under different challenging conditions.

    VII. ACKNOWLEDGEMENT

    This work has been partially supported by the King Abdullah University of Science and Technology CRG program under grant number URF/1/4704-01-01.

    We would also like to thank Edge Impulse and BrainChip for providing the software tools and hardware platform used during this work.

    According to his LinkedIn, Mr. Fouda has been employed at Rain AI for 2 years and 5 months and currently holds the position of Applied Research Lead.

    I found all of the above very interesting, particularly given that some might wonder why, with Mr. Fouda's connections at Rain AI, they chose to use AKIDA technology.

    My opinion only DYOR

    Fact Finder
 