BRN brainchip holdings ltd

2025 BrainChip Discussion, page-4699


    A Distributed Time-of-Flight Sensor System for Autonomous Vehicles: Architecture, Sensor Fusion, and Spiking Neural Network Perception

    by Edgars Lielamurs 1,*, Ibrahim Sayed 1, Andrejs Cvetkovs 1, Rihards Novickis 1, Anatolijs Zencovs 1, Maksis Celitans 1, Andis Bizuns 1, George Dimitrakopoulos 2, Jochen Koszescha 2 and Kaspars Ozols 1


    1 Institute of Electronics and Computer Science, 14 Dzerbenes St., LV-1006 Riga, Latvia
    2 Infineon Technologies AG, Am Campeon 1-15, 85579 Neubiberg, Germany
    * Author to whom correspondence should be addressed.

    Electronics 2025, 14(7), 1375; https://doi.org/10.3390/electronics14071375
    Submission received: 21 February 2025 / Revised: 24 March 2025 / Accepted: 26 March 2025 / Published: 29 March 2025
    (This article belongs to the Section Electrical and Autonomous Vehicles)

    Abstract

    Mechanically scanning LiDAR imaging sensors are abundantly used in applications ranging from basic safety assistance to high-level automated driving, offering excellent spatial resolution and full surround-view coverage in most scenarios. However, their complex optomechanical structure introduces limitations, namely restricted mounting options and blind zones, especially in elongated vehicles. To mitigate these challenges, we propose a distributed Time-of-Flight (ToF) sensor system with a flexible hardware–software architecture designed for multi-sensor synchronous triggering and fusion. We formalize the sensor triggering, interference mitigation scheme, and data aggregation and fusion procedures, and highlight the challenges of achieving accurate global registration with current state-of-the-art methods. The resulting surround-view visual information is then applied to Spiking Neural Network (SNN)-based object detection and probabilistic occupancy grid mapping (OGM) for enhanced environmental awareness. The proposed system is demonstrated on a test vehicle, achieving coverage of blind zones in a range of 0.5–6 m with a scalable and reconfigurable sensor mounting setup. Using seven ToF sensors, we achieve a 10 Hz synchronized frame rate, with a 360° point cloud registration and fusion latency below 40 ms. We collected real-world driving data to evaluate the system, achieving 65% mean Average Precision (mAP) in object detection with our SNN. Overall, this work presents a replacement for, or addition to, LiDAR in future high-level automation tasks, offering improved coverage and system integration….
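    As a quick back-of-the-envelope check on the triggering numbers quoted above (this sketch is mine, not the authors' code; the constants come from the abstract and conclusions), staggering seven sensor triggers by 10 μs each consumes only 60 μs of the 100 ms frame period available at 10 Hz:

        # Hypothetical stagger schedule: 7 ToF sensors at 10 Hz, 10 us apart.
        FRAME_PERIOD_US = 100_000   # 10 Hz -> 100 ms between synchronized frames
        STAGGER_US = 10             # per-sensor trigger delay for interference mitigation
        N_SENSORS = 7

        offsets_us = [i * STAGGER_US for i in range(N_SENSORS)]
        assert offsets_us[-1] < FRAME_PERIOD_US  # total skew (60 us) << frame budget
        print(offsets_us)  # [0, 10, 20, 30, 40, 50, 60]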


    3.6. ToF Object Detection with SNN

    The pipeline for our custom 3D spiking convolutional neural network (3D-SCNN), depicted in Figure 8, involves spatiotemporally sparse ToF signal processing with convolutional SNN algorithms. The resulting point-cloud processing pipeline performs efficient information transmission and processing, overcoming the limitations of conventional processing on graphics processing units (GPUs). To facilitate non-rate-based spike encoding, the temporal voxel coding (TVC) preprocessing step applies latency encoding to multiple ToF camera outputs, combining them into a single, coherent data frame. This time-domain spike encoding improves information sparsity retention, which can be leveraged for processing efficiency with specialized neuromorphic neural processing units (NPUs), such as the BrainChip Akida [72] custom FPGA accelerators.
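    To make the latency-encoding idea concrete, here is a minimal sketch of time-to-first-spike coding (my own illustration of the general technique, not the paper's TVC implementation; the function names and the linear code are assumptions): stronger voxel intensities fire earlier within a fixed encoding window, and cells with no return never spike, which preserves sparsity.

        import numpy as np

        def latency_encode(frame, t_window_ms=10.0, eps=1e-6):
            """Time-to-first-spike code: intensity 1.0 spikes at t=0,
            weaker cells spike later, empty cells never spike (inf)."""
            frame = np.clip(frame, 0.0, 1.0)
            times = np.full(frame.shape, np.inf)
            active = frame > eps
            times[active] = t_window_ms * (1.0 - frame[active])  # linear latency code
            return times

        def fuse_and_encode(frames, t_window_ms=10.0):
            """Merge synchronized ToF frames (strongest return per cell),
            then encode the fused frame into spike times."""
            fused = np.maximum.reduce(frames)
            return latency_encode(fused, t_window_ms)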

    6. Conclusions

    The flexible software architecture presented in this paper adapts to different distributed ToF sensor configurations. The system combines a hardware triggering scheme, 3D point cloud registration with a continuous fidelity check, probabilistic occupancy grid mapping, SNN-based object detection, and runtime execution monitoring. Notably, the system scales to seven cameras while preserving low average latency (<107 ms acquisition, <40 ms fusion, 29 ms inference, and 39 ms real-time visualization).
    The introduced external trigger scheme and the synchronization analysis show significant improvements in interference mitigation from applying only a 10 μs delay between staggered triggers. Through point cloud registration accuracy analysis, we highlight the challenges of maintaining global alignment in closed-loop geometries, observing a notable drift of up to 0.68 m between direct and indirect pairwise sequential transformations; this suggests that future refinements should include registration methods that mitigate indirect error propagation.
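    The direct-versus-indirect comparison above boils down to chaining pairwise sensor-to-sensor transforms around the ring and measuring how far the composed pose lands from the directly estimated one. A minimal sketch (mine, not the authors' code; it assumes 4x4 homogeneous transforms with translations in metres):

        import numpy as np

        def compose(pairwise):
            """Chain pairwise transforms T_{0->1}, T_{1->2}, ... into T_{0->n}."""
            T = np.eye(4)
            for Ti in pairwise:
                T = Ti @ T
            return T

        def loop_closure_drift_m(pairwise, direct):
            """Translation gap (m) between the composed indirect chain and
            the direct registration of the first sensor to the last."""
            delta = np.linalg.inv(direct) @ compose(pairwise)
            return np.linalg.norm(delta[:3, 3])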
    The custom event-based SNN inference model demonstrated a competitive 65% mAP while benefiting from inherently sparse ToF data, making it well suited for real-time, low-power applications. Further optimizations could include SNN model quantization for neuromorphic hardware acceleration to enhance computational efficiency. While the pretrained networks detected objects in high-density ToF sensor point clouds, the LiDAR-to-ToF domain gap requires careful preprocessing, as evidenced by PV-RCNN's improved performance with double voxelization (from 52% to 59%). Likewise, retraining models on the non-scanning LiDAR dataset showed improvements due to its closer similarity to ToF data. Future work could boost accuracy by incorporating synthetic ToF sensor data from the CARLA [78] simulator into the training set to better bridge this gap and enhance generalization.
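    The text does not spell out the "double voxelization" step, but the voxel-grid downsampling it builds on is straightforward. The sketch below (my own, centroid pooling with an illustrative 0.1 m grid) shows one pass; applying two passes at different resolutions is one plausible reading of the term, though the paper gives the actual details:

        import numpy as np

        def voxel_downsample(points, voxel_size=0.1):
            """Replace all points sharing a voxel with their centroid.

            points     -- (N, 3) array of XYZ coordinates in metres
            voxel_size -- cubic voxel edge length in metres
            """
            keys = np.floor(points / voxel_size).astype(np.int64)
            _, inverse, counts = np.unique(keys, axis=0,
                                           return_inverse=True,
                                           return_counts=True)
            centroids = np.zeros((counts.size, 3))
            np.add.at(centroids, inverse, points)   # sum points per voxel
            return centroids / counts[:, None]      # mean per voxel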
    Regarding the transfer of the ToF system technology to real-world automated driving solutions, identifying vulnerable road user (VRU) object categories with ToF sensors and data-driven models could offer unique advantages for technology acceptance, primarily because dense depth data is captured instead of recognizable visual details, removing the privacy concerns related to facial identification. Additionally, prioritizing a safe zone around the vehicle and achieving demonstrable reliability in reducing accidents would build trust and bolster societal acceptance of AD safety systems.
    Lastly, despite the notable limitations we observed with the chosen current-generation sensors, such as an occasional reduction in visible distance in bright sunlight, hardware improvements could address these constraints, and thorough characterization against environmental conditions could be conducted. By refining the sensor setup, for example by incorporating longer-range sensors in critical blind spots, the system could be further adapted for specific applications, making the approach suitable for a wide range of autonomous perception tasks. Moreover, the simple low-level software architecture allows for deployment on embedded heterogeneous RISC architectures (AArch64, RISC-V) with neuromorphic NPU integration for low-power operation on resource-constrained platforms.

    Interestingly, this is not the first time that AKIDA technology's compatibility with Time-of-Flight sensors has been demonstrated, nor is it a novel idea that AKIDA technology might be integrated with RISC architectures: see Frontgrade Gaisler's GRAIN and SiFive's Intelligence Series, both of which incorporate AKIDA.

    My opinion only DYOR

    Fact Finder
 