

    Sorry if already posted.
    As the title says: predicting where an object is going to land, i.e. forward prediction of movement in time.

    Peter VDM has also referred to this line of progress.

    One thing I find interesting is the choice of camera for this.
    Humans vs. SNN for trajectory prediction. I'll let you read to see who won. LOL

    Anyway, a couple of snippets...


    https://www.frontiersin.org/articles/10.3389/fncom.2021.658764/full


    Event-Based Trajectory Prediction Using Spiking Neural Networks

    Guillaume Debat, Tushar Chauhan, Benoit R. Cottereau, Timothée Masquelier, Michel Paindavoine and Robin Baures
    • CERCO UMR 5549, CNRS—Université Toulouse 3, Toulouse, France

    • Choice of the Event Camera

      Several models of event-driven cameras have already been proposed in industry (Prophesee, iniVation, Insightness, Samsung, CelePixel) and operate mainly on pixel-by-pixel temporal differences. Although the performance of these devices is remarkable in terms of temporal frequency and dynamic range, they suffer from a crucial limitation for bio-inspired modeling, namely the absence of spatial filters upstream of the spike generation. Used in bio-inspired models that capture various aspects of the visual system (Masquelier and Thorpe, 2007), such filters can strengthen spike-based analysis by introducing a bio-inspired component upstream of the spike generation.
      In parallel to this observation, different devices have appeared in recent years that allow the generation of spikes from standard CMOS image sensors (Abderrahmane and Miramond, 2019; Admin, 2020; Spike Event Sensor, 2021). The main objective behind these cameras is, on the one hand, to integrate spatial filters upstream of the spike generation and, on the other hand, to offer sensors of different formats, for example up to 2M pixels (Caiman Camera, 2021), whereas the pixel count of event-based cameras is usually rather limited (Lichtsteiner et al., 2008; Posch et al., 2011; Brandli et al., 2014; Son et al., 2017).
      To guarantee reliable integration of spikes, it is imperative that these image sensors work in global-shutter mode (instantaneous image acquisition) and with a short exposure time (on the order of a few milliseconds). Such cameras are an intermediary between event-based sensors and frame-based cameras, and allow for spatial and temporal filtering at a high frame rate.
      The NeuroSoc camera from Yumain (Spike Event Sensor, 2021) possesses these characteristics and was therefore chosen for this study. This camera operates at 240 frames per second at a resolution of 128 × 120 px. Spatial and temporal filters are embedded in a processing board close to the image sensor (see section Architecture of the NeuroSoc Event Camera) to detect brightness variations and generate spikes. As explained above, these spatial filters (here of DoG type) are similar to those found in the lateral geniculate nucleus of the human visual system. As a result, they reduce noise, detect edges, and increase output sparseness.
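      For anyone curious, here is a minimal sketch of what such a DoG (Difference of Gaussians) filter does, written in Python. The sigma values are illustrative assumptions, not the NeuroSoc board's actual parameters.

      # Minimal DoG (Difference of Gaussians) centre-surround filter sketch.
      # Sigma values are illustrative assumptions, not the NeuroSoc settings.
      import numpy as np
      from scipy.ndimage import gaussian_filter

      def dog(frame: np.ndarray, sigma_center: float = 1.0,
              sigma_surround: float = 2.0) -> np.ndarray:
          f = frame.astype(np.float32)
          # Narrow (centre) blur minus wide (surround) blur: flat regions
          # cancel out, edges are emphasised, and the output becomes sparse.
          return gaussian_filter(f, sigma_center) - gaussian_filter(f, sigma_surround)

      Subtracting the wide blur from the narrow one is what gives the noise reduction and edge detection mentioned above: anything that looks the same at both scales (uniform backgrounds, broad gradients) cancels to roughly zero.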

    • Architecture of the NeuroSoc Event Camera

      We used the NeuroSoc event camera developed by Yumain, which is based on a global-shutter CMOS MT9024 image sensor from ON Semiconductor and a board called NeuroSoC. This board is composed of an MPSoC Zynq 7020 circuit from Xilinx and 4 Gbits of DDRAM memory (see Figure 2). The CMOS sensor operates in global-shutter mode (instantaneous image acquisition) with an exposure time in the range of 31 ns to 4 ms. In this study, images were generated in a 128 × 120 pixel format, guaranteeing a throughput of 240 frames per second, with an exposure time of 3.7 ms due to the low-luminosity conditions (window shutters were closed for proper operation of the Vicon). Images transmitted from the image sensor to the Zynq MPSoC circuit are filtered in real time to extract the salient parts of the objects they contain. The first step of the process consists of calculating the difference between the images at times t_n and t_(n-1) (the sampling period t_n - t_(n-1) was 4.17 ms, i.e. 1/240 s). A DoG (Difference of Gaussians) filter was then applied to this difference. The output of the filter was classified (positive/negative values generate ON/OFF spikes) and sorted from the largest to the smallest absolute value above a threshold, thus constituting a train of temporal spikes. The threshold value was set manually during the acquisition phase and adjusted to extract as many spikes from movements as possible while keeping the noise level low. As shown in Figure 2, all of this processing is implemented in the FPGA within the Zynq MPSoC.
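      Putting that paragraph together, here is a rough, self-contained sketch of the frame-difference, DoG, threshold, ON/OFF spike pipeline it describes. All parameter values (sigmas, threshold) are illustrative assumptions; on the real camera this runs in the FPGA fabric, not in Python.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def frames_to_spikes(prev_frame, frame, sigma_c=1.0, sigma_s=2.0, thresh=8.0):
          # Step 1: temporal difference between consecutive frames (4.17 ms apart).
          diff = frame.astype(np.float32) - prev_frame.astype(np.float32)
          # Step 2: DoG spatial filter applied to the difference image.
          dog = gaussian_filter(diff, sigma_c) - gaussian_filter(diff, sigma_s)
          # Step 3: keep pixels whose absolute response exceeds the threshold.
          ys, xs = np.nonzero(np.abs(dog) > thresh)
          mags = np.abs(dog[ys, xs])
          # Step 4: sort from largest to smallest absolute value, assign polarity.
          order = np.argsort(-mags)
          polarity = np.where(dog[ys, xs][order] > 0, 1, -1)  # ON = +1, OFF = -1
          return xs[order], ys[order], polarity

      # Usage on two synthetic 128 x 120 frames (NumPy shape is rows x cols):
      prev = np.zeros((120, 128), dtype=np.uint8)
      cur = prev.copy()
      cur[60:64, 40:44] = 255  # a small bright patch appearing between frames
      xs, ys, pol = frames_to_spikes(prev, cur)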

    • Neuron Model

      Our SNN was based on leaky integrate-and-fire (LIF) neurons. When such a neuron receives an incoming spike, its membrane potential increases in proportion to the synaptic weight connecting it to the pre-synaptic neuron that emitted the spike. In the absence of incoming spikes, the membrane potential leaks away, as described by Equation (2) in the paper.
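      As a sketch only: a standard LIF update consistent with that description, assuming the common exponential-leak form for the decay (the paper's exact Equation 2 is not reproduced in the snippet above). The tau, dt, and threshold values here are illustrative assumptions.

      import numpy as np

      def lif_step(v, in_spikes, weights, dt=1e-3, tau=20e-3, v_thresh=1.0):
          # Leak: with no input, the membrane potential decays toward zero.
          v = v * np.exp(-dt / tau)
          # Integrate: each incoming spike raises v by its synaptic weight.
          v = v + weights @ in_spikes
          # Fire: neurons crossing threshold emit a spike and reset.
          out = v >= v_thresh
          v = np.where(out, 0.0, v)
          return v, out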
    • In contrast to our SNN, the human participants already had experience with ball motion or motion in general. We nonetheless included an active-learning phase in the experiment so that participants could adapt to its specificities.
 