    An old article, not sure if posted, but interesting.
    ANALYSIS & OPINION
    Tags:
    TECHNOLOGY
    What can drones learn from bees?
    29 March 2017

    Dr Andrew Schofield, who leads the Visual Image Interpretation in Humans and Machines network in the UK, asks what computer vision can learn from biological vision, and how the two disciplines can collaborate better.

    What can drones learn from bees? Almost every month the national news feeds carry a story about the latest development in drone aircraft, self-driving cars, or intelligent robot co-workers. If such systems are to achieve mass usage in a mixed environment with human users, they will need advanced vision and artificial reasoning capabilities, and they will need to both behave, and fail, in ways that are acceptable to humans.

    Setting aside recent high-profile crashes involving self-driving cars, the complexity and unreliability of our road systems mean that driverless cars will need to act very much like human drivers. A car that refuses to edge out into heavy traffic will cause gridlock. Likewise, a drone should not fail to deliver its package because the front door of the target house has been painted since the last Google Street View update.

    So what can drones learn from drones – or, to be more precise, worker bees? In surveillance they may have similar tasks: explore the environment looking for particular targets while avoiding obstacles, and eventually return home. They also have similar payload and power constraints: neither can afford a heavy, power-hungry brain.

    The bee achieves its seek-and-locate task with very little neural hardware and a near-zero energy budget. To do so it uses relatively simple navigation, avoidance and detection strategies that produce apparently intelligent behaviour. Much of the technology for this kind of task is already available in the form of optic-flow sensors and simple pattern recognisers, such as the ubiquitous face locators on camera phones. Even the vastly more complex human brain has within it separate modules or brain regions specialised for short-range subconscious navigation via optic flow and for rapid face detection. However, the human brain is much more adaptable and reliable than even the best computer vision systems.
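    To make the optic-flow idea concrete, here is a minimal sketch using OpenCV's Farneback dense optical-flow estimator – a conventional algorithm rather than a model of the insect circuitry, with placeholder frame files – that computes per-pixel motion and a crude left/right flow balance of the kind a bee-like steering rule could use:

        # Sketch: dense optic flow between two consecutive frames with OpenCV's
        # Farneback method (pip install opencv-python). The frame files are
        # hypothetical placeholders.
        import cv2
        import numpy as np

        prev = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
        curr = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

        # flow[y, x] = (dx, dy): the apparent motion of each pixel between frames.
        flow = cv2.calcOpticalFlowFarneback(
            prev, curr, None,
            0.5,   # pyr_scale: image pyramid scale per level
            3,     # levels: number of pyramid levels
            15,    # winsize: averaging window
            3,     # iterations per pyramid level
            5,     # poly_n: pixel neighbourhood for polynomial expansion
            1.2,   # poly_sigma: Gaussian smoothing for the expansion
            0)     # flags

        # A bee-like controller could steer so as to balance flow magnitude on
        # its left and right, keeping it centred between obstacles.
        mag = np.linalg.norm(flow, axis=2)
        left, right = mag[:, : mag.shape[1] // 2], mag[:, mag.shape[1] // 2 :]
        print("flow imbalance (right - left):", right.mean() - left.mean())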

    The Visual Image Interpretation in Humans and Machines (ViiHM) network[1], funded by the Engineering and Physical Sciences Research Council, brings together around 250 researchers to foster the translation of discoveries from biological- to machine-vision systems. This aim is not new. In the early days of machine vision there was a natural crossover between the two fields. The Canny[2] edge detector, for example, computes edges as luminance gradients in a blurred (de-noised) image and then links weaker edge elements to stronger ones. This method has its roots in Marr and Hildreth's[3] model of retinal processing, plus the contour integration mechanisms found in visual cortex.
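    That pipeline survives almost unchanged in modern libraries. As a rough illustration (the image path and thresholds below are placeholders), OpenCV's built-in Canny routine bundles the same smoothing, gradient and hysteresis-linking steps:

        # Sketch of the Canny pipeline with OpenCV; the file name and the two
        # thresholds are illustrative placeholders.
        import cv2

        img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
        blurred = cv2.GaussianBlur(img, (5, 5), 1.4)  # de-noise first, as in Marr-Hildreth
        # Hysteresis linking: gradients above 200 count as strong edges; those
        # between 100 and 200 are kept only where they connect to a strong edge.
        edges = cv2.Canny(blurred, threshold1=100, threshold2=200)
        cv2.imwrite("edges.png", edges)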

    More recent examples of biology-inspired processing include Deep Convolutional Neural Networks (DNNs)[4], which have multiple convolution-based filtering layers separated by non-linear operators and downsampling, building increasingly large-scale and complex filters until finally classifications can be made. This structure is loosely modelled on the multiple feature-detection layers and receptive-field properties of biological vision systems. Alternatively, the SpikeNet[5] recognition system has a similar convolutional structure but more directly models the production of neuron action potentials. The relationship between machine and biological vision is symbiotic: convolution filters developed for machine vision are used to model biological processing, and DNNs have been applied to human behavioural data to characterise the visual system.
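    For readers who have not met this structure, the toy network below (a sketch in PyTorch with arbitrary layer sizes, not any system from the article) shows the convolution, non-linearity and downsampling pattern ending in a classification layer:

        # Toy DCNN sketch in PyTorch: convolution -> non-linearity -> downsampling,
        # repeated, then a final classification layer. All sizes are arbitrary.
        import torch
        import torch.nn as nn

        model = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),   # small-scale feature filters
            nn.ReLU(),                                    # non-linear operator
            nn.MaxPool2d(2),                              # downsample: larger receptive fields
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # larger-scale, more complex filters
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(32 * 7 * 7, 10),                    # class scores, e.g. 10 categories
        )

        x = torch.randn(1, 1, 28, 28)  # one dummy 28x28 grayscale image
        print(model(x).shape)          # torch.Size([1, 10])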

    However, in recent decades the biological and machine vision communities have diverged. Driven by different success criteria – a desire to understand specific visual systems on the one hand and to rapidly build working engineering solutions on the other – the two disciplines have developed different priorities and ways of working. The ideal development cycle, in which observed phenomena are explored in biology, the results modelled computationally, and those models turned into useful applications, can be protracted and requires multiple skill sets. The chain is often broken as academics on the biological vision side rush to publish their findings and get on with the next experiment, while those working in industrial vision rightly employ any and every tool in the quest for better performance. Progress is hindered by language and understanding barriers, with different terminology used even for the most basic concepts.

    To counter this separation, ViiHM has developed a triad of Grand Challenges[6] for intelligent vision where we think success can best be achieved by working together. The overall aim is to produce a general-purpose, embodied, integrated, and adaptive visual system for intelligent robots, mobile and wearable technologies. Within this scope, the Application Challenge is to augment and enhance vision in both the visually impaired and the normally sighted, and to develop cognitive and personal assistants that can help those with low vision, the elderly, or simply the busy executive to deal with everyday tasks. Such aids might extend from wearable technologies that discreetly prompt their user, to fully autonomous robots acting as caregivers and personal assistants. Here it is important that robots think and act like humans while avoiding the ‘uncanny valley’ effect[7] – where people are repulsed by robots that appear almost, but not exactly, like real humans.

    These applications will be underpinned by the Technical Challenge of making low-power, small-footprint vision systems. To be acceptable, intelligent visual systems need to run all day on a single charge and be realised in discreet wearable devices. Such power and space savings can be achieved by learning how biological systems are implemented at the physical as well as the algorithmic layer. Finally, the Theoretical Challenge of general-purpose, integrated and adaptive vision will see visual systems that operate ‘out of the box’ and in the wild, yet continuously adapt to and learn from their environment.

    Learning the behaviours of their users and co-workers, such systems will be robust and flexible. They will fail gracefully and in ways that are acceptable to the humans they co-operate with. They will, for example, be able to identify people and places despite quite gross changes, to safely navigate new and altered environments, and to learn from experience over very long periods of time with fixed and limited memory capacities. These are tough challenges, but biology has shown them to be solvable.

    --
 