
Akida is coming!


    Bob Beachler is at it again! Enlightening the masses about the glory, power and efficiency that can be found in spiking neural networks, especially when implemented in silicon semiconductor chips. Little do they know that their saviour Akida will walk among them soon, both in data servers and edge devices. ;)

    Neuromorphic computing gives AI a real-time boost

    Bob Beachler - December 03, 2018

    Machine learning continues to get ever more capable, though at the expense of ever more compute power and ever more time to learn from ever larger data sets. Of course, if you put machine learning in data centers, those costs are easily afforded. But now there are real-time applications emerging at the network edge that require instantaneous learning from small data sets, using minimal computational resources, and these applications can’t afford the network latency inherent in relying on data centers. A class of machine learning called neuromorphic computing fits that bill, and one neuromorphic approach, the spiking neural network (SNN), is particularly well suited to those demands.

    Before we get to SNNs, a quick review of artificial intelligence (AI) and machine learning technology would be useful. Machine learning describes AI systems that can learn the correct response simply by analyzing input data, without having to be explicitly programmed to perform specific tasks.

    There are a number of different approaches to machine learning, like decision tree learning, inductive logic programming, and association rule learning, but perhaps the most successful and widespread technique is the use of artificial neural networks, or ANNs.

    All neural networks might be considered “artificial” in that they all seek to imitate the neural activity in the brain. The phrase is used to distinguish ANNs from the work of computational neurobiologists. ANNs emulate the way that neurons function in biological systems (particularly the human brain) by creating a network of interconnected artificial neurons. Each artificial neuron features one or more inputs, and produces an output typically based on applying a nonlinear function to the inputs. Training neural networks relies on a technique called backpropagation, in which results are fed back into the system to adjust the neurons’ weights until the network delivers the correct results.
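
    To make the neuron model concrete, here is a minimal sketch in Python of a single artificial neuron as just described: a weighted sum of inputs passed through a nonlinear function. The sigmoid activation and the specific weights are illustrative assumptions, not details from the article.

        import numpy as np

        def neuron(inputs, weights, bias):
            # Weighted sum of the inputs, then a nonlinear activation (sigmoid).
            z = np.dot(weights, inputs) + bias
            return 1.0 / (1.0 + np.exp(-z))  # squashes the output into (0, 1)

        x = np.array([0.5, -1.0, 0.25])  # example inputs
        w = np.array([0.8, 0.2, -0.5])   # illustrative weights
        print(neuron(x, w, bias=0.1))    # a single scalar output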

    ANNs have proven to be very effective at a number of tasks, especially those involving pattern recognition. That includes such applications as computer vision, speech recognition, or medical diagnosis from symptoms or scans.

    Data centers versus the edge
    For the past few decades, neural networks have largely been implemented in software, operating as a model, executed on general-purpose processors. The software emulates the way that each individual neuron functions, as well as the interconnections between them that govern their collective behavior.

    This is fine if you want to run a large-scale neural processing job on data that has been collected and uploaded to one of the major cloud platforms based on data centers full of servers. This arrangement is also perfectly suitable for some of the AI-based capabilities we’re already familiar with that originate at the edge.

    For example, most small-scale portable edge devices, such as smartphones, simply don’t have the compute power or memory space to operate neural nets of the size and complexity that would be required for many tasks. For this reason, applications such as Apple’s Siri virtual assistant typically upload speech to the cloud for processing. The inherent latency in cloud processing is tolerable if all that is being requested is a movie timetable.

    Relying on data centers is not an option for many emerging edge applications, however. Autonomous vehicle (AV) management is a good example of a new application that requires split-second responsiveness and therefore cannot tolerate the latencies involved with shipping data and calculations back to a remote data center.

    Furthermore, even if communication with data centers were instantaneous, traditional neural networks would still be simply too cumbersome and slow for some of the emerging real-life application challenges arising at the network edge. What’s required is a machine learning system that is portable, or at least one that does not require a rack full of servers to function.

    This is where neuromorphic computing methods come in. Neuromorphic computing is not new; the concepts go back to the roots of neural nets. The idea is to more closely simulate the way that biological neurons function.

    Traditional neural nets, such as convolutional neural networks (CNNs) and deep neural networks (DNNs), have evolved into complex structures with many specialized layers, becoming far more complex than anything that exists in nature. The artificial neurons themselves typically have a constant calculated value as output. To incorporate new information, the artificial neuron has to be retrained through backpropagation and other algorithm training tools. In short, the way traditional neural networks work barely resembles the way brains actually work – they’re not neuromorphic.

    The human brain has billions of neurons but, in contrast with the way traditional neural networks operate, those neurons aren’t all exercised at the same time. Neurons that are not currently engaged stand by, always available to learn new information. To reiterate, traditional neural networks rely heavily on backpropagation – feeding results back into the system to fine-tune the results. Backpropagation is indeed similar to a reinforcement process that exists in natural brains, but in contrast to traditional neural networks, learning in brains is by and large a feed-forward process.
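
    The “feeding results back” that backpropagation performs is, concretely, gradient descent on an error signal. Here is a minimal single-neuron sketch; the squared-error loss and learning rate are illustrative assumptions:

        import numpy as np

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        def backprop_step(x, target, w, b, lr=0.1):
            # Forward pass, then feed the output error back into the weights.
            y = sigmoid(np.dot(w, x) + b)
            grad = (y - target) * y * (1.0 - y)  # error signal for squared loss
            w = w - lr * grad * x                # nudge weights toward the target
            b = b - lr * grad
            return w, b

        x = np.array([0.5, -1.0, 0.25])
        w = np.array([0.8, 0.2, -0.5])
        w, b = backprop_step(x, target=1.0, w=w, b=0.1)

    Repeating this step across a large labeled data set is precisely the costly, data-hungry training loop that the article contrasts with the brain’s feed-forward learning.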

    One of the most promising neuromorphic computing approaches uses a newer type of neural model, the SNN, which more closely mimics its biological counterpart because it operates with a largely feed-forward process. In an SNN, neurons communicate through a series of spikes, with information encoded not just in the rate at which spikes fire, but also in their precise timing.
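
    The article doesn’t specify a particular neuron model, but the standard textbook abstraction of a spiking neuron is the leaky integrate-and-fire model, sketched below with illustrative leak and threshold values. Note how the output depends on the timing of the input spikes, not just their count:

        def lif_neuron(spike_times, weight=1.0, leak=0.9, threshold=1.8, steps=20):
            # Leaky integrate-and-fire: the membrane potential accumulates
            # weighted input spikes, decays over time, and fires (then resets)
            # when it crosses the threshold.
            v, output_spikes = 0.0, []
            for t in range(steps):
                v *= leak                    # potential leaks away between events
                if t in spike_times:
                    v += weight              # an input spike adds charge
                if v >= threshold:
                    output_spikes.append(t)  # spike timing carries information
                    v = 0.0                  # reset after firing
            return output_spikes

        print(lif_neuron({3, 4}))   # closely timed spikes -> fires at t=4
        print(lif_neuron({3, 12}))  # same spike count, spread out -> never fires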

    There are some other differences between SNNs and regular neural nets that are highly pertinent to emerging edge applications. One is the speed with which information can be processed, because SNNs operate in an event-driven manner.

    Although SNNs contain more neurons than similarly capable traditional neural networks, they are more sparsely connected, as their primary function is the transmission of binary information (is there a spike: yes or no?), giving them a much higher capacity while being extremely functionally efficient. Because a smaller percentage of neurons in an SNN are required to transmit information, neurons not yet recruited into the network remain available to learn on the go.


    Figure 1: A spiking neural network

    Also, SNNs learn in an unsupervised way and don’t require backpropagation as a training function, unlike traditional ANNs. This has a knock-on effect on power efficiency, because a neuron is only activated when it is triggered by an event, rather than operating in computational synchrony with all the other neurons in the network.
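
    The article doesn’t name the learning rule, but the most widely cited unsupervised rule in the SNN literature is spike-timing-dependent plasticity (STDP): a synapse strengthens when its input spike precedes the output spike (it likely helped cause it) and weakens otherwise, using only locally available spike times. A minimal sketch with illustrative constants:

        import math

        def stdp_update(w, t_pre, t_post, a_plus=0.05, a_minus=0.05, tau=10.0):
            # Local, unsupervised weight update driven purely by spike timing;
            # no error signal is propagated back through the network.
            dt = t_post - t_pre
            if dt > 0:
                w += a_plus * math.exp(-dt / tau)   # pre before post: strengthen
            else:
                w -= a_minus * math.exp(dt / tau)   # post before pre: weaken
            return max(0.0, min(1.0, w))            # clamp the weight to [0, 1]

        w = stdp_update(0.5, t_pre=12, t_post=15)   # causal pairing -> w grows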

    SNNs also have clear advantages over traditional neural networks when it comes to scalability because they use fewer neurons and synaptic connections to achieve the same results as more complex CNNs and DNNs.

    In addition, SNNs don’t require floating point multiply-accumulate operations (MACs), allowing use of comparatively simpler compute architectures. This all makes for a very efficient implementation and easily scaled networks that can learn new things because only a small percentage of neurons and synaptic connections are used for each task.
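
    Because a spike is binary, a synapse’s contribution is simply its weight times one, so the multiply in each MAC disappears and the inner loop reduces to additions. A sketch of that event-driven update, with hypothetical integer weights:

        def event_driven_update(potential, active_inputs, weights, threshold=64):
            # Only inputs that actually spiked do any work: one integer addition
            # each. Silent inputs cost nothing, and there are no floating-point
            # multiplies anywhere in the loop.
            for i in active_inputs:
                potential += weights[i]
            fired = potential >= threshold
            return (0 if fired else potential), fired

        weights = [12, 7, 30, 5]                            # integer synapses
        v, fired = event_driven_update(0, [0, 2], weights)  # v = 42, no spike
        v, fired = event_driven_update(v, [2], weights)     # v = 72 -> fires

    A dense CNN or DNN layer, by contrast, multiplies and accumulates every weight on every cycle, whether or not its inputs have changed.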

    Silicon, not software
    These advantages make SNNs a good candidate for implementation in silicon circuits rather than being emulated in software, enabling the cognitive power of neural nets to operate with the energy efficiency and portability required for use in the field.

    Additionally, since SNNs don’t perform MACs or backpropagation as part of their functions, they can run perfectly adequately on regular CPUs. Because they are computationally lightweight, SNNs gain nothing by being run on more expensive and power-hungry GPUs that also take up extra chip real estate. Due to the relative simplicity of SNN computational functions, it’s possible to customize CPUs to squeeze a lot more capability into the same silicon area than when integrating GPUs.

    In short, since SNNs can be implemented in silicon in an extremely efficient manner, they can provide excellent performance at a fraction of the power and cost of software-based neural network implementations. And since they are more portable and consume less power, they are more attractive for edge applications.

    Shrinking down the power of a neural net onto a single semiconductor chip means that these learning and pattern recognition technologies can be embedded into a wider range of systems in future, from robots to handheld devices such as tablets or even smartphones, opening up a whole new field of potential applications.

    Such neuromorphic processors could lead to a new world of mobile devices and sensors able to operate intelligently and independently, without requiring mains power or a network connection to the cloud to provide their computational capabilities.

    SNNs and cybersecurity
    Not surprisingly, ANNs continue to be applied to new challenges created by the modern world, such as cybersecurity. AI scales, so one algorithm could be replicated to deliver the productivity of many workers – think of it as being able to clone your best employee.

    Why is this important? Because the threat from cyber attacks is also scaling at the same rate. Perhaps the most-used tool in the cyber criminal’s toolbox is the DDoS (distributed denial of service) attack, which is little more than a data hosepipe being pointed at a particular server or service. Now imagine this deluge scaled up and directed at entire corporations, countries, or even continents. The only realistic way to defend against an automated attack is to use an automated defense, and that defense is AI.

    The sheer volume of network traffic is only part of the challenge in this scenario; the other is the fact that the traffic is typically encrypted. However, AI can learn to identify patterns, even in encrypted packets, that could point to malicious or unusual payloads inside the traffic, at line speed. This ‘fight fire with fire’ approach will make cybersecurity a battlefield for AI-empowered systems, fought all day, every day in the near future; and with every packet inspected, the neural networks will learn to defend better. SNNs promise to be appropriate for cybersecurity because they can learn on the fly, a potential advantage in detecting new attack behavior or new attack vectors.

    Neural networks and fintech
    These learning capabilities are also being brought to bear in the technology used within the finance sector and by the financial services industry (aka fintech). If your bank has ever phoned you directly after you have made an impulse purchase – particularly if you were travelling overseas at the time – the chances are it was a fintech application that flagged your behavior as unusual. Increasingly, fintech employs AI to make such decisions faster and with greater reliability. This is just one example of how AI is driving fintech; a technology sector that has investors and venture capitalists in a frenzy.

    Because AI is predominantly about pattern recognition, its benefits can be applied to anything where patterns exist, which includes investments. AI will play a bigger role in the way private and corporate investors operate in the future, from identifying trends in share prices and making recommendations, to applying what it knows about the client’s attitude toward risk. This will go beyond making recommendations for investment; it could also include the actual execution of buy and sell instructions. The ability to identify new behaviors in real time will be valuable here as well.

    Patterns exist in every aspect of the financial institution; not simply in how we manage our money, but also physical patterns that we as individuals exhibit. AI will be used to combat fraud by physically identifying the owner of a credit card before approving a transaction – something that simply isn’t possible with traditional monetary instruments. Similarly, the way we legitimately use our cards will become recognizable and, as a result, predictable. Our buying habits will be analyzed and used to present us with buying opportunities that fit our profile, a principle already evident when using online shopping platforms. This will increase as a result of AI-empowered fintech.

    Bob Beachler is SVP of Marketing and Business Development at BrainChip.

    Mr. Beachler is a Silicon Valley veteran with over 30 years of success in developing and marketing cutting-edge technologies. His background includes more than 16 years of experience in a variety of engineering and marketing roles at Altera Corporation, a leading provider of field-programmable gate array (FPGA) products, which was acquired by Intel Corporation in 2015 for over $16 billion. He also served as Vice-President of Marketing, Operations and Systems Design at Stretch Inc., a provider of embedded video processing solutions, until its acquisition by Exar Corporation in 2014. While at Exar, Mr. Beachler was appointed Vice-President of Corporate Marketing and Business Development. Most recently, he served at Xilinx Corporation, the leading worldwide independent provider of FPGA products, where he led the marketing of imaging, video and machine learning solutions for Xilinx’s industrial, scientific and medical markets. Mr. Beachler holds a B.Sc. in electrical engineering from Ohio State University.


    https://www.edn.com/5G/4461349/Neuromorphic-computing-gives-AI-a-real-time-boost
    Last edited by glutenfree: 05/12/18
 