BRN 5.66% 25.0¢ brainchip holdings ltd

New BRN, neuromorphic media article, page-3

    https://www.forbes.com/sites/tomcoughlin/2019/11/23/content-delivery-cache-and-neural-network-memory/#cef022b3686e

    Akida mention...
    Nov 23, 2019, 03:09pm

    Content Delivery Cache And Neural Network Memory


    We have written in the past about the uses of memory and storage in data movement and in AI applications. This piece will talk about digital distribution technology and the role of content caching from Mo-DV, as well as some neural network-based AI systems from Supermicro and an upcoming edge-solution neural network chip from startup Brainchip.

    Although many people enjoy streaming content, there are many folks who download and view content off-line, e.g. when they are travelling. This is what Mo-DV's Mo2Go SpeedSpots and SLAN are about.

    Mo2Go SpeedSpots are meant to provide fast mobile video delivery without the need for an Internet connection. They offer an alternative to traditional Internet video streaming and downloading to mobile devices, combining secure Digital Rights Management with fast Wi-Fi delivery of high-definition video content stored in the SpeedSpot itself, even in places where there is inadequate or no Internet access. The storage capacity of these SpeedSpots is typically 1-2 TB.

    The Mo2Go SpeedSpots are low-cost, physical content servers placed in high-traffic areas, such as airports, shopping centers, and train stations. These SpeedSpots connect automatically with any mobile device within range that is running the Mo2Go app and, in seconds, transfer secure video content for streaming and downloading to that device.


    [Image: Mo2Go Content Delivery Solution, from a Mo-DV presentation deck]

    SLAN is an entirely new technology from Mo-DV, in early development, that connects existing local business and home Wi-Fi access points into a mesh network with a local content storage cache and proxy responses for content. It acts like a local content delivery network, using resources, such as storage, in a local mesh network. The company believes that these neighborhood networks can reduce bandwidth demand on ISPs, reduce latency, and allow pre-loading of content. The result is a better viewing experience for users and lower costs for ISPs and content providers.

    The company says that SLAN is inexpensive to deploy, as it only requires replacing or modifying existing Wi-Fi access points (usually paid for by the users). Applications for SLAN are many, including bandwidth reduction for ISPs; lower latency for video conferencing, video telephony, and gaming; pre-loading content during off-peak hours; and performing as a Mo2Go SpeedSpot.
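    To make the cache-then-proxy idea concrete, here is a minimal sketch (not Mo-DV code; all class and function names are hypothetical) of how a SLAN-style access point might resolve a content request: local cache first, then neighboring mesh nodes, and only then the ISP, caching the result for future requests.

```python
# Illustrative sketch only -- how a SLAN-style mesh node might serve content.
# Not based on Mo-DV's actual implementation; names are hypothetical.

class SlanNode:
    def __init__(self, name):
        self.name = name
        self.cache = {}      # content_id -> content bytes held locally
        self.neighbors = []  # other SlanNode access points in the mesh

    def preload(self, content_id, data):
        """Pre-load content into the local cache, e.g. during off-peak hours."""
        self.cache[content_id] = data

    def fetch(self, content_id, fetch_from_isp):
        # 1. Local cache hit: no upstream bandwidth used at all.
        if content_id in self.cache:
            return self.cache[content_id], "local"
        # 2. Proxy response from a neighboring access point in the mesh.
        for peer in self.neighbors:
            if content_id in peer.cache:
                self.cache[content_id] = peer.cache[content_id]
                return self.cache[content_id], "mesh"
        # 3. Fall back to the ISP, then cache for later requests.
        data = fetch_from_isp(content_id)
        self.cache[content_id] = data
        return data, "isp"

home = SlanNode("home-ap")
cafe = SlanNode("cafe-ap")
home.neighbors.append(cafe)
cafe.preload("movie-42", b"...video bytes...")

data, source = home.fetch("movie-42", fetch_from_isp=lambda cid: b"from-isp")
print(source)  # -> mesh (served from a neighbor, not the ISP)
```

    The second time the same device asks for "movie-42", the home access point answers from its own cache, which is the bandwidth-reduction effect the article describes.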

    Separately, Supermicro announced several large-scale distributed training AI systems built using Intel Nervana Neural Network Processor for Training (NNP-T) ASICs.

    [Image: Supermicro AI systems with Intel NNP-T, from the Supermicro announcement]

    The company said that the Intel Nervana NNP-T helps solve memory constraints and is designed to scale out across systems and racks more easily than today's solutions. As part of the validation process, Supermicro integrated eight NNP-T processors and dual 2nd Generation Intel Xeon Scalable processors with up to 6 TB of DDR4 memory per node, supporting both PCIe card and OAM form factors. Supermicro NNP-T systems are expected to be available mid-year 2020.

    Brainchip is going to introduce its Akida Neuron Fabric, using 80 neural processing units (NPUs) with 8 MB of SRAM, initially manufactured by foundry TSMC for AI edge applications. The layout of the Akida chip is shown below.

    [Image: Brainchip Akida AI edge neural SoC chip layout, from a Brainchip presentation]

    The company said that it is looking at the possibility of using fast, emerging non-volatile memories in future versions of the chip in order to reduce power consumption in energy-constrained applications. Layer computations are done on allocated NPUs, and all NPUs run in parallel in the fabric. All intermediate results are stored in on-chip memory, eliminating the additional overhead of off-chip memory access. The device also includes interfaces for external memory.
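    The power advantage of keeping intermediate results on chip only holds while those results fit in the 8 MB SRAM budget. As a rough back-of-the-envelope sketch (the layer shapes here are hypothetical, not from Brainchip), one can check whether each layer's output feature map fits on chip:

```python
# Illustrative sketch only -- not BrainChip's actual allocation scheme.
# Checks whether each layer's intermediate activations fit in an 8 MB
# on-chip SRAM budget, which is what avoids off-chip memory traffic.

SRAM_BYTES = 8 * 1024 * 1024  # 8 MB of on-chip SRAM

def activation_bytes(shape, bytes_per_element=1):
    """Size of one layer's output feature map (1 byte/element for a
    low-precision quantized network)."""
    n = 1
    for dim in shape:
        n *= dim
    return n * bytes_per_element

# Hypothetical small CNN: (height, width, channels) of each layer's output.
layer_outputs = [
    (112, 112, 16),
    (56, 56, 32),
    (28, 28, 64),
    (14, 14, 128),
]

for i, shape in enumerate(layer_outputs):
    size = activation_bytes(shape)
    status = "fits on chip" if size <= SRAM_BYTES else "spills off chip"
    print(f"layer {i}: {size / 1024:.1f} KiB -> {status}")
```

    For a network whose largest intermediate exceeds the budget, the external memory interfaces the article mentions would come into play, at a power cost.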

    Local memory and storage can play an important role in both of these applications. Locally cached content allows content delivery without burdening Internet networks, while on-chip memory speeds up AI applications and reduces power consumption. Using fast non-volatile memory would provide even lower power consumption.
