https://www.forbes.com/sites/tomcoughlin/2019/11/23/content-delivery-cache-and-neural-network-memory/#cef022b3686e
Akida mention...

Content Delivery Cache And Neural Network Memory
Nov 23, 2019, 03:09pm | Enterprise Tech | Tom Coughlin, Contributor

We have written in the past about the uses of memory and storage in data movement and in AI applications. This piece discusses digital distribution technology and the role of content caching from Mo-DV, as well as some neural network-based AI systems from Supermicro and an upcoming edge-solution neural network chip from startup Brainchip.
Although many people enjoy streaming content, many others download and view content offline, for example when travelling. This is what Mo-DV's Mo2Go SpeedSpots and SLAN are about.
Mo2Go SpeedSpots are meant to provide fast mobile video delivery without the need for an Internet connection. They offer an alternative to traditional Internet video streaming and downloading to mobile devices, combining secure Digital Rights Management with fast Wi-Fi delivery of high-definition video content stored in the SpeedSpot, even in places with inadequate or no Internet access. The storage capacity of these SpeedSpots is typically 1-2 TB.
The Mo2Go SpeedSpots are low-cost, physical content servers placed in high-traffic areas, such as airports, shopping centers, and train stations. These SpeedSpots connect automatically with any mobile device within range that is running the Mo2Go app and, in seconds, transfer secure video content for streaming and downloading to that device.
Mo2Go Content Delivery Solution (image from a Mo-DV presentation deck)

SLAN is an entirely new technology from Mo-DV, in early development, that connects existing local business and home Wi-Fi access points into a mesh network with a local content storage cache and proxy responses for content. It acts like a local content delivery network, using resources, such as storage, in a local mesh network. The company believes these neighborhood networks can reduce bandwidth demand on ISPs, reduce latency, and allow pre-loading of content. The result is a better viewing experience for users and lower costs for ISPs and content providers.
The company says that SLAN is inexpensive to deploy, as it only requires replacing or modifying existing Wi-Fi access points (usually paid for by the users). Applications for SLAN are many, including bandwidth reduction for ISPs; lower latency for video conferencing, video telephony, and gaming; pre-loading content during off-peak hours; and performing as a Mo2Go SpeedSpot.
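The cache-and-proxy behavior SLAN describes can be sketched in a few lines. This is my own illustrative sketch, not Mo-DV code: the `CacheNode` class and `fetch_upstream` callback are hypothetical names, and the real system would add DRM, mesh routing, and eviction policies.

```python
# Minimal sketch of a SLAN-style caching node (illustrative only, not Mo-DV's
# implementation): answer content requests from the local cache when possible,
# and only go upstream (the ISP path) on a miss.

class CacheNode:
    def __init__(self, fetch_upstream):
        self.cache = {}                       # content_id -> bytes; the local storage cache
        self.fetch_upstream = fetch_upstream  # hypothetical callback to the origin server

    def get(self, content_id):
        if content_id in self.cache:
            # proxy response: served locally, no ISP bandwidth consumed
            return self.cache[content_id], "local"
        data = self.fetch_upstream(content_id)  # cache miss: one upstream fetch
        self.cache[content_id] = data           # now cached for the neighborhood
        return data, "upstream"

    def preload(self, content_id, data):
        # off-peak pre-loading of content, as the article mentions
        self.cache[content_id] = data


node = CacheNode(lambda cid: b"video-" + cid.encode())
_, first = node.get("ep1")    # -> "upstream": fetched once from the origin
_, second = node.get("ep1")   # -> "local": every later request stays in the neighborhood
```

The bandwidth saving comes from the second call onward: once any node in the mesh has fetched an item, neighbors can be served without touching the ISP link.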
Separately, Supermicro announced several large-scale distributed-training AI systems built using Intel Nervana Neural Network Processor for Training (NNP-T) ASICs.
Supermicro AI Systems with Intel NNP-T (images from the Supermicro announcement)

The company said that the Intel Nervana NNP-T helps solve memory constraints and is designed to scale out across systems and racks more easily than today's solutions. As part of the validation process, Supermicro integrated eight NNP-T processors with dual 2nd Generation Intel Xeon Scalable processors and up to 6 TB of DDR4 memory per node, supporting both PCIe card and OAM form factors. Supermicro NNP-T systems are expected to be available in mid-2020.
Brainchip is going to introduce its Akida Neuron Fabric, which uses 80 neural processing units (NPUs) with 8 MB of SRAM and will initially be manufactured by foundry TSMC for AI edge applications. The layout of the Akida chip is shown below.
Brainchip Akida AI Edge Solution Neural SoC Chip (image from a Brainchip presentation)

The company said that it is looking at the possibility of using fast emerging non-volatile memories in future versions of the chip in order to reduce power consumption in energy-constrained applications. Layer computations are done on allocated NPUs, and all NPUs run in parallel in the fabric. All intermediate results are stored in on-chip memory, eliminating the additional overhead of off-chip memory access. The device also includes interfaces for external memory.
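The dataflow idea the article describes, keeping every layer's intermediate results in on-chip SRAM rather than spilling to external DRAM, can be shown with a toy model. This is my own assumption-laden sketch, not Brainchip's implementation: the buffer dictionary stands in for on-chip SRAM, and each "layer" is reduced to a trivial elementwise multiply.

```python
# Toy illustration (not Brainchip's design) of keeping intermediate
# activations on-chip between layers: each layer reads the activation
# buffer, computes, and writes the result back to the same on-chip buffer,
# so no off-chip memory access happens between layers.

ON_CHIP_SRAM = {}   # stands in for the chip's 8 MB of SRAM


def run_layer(weights, activations):
    # Stand-in for the work the allocated NPUs do for one layer
    # (a trivial elementwise multiply here).
    return [w * a for w, a in zip(weights, activations)]


def run_network(layers, input_activations):
    ON_CHIP_SRAM["act"] = input_activations        # load inputs on-chip once
    for weights in layers:
        # read and write only the on-chip buffer: no off-chip traffic
        ON_CHIP_SRAM["act"] = run_layer(weights, ON_CHIP_SRAM["act"])
    return ON_CHIP_SRAM["act"]                     # final result read out once


result = run_network([[2, 2], [3, 3]], [1, 1])     # -> [6, 6]
```

The power saving follows from the memory hierarchy: DRAM accesses cost far more energy than SRAM accesses, so a network whose intermediates fit on-chip only pays the off-chip cost for inputs and outputs.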
Local memory and storage can play an important role in both content delivery and AI. Locally cached content allows delivery without burdening Internet networks, and on-chip memory speeds up AI applications while reducing power consumption. Using fast non-volatile memory will provide even lower power consumption.