Moving into commercialisation and then production
Weebit is currently targeting two key segments:
- Embedded – replacing existing embedded flash (eFlash) technologies, a market estimated to be worth US$23bn. Weebit is targeting a first demonstration to a customer by the end of Q1 2021, with first customer agreements targeted within 3-4 months of that demonstration. Weebit is addressing this market with the Korean customer. For the embedded market, the company largely has what it needs: once the module work is completed, it will start working on commercial deals.
- External (discrete) – a single chip replacing NOR and NAND flash technologies, a market estimated to be worth US$60bn. Weebit is targeting an integrated ReRAM + selector demonstration by Q3 2021. This is a more advanced market and is where the work with XTX sits. XTX's verification and analysis of Weebit's technology also involved a deep analysis of what is still needed to make the technology work for discrete components. The discrete market splits into NOR flash and NAND flash; NOR would be addressed first.
Although Weebit is primarily focused on productising its technology and reaching first revenues, it is also well placed to take advantage of future markets such as in-memory processing. Weebit is not currently investing time and effort in these areas itself, but it does have partners who have been sent packaged chips and are doing work on them. So while the near-term focus is first revenues, future applications of the technology are not being neglected, especially given the size of the opportunities they represent.
Evolving tech landscape and uses of the next generation of memory
The way a new generation of memory takes hold in the market closely mirrors the phases that Weebit has planned:
- Gaining a foothold by targeting technologies with shrinking markets, e.g. EEPROM, NOR flash and SRAM. These technologies will not move to smaller process geometries because companies will not spend the money to develop the processes that would allow them to do so. This makes NOR flash and embedded memory in System on Chip (SoC) designs ripe areas for new memory technologies, and they are likely where new memories will first gain a decent foothold.
- Opening up new markets through the new capabilities that the next generation of memory enables, e.g. avoiding the Von Neumann bottleneck (discussed below).
- Increasing market share and taking share away from the dominant memory technologies, DRAM and NAND flash, which will continue to reduce cost in line with Moore's law. New memories, however, can move faster than Moore's law: they start on mature processes that are inexpensive to develop on and later migrate to more advanced nodes, and their simpler structures allow them to be developed and manufactured more quickly. The crossover point at which new memories cost the same as flash has been anticipated for over two decades but has been continually postponed. Moving to 3D has extended flash's scaling limit, which means flash will likely only be displaced once a new memory technology gains a considerable portion of the market, because economies of scale lead to significant price advantages.
The Von Neumann bottleneck
Computer architecture has not fundamentally changed since Von Neumann introduced his concept of how to perform computing in 1945. The advances that have allowed the computing industry to largely keep pace with Moore's law have focused on performance and scaling, with far less attention paid to the fact that power consumption has kept increasing with each new technology generation. This has led to today's memory technologies requiring a lot of power.
Artificial intelligence, big data analysis and edge computing all place requirements on the Von Neumann architecture that it is not efficient at meeting. In a Von Neumann system, data is stored in memory but processed in a separate processing unit, and moving data between the two incurs energy costs and delays several orders of magnitude greater than those of the computation itself. For a data-intensive application, this data transfer severely limits both performance and energy efficiency.
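To make that imbalance concrete, here is a minimal Python sketch that estimates how energy splits between data movement and computation for a simple streaming workload. The per-event energy figures are illustrative ballpark assumptions of my own (not measured values), chosen only to show the order-of-magnitude gap.

```python
# Rough, illustrative estimate of where energy goes in a data-intensive
# workload on a Von Neumann machine. The per-event energy figures are
# ballpark assumptions for illustration only, not measured values.

DRAM_ACCESS_PJ = 2000.0   # assumed energy to fetch one 64-bit word from DRAM (picojoules)
ALU_OP_PJ = 1.0           # assumed energy for one 64-bit arithmetic operation (picojoules)

def energy_breakdown(num_elements, ops_per_element=1):
    """Estimate data-movement vs compute energy for streaming num_elements
    64-bit values from DRAM and doing ops_per_element operations on each."""
    data_pj = num_elements * DRAM_ACCESS_PJ
    compute_pj = num_elements * ops_per_element * ALU_OP_PJ
    return data_pj, compute_pj

if __name__ == "__main__":
    # Example: sum 100 million 64-bit values (one add per element).
    data_pj, compute_pj = energy_breakdown(100_000_000)
    total_pj = data_pj + compute_pj
    print(f"Data movement: {data_pj / 1e12:.4f} J "
          f"({100 * data_pj / total_pj:.2f}% of total)")
    print(f"Computation:   {compute_pj / 1e12:.4f} J "
          f"({100 * compute_pj / total_pj:.2f}% of total)")
```

Even if these assumed figures are off by an order of magnitude in either direction, data movement still dominates the total, which is exactly the gap that in-memory processing and new memory technologies aim to close.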
For more information you can look at this post, which has links to the research I did earlier in the year. As always, DYOR as some information could be incorrect or out of date.