
2020 BRN Discussion, page-19558

    In Peter van der Made's recent presentation to the Brain Inspired Conference covering the applications for AKIDA technology, he said two things that really stood out for me personally. The first was that in all their testing and benchmarking of AKD1000 across its various use cases they had used only "100 microwatts to 150 milliwatts of power". The second was that 'NASA has signed up with us for space applications'. These two statements are unequivocal, so I would strongly suggest reading the following quite long article (the best I have found to date); it makes crystal clear why NASA has signed up. Take particular note, under 'Further Reading', of what the author states regarding the cooling of chips in space:

    CAN'T JUST USE AN IPHONE —

    Space-grade CPUs: How do you send more computing power into space?

    Figuring out radiation was a huge "turning point in the history of space electronics."

    Phobos-Grunt, perhaps the most ambitious deep space mission ever attempted by Russia, crashed down into the ocean at the beginning of 2012. The spacecraft was supposed to land on the battered Martian moon Phobos, gather soil samples, and get them back to Earth. Instead, it ended up helplessly drifting in Low Earth Orbit (LEO) for a few weeks because its onboard computer crashed just before it could fire the engines to send the spacecraft on its way to Mars.

    FURTHER READING

    Russia's Phobos-Grunt Mars probe stranded in Earth orbit

    In the ensuing report, Russian authorities blamed heavy charged particles in galactic cosmic rays that hit the SRAM chips and led to a latch-up, a chip failure resulting from excessive current passing through. To deal with this latch-up, two processors working in the Phobos-Grunt’s TsVM22 computer initiated a reboot. After rebooting, the probe then went into a safe mode and awaited instructions from ground control. Unfortunately, those instructions never arrived.




    Antennas meant for communications were supposed to become fully operational in the cruise stage of Phobos-Grunt, after the spacecraft left LEO. But nobody planned for a failure preventing the probe from reaching that stage. After the particle strike, the Phobos-Grunt ended up in a peculiar stalemate. Firing the on-board engines was supposed to trigger the deployment of the antennas. At the same time, the engines could only be fired with a command issued from ground control. This command, however, could not get through, because the antennas were not deployed. In this way, a computer error killed a mission that was several decades in the making. It happened, in part, because of some oversights from the team at NPO Lavochkin, a primary developer of the Phobos-Grunt probe. During development, in short, it was easier to count the things that worked in their computer than to count the things that didn't. Every little mistake they made became a grave reminder that designing space-grade computers is bloody hard. One misstep and billions of dollars go down in flames; the team involved had simply grossly underestimated the challenge.

    Why so slow?

    Curiosity, everyone's favorite Mars rover, works with two BAE RAD750 processors clocked at up to 200MHz. It has 256MB of RAM and 2GB of SSD. As we near 2020, the RAD750 stands as the current state-of-the-art, single-core space-grade processor. It's the best we can send on deep space missions today.


    Compared to any smartphone we carry in our pockets, unfortunately, the RAD750's performance is simply pathetic. The design is based on the PowerPC 750, a processor that IBM and Motorola introduced in late 1997 to compete with Intel's Pentium II. This means that perhaps the most technologically advanced space hardware up there is totally capable of running the original StarCraft (the one released in 1998, mind you) without hiccups, but anything more computationally demanding would prove problematic. You can forget about playing Crysis on Mars.

    Still, the price tag on the RAD750 is around $200k. Why not just throw an iPhone in there and call it a day? Performance-wise, iPhones are entire generations ahead of RAD750s and cost just $1k apiece, which remains much less than $200k. In retrospect, this is roughly what the Phobos-Grunt team tried to accomplish. They tried to boost performance and cut costs, but they ended up cutting corners.

    The SRAM chip in the Phobos-Grunt that was hit by a heavy charged particle went under the name WS512K32V20G24M. It was well known in the space industry because back in 2005, T.E. Page and J.M. Benedetto had tested those chips in a particle accelerator at the Brookhaven National Laboratory to see how they performed when exposed to radiation. The researchers described the chips as "extremely" vulnerable, and single-event latch-ups occurred even at the minimum heavy-ion linear energy transfer available at Brookhaven. This was not a surprising result, mind you, because the WS512K32V20G24M chips were never meant nor tested for space. They had been designed for aircraft, military-grade aircraft for that matter. But still, they were easier to obtain and cheaper than real space-grade memories, so the Russians involved with Phobos-Grunt went for them regardless.

    "The discovery of the various kinds ofradiation present in the space environment was among the most important turningpoints in the history of space electronics, along with the understanding of howthis radiation affects electronics, and the development of hardening andmitigation techniques,” says Dr. Tyler Lovelly, a researcher at the US AirForce Research Laboratory. Main sources of this radiation are cosmic rays,solar particle events, and belts of protons and electrons circling at the edgeof the Earth’s magnetic field known as Van Allen belts. Particles hitting theEarth’s atmosphere are composed of roughly 89% protons, 9% alpha particles, 1%heavier nuclei, and 1% solitary electrons. They can reach energies up to 10^19eV. Using the chips not qualified for space in a probe that intended to travelthrough deep space for several years was asking for a disaster to happen. Infact, Krasnaya Zvezda, a Russian military newspaper, reported at thattime that 62% of themicrochips used on the Phobos-Grunt were not qualified for spaceflight.The probe design was 62% driven by a "let’s throw in an iPhone"mindset.

    Advertisement

    Radiation becomes a thing

    Today, radiation is one of the key factors designers take into account when building space-grade computers. But it has not always been that way. The first computer reached space onboard a Gemini spacecraft back in the 1960s. The machine had to undergo more than a hundred different tests to get flight clearance. Engineers checked how it performed when exposed to vibrations, vacuum, extreme temperatures, and so on. But none of those tests covered radiation exposure. Still, the Gemini onboard computer managed to work just fine—no issues whatsoever. That was the case because the Gemini onboard computer was too big to fail. Literally. Its whopping 19.5KB of memory was housed in a 700-cubic-inch box weighing 26 pounds. The whole computer weighed 58.98 pounds.


    Generally for computing, pushing processor technology forward has always been done primarily by reducing feature sizes and increasing clock rates. We just made transistors smaller and smaller, moving from 240nm, to 65nm, to 14nm, down to the 7nm designs we have in modern smartphones. The smaller the transistor, the lower the voltage necessary to turn it on and off. That's why older processors with larger feature sizes were mostly unaffected by radiation—or, unaffected by so-called single event upsets (SEUs), to be specific. The voltage created by particle strikes was too low to really affect the operation of those large-featured chips. But when space-facing humans moved down in feature size to pack more transistors onto a chip, those particle-generated voltages became more than enough to cause trouble.

    Another thing engineers and developers typically do to improve CPUs is to clock them higher. The Intel 386SX that ran the so-called "glass cockpit" in the Space Shuttles was clocked at roughly 20MHz. Modern processors can go as high as 5GHz in short bursts. A clock rate determines how many processing cycles a processor goes through in a given time. The problem with radiation is that a particle strike can corrupt data stored in on-CPU memory (like the L1 or L2 cache) only during an extremely brief moment in time called a latching window. This means that in every second, there is a limited number of opportunities for a charged particle to do damage. In low-clocked processors like the 386SX, this number was relatively low. But as clock speeds got higher, the number of latching windows per second increased as well, making processors more vulnerable to radiation. This is why radiation-hardened processors are almost always clocked way lower than their commercial counterparts. The main reason space CPUs develop at such a sluggish pace is that pretty much every conceivable way to make them faster also makes them more fragile.
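    As a rough illustration of the clock-rate argument above, here is a minimal back-of-the-envelope sketch (my own, not from the article) that assumes the number of latching windows per second simply scales with clock frequency and that each window has some fixed probability of being struck. The per-window probability and the windows-per-cycle count are made-up placeholders, used only to show how the exposure grows with clock rate.

        # Hedged sketch: vulnerability scaling with clock rate, assuming the number
        # of latching windows per second is proportional to clock frequency.
        # The per-window strike probability below is a made-up placeholder.

        def upsets_per_day(clock_hz, windows_per_cycle=1, p_strike_per_window=1e-15):
            """Very rough expected upsets per day under the stated assumptions."""
            windows_per_second = clock_hz * windows_per_cycle
            return windows_per_second * p_strike_per_window * 86_400  # seconds per day

        legacy = upsets_per_day(20e6)   # 386SX-class part, ~20 MHz
        modern = upsets_per_day(5e9)    # modern burst clock, ~5 GHz
        print(f"20 MHz : {legacy:.4f} expected upsets/day")
        print(f"5 GHz  : {modern:.4f} expected upsets/day")
        print(f"ratio  : {modern / legacy:.0f}x more latching windows per second")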

    Fortunately, there are ways around this issue.


    Dealing with radiation

    "In the old days, radiation effects were oftenmitigated by modifications implemented in the semiconductor process,” saysRoland Weigand, a VISI/ASIC engineer at the European Space Agency. "It wassufficient to take a commercially available information processing core andimplement it on a radiation hardened process.” Known as radiation hardening byprocess, this technique relied on using materials like sapphire or galliumarsenide that were less susceptible to radiation than silicon in thefabrication of microprocessors. Thus, manufactured processors worked very wellin radiation-heavy environments like space, but they required an entire foundryto be retooled just to make them.

    "To increase performance we had to use moreand more advanced processors. Considering the cost of a modern semiconductorfactory, custom modifications in the manufacturing process ceased to befeasible for such a niche market as space,” Weigand says. According to him,this trend eventually forced engineers to use commercial processors prone tosingle-event effects. "And to mitigate this, we had to move to alternativeradiation-hardening techniques, especially the one we call radiation hardeningby design,” Weigand adds.

    The RHBD (radiation hardening by design) approach enabled manufacturers to use a standard CMOS (complementary metal–oxide–semiconductor) fabrication process. This way, space-grade processors could be manufactured in commercial foundries, bringing the prices down to a manageable level and enabling space mission designers to catch up a little with commercially available hardware. Radiation was dealt with by engineering ingenuity rather than the sheer physics of the material. "For example, Triple Modular Redundancy is one of the most popular ways to achieve increased radiation resistance of an otherwise standard chip," Weigand explained. "Three identical copies of every single bit of information are stored in the memory at all times. In the reading stage, all three copies are read and the correct one is chosen by majority voting."

    With this approach, if all three copies are identical, the bit under examination is declared correct. The same is true when just two copies are identical but a third is different; the majority vote decides which bit value is the correct one. When all three copies are different, the system registers an error. The whole idea behind TMR is that the copies are stored at different addresses in memory, placed at different spots on the chip. To corrupt the data, two particles would have to simultaneously strike exactly where two copies of the same bit are stored, which is extremely unlikely. The downside to TMR, though, is that this approach carries a lot of overhead. A processor has to go through every operation thrice, which means it can only reach one-third of its performance.
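    To make the voting logic concrete, here is a minimal Python sketch of a TMR-style read with a bitwise majority vote across three stored copies of a word. It is only an illustration of the idea described above, not ESA's or any vendor's actual implementation; the function names and the 8-bit word width are assumptions.

        # Minimal TMR sketch: store three copies of a word, read back with a
        # bitwise majority vote so a single corrupted copy is outvoted.

        def tmr_write(value):
            """Store three identical copies (in real hardware: separate memory locations)."""
            return [value, value, value]

        def tmr_read(copies):
            """Bitwise majority vote: each result bit is 1 if at least two copies agree on 1."""
            a, b, c = copies
            voted = (a & b) | (a & c) | (b & c)
            error = not (a == b == c)   # at least one copy disagrees -> flag it for scrubbing
            return voted, error

        stored = tmr_write(0b1011_0010)
        stored[1] ^= 0b0100_0000        # simulate an SEU flipping one bit in one copy
        value, had_error = tmr_read(stored)
        print(f"voted value: {value:08b}, upset detected: {had_error}")  # 10110010, True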


    Thus, the latest idea in the field is to get space-grade processors even closer to their commercially available counterparts. Instead of designing an entire system-on-chip with radiation-hard components, engineers choose where radiation hardness is really necessary and where it can safely be dispensed with. That's a significant shift in design priorities. Space-grade processors of old were built to be immune to radiation. Modern processors are not immune anymore, but they are designed to automatically deal with all the kinds of errors radiation may cause.

    The LEON GR740, for example, is the latest European space-grade processor. It's estimated to experience a staggering nine SEUs a day in geostationary Earth orbit. The trick is that all those SEUs are mitigated by the system and do not lead to functional errors. The GR740 is built to experience one functional error every 300 or so years. And even if that happens, it can recover just by rebooting.


    Europe goes open source

    The LEON line of space-grade processors, built on the SPARC architecture, is by far the most popular choice for space in Europe today. "Back in the 1990s, when the SPARC specification was chosen, it had significant industry penetration," says Weigand. "Sun Microsystems was using SPARC in its successful workstations." According to him, the key reasons for going with SPARC were existing software support and openness. "An open source architecture meant anybody could use it without licensing issues. That was particularly important since in such a niche market as space, the license fee is distributed among a very limited number of devices, which can increase their prices dramatically," he explains.

    Ultimately, ESA learned about the licensing issues the hard way. The first European space-grade SPARC processor—the ERC32, which is still in use today—used commercial processor cores. It was based on an open source architecture, but the processor design was proprietary. "This led to problems. With proprietary designs you usually don't have access to the source code, and thus making the custom modifications necessary to achieve radiation hardening is difficult," says Weigand. That's why, in the next step, ESA started working on its own processor, named LEON. "The design was fully under our control, so we were finally free to introduce all the RHBD techniques we wanted."

    The latest development in the LEON line is the quad-core GR740, clocked at roughly 250MHz. ("We're expecting to ship the first flight parts towards the end of 2019," Weigand says.) The GR740 is fabricated in a 65nm process. The device is a system-on-chip designed for high-performance, general-purpose computing based on the SPARC32 instruction set architecture. "The goal in building the GR740 was to achieve higher performance and the capability to have additional devices included in one integrated circuit, while keeping the whole system compatible with previous generations of European space-grade processors," says Weigand. Another feature of the GR740 is advanced fault tolerance. The processor can experience a significant number of errors caused by radiation and still ensure uninterrupted software execution. Each block and function of the GR740 has been optimized for the best possible performance. This meant that components sensitive to single event upsets were used alongside ones that could withstand them easily. All SEU-sensitive parts have been implemented with a scheme designed to mitigate possible errors through redundancy.


    For example, some flip-flops (basic processor components that can store either 1s or 0s) in the GR740 are off-the-shelf commercial parts known as CORELIB FFs. The choice to use them was made because they took less space on the chip and thus increased its computational density. The downside was that they were susceptible to SEUs, but this vulnerability has been dealt with by a Block TMR correction scheme in which every bit read from those flip-flops is voted on by modules arranged with adequate spacing among them to prevent multiple-bit upsets (scenarios where one particle can flip multiple bits at once). Similar mitigation schemes are implemented for the L1 and L2 cache memories composed of SRAM cells, which are also generally SEU-sensitive. When the penalty such schemes inflicted on performance was eventually considered too high, ESA engineers went for SEU-hardened SKYROB flip-flops. Those, however, took twice the area of the CORELIBs. Ultimately, when thinking about space and computing power, there is always some kind of trade-off to make.

    So far, the GR740 has passed several radiation tests with flying colors. The chip has been bombarded with heavy ions with a linear energy transfer (LET) reaching 125 MeV.cm^2/mg and worked through all of this without hiccups. To put that in perspective, the feral SRAM chips that most likely brought down the Phobos-Grunt latched up when hit with heavy ions of just 0.375 MeV.cm^2/mg. The GR740 withstood levels of radiation over 300 times higher than what the Russians had put in their probe. Besides near-immunity to single-event effects, the GR740 is specced to take up to 300 krad(Si) of radiation in its lifetime. In the testing phase, Weigand's team even had one of the processors irradiated to 292 krad(Si). Despite that, the chip worked as usual, with no signs of degradation whatsoever.

    Still, specific tests to check the actual total ionizing dose the GR740 can take are yet to come. All those numbers combined mean that a processor working in geostationary Earth orbit should experience one functional error every 350 years. In LEO, this figure should be around 1,310 years. And even those errors wouldn't kill the GR740. It would just need to do a reset.


    America goes proprietary

    "Space-grade CPUs developed in the US havetraditionally been based on proprietary processor architectures such as PowerPCbecause people had more extensive experience working with them and they werewidely supported in software,” says the Air Force Research Labs’ Lovelly. Afterall, the history of space computation began with digital processors deliveredby IBM for the Gemini mission back in the 1960s. And the technology IBM workedwith was proprietary.

    To this day, BAE's RAD processors are based on the PowerPC, which was brought to life by a consortium of IBM, Apple, and Motorola. The processors powering the glass cockpits in the Space Shuttles and the Hubble Space Telescope were built on the x86 architecture introduced by Intel. Both PowerPC and x86 were proprietary. So, carrying on with that tradition, the latest American design in this field is proprietary too. It is named the High Performance Spaceflight Computing (HPSC) chip; the difference is that while PowerPC and x86 were best known from desktop computers, the HPSC is based on the ARM architecture that today powers most smartphones and tablets.

    The HPSC has been designed by NASA, the Air Force Research Laboratory, and Boeing, which is responsible for manufacturing the chips. The HPSC is based on ARM Cortex-A53 quad-core processors. It will have two such processors connected by an AMBA bus, which makes it an octa-core system. This should place its performance somewhere in the range of mid-market 2018 smartphones like the Samsung Galaxy J8 or development boards like the HiKey LeMaker or Raspberry Pi. (That's before radiation hardening, which will cut its performance by more than half.) Nevertheless, we're no longer likely to read bleak headlines screaming that 200 of the processors powering the Curiosity rover would not be enough to beat one iPhone. With the HPSC up and running, it would be more like three or four chips required to match iPhone-like computing power.

    FURTHER READING

    To make Curiosity (et al.) more curious, NASA and ESA smarten up AI in space

    "Since we do not yet have an actual HPSC fortests, we can make some educated guesses as to what its performance may belike,” says Lovelly. Clock speed was the first aspect to go under scrutiny. CommercialCortex A53 octa-core processors are usually clocked between 1.2GHz (in theHiKey Lemaker for example) and 1.8GHz (in the Snapdragon 450). To estimate whatthe clock speed would look like in the HPSC after radiation hardening,Lovelly compared variousspace-grade processors with their commercially available counterparts."We just thought it reasonable to expect a similar hit on performance,” hesays. Lovelly estimated HPSC clock speed at 500MHz. This would still beexceptionally fast for a space-grade chip. In fact, if this turned out to betrue for the flight version, the HPSC would have the highest clock rate amongspace-grade processors. But more computing power and higher clock rates usuallycome at a dear price in space.

    The BAE RAD5545 is probably the most powerful radiation-hardened processor available today. Fabricated in a 45nm process, it is a 64-bit quad-core machine clocked at 466MHz with power dissipation of up to 20 Watts—and 20 Watts is a lot. The quad-core i5 sitting in a 13-inch 2018 MacBook Pro is a 28 Watt processor. It can heat its thin aluminum chassis to really high temperatures, to the point where it becomes an issue for some users. Under more computationally intensive workloads, the fans immediately kick in to cool the whole thing down. The only issue is that, in space, fans would do absolutely nothing, because there is no air they could blow onto a hot chip. The only possible way to get heat out of a spacecraft is through radiation, and that takes time. Sure, heat pipes are there to take excess heat away from the processor, but this heat has to eventually go somewhere. Moreover, some missions have tight energy budgets, and they simply can't use powerful processors like the RAD5545 under such restrictions. That's why the European GR740 has a power dissipation of only 1.5 Watts. It's not the fastest of the lot, but it is the most efficient: it simply gives you the most computational bang per Watt. The HPSC, with 10 Watts of power dissipation, comes in at a close second, but not always.


    "Each core on the HPSC has its own SingleInstruction Multiple Data unit,” says Lovelly. "This gives it asignificant performance advantage over other space-grade processors.” SIMD is atechnology commonly used in commercial desktop and mobile processors since the1990s. It helps processors handle image and sound processing in video gamesbetter. Let’s say we want to brighten up an image. There are a number ofpixels, and each one has a brightness value that needs to be increased by two.Without SIMD, a processor would need to go through all those additions insequence, one pixel after the other. With SIMD, though, the task can beparallelized. The processor simply takes multiple data points—brightness valuesof all the pixels in the image—and performs the same instruction, adding two toall of them simultaneously. And because the Cortex A53 was a processor designedfor smartphones and tablets that handled a lot of media content, the HPSC cando this trick as well.

    "This is particularly beneficial in tasks likeimage compression, processing, or stereo vision,” says Lovelly. "Inapplications that can’t utilize this feature, the HPSC performs slightly betterthan the GR740 and other top-performing space processors. But when it comes tothings where it can be used, the chip gets well ahead of the competitors.”


    Making space exploration sci-fi again

    Chip designers in the US tend to go for more powerful, but more energy-hungry, space-grade processors because NASA aims to run more large-scale robotic and crewed missions than its European counterparts. In Europe, there are no current plans to send humans or car-sized planetary rovers to the Moon or Mars in the foreseeable future. The modern ESA is more focused on probes and satellites, which usually work on tight energy budgets, meaning something light, nimble, and extremely energy-efficient like the GR740 makes much more sense. The HPSC, in turn, has been designed from the ground up to make at least some of NASA's at-times sci-fi ambitions a reality.

    Back in 2011, for instance, NASA's Game Changing Development Program commissioned a study to determine what space computing needs would look like over the next 15 to 20 years. A team of experts from various NASA centers came up with a list of problems that advanced processors could solve in both crewed and robotic missions. One of the first things they pointed to was advanced vehicle health management, which they deemed crucial for sending humans on long deep space missions. It boils down to having sensors constantly monitoring the health of crucial components. Fast processors are needed to get data from all those sensors at high frequencies. A sluggish computer could probably cope with this task if the sensor readouts came in every 10 minutes or so, but if you want to run the entire checkup multiple times a second to achieve something resembling real-time monitoring, the processor needs to be really fast. All of this would be devised to have astronauts seated in front of consoles showing the actual condition of their spaceship, with voiced alerts and advanced graphics. And running such advanced graphics would also demand fast computers. The team called that "improved displays and controls."

    FURTHER READING

    To build the best bots, NASA happily looks to others here on Earth

    But the sci-fi aspirations do not end at flight consoles. Astronauts exploring alien worlds could likely have augmented reality features built right into their visors. The view of the physical environment around them would be enhanced with computer-generated video, sound, or GPS data. Augmentation would in theory provide situational awareness, highlighting areas worth exploring and warning against potentially dangerous situations. Of course, having AR built into the helmets is only one possible option. Other notable ideas mentioned in the study included hand-held, smartphone-like devices and something vaguely specified as "other display capabilities" (whatever those other capabilities may be). Faster space-grade processors would be needed to power such computing advances.


    Faster space-grade processors are meant to ultimately improve robotic missions as well. Extreme terrain landing is one of the primary examples. Choosing a landing site for a rover is a tradeoff between safety and scientific value. The safest possible site is a flat plain with no rocks, hills, valleys, or outcrops. The most scientifically interesting site, however, is geologically diverse, which usually means it is packed with rocks, hills, valleys, and outcrops. So-called Terrain Relative Navigation (TRN) capability is one of the ways to deal with that. Rovers equipped with TRN could recognize important landmarks, see potential hazards, and navigate around them, narrowing the landing radius down to less than 100 meters. The problem is that current space-grade processors are way too slow to process images at such a rate. The NASA team behind the study ran TRN software as a benchmark on the RAD750 and found that the update from a single camera took roughly 10 seconds. Unfortunately, 10 seconds is a lot when you're falling toward the Martian surface. To land a rover within a 100-meter radius, an update from a camera would have to be processed every second. For a pinpoint, one-meter landing, estimates would need to come in at 10Hz—10 updates per second.
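    Using only the figures quoted above, the gap can be put in numbers: the RAD750's ~10 seconds per TRN camera update works out to 0.1Hz, against a 1Hz requirement for a 100-meter landing and 10Hz for a one-meter landing. A minimal sketch of that arithmetic:

        # Implied speed-up over the RAD750 baseline, from the article's own figures.
        rad750_seconds_per_update = 10.0
        required_rates_hz = {"100 m landing radius": 1.0, "1 m landing radius": 10.0}

        for scenario, rate_hz in required_rates_hz.items():
            achievable_hz = 1.0 / rad750_seconds_per_update
            speedup_needed = rate_hz / achievable_hz
            print(f"{scenario}: need {rate_hz:.0f} Hz, RAD750 delivers {achievable_hz:.1f} Hz "
                  f"-> ~{speedup_needed:.0f}x faster processing required")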

    Other things on NASA's computational wishlist include algorithms that can predict impending disasters based on sensor readouts, intelligent scheduling, advanced autonomy, and so on. All of this is beyond the capabilities of current space-grade processors. So in the study, NASA engineers estimated how much processing power would be needed to run those things efficiently. They found that spacecraft health management and extreme terrain landing needed between 10 and 50 GOPS (giga-operations per second). Futuristic sci-fi flight consoles with fancy displays and advanced graphics needed somewhere between 50 and 100 GOPS. The same goes for augmented reality helmets or other devices; these also consumed between 50 and 100 GOPS.

    Ideally, future space-grade processors would be able to power all those things smoothly. Today, the HPSC running at a power dissipation between 7 and 10 Watts can process 9 to 15 GOPS. This alone would make extreme landing possible, but the HPSC is designed in such a way that this figure can go up significantly. First, those 15 GOPS do not include the performance benefits the SIMD engine brings to the table. Second, the processor can work connected to other HPSCs and to external devices like special-purpose processors, FPGAs, or GPUs. Thus, a future spaceship could potentially have multiple distributed processors working in parallel, with specialized chips assigned to certain tasks like image or signal processing.
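    As a rough budget check, the study's GOPS figures can be set against the HPSC's quoted 9-15 GOPS. The sketch below assumes naive linear scaling across chips and ignores SIMD gains and interconnect overheads, so treat the chip counts as illustrative only, not a NASA sizing.

        # Rough budget check using only the GOPS figures quoted above.
        import math

        hpsc_gops = 15                        # upper end of the quoted 9-15 GOPS range
        workloads_gops = {
            "vehicle health management": 50,  # upper end of 10-50 GOPS
            "extreme terrain landing":   50,
            "advanced flight consoles": 100,  # upper end of 50-100 GOPS
            "augmented reality helmets": 100,
        }

        for task, needed in workloads_gops.items():
            chips = math.ceil(needed / hpsc_gops)
            print(f"{task}: ~{chips} HPSC-class chips (at {hpsc_gops} GOPS each)")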

    No matter where humanity's deep space dreams go next, we won't have to wait long for engineers to know where the current computing power stands. The LEON GR740 is scheduled for delivery to ESA at the end of this year, and after a few additional tests it should be flight-ready in 2020. The HPSC, in turn, is set for a fabrication phase that should begin in 2021 and last until 2022. Testing is expected to take a few months in 2022.

    NASA should get flight-ready HPSC chips by the end of 2022. That means, all other complicating timeline factors aside, at least the future of space silicon appears on track to be ready for spaceships taking humans back to the Moon in 2024.

    Jacek Krywko is a science and technology writer based in Warsaw, Poland. He covers space exploration and artificial intelligence research, and he has previously written for Ars about facial-recognition screening, teaching AI assistants new languages, and AI in space.

 