BRN 3.17% 30.5¢ brainchip holdings ltd

Competitive landscape, page-62

    Hi @uiux and @BarrelSitter, guess who has the Holy Grail of AI in their back pocket. At least, that is what this article considers one-shot learning to be:

    Challenges and New Frontiers of AI

    With significant adoption underway in all facets of life and business, the challenges and concerns around training AI with unbiased data, data scarcity, trust, explainability and privacy are becoming the top concerns for broader adoption.

    · ETCIO

    · June 14, 2021, 08:50 IST

    By Som Pal Choudhury

    The phenomenal impact that Artificial Intelligence (AI) is projected to have on our economy and our daily lives is nothing short of astounding. AI is predicted to contribute significantly ($15.7 trillion) to the world economy by 2030. While its prominence has magnified its adoption and use-cases, criticisms abound: adoption resulting in job losses, unintended biases, privacy and surveillance concerns, and even the energy-hogging data centres that build the AI models. As with any new technology, its abuse versus its safe and productive use with the right sets of ethics and regulations rests on us.

    With significant adoption underway in all facets of life and business, the challenges and concerns around training AI with unbiased data, data scarcity, trust, explainability and privacy are becoming the top concerns for broader adoption. Researchers and thought leaders worldwide are trying to solve them with several new frontiers emerging and being explored. We took a deeper dive to understand these challenges and summarise our learnings here.

    The current AI systems are inherently black-box models with limited explainability, which creates barriers to adoption, especially in regulated environments like healthcare. This is where Explainable AI (XAI) comes in, as captured in a comprehensive paper by DARPA experts. XAI tries to solve the black-box problem by providing explanations via two approaches: post-hoc (local explanation) and ante-hoc (interpretable by design) systems, and tries to turn the black box into a glass box, or at least a semi-glass box. Another method to achieve the glass box is Interactive Machine Learning (IML), which involves a human in the loop, observing trends in the algorithmic loops and making decisions that ultimately help gain a better understanding of the model. Several XAI frameworks and tools are in development, and a plethora of research is ongoing.
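The post-hoc (local explanation) idea above can be made concrete with a small sketch in the spirit of LIME: to explain one prediction of a black-box model, sample perturbations around the input, query the model, and fit a simple linear surrogate whose coefficients act as local feature importances. The `black_box` function and all parameters here are illustrative stand-ins, not anything from the article.

```python
# Sketch of a post-hoc local explanation (LIME-style surrogate).
import numpy as np

def black_box(X):
    """Opaque model we want to explain (pretend we can't see inside)."""
    return 3.0 * X[:, 0] - 2.0 * X[:, 1] + np.sin(X[:, 2])

def explain_locally(model, x, n_samples=500, scale=0.1, seed=0):
    """Fit a local linear surrogate around point x; return its coefficients."""
    rng = np.random.default_rng(seed)
    Xp = x + rng.normal(scale=scale, size=(n_samples, len(x)))  # perturbations
    yp = model(Xp)                                              # query black box
    A = np.hstack([Xp, np.ones((n_samples, 1))])                # add intercept
    coef, *_ = np.linalg.lstsq(A, yp, rcond=None)               # least squares
    return coef[:-1]            # per-feature local importances

x = np.array([1.0, 1.0, 0.0])
print(np.round(explain_locally(black_box, x), 1))  # ≈ [ 3. -2.  1.]
```

The recovered coefficients match the local gradient of the hidden model, which is exactly the kind of "why this prediction" answer a post-hoc explainer provides.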

    Artificial Intelligence research has significantly picked up in India, and our review of patents and research shows a solid research base here in ‘edge AI’ and ‘Federated Learning’. Large tech giants have released edge frameworks orthogonal to the well-entrenched cloud-based AI/ML. Federated learning involves a central server that collates information from many edge-generated models to create a global model without transferring local data for training. It is hyper-personalised, time-efficient, cost-effective and supposedly privacy-friendly, as user data is not sent to the cloud.
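The server-aggregation step described above is, in its simplest form, Federated Averaging (FedAvg): each client trains on its own data, and the server averages the resulting weights, weighted by client dataset size, so raw data never leaves the device. The toy below uses a noiseless linear model and made-up client data purely to illustrate the loop.

```python
# Minimal sketch of Federated Averaging (FedAvg) with a linear model.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: full-batch gradient descent on MSE."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fed_avg(global_w, client_data):
    """Server step: average client weights, weighted by sample count."""
    updates, sizes = [], []
    for X, y in client_data:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):                      # three edge devices, each with private data
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(20):                     # communication rounds
    w = fed_avg(w, clients)
print(np.round(w, 2))                   # recovers the true weights, ≈ [ 2. -1.]
```

Only the weight vectors cross the network; the `(X, y)` pairs stay on their clients, which is the privacy argument the article makes.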

    At the same time, to accelerate AI at the edge, a new generation of AI edge chips (in neuromorphic and digital-analogue flavours) is coming to do much heavier-duty training and inferencing at the edge, running orders of magnitude faster and at a significantly lower power footprint. The new release of Google Chrome has implemented Federated Learning of Cohorts (FLoC), Google's version of Federated Learning, which is an initiative to eliminate the pervasive online trackers and cookies in a privacy- and security-conscious world. FedVision is an open-source platform to support the development of edge-powered computer vision applications, as uploading videos is a big privacy concern. With over 700 active AI startups in India, we expect to see some quality initiatives here.

    AutoML has seen significant progress to ensure that data scientists are not stuck in repetitive and time-consuming tasks, starting from data cleaning, playing around with different models and hyper-parameters and eventually fine-tuning them for best results. AutoML uses an inherent reinforcement learning and recurrent neural network approach, so that these models and parameters start from an initial or auto-picked input but get continuously and automatically refined based on results.
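The search loop at the heart of AutoML can be sketched very simply. Real systems use reinforcement learning or Bayesian optimisation, as the article says; plain random search over one hyper-parameter (polynomial degree) is shown here only to make the sample-score-keep-best loop concrete. The data and search budget are made up for illustration.

```python
# Toy AutoML-style hyper-parameter search: random search over model degree.
import random
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(-1, 1, 100)
y = 3 * X**2 + rng.normal(scale=0.05, size=100)   # quadratic ground truth + noise
X_tr, y_tr = X[::2], y[::2]                       # train split
X_va, y_va = X[1::2], y[1::2]                     # validation split

def score(degree):
    """Validation MSE for one hyper-parameter setting (polynomial degree)."""
    coeffs = np.polyfit(X_tr, y_tr, degree)
    return np.mean((np.polyval(coeffs, X_va) - y_va) ** 2)

random.seed(1)
best_deg, best_err = None, float("inf")
for _ in range(20):                  # search budget: 20 random trials
    deg = random.randint(1, 10)
    err = score(deg)
    if err < best_err:               # keep the best configuration so far
        best_deg, best_err = deg, err
print(best_deg, best_err)
```

A degree-1 model scores badly on the quadratic data while degrees of 2 and above score near the noise floor, so the loop settles on a sensible model without human intervention, which is the point of AutoML.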

    There is a wide variety of platforms in the market today, and we are at Gen 3 of AutoML evolution, with more verticalised, domain-specific platforms. Most platforms still select only the model and the hyper-parameters, which means that data scientists still need to do the bulk of the work in data preparation and cleaning, where the majority of time is often spent. Other advanced platforms also include cleaning, encoding and feature extraction, a must to build a good model quickly, but the approach is template-driven and may not always be a good fit.

    AI practitioners have always been plagued by a paucity of data, and hence the effort to generate acceptable models with reduced datasets, or simply the quest to find more data. Options for finding more data include public annotated data (e.g. Google public datasets, AWS open data), data augmentation (running transforms on available data) and transfer learning, where a similar but larger dataset is used to train the models. Rapid progress continues on the creation of artificial or synthetic data. The Synthetic Minority Over-sampling Technique (SMOTE) and several of its modifications are used in classic cases where minority data is sparse and hence oversampled. Generating completely new data with self-learning (AlphaGo self-played 4.9 million times) and simulation (recreating city traffic scenarios using gaming engines) are more recent approaches to creating synthetic data. Unfortunately, more data also amplifies the resource and time constraints to train, including the time and effort required to clean the data and remove noise, redundancies, outliers etc. The holy grail of AI training is Few-Shot Learning (FSL), that is, training with a smaller dataset. It is an area of active research, as highlighted in this recent survey paper.
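The core of SMOTE mentioned above is simple to sketch: each synthetic minority sample is a random interpolation between a real minority sample and one of its nearest minority-class neighbours. The class data, `k`, and sample counts below are illustrative.

```python
# Bare-bones sketch of SMOTE oversampling for a sparse minority class.
import numpy as np

def smote(minority, n_new, k=3, rng=None):
    """Generate n_new synthetic samples from the minority class."""
    rng = rng or np.random.default_rng(0)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(minority))
        # distances from sample i to every minority sample
        d = np.linalg.norm(minority - minority[i], axis=1)
        neighbours = np.argsort(d)[1:k + 1]      # k nearest, skipping self
        j = rng.choice(neighbours)
        gap = rng.random()                       # interpolation factor in [0, 1)
        out.append(minority[i] + gap * (minority[j] - minority[i]))
    return np.array(out)

rng = np.random.default_rng(0)
minority = rng.normal(loc=5.0, scale=0.5, size=(10, 2))  # 10 rare samples
synthetic = smote(minority, n_new=40, rng=rng)
print(synthetic.shape)  # (40, 2)
```

Because every new point lies on a segment between two real minority points, the synthetic data stays inside the minority region rather than being duplicated or drawn at random, which is what distinguishes SMOTE from naive oversampling.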

    A vast amount of open-source models, datasets, active collaboration and benchmarks continue to accelerate AI development. OpenAI's GPT-3 launch took NLP to another level, with 175 billion parameters trained on 570 gigabytes of text. Huawei recently trained the Chinese version of GPT-3 with 1.1 terabytes of Chinese text. Alphabet subsidiary DeepMind's AlphaFold achieved the most significant breakthrough in biology, with 92.4 percent accuracy in the well-known protein structure and folding prediction competition. Cityscapes has built a large-scale dataset of diverse urban street scenes from 50 cities. Beyond image and language recognition, the next frontier of AI is intent understanding from video. While India rose in the AI Vibrancy index from rank 23 to 5 in 2021, a lot still needs to be done in terms of collaboration, open source and India-specific datasets.

    With the growing need for security of sensitive and private information, there is a call for machine learning algorithms to be run on data that is protected by encryption. Homomorphic encryption (HE) is a concept that is now being leveraged to train models on data without decrypting it and risking data leaks. Intel is one of the players in this space, having collaborated with Microsoft to develop silicon for this purpose. With growing interest in research and development in this field, these HE methods will become more commonplace and advanced.
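The "compute on encrypted data" idea can be shown with a toy Paillier cryptosystem, a standard additively homomorphic scheme (not one named in the article): multiplying two ciphertexts yields an encryption of the sum of the plaintexts, so a server can aggregate encrypted values without ever decrypting them. Tiny primes are used for readability; this is a sketch of the concept, not production cryptography.

```python
# Toy Paillier encryption: additively homomorphic, with deliberately tiny keys.
import math
import random

p, q = 251, 257                  # toy primes (real keys use ~1024-bit primes)
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)                            # private key part
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)             # modular inverse

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:   # randomness must be coprime to n
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n) * mu % n

a, b = encrypt(20), encrypt(22)
total = (a * b) % n2             # ciphertext product = plaintext sum
print(decrypt(total))            # 42, computed without decrypting a or b
```

The server holding `a` and `b` learns nothing about 20 or 22, yet produces a valid encryption of their sum, which is the property HE-based training pipelines build on.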

    Removing toxicity and biases is the aim of Ethical AI or Responsible AI, but development is at a nascent stage. Google and Accenture have announced Responsible AI frameworks. The European Commission's white paper on AI focuses on trust, and the formation of a UN AI ethics committee is an excellent initiative.

    The evolution of AI is happening at a breakneck pace, and 2021 will be no different.

    The author is a Partner at BIF; co-authored with Arjun Nair, Intern at BIF and a junior at Brown University.

 