A chip developed by BrainChip Holdings, using artificial intelligence inspired by the human brain, is winning commercial support from large global companies.
For local technology investors, the most exciting thing to come out of the Consumer Electronics Show was the news that Mercedes-Benz will use a microchip developed by ASX-listed BrainChip Holdings in its latest electric vehicle.
The Vision EQXX concept car, which claims to be able to travel 1000km on one charge, uses BrainChip’s proprietary neural processing hardware and software.
The explosion in artificial intelligence and the demand for energy efficiency are playing into the hands of BrainChip.
The car company said it was attracted to the energy efficiency of the Akida neuromorphic processor developed by the California-based company.
Co-founder and chief executive Peter van der Made, who is based in Perth, says the chip uses one-tenth the power of other power-efficient alternatives and one-thousandth the power of standard data centre architecture.
BrainChip has added about $1 billion to its market capitalisation over the past three months thanks to a series of positive customer announcements, not all of which have been released to the ASX.
The company seems to take a liberal view of what is and what isn’t price-sensitive information. If price sensitivity is correlated with the excitement generated on social media and other stock chat platforms, BrainChip has some work to do.
The shares started to spike late last year following the November announcement that it had entered into a licensing agreement with Japanese semiconductor manufacturer MegaChips.
The agreement, which runs for four years, grants MegaChips a non-exclusive, worldwide intellectual property licence to design and manufacture BrainChip's Akida technology into external customers' systems.
The decision by Mercedes to use BrainChip’s Akida processor in the EQXX became public a week ago. The stock is up 42 per cent since then.
On Monday BrainChip said US client Information Systems Laboratories was developing an AI-based radar research solution for the Air Force Research Laboratory based on its Akida™ neural networking processor.
Notwithstanding the company’s apparent loose interpretation of continuous disclosure obligations, it is clearly a tech stock to watch in 2022 given it is achieving commercial endorsement and is operating in one of the most prospective areas of artificial intelligence.
In AI there are three classes of machine learning: supervised learning, unsupervised learning and reinforcement learning.
When experts talk about machine learning they usually do so from the perspective of supervised learning.
If you want to predict someone's exam score, you can ask how many hours they have studied and how many hours they have slept, and then analyse those answers to estimate what the grade could be.
To represent this in machine learning, the data is laid out in a table, with each column representing a different feature or attribute.
The mathematical function that transforms this into the likely test grade is matrix multiplication, whereby a certain weight is given to each feature in the table.
Greater weight would be given to the time spent studying and less weight given to the time the student slept.
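The weighted-feature idea can be sketched as a single matrix multiplication. This is a minimal illustration with made-up students and made-up weights, not a trained model:

```python
import numpy as np

# Each row is one student: [hours studied, hours slept] (hypothetical data).
features = np.array([
    [8.0, 7.0],
    [2.0, 9.0],
    [5.0, 6.0],
])

# Invented weights: studying counts for more than sleep.
weights = np.array([9.0, 2.0])

# One matrix multiplication turns all the feature rows into predicted scores.
predicted_scores = features @ weights
print(predicted_scores)  # [86. 36. 57.]
```

In a real model the weights would be learned from past data rather than chosen by hand, but the arithmetic at prediction time is exactly this multiply-and-sum.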
A graphics processing unit (GPU) does matrix multiplication very well. Each of its cores is slower than a conventional processor core, but a GPU can run thousands of these operations in parallel.
BrainChip, Intel and IBM have been finding more efficient ways to design machine learning models using event-based sensors, which will become ubiquitous as the global economy moves to the internet of things.
When applying machine learning to someone playing soccer, the classic machine learning would be to process all the information around the ball, such as the grass, the sky and other factors.
An event-based processing approach saves energy because it only focuses on the moving parts, such as the ball.
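A rough sketch of that saving, using hypothetical frame data: a frame-based system would process all 16 pixels of the tiny "camera" below, while an event-based one emits and processes only the single pixel that changed.

```python
import numpy as np

# Two consecutive frames of a tiny 4x4 "camera" (hypothetical pixel values).
frame_prev = np.zeros((4, 4), dtype=int)
frame_curr = np.zeros((4, 4), dtype=int)
frame_curr[1, 2] = 255  # the "ball" has moved into this pixel

# Event-based processing: emit an event only where brightness changed
# beyond a threshold, and ignore everything static (grass, sky).
threshold = 10
events = np.argwhere(np.abs(frame_curr - frame_prev) > threshold)

print(len(events))      # 1 event instead of 16 pixels
print(events.tolist())  # [[1, 2]]
```

The fewer events there are, the less downstream computation runs, which is where the energy saving comes from.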
At the moment most machine learning relies on convolutional neural networks, which work like a moving window that slides across the matrix, finding patterns that are spatially correlated.
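A minimal illustration of that sliding window, using a one-dimensional signal and an invented edge-detecting kernel:

```python
import numpy as np

# A 1D "image" and a small kernel (illustrative values). The kernel slides
# across the signal, computing a weighted sum at each window position.
signal = np.array([0, 0, 1, 1, 0, 0])
kernel = np.array([1, -1])  # responds wherever neighbouring values differ

# One output per window position: the convolution visits every location,
# whether or not anything interesting is there.
output = [int(signal[i:i + 2] @ kernel) for i in range(len(signal) - 1)]
print(output)  # [0, -1, 0, 1, 0]
```

Note that the window is computed at every position, even where the signal is flat; that exhaustive sweep is what event-based approaches avoid.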
The BrainChip processor works on something called a spiking neural network, which only processes “events” or “spikes” that indicate useful information. This approach, similar to the way the human brain works, is not efficiently represented in GPUs.
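One way to picture a spiking neuron is the leaky integrate-and-fire model, an illustrative sketch only, not BrainChip's design: inputs accumulate in a membrane potential that leaks over time, and the neuron emits a spike only when the potential crosses a threshold.

```python
# Minimal leaky integrate-and-fire neuron (illustrative parameters).
def lif_neuron(inputs, threshold=1.0, decay=0.5):
    potential = 0.0
    spikes = []
    for x in inputs:
        potential = potential * decay + x  # integrate input, leak over time
        if potential >= threshold:
            spikes.append(1)   # spike: useful information to pass on
            potential = 0.0    # reset after firing
        else:
            spikes.append(0)   # silence: nothing for downstream to process
    return spikes

# Sparse input: the neuron fires only once, so most time steps cost
# almost nothing downstream.
print(lif_neuron([0.0, 0.2, 0.0, 0.9, 0.0, 0.0, 1.5]))  # [0, 0, 0, 0, 0, 0, 1]
```

Because output is mostly silence, computation happens only at the spikes, which mirrors the energy argument made for the Akida processor.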
According to van der Made, the Intel and IBM test chips, including Loihi, Loihi 2 and TrueNorth, are not comparable to BrainChip's AKD1000 chip.
He says IBM's TrueNorth has no on-chip learning, is "very large" and is not cost-effective.
Intel's Loihi chip is comparable in size to the AKD1000 but is made in a costly 7 nm process, while the BrainChip AKD1000 uses a standard 28 nm manufacturing technology, according to van der Made.
“AKD1000 has on-chip convolution and on-chip learning and can be simply configured using standard TensorFlow tools,” he says.
“The AKD1000 is in production and has many application examples for vision, voice recognition, key word recognition and classification of odours and tastes.”