
Key Points

  • Machine learning refers to the practice of teaching computers to recognize patterns from immense quantities of data supplied to them.
  • AI/machine learning chips are being used in the automobile industry for self-driving cars, in healthcare for medical research, and by climate experts to establish weather patterns.
  • Intel’s Loihi 2 represents neuromorphic technology designed to mirror the human brain; it is used in robotics and sensory detection.

If you’re reading this, you probably get excited about tech. Right now, one of the most exciting things happening in tech is machine learning (and its subfield, deep learning). Tech companies have been chasing machine learning for years, and we are finally getting a chance to see what it can do.

Tons of tech companies, from startups to giants like IBM, are working on machine learning and artificial intelligence. To achieve their goals, these companies have been developing processors designed for AI and deep learning. So, in the spirit of excitement, let’s look at some of the most exciting machine learning chips and the companies that make them.

Let’s get started!

Most Exciting GPU: Nvidia A100

  • Using Nvidia’s NVLink, the A100 SXM GPU can connect with 4 to 16 GPUs at up to 600GB/s for maximum application performance.
  • The GPU can be partitioned into seven separate GPU instances, letting businesses rapidly accommodate changes in demand.
  • Nvidia claims the Ampere architecture delivers up to 20X the performance of the prior generation.
  • GPU memory bandwidth: 1,555GB/s
  • Max TDP power: 400W

It turns out that the workhorse powering AI and machine learning systems is actually a GPU. Nvidia’s A100 is an absolute powerhouse with a mind-numbing 80GB of memory in its top configuration. That number is not a typo: this thing is built to crunch through some of the most massive data sets on Earth.

So, why a GPU? GPUs can run thousands of calculations in parallel, which suits the matrix math at the heart of machine learning far better than a traditional CPU’s handful of cores. The Nvidia A100 makes those calculations blisteringly fast while delivering strong performance per watt. It is also more efficient at working through large data sets, which reduces latency, a key factor in the effectiveness of machine learning.
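To make that contrast concrete, here is a minimal sketch, assuming PyTorch and a CUDA-capable GPU are available (the A100 itself is normally driven through libraries like this), that times the same large matrix multiplication on a CPU and a GPU:

```python
import time
import torch

def time_matmul(device: str, n: int = 4096) -> float:
    """Time one n x n matrix multiplication on the given device."""
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    if device == "cuda":
        torch.cuda.synchronize()  # finish setup before timing
    start = time.perf_counter()
    _ = a @ b                     # millions of multiply-adds, run in parallel
    if device == "cuda":
        torch.cuda.synchronize()  # wait for the GPU kernel to finish
    return time.perf_counter() - start

print(f"CPU: {time_matmul('cpu'):.4f}s")
if torch.cuda.is_available():
    print(f"GPU: {time_matmul('cuda'):.4f}s")
```

On data-center hardware, the GPU time for dense math like this is typically orders of magnitude lower than the CPU time.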

For now, this beast is mostly used in data centers and lab environments. At nearly $10,000, it is pretty much unattainable for the public. But it does get us excited about the future of Nvidia’s GPU lineup.


Most Exciting Enterprise Chip: IBM Telum Chip

Fintech might not be the sexiest thing in tech, but it is very important. IBM’s new Telum chip is designed to combat fraud using machine learning. Basically, the chip has incredibly low latency, allowing it to detect fraud in real time, while a transaction is still being processed. Since detecting fraud is time-consuming and computationally intensive, this could be a game changer for fintech applications.

The chip has 8 processor cores running at more than 5GHz, so it’s extremely fast. Each core has its own dedicated 32MB L2 cache, and those caches are pooled together into a 256MB virtual shared cache. The design can also scale up to 32 chips linked in a tightly coupled system for even bigger workloads.
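IBM hasn’t published a public programming model for Telum, so the following is only a conceptual sketch of why on-chip, low-latency inference matters: the fraud score is computed while the transaction is processed, not in a batch afterward. The weights and threshold here are entirely hypothetical.

```python
# Conceptual sketch: in-transaction fraud scoring (hypothetical model).
# The point is the control flow: the score is computed *before* the
# transaction is approved, so inference latency sits on the critical path.

FEATURE_WEIGHTS = [0.8, 1.5, -0.3]  # hypothetical trained weights
FRAUD_THRESHOLD = 0.9               # hypothetical decision threshold

def fraud_score(features: list[float]) -> float:
    """Tiny linear model standing in for a real fraud classifier."""
    return sum(w * x for w, x in zip(FEATURE_WEIGHTS, features))

def process_transaction(txn: dict) -> str:
    features = [txn["amount_zscore"], txn["velocity"], txn["merchant_trust"]]
    if fraud_score(features) > FRAUD_THRESHOLD:
        return "DECLINED"           # blocked in real time, not flagged later
    return "APPROVED"

print(process_transaction(
    {"amount_zscore": 2.1, "velocity": 0.4, "merchant_trust": 0.9}))
```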

Most Exciting Neuromorphic Chip: Intel Loihi 2

Before we talk about this next chip, we should talk about what neuromorphic means. Neuromorphic computing is the practice of building chips and machines that work the way the human brain does organically. Right now, machines use calculations to simulate the function of the human brain in software. Neuromorphic chips mirror that function in hardware, with circuits that behave like biological neurons, rather than simulating it.
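The basic building block in most neuromorphic designs is the spiking neuron. Here is a rough, plain-Python illustration (not Intel’s actual API) of a leaky integrate-and-fire neuron, the kind of unit Loihi-style chips implement in silicon: it accumulates incoming current, leaks charge over time, and emits a discrete spike when it crosses a threshold.

```python
def leaky_integrate_and_fire(inputs, threshold=1.0, leak=0.9):
    """Minimal leaky integrate-and-fire neuron (illustrative only).

    Each step, the membrane potential decays by `leak`, the input
    current is added, and a spike (1) is emitted when the potential
    crosses `threshold`, after which the potential resets to zero.
    """
    potential, spikes = 0.0, []
    for current in inputs:
        potential = potential * leak + current
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0  # reset after firing
        else:
            spikes.append(0)
    return spikes

# Steady weak input: the neuron integrates until it fires, then resets.
print(leaky_integrate_and_fire([0.3] * 10))  # [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```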

Enter Loihi 2, Intel’s second-generation neuromorphic research chip. It has been used in many edge technologies, including robotics, sensory detection, and scene understanding. The technology it powers is leading edge, and the chip itself is a breakthrough.

It was constructed using a pre-production version of the Intel 4 process. Intel has also developed Lava, an open-source software framework for building applications that run on Loihi systems. Numerous companies have used this software/hardware combination to develop neuromorphic applications.

Best Mobile AI Chip: Google Tensor

While most machine learning innovation is geared toward large-scale data sets, there is also a push to bring machine learning into everyday applications. This is where the Google Tensor chip comes in. We covered this chip in our list of the most powerful mobile processors, but let’s recap.

The Google Tensor chip is Google’s first crack at a proprietary SoC for its Pixel devices. SoC stands for system on a chip: everything a smartphone needs to function (CPU, GPU, and so on) is packed into one chip. For Tensor, Google also built in dedicated machine learning hardware, a mobile cousin of its TPU accelerators.

This has led to innovations in camera features and speech recognition. The Google Tensor chip is not pushing the limit like a lot of the other chips on this list. In fact, Google is also working on its Edge and Cloud TPU products for larger-scale problems. Still, it is an exciting step toward ML in the palm of your hand.
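For a sense of what on-device ML looks like in practice, here is a hedged sketch using TensorFlow Lite, the runtime Android phones typically use for this kind of work. The model file name is hypothetical, and whether inference actually lands on Tensor’s ML core depends on the delegate configuration.

```python
import numpy as np
import tensorflow as tf

# Load a model bundled with the app (file name is hypothetical).
interpreter = tf.lite.Interpreter(model_path="speech_model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Dummy audio features standing in for a real microphone frame;
# assumes the model expects a float32 input tensor.
audio_frame = np.zeros(input_details[0]["shape"], dtype=np.float32)

interpreter.set_tensor(input_details[0]["index"], audio_frame)
interpreter.invoke()  # on a Pixel, a delegate can route this to the ML core
print(interpreter.get_tensor(output_details[0]["index"]))
```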

Most Exciting New Tech Chip: GroqChip

In 2017, a group of former Google engineers founded the startup Groq. It was a very secretive endeavor when it first started, but now we understand why.

Groq is trying to revolutionize the way chips are made. Its GroqChip, or Tensor Streaming Processor, is designed to streamline computing for ML and AI applications.

Typical chip architecture uses a complex interplay of control circuits, multifunctional cores, and multilayer caches. Groq claims to have created a “simpler” approach in which scheduling and control decisions are handled ahead of time by a software compiler. With control logic moved out of the hardware, more silicon is left for memory and arithmetic units.

Data streams through those arithmetic units on a fixed, predictable schedule, which keeps latency very low. The GroqChip is mostly geared toward automotive and data center applications. The company is still young, so it will be interesting to see what it does in the next decade.
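Groq doesn’t publish its toolchain in detail, but the core idea of compiler-driven static scheduling can be sketched in a few lines. This is a toy illustration, not Groq’s actual software: every operation gets a fixed time slot before the program ever runs, so execution is fully deterministic.

```python
# Toy illustration of compiler-side static scheduling (not Groq's tools).
# Every operation is assigned a start cycle ahead of time, so runtime
# behavior is deterministic: no caches, no branch prediction, no
# dynamic arbitration deciding what runs when.

program = [
    ("load",   "A"),         # stream weights in
    ("load",   "B"),         # stream activations in
    ("matmul", ("A", "B")),  # fixed-latency compute
    ("store",  "C"),         # stream result out
]

# "Compile": assign each op a start cycle from a fixed latency table.
LATENCY = {"load": 4, "matmul": 16, "store": 4}
schedule, cycle = [], 0
for op, arg in program:
    schedule.append((cycle, op, arg))
    cycle += LATENCY[op]

for start, op, arg in schedule:
    print(f"cycle {start:3d}: {op} {arg}")
print(f"total: {cycle} cycles, known before the program ever runs")
```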

Most Exciting Wafer-Scale Chip: Cerebras WSE-2

Wafer-scale integration is another way that companies are trying to push computing forward. And we can’t talk about wafer-scale without talking about the Cerebras WSE-2. But before we get to the WSE-2, we should define wafer-scale.

In the traditional chip-making process, manufacturers use a chip fab, which is a sort of printing press for semiconductors. Dozens of identical chips are printed onto a circular disk called a wafer, which is then cut apart; this is how manufacturers like TSMC and Samsung make singular chips. With wafer-scale integration, the wafer is never cut up. The chips stay interlinked on the wafer and share data streams, yielding massive amounts of computing power.

But the Cerebras WSE, or Wafer-Scale Engine, takes this to the extreme. Because every core sits on the same slab of silicon, there is none of the communication bottlenecking that appears when GPUs like the A100 are networked together. The entire wafer works in tandem as one integrated processor.

To date, it is the largest processor ever built, and Cerebras bills it as the fastest AI processor available. It is about the size of a dinner plate and dwarfs the nearest competition. To put it in perspective, the WSE-2 has about 2.6 trillion transistors, roughly 48 times as many as the 54.2 billion in the Nvidia A100.

We are talking about computing power that has yet to be seen on Earth. If that’s not exciting, we aren’t sure what is!

The AI/machine learning chips being made today are generally proprietary technologies and are therefore not on the market for purchase.

What Do These Chips Do?

We’ve gone over some of the biggest AI/ML chips in the game right now and the companies that make them, but what do they actually do? Let’s break it down.

Machine Learning Defined

Before we start, we need a quick definition of machine learning. In simple terms, machine learning allows a computer to recognize patterns in extremely large data sets. Engineers and scientists feed these machines tens of thousands of examples of whatever they want the machine to recognize. The computer then generalizes from those examples to make guesses about data it has never seen.

Nearly every industry in tech relies on some form of machine learning, and we encounter it all the time in our daily lives. Spotify recommendations, facial recognition software, and point-of-sale applications all use machine learning. For instance, a machine can’t tell whether a song is sad, but it can recognize the BPM, pitch, and waveform of a sad song and make an educated guess based on prior examples.
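To make the sad-song example concrete, here is a minimal sketch using scikit-learn, with made-up numbers: a handful of songs described by two features (BPM and average pitch), a label for each, and a classifier that guesses about songs it has never seen.

```python
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: [BPM, average pitch in Hz] per song.
songs = [[60, 220], [70, 230], [75, 200],     # slow, low  -> sad
         [128, 440], [140, 460], [120, 430]]  # fast, high -> not sad
labels = [1, 1, 1, 0, 0, 0]                   # 1 = sad, 0 = not sad

model = LogisticRegression()
model.fit(songs, labels)                      # "feed it examples"

# The model has never heard these songs; it guesses from the patterns.
print(model.predict([[65, 210]]))             # likely [1] (sad)
print(model.predict([[135, 450]]))            # likely [0] (not sad)
```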

Now, there are probably some engineers somewhere pulling their hair out reading our explanation. This is only meant to be in layman’s terms and not an exact definition of how it works.

Use Cases

The use cases for machine learning are vast. In this list alone, we have seen a myriad of chips that provide different solutions to different problems.

Right now, the biggest advancements are not available to the public. We see ML in everyday life, sure, but the kind of technology we have mentioned here just isn’t practical for everyday use.

That said, let’s go over a few industries where these chips are making an impact.

Automotive

Self-driving cars are one of those science fiction concepts that are rapidly becoming scientific fact. Alas, we aren’t there yet. For all of ML’s ability to recognize patterns, machines simply cannot yet perceive and react as effectively or as fast as the human brain. But chipmakers are working to change that.

Providing low-latency solutions will bring us ever closer to self-driving vehicles as a norm. We are still a ways away from that future, though.

Healthcare

Medical science has benefited greatly from machine learning. When medical researchers and doctors work toward vaccines or try to understand human biology, they have to crunch an incredible amount of data. As with most things, nothing fully substitutes for the human mind.

However, having a machine designed to recognize patterns in medical data, run simulations based on that data, and filter the results is invaluable. Human beings may be intelligent, but we cannot realistically comb through tens of thousands of data points; machines can. That makes ML a very helpful tool for top researchers around the world.

Climate Solutions

Climate change is an unfortunate truth that no one can deny. But with ML, there is hope for scientists to study patterns in nature and develop solutions. Climate scientists have used supercomputing to establish weather patterns, run climate simulations, and forecast possible futures. Of course, the power needed for such computations is immense.

Most of the AI/machine learning chips being developed today are built for efficiency as well as raw power. Energy consumption remains an issue, and ML is only one of many ways to combat climate change, but these systems could be a valuable tool in the arsenal of climate scientists.


Frequently Asked Questions (FAQs)

What is machine learning?

Machine learning is the process of teaching a computer to recognize patterns and make guesses based on examples. Basically, a machine is fed tons and tons of examples and, over time, it builds up a picture of whatever concept you want it to recognize.

For instance, a machine cannot recognize whether a song is sad, but it can recognize other data about the song. BPM and pitch are things a machine can process and use to make guesses about what makes a song sad. That, in essence, is machine learning.

Is AI real?

The concept is very real, but the execution isn’t quite there yet. Much of what people call AI today is really just pattern recognition driven by input data. To stay with a musical example: if Spotify makes a playlist for you with a bunch of sad songs, it is probably because you enjoy sad music. By recognizing those patterns, the system makes an educated guess.

If, for instance, you played a song that wasn’t sad, the machine might not notice. That’s because it only uses past examples to make estimations. True AI would mean a machine that can recognize music by feeling, the way we do. That is still very far from our current capabilities.

What is wafer-scale integration?

Wafer-scale integration refers to building a processor out of an entire silicon wafer instead of cutting the wafer into individual chips. The chips stay linked on the single disk, or wafer, and because they are directly connected rather than bridged, they work together with low latency and are less prone to bottlenecking. Wafer-scale processors can handle massive amounts of computation to tackle the biggest problems we have.

Are AI/machine learning chips available to the public?

In some cases, yes. The Google Tensor chip can be found in the Pixel 6 lineup. But for most of the chips on this list, personal use isn’t really practical. For now, they are mostly used by developers in partnership with the companies listed above.

What are these chips used for?

Lots of things, but mostly anything that requires large-scale data computation, like machine learning.
