
Since its introduction in 1958, the integrated circuit, or computer chip, has been one of mankind's crowning achievements. Machine learning and AI are the buzzwords du jour in Silicon Valley, but practical progress has been slower than the hype suggests.
The world's largest computer chip is Silicon Valley startup Cerebras' answer to that problem. The Wafer-Scale Engine is a feat of engineering: a single, enormous sheet of silicon.
It is a dedicated processor built for AI training. Training a model with conventional CPUs and GPUs can take weeks; the WSE promises to do it in minutes.
Cerebras Systems

The company makes its home in Sunnyvale, California. Since its formation in 2015, Cerebras has raised several successful rounds of startup funding, closing its Series E in 2019. It is currently valued at $2.4 billion, and satellite facilities in Japan and India make it a worldwide operation.
Cerebras stands at the vanguard of AI hardware, developing purpose-built machines for machine learning and the training of AI models. The company is poised to make major breakthroughs in how AI models are trained, with its supercomputer Andromeda combining sixteen of its enormous chips for extreme speed.
The Wafer-Scale Engine
The largest computer chip in the world came about because of the inefficiency of training AI models on CPUs and GPUs. The reasoning is simple: that hardware was built for other use cases, so repurposing it for machine learning wastes performance.
The Wafer-Scale Engine is similar in function to a tensor processing unit, an AI-centric integrated circuit. It is on its second iteration, with the WSE-2 packing 850,000 cores and 2.6 trillion transistors onto an 8.5-inch-square sheet of silicon.
Data parallelism is its primary mode of computation: workloads are split into many streams that execute simultaneously across its hundreds of thousands of cores. Memory throughput is enormous as well, with 40GB of on-chip SRAM transferring data at 20 petabytes per second.
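Those two figures alone tell the story. A quick back-of-envelope calculation in Python, using only the numbers quoted above, shows just how fast that memory is:

```python
# Back-of-envelope using the figures quoted above: 40 GB of on-chip
# SRAM and 20 petabytes per second of memory bandwidth.
SRAM_BYTES = 40e9        # 40 GB of on-chip SRAM
BANDWIDTH_BPS = 20e15    # 20 PB/s memory bandwidth

sweep_seconds = SRAM_BYTES / BANDWIDTH_BPS
print(f"Time to read all of SRAM once: {sweep_seconds * 1e6:.0f} microseconds")
# -> 2 microseconds: the chip can revisit its entire on-chip memory
#    roughly half a million times per second.
```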
One WSE-2 can train models of up to 20 billion parameters, a staggering number, in a matter of minutes. In effect, it can train AI models denser and more complex than conventional hardware allows. Andromeda, Cerebras' supercomputer, merges 16 WSE-2 chips into a single unit for a dramatic further increase in parallelism.
The WSE-2 powers the Cerebras CS-2, a purpose-built AI computer that delivers the performance of an AI cluster in a package far smaller than its competitors.
What Is It Used For?
The Wafer-Scale Engine is intended for AI models, and not the simple sort that entry-level users can run on a commercially available GPU or CPU. Where the WSE excels is in running massive AI models.
Current AI models average in the hundreds of millions of parameters. Outliers, like Google's trillion-parameter model, require vast resources to train.
The WSE's use case is speeding this process up. Specialized processing units exist for AI models, but they don't match the WSE's throughput: a task that takes a TPU days might take the WSE minutes, depending on the complexity of the model.
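A little arithmetic suggests why roughly 20 billion parameters is the ceiling for a single chip. This sketch assumes weights stored at 16-bit (2-byte) precision, a common training choice that the article itself does not specify:

```python
# Rough memory footprint of a 20-billion-parameter model, assuming
# 2-byte (16-bit) weights -- an assumed precision, not a figure from
# the article itself.
PARAMS = 20e9            # 20 billion parameters
BYTES_PER_PARAM = 2      # fp16/bf16 weight

footprint_gb = PARAMS * BYTES_PER_PARAM / 1e9
print(f"Weight footprint: {footprint_gb:.0f} GB")
# -> 40 GB, which matches the WSE-2's on-chip SRAM capacity.
```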
A Brief History of the Computer Chip

Computer chips, or integrated circuits, were first built by Texas Instruments engineer Jack Kilby in 1958. Kilby had proposed the concept a year prior: a monolithic structure with the circuit's components fabricated together on a semiconductor wafer.
The integrated circuit had been conceptualized earlier by Werner Jacobi, but there was little international interest in developing his idea into full-blown production. After Kilby's introduction, the US Air Force quickly put the integrated circuit to use.
Military technology rapidly adopted the innovation, and computer chips flew on the Apollo space missions just a scant 11 years later. Fairchild Semiconductor was instrumental in developing Kilby's idea into something more immediately workable.
Fairchild moved the original design from germanium wafers to the more efficient silicon wafers in use today. Integrated circuits revolutionized electronics, allowing ever-smaller transistors and semiconductors to be packed onto a wafer.
Fast-forward to today, and this marvelous piece of technology is in every facet of life, from your smartphone to your car. Integrated circuits shaped computing for decades to come, shrinking machines from massive room-filling installations to the modern notebook-sized laptop.
Computer Chips and AI Modeling
Training AI models is a demanding task for modern hardware. Effective training depends on a few factors, chief among them a chip's throughput: how fast it can process data and move it around.
High throughput expedites training on the large datasets and countless parameters an effective model requires. In the consumer and hobbyist space, GPUs handle this work, but they aren't as well suited to it as one might hope.
At the business end of the spectrum, tensor processing units in the cloud can be used. This keeps specialized on-site hardware to a minimum, but those units still need time to churn through the many tasks and datasets involved in training a model.
Cerebras and the WSE operate in the same vein, but because the entire system is integrated onto a single chip, it can crunch through those workloads more efficiently. Ideally, a chip has many fast cores that allow parallel processing across multiple streams of data, as the sketch below illustrates.
Enterprise-grade server CPUs working alongside GPUs can accomplish this, but a system-on-a-chip, or SoC, can make the process much faster, thanks in part to reduced travel time for data, even on something as massive as the Wafer-Scale Engine.
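To make data parallelism concrete, here is a minimal sketch in Python. It illustrates only the general technique, not Cerebras' actual software stack; the worker pool, gradient function, and batch sizes are invented for illustration. Several workers each process their own slice of a batch, and their partial results are averaged, much as an accelerator spreads work across its many cores:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def partial_gradient(weights, x_slice, y_slice):
    """Gradient of mean squared error over one slice of the batch."""
    error = x_slice @ weights - y_slice
    return x_slice.T @ error / len(y_slice)

rng = np.random.default_rng(0)
X, y = rng.normal(size=(1024, 8)), rng.normal(size=1024)
weights = np.zeros(8)

# Split the batch into equal slices, one per worker.
n_workers = 4
x_slices = np.array_split(X, n_workers)
y_slices = np.array_split(y, n_workers)

# Each worker computes its partial gradient in parallel.
with ThreadPoolExecutor(n_workers) as pool:
    grads = list(pool.map(partial_gradient,
                          [weights] * n_workers, x_slices, y_slices))

# Average the partial gradients (an "all-reduce") and take one step.
weights -= 0.1 * np.mean(grads, axis=0)
```

The key design point is that every worker runs the same computation on different data, so adding cores scales the batch size rather than complicating the program.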
What’s Next for Cerebras?
Cerebras is still growing, buoyed by multiple rounds of funding. The company makes Andromeda available to a variety of industries, with the supercomputer seeing use in the health, energy, government, financial services, and social media sectors. Renting out cloud-provisioned Andromeda time is a massive boon for customers looking to bolster their AI development.
Cerebras has recently begun working with Green AI Cloud, one of the premier supercompute platforms in Europe. The WSE-2 provides an alternative to more mainstream GPUs like the Nvidia A100, letting Green AI Cloud maintain a negative carbon footprint while giving users access to its power.