“TeraFLOPS” means “one trillion floating-point operations per second.” It’s a measure of computing power used mostly in supercomputing, though it also sees some use in the consumer world as a measure of graphics processing unit (GPU) performance. Let’s examine what a teraFLOPS is and why it’s essential in computing.
What Is a Floating-Point Operation?
To understand teraFLOPS, we have to understand FLOPS in general. A floating-point operation is any calculation that uses floating-point arithmetic, a phrase full of math jargon that may not make sense at first glance. Let’s examine what these words mean and how they relate to computers.
Floating-point representation is a way of writing numbers so that very large or very small values can be handled accurately. In a fixed-point representation, the decimal point always stays in the same place, which becomes limiting when numbers run to hundreds of digits. In a floating-point representation, the decimal point “floats”: it can move to wherever it best represents the number, letting the same format cover a far wider range of values.
A floating-point number is written as a mantissa (also called the significand), the string of significant digits, together with an exponent that says how many places to move the decimal point to recover the actual value. Numbers in this format also have a base, which determines how much the value changes from one digit position to the next (computers almost always use base 2, while everyday decimal notation uses base 10), and a precision, which is how many digits the mantissa holds.

For example, we can write 0.1 as 1.00×10⁻¹ using base 10 and a precision of 3. Both forms represent the same value; the first is written in fixed-point notation and the second in floating-point notation.
Something to note is that floating-point representations are not unique: 1.00×10⁻¹ and 0.10×10⁰ have the same value even though they look different. A floating-point number whose mantissa has a nonzero leading digit is called a “normalized” figure. If an operation is required to use normalized numbers, each value has exactly one representation. However, requiring normalization means the number 0 cannot be represented at all.
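To make this concrete, here is a minimal Python sketch, using only the standard library, that pulls the mantissa and exponent back out of a stored value. Keep in mind that computers store floats in base 2 rather than base 10, so the exponent below counts powers of two.

```python
import math

x = 0.1
# frexp returns a normalized base-2 mantissa m and exponent e with x == m * 2**e
mantissa, exponent = math.frexp(x)
print(mantissa, exponent)   # 0.8 -3  ->  0.8 * 2**-3 == 0.1 (approximately)

# float.hex() shows the exact bits the machine stores for 0.1
print(x.hex())              # 0x1.999999999999ap-4
```

The hex form reveals that 0.1 cannot be stored exactly in binary, which leads directly to the rounding errors discussed next.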
What Is a Rounding Error in Floating-Point Operations?
Floating-point operations suffer from rounding error: small inaccuracies introduced whenever a value is rounded up or down to fit the format. Because the mantissa holds only a fixed number of digits, some rounding is unavoidable, so the goal is to keep the accumulated error in your calculations as small as possible.
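Here is a quick illustration in plain Python, whose built-in float is double precision; the mismatch in the final digits is the rounding error at work.

```python
import math

a = 0.1 + 0.2
print(a)                      # 0.30000000000000004 -- not exactly 0.3
print(a == 0.3)               # False, because of accumulated rounding error

# The usual mitigation is to compare within a tolerance rather than exactly.
print(math.isclose(a, 0.3))   # True
```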
Precision in Floating-Point Operations
As we’ve mentioned above, floating-point numbers have a base and a precision. When computers use these figures in calculations, they can choose different values for each. The three main precision levels in computing are half-precision, single-precision, and double-precision.
Half-precision floating-point operations use 16 bits to store the figures. These calculations trade accuracy for lower resource usage: they use far less power but are subject to a larger rounding error than higher-precision formats.
Single-precision floating-point operations require 32 bits to store the numbers. These are your standard equations in computing. They represent a good marriage of resource use and accuracy. However, if you need your calculation to be absolutely on-point, this format may not be the best choice.
Double-precision floating-point operations use 64 bits to store the figures. This is the most accurate format in common use in consumer computing. Higher-precision alternatives such as double-extended and quadruple precision do exist, but they are mostly reserved for specialized scientific and supercomputing work, and even double precision runs far more slowly than single precision on most consumer GPUs.
Most programs that use floating-point operations mix all three precision levels, depending on the calculation and how accurate it needs to be. Double-precision operations consume a lot of resources, so it’s hard to sustain speeds in the teraFLOPS when every number takes 64 bits to store.
However, there are times, such as when processing scientific data (e.g., medical calculations), when single precision simply isn’t precise enough. Computers performing those calculations must be appropriately robust, or the results could be wrong.
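As a rough sketch of what those trade-offs look like in practice, the snippet below stores the same value at all three precision levels. It assumes NumPy is installed, since plain Python only exposes double precision.

```python
import numpy as np

value = 1 / 3
print(np.float16(value))   # 0.3333             (half: ~3 decimal digits)
print(np.float32(value))   # 0.33333334         (single: ~7 decimal digits)
print(np.float64(value))   # 0.3333333333333333 (double: ~16 decimal digits)

# Rounding error relative to the double-precision reference shrinks at each step up.
# (The float64 line reads as zero only because the reference is itself a double.)
for dtype in (np.float16, np.float32, np.float64):
    print(dtype.__name__, abs(float(dtype(value)) - value))
```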
What Is a FLOPS?
FLOPS stands for “floating-point operations per second” and is a measure of computing power. FLOPS is both singular and plural, much like the word moose, and the S stands for “second.” That final S is part of the unit, so you cannot drop it and call it a “FLOP.”
FLOPS have a metric prefix that tells you how many operations they represent. The most common amounts are as follows:
| Unit of Measurement | How Many Operations It Represents |
|---|---|
| kiloFLOPS (kFLOPS) | 1,000 floating-point operations per second |
| megaFLOPS (MFLOPS) | 1 million floating-point operations per second |
| gigaFLOPS (GFLOPS) | 1 billion floating-point operations per second |
| teraFLOPS (TFLOPS) | 1 trillion floating-point operations per second |
| petaFLOPS (PFLOPS) | 1 quadrillion floating-point operations per second |
| exaFLOPS (EFLOPS) | 1 quintillion floating-point operations per second |
We use FLOPS to measure the operating capacity, both theoretical and practical, of central and graphics processing units (CPUs and GPUs). Simply put, the more floating-point operations a computer can perform each second, the more power it has. This kind of throughput matters most in scientific computing, analytics, and 3D image processing.
While FLOPS is not the only measure of a computer’s power that matters for most users, it is a simple, effective way to compare relative computing capacity. There are still many other factors to consider when buying a PC or component.
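The theoretical (peak) figure is usually derived from the chip’s specifications rather than measured. Here is a minimal sketch of that arithmetic, using made-up numbers rather than any real product’s specs.

```python
def theoretical_peak_tflops(cores: int, clock_ghz: float, flops_per_cycle: int) -> float:
    """Peak throughput = cores x clock speed x floating-point operations per core per cycle."""
    flops_per_second = cores * clock_ghz * 1e9 * flops_per_cycle
    return flops_per_second / 1e12   # convert to teraFLOPS

# Hypothetical example: 5,000 shader cores at 2.0 GHz doing 2 FLOPs per cycle
# (a fused multiply-add counts as two operations).
print(theoretical_peak_tflops(5_000, 2.0, 2))   # 20.0 TFLOPS
```

Practical throughput on real workloads is almost always lower, because memory bandwidth and other bottlenecks keep the cores from staying fully busy.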
How Do FLOPS Relate to Supercomputers?

FLOPS is the standard statistic for comparing the relative power of supercomputers. Because these machines are used mainly for scientific research, they spend much of their time on floating-point math, so the number of such calculations they can complete per second is exactly the figure that captures their peak performance.
However, the fastest supercomputers have long since passed the teraFLOPS range; they are now measured in petaFLOPS, or quadrillions of floating-point operations per second. A supercomputer’s power is usually quoted as either Rmax or Rpeak: the former is the highest performance it has actually achieved on a benchmark, while the latter is its theoretical peak computing speed.
Fastest Supercomputers in FLOPS
| Supercomputer Name | Cores | Rmax (PFLOPS) | Rpeak (PFLOPS) |
|---|---|---|---|
| Supercomputer Fugaku | 7,630,848 | 442.01 | 537.21 |
| Summit | 2,414,592 | 148.60 | 200.79 |
| Sierra | 1,572,480 | 94.64 | 125.71 |
| Sunway TaihuLight | 10,649,600 | 93.01 | 125.44 |
| Perlmutter | 706,304 | 64.59 | 89.79 |
| Selene | 555,520 | 63.46 | 79.22 |
| Tianhe-2A | 4,981,760 | 61.44 | 100.68 |
| JUWELS Booster Module | 449,280 | 44.12 | 70.98 |
| HPC5 | 669,760 | 35.45 | 51.72 |
| Frontera | 448,448 | 23.52 | 38.75 |
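One useful way to read this table is to divide Rmax by Rpeak: the ratio shows how much of its theoretical peak each machine actually sustained. A small sketch using a few rows from the table above:

```python
# (name, Rmax in PFLOPS, Rpeak in PFLOPS), taken from the table above
systems = [
    ("Supercomputer Fugaku", 442.01, 537.21),
    ("Summit", 148.60, 200.79),
    ("Sunway TaihuLight", 93.01, 125.44),
]

for name, rmax, rpeak in systems:
    print(f"{name}: {rmax / rpeak:.0%} of theoretical peak")
# Fugaku sustains roughly 82% of its peak; Summit and TaihuLight roughly 74%.
```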
How Do FLOPS Relate to Gaming?
3D graphics processing relies on floating-point operations, so a game with 3D graphics performs an enormous number of calculations every second just to put images on the screen. Consequently, the number of operations the graphics card can handle per second directly influences the quality of the game’s visuals.
However, it is essential to remember that a system’s FLOPS is not the only number that matters for games. A massive count of floating-point operations per second does not by itself guarantee the best gaming experience, so it’s crucial to weigh all the factors when building a gaming PC or choosing a console.
Unlike supercomputers, consumer electronics now measure their power in teraFLOPS. Let’s compare the relative power of various graphics cards and consoles in teraFLOPS.
| Graphics Processor | Rpeak (TFLOPS) |
|---|---|
| Xbox Series X GPU | 12.15 |
| PlayStation 5 GPU | 10.29 |
| NVIDIA GeForce RTX 4090 Ti | 95.42 |
| AMD Radeon RX 7900 XTX | 61.42 |
| Intel Arc A770 | 19.66 |
As you can see, console GPUs trail the PC market significantly in raw FLOPS; by these figures, NVIDIA’s current high-end card offers roughly eight to nine times the theoretical throughput of the PlayStation 5 and Xbox Series X GPUs. As we’ve said, though, FLOPS is not a single-number indicator of power, and many factors go into a gaming system’s overall strength. It is, however, an excellent starting point for deciding which components to purchase.