What is the difference between mainframes and supercomputers?

April 6, 2020

What do you picture when you hear the word mainframe or supercomputer? Most people think of a very similar picture, which is not surprising if you look at the definitions of the words. According to the Oxford Dictionary, a mainframe is:

main·frame: /‘mān,frām/ (noun)
a large high-speed computer, especially one supporting numerous workstations or peripherals

And a supercomputer is:

su·per·com·put·er: /ˈso͞opərkəmˌpyo͞odər/ (noun)
a particularly powerful mainframe computer

So, based on the Oxford definition a supercomputer is just a powerful mainframe computer. But is this actually the case?

So what is the difference?

Mainframes are powerful computers that have handled mission-critical business workloads for decades; they have been in use since the 1950s. Mainframes have been replaced by servers for many applications, but they are still common in industries like banking, telecommunications, and retail. Many newer mainframes, like the IBM z15, are only about the size of a standard server rack.

A supercomputer is a multi-node system that uses parallel processing to run a program or simulation at extremely fast speeds. Supercomputers can be as small as two computers (even laptops) or as big as a warehouse, or bigger.
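The multi-node, parallel-processing idea can be sketched on a single machine with Python's standard library: split one large computation into chunks and hand each chunk to a separate worker process. This is only an illustrative sketch; real supercomputers coordinate thousands of nodes with frameworks such as MPI, and the function names here (`partial_sum`, `parallel_sum_squares`) are invented for the example.

```python
# Sketch of parallel processing: divide a big sum into chunks and
# compute the chunks simultaneously in separate worker processes.
from multiprocessing import Pool


def partial_sum(bounds):
    """Sum the squares of the integers in [start, stop)."""
    start, stop = bounds
    return sum(i * i for i in range(start, stop))


def parallel_sum_squares(n, workers=4):
    """Sum i*i for i in [0, n) using `workers` parallel processes."""
    step = n // workers
    # The last chunk absorbs any remainder so the whole range is covered.
    bounds = [(w * step, n if w == workers - 1 else (w + 1) * step)
              for w in range(workers)]
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, bounds))


if __name__ == "__main__":
    # Same answer as the serial sum, but the chunks run in parallel.
    print(parallel_sum_squares(1_000_000))
```

A supercomputer does the same kind of decomposition at vastly larger scale, with the chunks spread across many physical nodes connected by a high-speed interconnect.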

According to GeeksforGeeks, the main differences between supercomputers and mainframes are:

Supercomputer: used for large and complex mathematical computations.
Mainframe: used as storage for large databases and to serve the maximum number of users at a time.

Supercomputer: some of the fastest operate at hundreds of quadrillions of floating-point operations per second (FLOPS).
Mainframe: generally operates at only tens of millions of these operations per second.

Both supercomputers and mainframes will generally run a Linux-based OS.
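To make the FLOPS figures above concrete, here is a back-of-envelope sketch (not a real benchmark like the LINPACK test used to rank supercomputers): count the floating-point operations a loop performs and divide by the wall-clock time it took. The function name `estimate_flops` is invented for the example.

```python
# Rough FLOPS estimate: time a loop of floating-point work and divide
# the operation count by the elapsed wall-clock time.
import time


def estimate_flops(n=1_000_000):
    """Estimate floating-point operations per second for this interpreter."""
    x = 1.0
    start = time.perf_counter()
    for _ in range(n):
        x = x * 1.0000001 + 0.5  # one multiply + one add = 2 FLOPs
    elapsed = time.perf_counter() - start
    return (2 * n) / elapsed


if __name__ == "__main__":
    print(f"~{estimate_flops() / 1e6:.0f} MFLOPS in pure Python")
```

Interpreted Python is orders of magnitude slower than the optimized numerical kernels that supercomputer benchmarks run, so treat the number this prints as an illustration of the unit, not of the hardware's capability.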

What are supercomputers used for?

Jack Dongarra, a distinguished Professor of Computer Science at the University of Tennessee, gives us a good idea of what supercomputers actually do.

“Say I want to understand what happens when two galaxies collide. I can’t really do that experiment. I can’t take two galaxies and collide them. So I have to build a model and run it on a computer. Or in the old days, when they designed a car, they would take that car and crash it into a wall to see how well it stood up to the impact. Well, that’s pretty expensive and time consuming. Today, we don’t do that very often; we build a computer model with all the physics and crash it into a simulated wall to understand where the weak points are.” Jack Dongarra, Professor of Computer Science at the University of Tennessee

Supercomputers allow scientists and engineers to run complex simulations in a (relatively) short time. This means less time in the field conducting experiments or tests that may be extremely expensive or dangerous.

In fact, the world’s second-fastest supercomputer, running at about 95 petaFLOPS at Lawrence Livermore National Laboratory, is air-gapped, meaning it is not connected to any external network (including the Internet), and runs classified simulations of the United States nuclear stockpile.

What about mainframes?

A mainframe may be the symbol of a bygone era to many, but most people have used or at least interfaced with a mainframe at one point or another. If you have ever gotten cash back at a retailer with a debit card or used an ATM, then you have used a mainframe.

IBM says, “Today, mainframe computers play a central role in the daily operations of most of the world’s largest corporations. While other forms of computing are used extensively in business in various capacities, the mainframe occupies a coveted place in today’s e-business environment. In banking, finance, health care, insurance, utilities, government, and a multitude of other public and private enterprises, the mainframe computer continues to be the foundation of modern business.”

So, supercomputers and mainframes are both considered high-performance computers. However, they can have drastically different uses – from banking and retail transactions on mainframes to nuclear weapon and weather simulations on supercomputers. Access to this amount of computing power allows scientists, engineers, and businesses not only to save time and money, but also to gain peace of mind that whatever they are running will finish at amazing speeds.


About the author

Gregory Manley

Gregory Manley is a sophomore at Colorado School of Mines, where he is majoring in Computer Science with a minor in Mining Engineering. He is the owner of iTech News and a contributor to Section’s Engineering Education Content Program. His management of iTech News has led him to work with many brands writing technology-focused articles.

This article was contributed by a student member of Section's Engineering Education Program. Please report any errors or inaccuracies to enged@section.io.