
Mainframe vs. Supercomputer: Yes, There’s a Big Difference

Christopher Tozzi | August 10, 2022

Are mainframes the same thing as supercomputers? Not at all. Ask a person on the street to explain the difference between mainframes and supercomputers, however, and you might hear the two terms conflated.

There is, of course, no official definition of a mainframe, a supercomputer, or, for that matter, most other types of computer. Definitions are in the eye of the beholder.

If I were a descriptive grammarian, I might say that mainframes really are the same thing as supercomputers, because some people describe them that way.

But to imply that these two types of computers are the same thing is to overlook the important and unique roles that both mainframes and supercomputers play in the IT world.

Commodity servers vs. mainframes vs. supercomputers

To understand those unique roles, you must recognize the key distinguishing characteristics of each type of computer.

On the surface, this can be challenging, because there is a wide variety of mainframe and supercomputer hardware out there. It’s not as if all computers with a certain amount of memory or a certain type of processor automatically qualify as either a mainframe or a supercomputer.

Instead of basing the definition solely on hardware profiles, you must take history and use cases into account as well when thinking about the differences between mainframes and supercomputers.

Personally, I like to think about modern infrastructure as involving three major categories of server* hardware:

1. Commodity servers

These are the relatively inexpensive servers that comprise the bulk of data centers today. They entered widespread use in the 1990s.

Commodity servers usually run either a Linux distribution or Windows. They may be clustered together to form very powerful computing environments, but they can also run individually. They may host applications either on bare metal or through virtual machines or containers. Most commodity servers have processors based on the x86 chip architecture, but there are exceptions.
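To make that hardware diversity a little more concrete, here’s a quick Python sketch of my own (not anything from a vendor): it simply asks the operating system what CPU architecture it’s running on. On a typical commodity server it reports x86_64; one of the notable exceptions, an ARM-based server, reports aarch64; and, as we’ll see below, Linux on an IBM Z mainframe reports s390x.

```python
# Minimal sketch: report the CPU architecture of the current machine.
# The architecture strings are real values returned by Linux; the
# one-line descriptions are my own glosses.
import platform

ARCH_NOTES = {
    "x86_64": "typical commodity server (x86)",
    "aarch64": "ARM-based server (one of the exceptions to x86)",
    "s390x": "Linux on an IBM Z mainframe",
}

arch = platform.machine()
print(f"{arch}: {ARCH_NOTES.get(arch, 'some other architecture')}")
```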

2. Mainframes

Mainframes are the powerful computers that have handled mission-critical business workloads for decades. They came into use in the 1950s, long before commodity servers were conceived. Commodity servers have taken over some mainframe workloads in recent decades, but mainframes remain essential in industries like banking and insurance.

Mainframes come in many sizes – modern ones are about the size of a refrigerator, which makes them not that much larger than commodity servers – and can be powered by different families of processors. Some of the mainframes still in use today are decades old; others are brand-new. Most mainframes run either a mainframe-native operating system, like z/OS, or a Linux distribution tailored for mainframes.


3. Supercomputers

Last, but not least, are supercomputers. Again, there’s no hard-and-fast definition for a supercomputer. In general, however, you can define supercomputers in opposition to mainframes and commodity servers: a supercomputer is a computer with so much processing power that mainframes and commodity servers don’t come close to matching it. That power is usually quoted in floating-point operations per second, or FLOPS.

Supercomputers tend to be designed for academic or research purposes, rather than for hosting workloads that you’d find in a typical business. They were built starting in the 1960s, and competition remains intense today to claim the title of most powerful supercomputer. Virtually all supercomputers run a form of Linux.
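For a sense of what “don’t come close” means in numbers: supercomputer rankings, like the well-known TOP500 list, are based on measured FLOPS, and the gap comes mostly from scale – a supercomputer is, roughly, thousands of very fast nodes working in concert. Here’s a back-of-the-envelope Python sketch using the standard theoretical-peak formula (nodes × cores per node × clock speed × FLOPs per core per cycle); every hardware number in it is made up for illustration, not the spec of any real machine.

```python
# Back-of-the-envelope theoretical peak performance, in FLOPS:
#   peak = nodes * cores_per_node * clock_hz * flops_per_core_per_cycle
# All numbers below are illustrative, not the specs of any real machine.

def peak_flops(nodes: int, cores_per_node: int,
               clock_ghz: float, flops_per_cycle: int) -> float:
    """Theoretical peak in floating-point operations per second."""
    return nodes * cores_per_node * clock_ghz * 1e9 * flops_per_cycle

# One hypothetical commodity server: 64 cores at 2.5 GHz, with vector
# units doing 16 double-precision FLOPs per core per cycle.
server = peak_flops(nodes=1, cores_per_node=64,
                    clock_ghz=2.5, flops_per_cycle=16)

# A hypothetical supercomputer: the same node design, but 10,000 of them.
machine = peak_flops(nodes=10_000, cores_per_node=64,
                     clock_ghz=2.5, flops_per_cycle=16)

print(f"one server:    {server / 1e12:,.1f} teraFLOPS")   # ~2.6 teraFLOPS
print(f"supercomputer: {machine / 1e15:,.1f} petaFLOPS")  # ~25.6 petaFLOPS
```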

How mainframes and supercomputers differ

Essentially, then, the differences between mainframes and supercomputers boil down to the fact that mainframes are slightly older, not as stupendously powerful, and more important for business.

Mainframes are also a more important fixture in mainstream computing. Again, unlike supercomputers, which are designed for research purposes, mainframes host business-critical workloads – just as they have been doing since the 1950s. They fill a niche that neither commodity servers, with their considerably smaller pools of resources, nor supercomputers, which are highly specialized (and cost-prohibitive for the average organization), can fill.

Read our whitepaper: Getting the Most Out of Your Mainframe

*Admittedly, even the term server is problematic, because technically any kind of device could be used for server applications. You could host a website on your iPad, for example, if you really wanted to. But when I say server, I am thinking of computers that were designed first and foremost to host large-scale server workloads, like running a database or a distributed application.
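In that spirit, here’s roughly what I mean by “if you really wanted to” – a few lines of standard-library Python that serve the files in the current directory over HTTP. Anything that can run this is, technically, a web server; it just isn’t a server in the purpose-built sense I’m using above.

```python
# Minimal sketch: serve the current directory over HTTP on port 8000.
# Any device that can run Python can, technically, act as a web server,
# which is why "server" is better defined by purpose than by capability.
from http.server import HTTPServer, SimpleHTTPRequestHandler

if __name__ == "__main__":
    httpd = HTTPServer(("0.0.0.0", 8000), SimpleHTTPRequestHandler)
    print("Serving http://0.0.0.0:8000 (Ctrl+C to stop)")
    httpd.serve_forever()
```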