In the world of technology, sometimes the greatest wonders are hidden deep inside steel boxes, where tens of thousands of processors perform the magic of computing. We’re talking about supercomputers, creations of the human mind that make the boldest scientific discoveries and technological advancements possible.

In this article, we explore what a supercomputer is, its power, and its future.

What is a supercomputer?

A supercomputer is a powerful machine capable of performing the most complex computing tasks, from weather forecasting to scientific simulations and artificial intelligence research. It performs trillions, and in the newest machines quintillions, of operations per second.

Supercomputers are built with specialized hardware and software configurations, enabling them to handle parallel data processing and provide unprecedented performance.

Difference between ordinary computers and supercomputers

Unlike regular computers, supercomputers contain not just one but thousands or even millions of processors, or nodes, that work simultaneously to solve complex problems. They also have specialized architectures and interconnect technologies that allow efficient data exchange between nodes, ensuring smooth collaborative work on tasks.

Tasks that would take a regular PC a week to complete can be done by a supercomputer in hours. Its primary goal is to perform the maximum number of computations in the minimum amount of time.
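
To make the contrast concrete, here is a minimal sketch in Python, an illustration of the idea on a single machine rather than how real supercomputers are programmed: a workload (a toy sum of squares, chosen just for the example) is split into chunks that worker processes compute simultaneously, much as a supercomputer splits a job across nodes.

```python
from multiprocessing import Pool

def simulate_chunk(cell_range):
    """Stand-in for a heavy computation over one slice of the problem."""
    start, end = cell_range
    return sum(i * i for i in range(start, end))

if __name__ == "__main__":
    total_cells = 10_000_000
    workers = 8  # a supercomputer would use thousands of nodes instead
    step = total_cells // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]

    with Pool(workers) as pool:
        # Each chunk runs in its own process, in parallel.
        partial_results = pool.map(simulate_chunk, chunks)

    # Combine the partial results, the way nodes exchange data over an interconnect.
    print(sum(partial_results))
```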

The history of supercomputers

First machines

In 1964, Control Data Corporation (CDC) released the CDC 6600, designed by Seymour Cray. It performed around three million operations per second. The machine was used for scientific research and played a crucial role in the development of weather forecasting, nuclear physics, and aerospace engineering. The CDC 6600 laid the foundation for modern supercomputers and inspired other companies to develop their own machines.

[Image: the Cray-1 supercomputer, designed by Seymour Cray]

Vector supercomputers

In the 1970s and 1980s, vector supercomputers became popular. These machines were designed to handle complex mathematical operations and were used in fields such as physics, chemistry, and engineering. The Cray-1 was one of the most famous vector supercomputers of the era. It had a peak performance of about 250 megaflops and a unique design that set it apart from other computers. The Cray-1 was used by many institutions, including the National Center for Atmospheric Research and Los Alamos National Laboratory.

[Image: the Connection Machine, which grew out of Danny Hillis's research at MIT]

Massively parallel supercomputers

Massively parallel supercomputers emerged in the 1980s and 1990s. These machines were designed to perform many computations simultaneously and were used for tasks such as weather forecasting, genome sequencing, and new drug discovery. The Connection Machine was one of the most popular massively parallel supercomputers of the time. It had up to 65,536 processors, and its later CM-5 configurations reached a peak of roughly 131 gigaflops. The Connection Machine was used by many organizations, including the National Security Agency and the National Center for Supercomputing Applications.

[Image: Frontier, the world's first exascale supercomputer]

TOP500 supercomputers of today

The TOP500 list has ranked the world's most powerful supercomputers since 1993 and is updated twice a year. Currently, the most powerful supercomputer in the world is Frontier, with a performance of 1.194 exaflops. Aurora is in second place (585 petaflops), and Microsoft's Eagle is third (561 petaflops). Modern supercomputers are essential for accelerating scientific research and pushing the boundaries of what's possible.

How supercomputer performance is measured

Supercomputers’ power is measured in FLOPS (floating-point operations per second).

Initially, performance was measured in megaflops, then gigaflops, and later teraflops. Since 2008, petaflops have been used. Now, there are even machines with exaflop capabilities.

Supercomputer performance

1 megaflop = 1,000,000 FLOPS
1 gigaflop = 1,000,000,000 FLOPS
1 teraflop = 1,000,000,000,000 FLOPS
1 petaflop = 1,000,000,000,000,000 FLOPS
1 exaflop = 1,000,000,000,000,000,000 FLOPS
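
To get a feel for these numbers, here is a small back-of-the-envelope sketch in Python. The workload size and the laptop rate are illustrative assumptions; Frontier's rate comes from the TOP500 figure above.

```python
# Hypothetical workload: 10^18 floating-point operations (an exaflop-scale job).
WORKLOAD_FLOP = 1e18

# Peak rates in FLOPS; the laptop figure is an assumed, rough order of magnitude.
machines = {
    "typical laptop (~100 gigaflops)": 100e9,
    "petaflop-class supercomputer": 1e15,
    "Frontier (~1.194 exaflops)": 1.194e18,
}

for name, flops in machines.items():
    seconds = WORKLOAD_FLOP / flops  # runtime at peak rate, ignoring all overheads
    print(f"{name}: {seconds:,.0f} s (~{seconds / 3600:.2f} h)")
```

At peak rates, the same job that would occupy the assumed laptop for months finishes on Frontier in about a second, which is exactly the ratio the table above encodes.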

Types of supercomputers

Supercomputers are divided into two major categories:

General-purpose supercomputers. They come in three subcategories:

  • With vector processing. They use vector or array processors, in contrast to scalar processors, which can process only one element at a time. A vector unit applies a single instruction to a whole array of data, which makes it efficient at executing math operations on massive datasets (see the sketch after this list).
  • Cluster. A group of interconnected computers that function as a single system. These may be parallel clusters, director-based clusters, two-node clusters, or multi-node clusters. A well-known example is a cluster of Linux machines running open-source software for parallel computing. Grid Engine by Sun Microsystems and OpenSSI are also examples of cluster computing.
  • Commodity. These consist of many standard, off-the-shelf computers connected by high-speed, low-latency local networks.
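
The vector-processing idea can be demonstrated even on an ordinary PC. The sketch below, which assumes NumPy is installed, contrasts a scalar-style loop that touches one element at a time with a single vectorized operation over the whole array; NumPy here is only an analogy for hardware vector units, not a claim about how vector supercomputers were built.

```python
import time
import numpy as np

a = np.random.rand(10_000_000)
b = np.random.rand(10_000_000)

# Scalar style: one element at a time, like a scalar processor.
t0 = time.perf_counter()
out = np.empty_like(a)
for i in range(len(a)):
    out[i] = a[i] * b[i]
scalar_time = time.perf_counter() - t0

# Vector style: one operation applied to the whole array at once.
t0 = time.perf_counter()
out_vec = a * b
vector_time = time.perf_counter() - t0

print(f"scalar loop: {scalar_time:.2f} s, vectorized: {vector_time:.4f} s")
```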

Special-purpose supercomputers

These are designed for specific tasks or purposes. They often use application-specific integrated circuits (ASICs), which provide exceptional performance in their narrow domain. Notable examples include Belle, Deep Blue, and Hydra, designed for playing chess, as well as Gravity Pipe for astrophysics and MDGRAPE-3 for protein structure calculations and molecular dynamics.

Reasons to use supercomputers

  • Weather and climate research. To predict the impact of extreme weather events and understand climate patterns.
  • Oil and gas exploration. To process vast amounts of geophysical seismic data for identifying and developing oil reserves.
  • Aviation and automotive industries. For developing flight simulators and simulated automobile environments, and for aerodynamic modeling to minimize the drag coefficient.
  • Nuclear research. For designing nuclear fusion reactors and creating virtual environments to simulate nuclear explosions and ballistic weapons.
  • Medical research. To develop new drugs, cancer treatments, and therapies for rare genetic diseases, to support COVID-19 research, and to study the genesis and evolution of epidemics.
  • Real-time application development. To support the operation of online games during tournaments and create new games.
  • High-performance computing (HPC). It allows large-scale computations to be synchronized across multiple networked supercomputers. As a result, complex computations over massive datasets finish much faster than they would on regular computers (see the MPI sketch after this list).
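
In practice, HPC jobs usually coordinate through a message-passing library such as MPI. Below is a minimal sketch using the mpi4py bindings, assuming they are installed; the numerical integration is a made-up workload chosen for brevity. Run it with, for example, mpiexec -n 4 python integrate.py (the file name is arbitrary).

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's id
size = comm.Get_size()   # total number of processes

# Each rank integrates f(x) = x^2 over its own interleaved slice of [0, 1].
n_steps = 1_000_000
h = 1.0 / n_steps
local_sum = sum((i * h) ** 2 * h for i in range(rank, n_steps, size))

# Combine the partial sums on rank 0, like nodes exchanging results.
total = comm.reduce(local_sum, op=MPI.SUM, root=0)

if rank == 0:
    print(f"integral of x^2 over [0, 1] ~ {total:.6f}")  # exact value is 1/3
```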

The future of supercomputers

With the exaflop barrier now broken, attention is turning to what exascale machines make possible. Exaflop supercomputers are expected to enable highly accurate models of the human brain, including its neurons and synapses, which would have a profound impact on neuromorphic computing.