Thursday, March 23, 2017

Supercomputer

Multiple Responses
1.
A supercomputer is a computer with a high level of computational capacity compared to a general-purpose computer. Performance of a supercomputer is measured in floating-point operations per second (FLOPS) instead of millions of instructions per second (MIPS). As of 2015, there are supercomputers that can perform up to quadrillions of FLOPS.
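Since FLOPS is the operative metric, even a rough estimate is easy to take on an ordinary machine. The sketch below is plain Python, so it measures interpreter-bound floating-point throughput, many orders of magnitude below what optimized code, let alone a supercomputer, achieves; it only illustrates what "operations per second" means:

```python
import time

def estimate_flops(n_ops: int = 1_000_000) -> float:
    """Time n_ops floating-point additions and return additions per second."""
    x = 0.0
    start = time.perf_counter()
    for _ in range(n_ops):
        x += 1.0  # one floating-point add per iteration
    elapsed = time.perf_counter() - start
    return n_ops / elapsed

if __name__ == "__main__":
    print(f"~{estimate_flops():,.0f} FLOPS (interpreter-bound)")
```

For scale: a typical CPython loop like this manages tens of megaFLOPS, while a 93-PFLOPS machine sustains about 9.3 × 10^16.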

Supercomputers were introduced in the 1960s, made initially, and for decades primarily, by Seymour Cray at Control Data Corporation (CDC), Cray Research and subsequent companies bearing his name or monogram. While the supercomputers of the 1970s used only a few processors, in the 1990s machines with thousands of processors began to appear and, by the end of the 20th century, massively parallel supercomputers with tens of thousands of "off-the-shelf" processors were the norm.

As of June 2016, the fastest supercomputer in the world is the Sunway TaihuLight, with a Linpack benchmark score of 93 petaFLOPS (PFLOPS), exceeding the previous record holder, Tianhe-2, by around 59 PFLOPS.

Supercomputers play an important role in the field of computational science, and are used for a wide range of computationally intensive tasks in various fields, including quantum mechanics, weather forecasting, climate research, oil and gas exploration, molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), and physical simulations (such as simulations of the early moments of the universe, airplane and spacecraft aerodynamics, the detonation of nuclear weapons, and nuclear fusion). Throughout their history, they have been essential in the field of cryptanalysis.

Systems with massive numbers of processors generally take one of two paths. In one approach (e.g., in distributed computing), hundreds or thousands of discrete computers (e.g., laptops) distributed across a network (e.g., the Internet) devote some or all of their time to solving a common problem; each individual computer (client) receives and completes many small tasks, reporting the results to a central server, which integrates the task results from all the clients into the overall solution. In another approach, thousands of dedicated processors are placed in close proximity to each other (e.g., in a computer cluster); this saves considerable time moving data around and makes it possible for the processors to work together (rather than on separate tasks), for example in mesh and hypercube architectures.
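The first approach, clients completing small tasks and a server integrating the results, can be sketched on a single machine, with a process pool standing in for the network of volunteer computers. This is a simplified single-host analogue of the scatter-gather pattern, not a real distributed system:

```python
from concurrent.futures import ProcessPoolExecutor

def work_unit(chunk: range) -> int:
    """One 'client's' small task: sum the squares of its chunk."""
    return sum(i * i for i in chunk)

def main() -> int:
    # The "server" splits one big problem into many small tasks...
    tasks = [range(start, start + 1000) for start in range(0, 10_000, 1000)]
    with ProcessPoolExecutor() as pool:        # ...the pool plays the "clients"
        partials = pool.map(work_unit, tasks)  # each client completes its task
    return sum(partials)                       # server integrates the results

if __name__ == "__main__":
    print(main())
```

The design point is that tasks are independent, so clients never need to talk to each other, only to the server; that is exactly what makes the Internet-scale version tolerable despite slow links.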

The use of multi-core processors combined with centralization is an emerging trend; one can think of this as a small cluster (the multicore processor in a smartphone, tablet, laptop, etc.) that both depends upon and contributes to the cloud.

2.
A supercomputer is a computer that performs at or near the currently highest operational rate for computers. Traditionally, supercomputers have been used for scientific and engineering applications that must handle very large databases or do a great amount of computation (or both). Although advances like multi-core processors and GPGPUs (general-purpose graphics processing units) have enabled powerful machines for personal use (see: desktop supercomputer, GPU supercomputer), by definition, a supercomputer is exceptional in terms of performance.

At any given time, there are a few well-publicized supercomputers that operate at extremely high speeds relative to all other computers. The term is also sometimes applied to far slower (but still impressively fast) computers. The largest, most powerful supercomputers are really multiple computers that perform parallel processing. In general, there are two parallel processing approaches: symmetric multiprocessing (SMP) and massively parallel processing (MPP).
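The SMP/MPP distinction can be illustrated in miniature: threads sharing one address space stand in for SMP, while separate processes exchanging messages stand in for MPP. This is a single-machine analogue for illustration only, not actual supercomputer code:

```python
import threading
import multiprocessing as mp

def smp_style(n: int = 4) -> int:
    """SMP analogue: threads update one shared value under a lock."""
    total = [0]
    lock = threading.Lock()

    def worker(k: int) -> None:
        with lock:            # shared memory requires coordination
            total[0] += k

    threads = [threading.Thread(target=worker, args=(k,)) for k in range(n)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return total[0]

def _mpp_worker(k: int, queue: "mp.Queue") -> None:
    queue.put(k)              # no shared memory: results travel as messages

def mpp_style(n: int = 4) -> int:
    """MPP analogue: separate processes communicate via a message queue."""
    queue = mp.Queue()
    procs = [mp.Process(target=_mpp_worker, args=(k, queue)) for k in range(n)]
    for p in procs:
        p.start()
    result = sum(queue.get() for _ in range(n))  # drain before joining
    for p in procs:
        p.join()
    return result
```

Both variants compute the same sum; the difference is where the data lives. SMP scales until contention on shared memory bites, which is why the largest machines in the list above are MPP designs.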

As of June 2016, the fastest supercomputer in the world was the Sunway TaihuLight, in the city of Wuxi in China. A few statistics on TaihuLight:
  • 40,960 64-bit RISC processors with 260 cores each.
  • Peak performance of 125 petaflops (quadrillion floating-point operations per second).
  • 32 GB DDR3 memory per compute node, 1.3 PB memory in total.
  • Linux-based Sunway Raise operating system (OS).
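The peak figure is consistent with the node count. Assuming the SW26010 processor's published 1.45 GHz clock and 8 double-precision flops per core per cycle (both figures come from outside this article), the arithmetic roughly reproduces it:

```python
nodes = 40_960            # compute nodes, one SW26010 processor each
cores_per_node = 260
clock_hz = 1.45e9         # assumed SW26010 clock rate
flops_per_cycle = 8       # assumed double-precision flops per core per cycle

peak = nodes * cores_per_node * clock_hz * flops_per_cycle
print(f"{peak / 1e15:.1f} PFLOPS")  # ~123.5 PFLOPS, close to the quoted 125
```

The gap between this theoretical peak and the 93 PFLOPS Linpack score is normal: Rmax (measured) is always below Rpeak (theoretical).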

Notable supercomputers throughout history:
The first commercially successful supercomputer, the CDC (Control Data Corporation) 6600, was designed by Seymour Cray. Released in 1964, the CDC 6600 had a single CPU and cost $8 million (the equivalent of about $60 million today). It could handle three million floating-point operations per second (flops).

Cray went on to found a supercomputer company under his own name in 1972. Although the company has changed hands a number of times, it is still in operation. In September 2008, Cray and Microsoft launched the CX1, a $25,000 personal supercomputer aimed at markets such as aerospace, automotive, academic, financial services and life sciences.

IBM has been a keen competitor. The company's Roadrunner, once the top-ranked supercomputer, was twice as fast as IBM's Blue Gene and six times as fast as any other supercomputer at the time. IBM's Watson is famous for having used cognitive computing to beat champion Ken Jennings on Jeopardy!, a popular quiz show.

Top supercomputers of recent years:

Year   Supercomputer        Peak speed (Rmax)             Location
2016   Sunway TaihuLight    93.01 PFLOPS                  Wuxi, China
2013   NUDT Tianhe-2        33.86 PFLOPS                  Guangzhou, China
2012   Cray Titan           17.59 PFLOPS                  Oak Ridge, U.S.
2012   IBM Sequoia          17.17 PFLOPS                  Livermore, U.S.
2011   Fujitsu K computer   10.51 PFLOPS                  Kobe, Japan
2010   Tianhe-IA            2.566 PFLOPS                  Tianjin, China
2009   Cray Jaguar          1.759 PFLOPS                  Oak Ridge, U.S.
2008   IBM Roadrunner       1.026 PFLOPS (later 1.105)    Los Alamos, U.S.
In the United States, some supercomputer centers are interconnected on an Internet backbone known as vBNS or NSFNet. This network is the foundation for an evolving network infrastructure known as the National Technology Grid. Internet2 is a university-led project that is part of this initiative.

At the lower end of supercomputing, clustering takes more of a build-it-yourself approach to supercomputing. The Beowulf Project offers guidance on how to put together a number of off-the-shelf personal computer processors, using Linux operating systems, and interconnecting the processors with Fast Ethernet. Applications must be written to manage the parallel processing.

3.
The fastest type of computer. Supercomputers are very expensive and are employed for specialized applications that require immense amounts of mathematical calculation. For example, weather forecasting requires a supercomputer. Other uses of supercomputers include animated graphics, fluid dynamics calculations, nuclear energy research, and petroleum exploration.

The chief difference between a supercomputer and a mainframe is that a supercomputer channels all its power into executing a few programs as fast as possible, whereas a mainframe uses its power to execute many programs concurrently.

4.
Extremely fast, data-processing-oriented computer whose number-crunching power is (presently) measured in hundreds of billions of floating-point operations per second (hundreds of gigaflops). Supercomputers rely on parallel-processing technology and run only a few, very complex programs, modeling economic behavior, nuclear reactions, meteorological and neurological phenomena, etc. The first supercomputer (Cray-1) was made in 1976 by the US engineer Seymour Cray (1925-1996).
