A supercomputer is the most powerful and fastest type of computer in existence at a given time. These machines are designed to process huge amounts of information quickly and are dedicated to a specific task or application. Their use lies beyond personal computing; instead, they are employed in tasks such as:
1. Searching for oil fields using large seismic databases.
2. The study and prediction of tornadoes.
3. The study and prediction of weather anywhere in the world.
4. The development of models and designs for aircraft, and flight simulators.
It should be added that supercomputers are a relatively new technology, so their use is not widespread and is sensitive to change. For this reason their price is very high, exceeding $30 million, and only a small number are produced each year.
Concept
Supercomputers are the most powerful and fastest type of computer in existence at a given time. They are large, the biggest among their peers. They can process huge amounts of information quickly and execute millions of instructions per second; they are intended for a specific task and have a very large storage capacity. They are also the most expensive, with a cost that can exceed $30 million. Because of their high manufacturing cost, very few are built in a year; some are manufactured only on request.
They have special temperature control to dissipate the heat that some components can generate. A supercomputer acts as an arbiter of all its applications and controls access to all files, as well as the input and output operations. Users turn to the organization's central computer when they require processing support.
They are designed as multiprocessing systems; the CPU is the processing center and can support thousands of online users. The number of processors a supercomputer can have depends mainly on the model, ranging from about 16 up to 512 (as in the NEC SX-4 of 1997) and more. Among the machines belonging to the supercomputer class are the Cray-1, Cyber, Fujitsu, etc.
Some History
- MANCHESTER MARK I (1948)
This first British supercomputer laid the foundations for many concepts still in use today.
In modern terms it had a RAM (random access memory) of just 32 positions or 'words'. Each word consisted of 32 bits (binary digits), which means the machine had a total of 1024 bits of memory.
Its RAM technology was based on the cathode ray tube (CRT). CRTs were used to store bits of data as charged areas of phosphor on the screen, appearing as a series of glowing dots. The CRT's electron beam could efficiently manipulate this charge to write a 1 or a 0 and read it back later as required.
In the mid-1950s Britain was behind the U.S. in the production of high-performance computers. In the fall of 1956 Tom Kilburn (co-designer of the Manchester Mark I) initiated an effort known as MUSE (from "microsecond engine").
The design specifications included a target speed approaching one instruction per microsecond and the ability to attach a large number of peripherals of various types. They also required more immediate-access storage capacity than anything available at the time.
Special techniques were developed that eventually included what are now known as multiprogramming, scheduling, spooling, interrupts, pipelining, interleaved storage, autonomous transfer units, paging and virtual storage, techniques that were not yet established.
By 1959 the computer had been renamed the Atlas, and it was subsequently developed as a joint venture between Tom Kilburn's team at Manchester University and the Ferranti company. Atlas was inaugurated on December 7, 1962, and was considered at the time to be the most powerful computer in the world. It was 80 times more powerful than the Meg/Mercury and 2400 times more powerful than the Mark I.
The first IBM computer at Daresbury, an IBM 1800, arrived in June 1966 and acted as a control and data-transfer computer for the NINA synchrotron, then the main experimental facility. It was quickly followed by the first IBM mainframe at Daresbury, the IBM 360/50, which entered service in July 1966. This was replaced by an IBM 360/65 in November 1968.
During the early years the main task was to provide computing power to the laboratory's High Energy Physics groups. Computing was very different in those days. The usual way to give the computer work was on punched cards (although some stalwarts still insisted on 5-hole paper tape). Typically one prepared a job on punched cards and left the deck in a tray; an operator would then load the cards and return them along with the line-printer output that had been produced.
Loading and unloading time was measured in tens of minutes at least. The mean time between failures towards the end of the 1960s was about a day. However, these computer failures went largely 'unnoticed' by users, who were simply waiting for their deck to reappear and registered them only as a slight delay in turnaround. The NAS/7000 (an IBM 'clone') was installed in June 1981.
This gave a huge increase in power and accuracy compared to previous systems.
The Cray-1 was the first "modern" supercomputer.
One of the reasons the Cray-1 was so successful was that it could perform more than one hundred million arithmetic operations per second (100 Mflop/s).
If, following a conventional approach, you tried to match that speed using the PCs of the day, you would need to connect 200 of them, or you could simply buy 33 Sun4s.
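Working backwards from those numbers gives the implied per-machine throughput (a rough inference from the figures quoted above, not a benchmark):

$$\frac{100\ \text{Mflop/s}}{200\ \text{PCs}} = 0.5\ \text{Mflop/s per PC}, \qquad \frac{100\ \text{Mflop/s}}{33\ \text{Sun4s}} \approx 3\ \text{Mflop/s per Sun4}$$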
CONVEX C-220 AND THE UNIX REVOLUTION
The arrival of UNIX qualitatively changed the way scientists approached computing problems. For the first time there was a flexible way of providing computing power to users: responsive to a rapidly changing hardware market and, crucially, to the changing requirements of scientific applications. New components could simply be added, or power increased, as needed.
Intel
The 64-node Intel iPSC/860 is called RX. Each node has a 40 MHz clock and 16 Mbytes of memory. The direct-connect hardware allows node-to-node data transfer at 2.8 Mbytes/second. There are 12 Gbytes of locally attached disk, and Ethernet connections to a Sun-670MP workstation provide user access.
A maximum single-node performance of 40 Mflops gives a total of more than 2.5 Gflops for the whole machine. Software to make programming easier includes Fortran and C compilers.
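The quoted aggregate follows directly from the per-node peak:

$$64\ \text{nodes} \times 40\ \text{Mflops/node} = 2560\ \text{Mflops} \approx 2.5\ \text{Gflops}$$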
In 1995
A set of 26 workstations running under the UNIX system, with software enabling them to run independently or to work together, transferring data over a high-speed switch.
The IBM SP2 Computer
It consists of 24 P2SC ("Super Chip") nodes, plus another two older wide-node processors, housed in two racks. Each node has a 120 MHz clock and 128 Mbytes of memory. Two new High Performance Switches (TB3) are used to connect the nodes together. Data storage consists of 40 Gbytes of locally attached fast disks, with Ethernet and FDDI networks for user access.
A maximum single-node performance of 480 Mflops gives a total of over 12 Gflops for the whole machine.
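The 24 P2SC nodes alone account for most of that figure; presumably the two older wide nodes supply the remainder:

$$24\ \text{nodes} \times 480\ \text{Mflops/node} = 11.52\ \text{Gflops}$$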
A PowerPC RS/6000 workstation is connected to the SP2 for hardware and software monitoring and management.
SOVIET SUPERCOMPUTERS
Just as there was a space race and an arms race, it should surprise no one that there was also a supercomputer race. The Soviet Union's high-performance computers were, of course, kept secret. The information here is, sadly, probably still fairly vague.
- THE BESM FAMILY
A series of "high-performance" numerical computers.
The BESM-6 was designed in 1965 by a group of engineers working at the S.A. Lebedev Institute of Precision Mechanics and Computer Engineering (ITMiVT in Russian).
Production started in 1967 at the SAM plant (SAM stands for "Computing-Analytical Machines") in Moscow. The basic configuration included the CPU, 192 KB of core memory, magnetic drums, a proprietary magnetic tape drive, teletype machines, typewriters (with parallel interface), alphanumeric printers, and punched-card and paper-tape readers and punches. About 350 units had been built by the early 1980s. Later configurations included standard 1/2-inch tape drives, IBM-clone magnetic disk drives, serial VDUs, plotters, etc., mostly imported or cloned from the original hardware.
Today the design of supercomputers is based on four important technologies:
- Vector register technology, created by Seymour Cray, the father of supercomputing, who invented and patented several technologies that led to the creation of ultra-fast machines. This technology allows many arithmetic operations to be executed in parallel (see the sketch after this list).
- The system known as M.P.P., from the initials of Massively Parallel Processors, which involves the use of hundreds and sometimes thousands of tightly coupled microprocessors.
- Distributed computing technology: clusters of general-purpose, relatively inexpensive computers interconnected by local networks with low latency and high bandwidth.
- Quasi-supercomputing: recently, with the popularization of the Internet, distributed computing projects have emerged in which special software exploits the idle time of thousands of personal computers to perform large tasks at low cost. Unlike the other three categories, the software that runs on these platforms must be able to divide the work into independent calculation blocks that are neither reassembled nor communicated for several hours at a time. Several well-known projects stand out in this category.
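To make the first two models concrete, here is a minimal C sketch (illustrative only, not code for any machine named above) of a SAXPY loop, the archetypal operation for vector hardware: every iteration is independent, so a vectorizing compiler can pack several iterations into vector registers, and the OpenMP pragma shows how the same loop would be divided across the many processors of an M.P.P.-style system. The array size N is an arbitrary value chosen for the example.

```c
#include <stdio.h>

#define N 1000000  /* arbitrary problem size chosen for the example */

/* SAXPY: y[i] = a*x[i] + y[i]. Every iteration is independent of the
 * others, so a vectorizing compiler can pack several iterations into
 * vector registers, and the OpenMP pragma (honored when compiled with
 * -fopenmp) spreads the iterations across many processors. */
static void saxpy(float a, const float *x, float *y, int n)
{
    #pragma omp parallel for
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    static float x[N], y[N];  /* static: too large for the stack */

    for (int i = 0; i < N; i++) {
        x[i] = 1.0f;
        y[i] = 2.0f;
    }

    saxpy(3.0f, x, y, N);

    printf("y[0] = %.1f\n", y[0]);  /* expected: 5.0 */
    return 0;
}
```

Compiled with something like `gcc -O3 -fopenmp saxpy.c`, the same source exercises both forms of parallelism; without -fopenmp the pragma is simply ignored and the loop remains a candidate for automatic vectorization.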
HIGHLIGHTS
- Processing speed: Billions of floating-point operations per second.
- Simultaneous users: Up to thousands, in a large network environment.
- Size: They require special facilities and industrial air conditioning.
- Ease of use: For specialists only.
- Usual clients: Large research centers.
- Social penetration: Practically nil.
- Social impact: Almost zero for the general public, yet without supercomputers it would be impossible to do things like predict the weather a decade ahead or solve very complex calculations that cannot be done by hand.
- Installed base: Fewer than a thousand worldwide.
- Cost: Up to tens of millions of dollars each.
Conclusion
Supercomputer: a computer with calculation capabilities far beyond those common for machines of the same period of manufacture.
They are very expensive, so their use is limited to military agencies, governments, and large businesses. They usually run scientific applications, especially simulations of real-life processes.
Some well-known supercomputers are Blue Gene, the Cray machines designed by Seymour Cray, Deep Blue, the Earth Simulator, MareNostrum, etc.
Supercomputers are usually designed according to one of the following four models:
* Vector registers.
* M.P.P. (Massively Parallel Processors) systems.
* Distributed computing technology.
* Quasi-supercomputing.
The most common uses for supercomputers include weather forecasting, complex 3D animation, fluid-dynamics calculations, nuclear research, oil exploration, etc.