Tuesday, September 1, 2015

Supercomputers: learn the history of these super machines.



A supercomputer is a computer with very high processing speed and large memory capacity. It has applications in research areas where a large amount of processing is required, such as military research, science, chemistry and medicine. Supercomputers are used for very complex calculations and intensive tasks, such as problems involving quantum physics and mechanics, weather forecasting, climate research, molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers and crystals) and physical simulations, such as simulating airplanes in wind tunnels, simulating the detonation of nuclear weapons and research on nuclear fusion.

The first supercomputers were created in the 1960s by Seymour Cray. Cray founded his own company, Cray Research, in 1970, and it dominated the supercomputing market for 25 years (1965-1990).

In the 1970s the University of Illinois, together with the Burroughs Corporation, built the ILLIAC IV, a supercomputer that became famous for its sheer size.

In the early 1980s came the Cray X-MP, which reached 1 gigaflop, a breakthrough for its time.

Today, supercomputers are built from the same kinds of components found in the computers we use at home, the difference being that thousands of these components work together as one. Instead of a single IDE or SATA hard drive, they use hundreds of drives working together as one huge disk, allowing simultaneous reading and writing of information at very high transfer rates, a system very similar to RAID technology.
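As a minimal sketch of the striping idea behind that RAID-like arrangement, the snippet below splits data into fixed-size chunks, distributes them round-robin across several simulated "disks" (plain Python lists standing in for real drives), and reassembles the original stream. The chunk size and disk count are arbitrary assumptions for illustration.

    # Minimal sketch of RAID 0-style striping: data is split into fixed-size
    # chunks and distributed round-robin across several simulated "disks".

    CHUNK = 4          # stripe unit size in bytes (illustrative value)
    N_DISKS = 4        # number of simulated disks (illustrative value)

    def stripe(data: bytes, n_disks: int = N_DISKS, chunk: int = CHUNK):
        """Distribute data round-robin across n_disks simulated disks."""
        disks = [[] for _ in range(n_disks)]
        for i in range(0, len(data), chunk):
            disks[(i // chunk) % n_disks].append(data[i:i + chunk])
        return disks

    def unstripe(disks) -> bytes:
        """Reassemble the original byte stream from the striped disks."""
        out, i = [], 0
        while True:
            disk = disks[i % len(disks)]
            idx = i // len(disks)
            if idx >= len(disk):          # first empty slot marks the end
                break
            out.append(disk[idx])
            i += 1
        return b"".join(out)

    data = b"supercomputers read and write many disks at once"
    disks = stripe(data)
    assert unstripe(disks) == data        # the stream is fully recoverable

In a real array the chunks on different disks can be read or written at the same time, which is where the high aggregate transfer rate comes from.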

These machines are organized into modules called nodes; each node consists of one to four processors, a certain amount of RAM and cache. All nodes are interconnected by a network interface, making them work together as a single system with processing power thousands of times greater than that of our home computers.
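A minimal sketch of how a job is split across nodes, using ordinary worker processes as stand-ins for nodes: each one sums a slice of a large range, and the partial results are combined, much as a real cluster combines them over its interconnect. The problem size and the number of "nodes" are illustrative assumptions.

    # Each "node" (a worker process here) sums one slice of a large range;
    # the partial results are then combined into a single answer.

    from multiprocessing import Pool

    def partial_sum(bounds):
        """Work assigned to one node: sum the integers in [start, stop)."""
        start, stop = bounds
        return sum(range(start, stop))

    if __name__ == "__main__":
        n, n_nodes = 10_000_000, 4              # illustrative sizes
        step = n // n_nodes
        slices = [(i * step, (i + 1) * step) for i in range(n_nodes)]
        slices[-1] = (slices[-1][0], n)         # last node takes any remainder

        with Pool(processes=n_nodes) as pool:
            total = sum(pool.map(partial_sum, slices))

        assert total == n * (n - 1) // 2        # matches the closed-form sum
        print(total)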

Science fiction addressed the subject in a novel called "Colossus".

Today, supercomputers are manufactured by companies such as Supermicro, NEC, Sun (bought by Oracle in 2010), IBM, HP and Apple Inc., among others. The updated list of the 500 most powerful computer systems is maintained by the TOP500 project.


Characteristics:

The main features of supercomputers are:


  • Processing speed: trillions of floating-point operations per second (TFlops). Looking at the Top500 list of November 2011, manufacturers tend to market as supercomputers the systems with more than 80 TFlops of processing power (1st to 68th positions) and as servers those with between 25 and 80 TFlops (67th to 500th positions); a rough way to estimate peak performance is sketched just after this list;
  • Size: they require special facilities and cooling systems;
  • Difficulty of use: operated by specialists;
  • Usual customers: large research centers;
  • Social penetration: practically zero;
  • Social impact: very important in research, since high-speed calculation allows, for example, analysis of the genome, pi, complex numbers, and calculations for physical problems that require a very small margin of error, etc.;
  • Installed base: at least one thousand worldwide;
  • Cost: currently (2010) up to hundreds of millions of dollars each (around US$225 million for a Cray XT5);
  • Architectures: from parallel vector processors (PVP) to networks of workstations (NOW), as described in the sections below.
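As a rough illustration of what a figure like 80 TFlops means, the back-of-the-envelope formula below multiplies node count, cores per node, clock frequency and floating-point operations per cycle. All of the numbers are illustrative assumptions, not figures taken from this post.

    # Theoretical peak = nodes x cores per node x clock (Hz) x FLOPs per cycle.
    # Every figure below is an illustrative assumption.

    nodes = 18_688            # e.g. a large MPP-style machine
    cores_per_node = 16
    clock_hz = 2.2e9          # 2.2 GHz
    flops_per_cycle = 8       # e.g. wide SIMD/vector units

    peak_flops = nodes * cores_per_node * clock_hz * flops_per_cycle
    print(f"Theoretical peak: {peak_flops / 1e12:.1f} TFlops")
    # Anything above ~80 TFlops would fall in the 'supercomputer' range above.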



                            Titan: the fastest supercomputer in the world


Processor types:

Parallel vector processor (PVP)


Parallel vector processors are systems composed of a few, but very powerful, processors. The interconnection is generally done through a high-throughput switching matrix (crossbar). Memory is shared, and the systems can be classified as UMA multiprocessors. They usually do not use cache memory; this role is filled instead by a large number of vector registers and an instruction buffer. Examples: Cray C-90 (up to 16 processors), Cray T-90 (up to 32 processors) and Fujitsu VPP 700 (up to 256 processors). The NEC SX-6 is also a PVP, and the Earth Simulator, built from NEC SX-6 nodes, was once number 1 on the list of the 500 most powerful machines in the world, with 5,120 processors. Currently the most powerful supercomputer in the world is the "K Computer", installed in Japan, with 548,352 processing cores.
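As a loose analogy to the vector style of computation these machines favor, the sketch below contrasts a single operation applied to whole arrays (assuming NumPy is available, which this post does not mention) with an element-by-element scalar loop. It illustrates the programming idea only, not real PVP hardware.

    # "Vector" style (one operation over whole arrays) versus "scalar" style
    # (one element at a time). Array size is an arbitrary illustrative value.

    import numpy as np

    a = np.arange(100_000, dtype=np.float64)
    b = np.arange(100_000, dtype=np.float64)

    c_vector = a * b + 2.0                # whole-array operation

    c_scalar = np.empty_like(a)           # element-by-element loop
    for i in range(len(a)):
        c_scalar[i] = a[i] * b[i] + 2.0

    assert np.allclose(c_vector, c_scalar)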


Symmetric multiprocessor (SMP)


Symmetric multiprocessors are systems made up of commercial processors connected to a shared memory, and they can also be classified as UMA multiprocessors. Cache memory is widely used, and all processors have equal access to the bus and to the shared memory. They are easier to program than machines that communicate by message passing, since the programming model is close to that of conventional systems, but they have the disadvantage of relying on an interconnection bus that allows only one transaction at a time. This limitation restricts the scalability of this class of systems, so commercial systems are generally limited to 64 processors. Examples: IBM R50 (up to 8 processors), SGI Power Challenge (up to 36 processors), Sun Ultra Enterprise 10000 (up to 64 processors) and HP/Convex Exemplar X-Class (up to 32 nodes of 16 processors each).
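A minimal sketch of the shared-memory model that SMPs expose to the programmer, using ordinary Python processes as stand-ins for processors: all workers update a single counter that lives in shared memory, and a lock plays the role of the bus that admits only one transaction at a time. This only illustrates the programming model, not real SMP hardware.

    # Several workers update one shared counter; a lock serializes access.

    from multiprocessing import Process, Value, Lock

    def worker(counter, lock, n_increments):
        """Each worker (one 'processor') increments the shared counter."""
        for _ in range(n_increments):
            with lock:                      # only one transaction at a time
                counter.value += 1

    if __name__ == "__main__":
        counter = Value("i", 0)             # integer living in shared memory
        lock = Lock()
        workers = [Process(target=worker, args=(counter, lock, 10_000))
                   for _ in range(4)]
        for p in workers:
            p.start()
        for p in workers:
            p.join()
        print(counter.value)                # 40000: every update is visible to all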


Massively parallel machines (MPP)


MPPs (Massively Parallel Processors) are NORMA multicomputers built with thousands of commercial processors connected by a high-speed network. High performance is achieved through the sheer number of processors. The fact that communication happens by message passing makes them harder to program than machines in which memory is shared. Examples: Intel Paragon (up to 4,000 processors), Connection Machine CM-5 (up to 2,048 processors), IBM SP2 (up to 512 processors) and Cray T3D (up to 2,048 processors).
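As a loose illustration of the message-passing style used on MPPs, the sketch below uses the mpi4py package, assuming it and an MPI runtime are installed (an assumption on my part, not something the post mentions). Each rank computes a partial sum of a range and the results are combined on rank 0; it would be launched with something like "mpiexec -n 4 python script.py".

    # Each MPI rank sums its own strided slice of 0..n-1, then the partial
    # sums are combined on rank 0 by an explicit message-passing reduction.

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()      # this process's id
    size = comm.Get_size()      # total number of processes

    n = 1_000_000
    local = sum(range(rank, n, size))

    total = comm.reduce(local, op=MPI.SUM, root=0)

    if rank == 0:
        print("total =", total)   # equals n*(n-1)//2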


Machines with distributed shared memory (DSM)


In DSM (Distributed Shared Memory) systems, even though memory is physically distributed among the nodes, all processors can access all memories. The single address space, data sharing and cache-coherence control are achieved in software. NUMA systems can be built with interleaved distributed memory, or from NORMA systems (with local memories) in which the memories are connected through network adapters (AR) to a specific interconnection network that allows access to remote memories. In both cases the machine is considered CC-NUMA or SC-NUMA, depending on how cache coherence is implemented. Example: SGI Origin (up to 512 processors).


Workstation networks (NOW)


Networks of workstations (NOW, Network of Workstations) are made up of several workstations connected by some traditional network technology, such as Ethernet or ATM. In practice, local area networks are used to run parallel applications. They can be seen as low-cost NORMA machines, or even no-cost machines if the network already exists; that is, a significantly cheaper solution than MPPs. The clear disadvantage of a network of workstations is that traditional networks are usually dimensioned only for small tasks (sharing files and accessing remote printers, for example) and are generally not optimized for the communication operations of a parallel application. The result is high latency in these operations, which compromises the performance of the machine as a whole. They are used mainly in educational institutions for the study of parallel and distributed processing. Example: workstations connected by Ethernet technology.
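A minimal sketch of the latency issue described above: the script measures the round-trip time of a tiny message over a TCP socket. On a real network of workstations the two ends would be different machines; here both run locally just to keep the example self-contained, and the address and port are arbitrary assumptions.

    # Measure the round-trip time of a small message over a TCP connection.

    import socket
    import threading
    import time

    HOST, PORT = "127.0.0.1", 50007       # illustrative address and port

    def echo_server():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.bind((HOST, PORT))
            srv.listen(1)
            conn, _ = srv.accept()
            with conn:
                data = conn.recv(64)
                conn.sendall(data)         # echo the message back

    threading.Thread(target=echo_server, daemon=True).start()
    time.sleep(0.2)                        # give the server time to start

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        start = time.perf_counter()
        cli.sendall(b"ping")
        cli.recv(64)
        rtt = time.perf_counter() - start

    print(f"round trip: {rtt * 1e6:.0f} microseconds")

Even on a local machine the round trip costs tens of microseconds; over a shared Ethernet LAN it is typically far higher, which is exactly the latency penalty that limits NOW performance for communication-heavy parallel applications.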
