Every few years, we hear about a new supercomputer that is the fastest machine to date. Recently, we’ve heard a lot about IBM’s Sequoia, which held the top spot on the list. However, last month, Sequoia was unseated by a new supercomputer: Titan.

Titan, like Sequoia, is an American-made supercomputer, but it has a number of notable differences. According to The Economist, Titan was built to be an “open” supercomputing platform. While Sequoia spends much of its time cranking through classified nuclear calculations, Titan’s time will be open to scientific researchers who bid for it, in the same way that astronomers bid for time on telescopes.

Another notable difference between Sequoia and Titan is that Sequoia gets most of its power from Central Processing Units, or CPUs, while Titan uses both CPUs and Graphics Processing Units, or GPUs. GPUs differ from CPUs in that they excel at crunching graphical calculations: if a complex problem can be reduced to the kind of triangle-based calculations used to render graphics, a bank of GPUs operates much more efficiently than an equivalent CPU setup.


And that is a key concept: supercomputers like Titan are only useful for problems that can be broken up into tasks that run in parallel. Like an assembly line, if a problem can be split into discrete chunks, it can leverage the power of a supercomputer. But not all problems have this characteristic. Making a child takes about nine months, no matter how many women you employ.
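To make the assembly-line idea concrete, here is a minimal, hypothetical sketch in Python. The function names and the chunking scheme are my own invention for illustration; the point is only that summing a large range of squares splits cleanly into independent pieces that separate workers can handle at once. (A real supercomputer would spread such chunks across thousands of processors; a small thread pool just demonstrates the decomposition.)

```python
from concurrent.futures import ThreadPoolExecutor

def sum_squares(bounds):
    # Each chunk is fully independent: it needs nothing from the others.
    start, stop = bounds
    return sum(n * n for n in range(start, stop))

def parallel_sum_squares(limit, workers=4):
    # Split the range [0, limit) into roughly equal chunks...
    step = limit // workers
    chunks = [(i, min(i + step, limit)) for i in range(0, limit, step)]
    # ...hand each chunk to a worker, then combine the partial results.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(sum_squares, chunks))
```

Because no chunk depends on another, adding more workers (or processors) shortens the job, which is exactly the property that lets a problem exploit a machine like Titan.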

So computer scientists are tasked with the challenge of converting their simulations into GPU-compatible calculations that can be spread across separate pieces. This is not always an easy task. What happens if a problem can’t be broken up in this way? Titan uses graphics cards similar to the ones everyday gamers have in their home machines, so for a problem like that you’d probably be just as well off crunching the numbers yourself.
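For contrast, here is a hypothetical sketch of a calculation that resists this kind of splitting. The constant 3.7 and the starting point are arbitrary choices for illustration; what matters is that each step needs the previous step's answer before it can run, so extra processors would simply sit idle.

```python
def iterate_logistic(x, steps):
    # Each iteration depends on the value produced by the one before it,
    # so the loop cannot be carved into independent parallel chunks.
    for _ in range(steps):
        x = 3.7 * x * (1 - x)
    return x
```

A chain of dependent steps like this gains nothing from a supercomputer: one fast processor does the work, and thousands more cannot help.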
