June 24, 2014
HP’s New Super Processing Core
I’ve always had a fondness for computers. While I’ve never been tech-savvy enough to call myself an expert of any kind, I’ve always kept a finger on the pulse of technology. It fascinates me how quickly technology expands and evolves. I remember a very specific moment as a boy, pining over the newest Beast Wars game. I looked at the side of the box and felt utterly despondent at the obscene memory requirement: 60 megabytes. There was no way in heaven or on earth that my dad was going to let me chew up that much memory for a single game (the full system requirements can be found here, if anyone else is interested in a trip down nostalgia lane). Thankfully, if the reviews are any indication, I wasn’t missing out on anything special. Things have definitely changed since then, growing steadily with the occasional leap forward, and HP’s newest invention, “The Machine,” might be one such leap.
No, not that machine.
According to the initial reports, this machine can rip through 640 terabytes of data … in one billionth of a second. Yes, billionth, with a “b”. I can’t even comprehend a unit of time that small. For all intents and purposes, it may as well be instant. It’s not a single workstation, server, phone, computer, or any other device built for one purpose. It is, according to HP, all of these things at once: a combination of technologies designed to eventually serve the concept known as the Internet of Things, a massive supernetwork connecting devices of all sorts. Such a structure would require an infrastructure capable of handling insane amounts of data without slowing anything down, which is exactly one of the uses HP touts for The Machine.
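Just to put that reported figure in perspective, here’s a quick back-of-the-envelope conversion (my own arithmetic on the number above, not an HP spec):

```python
# Convert "640 terabytes in one billionth of a second" into bytes per second.
TERABYTE = 10**12          # bytes, using decimal (SI) terabytes
data_bytes = 640 * TERABYTE
time_seconds = 1e-9        # one billionth of a second

throughput = data_bytes / time_seconds
print(f"{throughput:.1e} bytes per second")  # 6.4e+23 bytes per second
```

That works out to roughly 640 zettabytes per second, which helps explain why a time span that small may as well be instant.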
Instead of using a few generalized cores, The Machine operates from clusters of more specialized ones, bound together with silicon photonics instead of the more traditional copper wiring. This not only requires less energy but also boosts the overall speed of the device. It also features memristors, resistors capable of retaining information even if power is lost. All of this adds up to far faster computational power, and not just for big super mainframes either. HP is convinced that the technology behind The Machine can be easily translated for use in laptops and even smartphones. Martin Fink, HP’s chief technology officer, spoke on this miniaturization during a speech at HP Discover, in which he said that smartphones built using this technology would be able to hold 100 terabytes of memory. My PC doesn’t even hold one terabyte.
Aside from the obvious boost in power to our personal devices, the potential future applications of this technology are staggering for science, business, and medicine as well. Doctors and researchers could compare massive amounts of data with one another in mere seconds, despite being on opposite sides of the world.
HP expects the first prototype samples to surface around 2015, with the potential for devices running on The Machine by 2018.
Image Credit: Thinkstock