The computer worm Stuxnet broke out of the tech underworld and into the mass media this week. It’s an amazing story: Stuxnet has infected roughly 45,000 computers. Sixty percent of those machines happen to be in Iran. Which is odd. What is odder still is that Stuxnet is designed specifically to attack computer systems running Siemens software that controls industrial facilities such as factories, oil refineries, and, oh, by the way, nuclear power plants. As you might imagine, Stuxnet raises big, interesting geo-strategic questions. Did a state design it as an attack on the Iranian nuclear program? Was it a private group of vigilantes? Some combination of the two? Or something else altogether?
But it’s worth pausing to contemplate Stuxnet on its own terms, and understand why the tech nerds were so doomsday-ish about it in the first place. We should start at the beginning.
A computer worm is distinct from a virus. A virus is a piece of code that attaches itself to other programs. A worm is a standalone program that exists on its own within a computer. A good (meaning really bad) worm must do several things quite subtly: It must find its way onto the first machine by stealth. While resident, it must remain concealed. Then it must have another stealthy method of propagating to other computers. And finally, it must have a purpose. Stuxnet achieved all of these goals with astounding elegance.
The Stuxnet worm was first discovered on June 17, 2010, by VirusBlokAda, a digital security company in Minsk. Over the next few weeks, tech security firms began trying to understand the program, but the overall response was slow because Stuxnet was so sophisticated. On July 14, Siemens was notified of the danger Stuxnet posed to its systems. At the time, it was believed that Stuxnet exploited a single “zero-day” vulnerability (that is, a weak point in the code never foreseen by the original programmers, and thus unpatched) in Microsoft’s Windows OS. Microsoft moved within days to issue a patch.
By August, the details of Stuxnet were becoming clearer. Researchers learned troubling news: The worm sought to override supervisory control and data acquisition (SCADA) systems in Siemens installations. SCADA systems are not bits of virtual ether; they control all sorts of important industrial functions. As the Christian Science Monitor notes, an attacker who controls a SCADA system could, for instance, override the maximum safe RPM setting on a turbine. Cybersecurity giant Symantec warned:
Stuxnet can potentially control or alter how [an industrial] system operates. A previous historic example includes a reported case of stolen code that impacted a pipeline. Code was secretly “Trojanized” to function properly and only some time after installation instruct the host system to increase the pipeline’s pressure beyond its capacity. This resulted in a three kiloton explosion, about 1/5 the size of the Hiroshima bomb.
As the days ticked by, Microsoft realized that Stuxnet was using not just one zero-day exploit but four of them. Symantec’s Liam O’Murchu told Computerworld, “Using four zero-days, that’s really, really crazy. We’ve never seen that before.”
Still, no one knew where Stuxnet had come from. A version of the worm from June 2009 was discovered, and when the worm’s encryption was finally broken, a digital timestamp on one of the components (the ~wtr4141.tmp file, in case you’re keeping score at home) put the time of compilation—the worm’s birthday—as February 3, 2009.
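For the curious, here is a minimal sketch of the kind of check researchers run to recover a “birthday” like that: every Windows executable or DLL carries a TimeDateStamp field in its PE header, recorded when the file is compiled. The short Python script below reads that field from a file you point it at (the file name in the usage comment is hypothetical); it illustrates the general technique, not the specific tooling that was used on ~wtr4141.tmp.

```python
import struct
import sys
from datetime import datetime, timezone

def pe_compile_time(path):
    """Return the compile timestamp recorded in a Windows PE file's COFF header."""
    with open(path, "rb") as f:
        header = f.read(4096)  # the PE headers normally sit within the first few KB
    # Offset 0x3C of the DOS header holds the offset of the "PE\0\0" signature.
    pe_offset = struct.unpack_from("<I", header, 0x3C)[0]
    if header[pe_offset:pe_offset + 4] != b"PE\x00\x00":
        raise ValueError("not a PE file")
    # The COFF header follows the signature; the TimeDateStamp field sits 8 bytes in,
    # stored as seconds since the Unix epoch.
    seconds = struct.unpack_from("<I", header, pe_offset + 8)[0]
    return datetime.fromtimestamp(seconds, tz=timezone.utc)

if __name__ == "__main__":
    # Example: python pe_time.py suspicious_component.dll   (file name is hypothetical)
    print(pe_compile_time(sys.argv[1]))
```

The caveat is that a timestamp is just another field in the file, and an author who cares can forge it, which is why analysts treat dates like February 3, 2009 as evidence rather than proof.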