That is just amazing... I was just yesterday reading a paper on the potential threat of terrorist disruption of our networks. I'd come to the conclusion that we cannot rely on the "trust" of end users to maintain security patches and firewalls; there is just too much at stake.

Sign me up..
>>The payoff should be huge. Smarter networks will foster a new generation of distributed software programs that preempt congestion, spread out critical data, and keep the Internet secure, even as they make computer communications faster and more reliable in general. By expanding the network as quickly as possible, says Peterson, the PlanetLab researchers hope to restore the sense of risk-taking and experimentation that ruled the Internet’s early days. But Peterson admits that progress won’t come easily. “How do you get an innovative service out across a thousand machines and test it out?”
It helps that the network is no longer just a research sandbox, as the original Internet was during its development; instead, it’s a place to deploy services that any programmer can use and help improve. And one of the Internet’s original architects sees this as a tremendously exciting trait. “It’s 2003, 30 years after the Internet was invented,” says Vinton Cerf, who codeveloped the Internet’s basic communications protocols as a Stanford University researcher in the early 1970s and is now senior vice president for architecture and technology at MCI. “We have millions of people out there who are interested in and capable of doing experimental development.” Which means it shouldn’t take long to replace that Buick.
Baiting Worms
The Achilles’ heel of today’s Internet is that it’s a system built on trust. Designed into the Net is the assumption that users at the network’s endpoints know and trust one another; after all, the early Internet was a tool mainly for a few hundred government and university researchers. It delivers packets whether they are legitimate or the electronic equivalent of letter bombs. Now that the Internet has exploded into the cultural mainstream, that assumption is clearly outdated: the result is a stream of worms, viruses, and inadvertent errors that can cascade into economically devastating Internet-wide slowdowns and disruptions.
Take the Code Red Internet worm, which surfaced on July 12, 2001. It quickly spread to 360,000 machines around the world, hijacking them in an attempt to flood the White House Web site with meaningless data—a so-called denial-of-service attack that chokes off legitimate communication. Cleaning up the infected machines took system administrators months and cost businesses more than $2.6 billion, according to Computer Economics, an independent research organization in Carlsbad, CA.
Thanks to one PlanetLab project, Netbait, that kind of scenario could become a thing of the past. Machines infected with Code Red and other worms and viruses often send out “probe” packets as they search for more unprotected systems to infect. Dumb routers pass along these packets, and no one is the wiser until the real invasion arrives and local systems start shutting down. But in theory, the right program running on smart routers could intercept the probes, register where they’re coming from, and help administrators track—and perhaps preempt—a networkwide infection. That’s exactly what Netbait, developed by researchers at Intel and UC Berkeley, is designed to do.
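The article doesn't show Netbait's internals, but the core idea it describes (inspect incoming requests for known probe signatures, record the source, and aggregate counts for administrators) can be sketched in a few lines. This is a hypothetical illustration, not Netbait's actual code; the regex relies on the publicly documented Code Red probe, an HTTP GET for `/default.ida` padded with "N"s ("X"s in the CodeRedII variant):

```python
import re
from collections import Counter

# Hypothetical sketch of the Netbait idea: match the well-known
# Code Red probe signature, tag each hit with its source address,
# and keep per-strain tallies a collector could report upstream.
CODE_RED = re.compile(r"GET /default\.ida\?([NX])+")

def classify_probe(request_line):
    """Return 'CodeRed' or 'CodeRedII' for a matching probe, else None."""
    m = CODE_RED.match(request_line)
    if m is None:
        return None
    return "CodeRed" if m.group(1) == "N" else "CodeRedII"

def tally(probes):
    """Aggregate (source_ip, request_line) pairs into per-strain
    counts and the set of infected source addresses per strain."""
    counts = Counter()
    sources = {}
    for src, line in probes:
        strain = classify_probe(line)
        if strain:
            counts[strain] += 1
            sources.setdefault(strain, set()).add(src)
    return counts, sources
```

Run on every node and merged centrally, tallies like these are what would let a global platform see an epidemic that any single router would miss.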
This spring, the program showed how it can map a spreading epidemic. Brent Chun, Netbait’s author, is one of several senior researchers assigned to PlanetLab by Intel, which helped launch the network by donating the hardware for its first 100 nodes. Chun ran Netbait on 90 nodes for several months earlier this year. In mid-March, it detected a sixfold spike in Code Red probes, from about 200 probes per day to more than 1,200—a level of sensitivity far beyond that of a lone, standard router. The data collected by Netbait showed that a variant of Code Red had begun to displace its older cousin.
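At bottom, flagging a jump like that sixfold spike is an anomaly test on aggregated daily counts. A minimal sketch, with an assumed trailing-average baseline and threshold factor (the article does not say what statistics Netbait actually used):

```python
def spike_alert(daily_counts, window=7, factor=3.0):
    """Flag indices of days whose probe count exceeds `factor` times
    the average of the preceding `window` days -- a crude stand-in
    for a real anomaly detector."""
    alerts = []
    for i in range(window, len(daily_counts)):
        baseline = sum(daily_counts[i - window:i]) / window
        if baseline and daily_counts[i] > factor * baseline:
            alerts.append(i)
    return alerts
```

With a week of roughly 200 probes per day followed by a 1,200-probe day, the final day trips the alert; the same counts seen at one router alone would be too noisy to call.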
As it turned out, there was little threat: the variant proved no more malignant than its predecessor, for which remedies are now well known. But the larger point had been made. Without a global platform like PlanetLab as a vantage point, the spread of a new Code Red strain could have gone undetected until much later, when the administrators of local systems compared notes. By then, any response required would have been far more costly.
Netbait means “we can detect patterns and warn the local system administrators that certain machines are infected at their site,” says Peterson. “That’s something that people hadn’t thought about before.” By issuing alerts as soon as it detects probe packets, Netbait could even act as an early-warning system for the entire Internet.
Netbait could be running full time on PlanetLab by year’s end, according to Chun. “Assuming people deem the service to be useful, eventually it will get on the radar of people at various companies,” he says. It would then be easy, says Chun, to offer commercial Internet service providers subscriptions to Netbait, or to license the software to companies with their own planetwide computing infrastructures, such as IBM, Intel, or Akamai.<<