Apple Mac users have received a rude awakening over the past month: Their computers suddenly seem more vulnerable to cyber attacks. Not only have the machines been targeted by a group of malicious software (malware) programs—collectively known as Flashback or Flashfake—but those programs have also succeeded in turning hundreds of thousands of Macs into virtual zombies manipulated en masse to attack other computers. The rate at which Macs are being infected by “bot” malware and recruited into botnets has declined over the past week, but Flashback has added Apple fans to the long list of users thinking more about how to secure their computers.
Unfortunately, these high-profile Mac attacks are just the latest salvo in an ongoing effort by cyber attackers to take control of computers and steal information they can then sell for a profit. The botnet problem, in particular, is not only growing, it’s evolving, according to computer scientists and engineers at Veermata Jijabai Technological Institute (VJTI) in Mumbai, India. Whereas much of the work done to combat botnets has focused on those that follow a centralized command-and-control structure with a single “bot-master” directing the attack, newer botnet iterations have more of a Hydra-like peer-to-peer structure that’s harder to stop, the researchers say.
The VJTI researchers claim to be developing a “two-pronged” approach to securing computers and networks that can defend against either the command-and-control or peer-to-peer approach to botnets (pdf). The first line of defense in the VJTI model is a software algorithm designed to protect individual nodes within a network—PCs, routers, servers. If a problem is detected on one of the nodes, the issue is escalated to a second line of defense—a separate software algorithm running on the network itself that checks incoming and outgoing traffic for signs of malware.
Of course, plenty of companies that make computer security software—including Symantec and McAfee (owned by Intel), to name a few—sell network intrusion detection and prevention systems. The VJTI researchers claim their dual node/network security algorithms set their software apart because they’re able to defend against more than one type of botnet and are less likely to set off false alarms that disrupt legitimate network traffic. This remains to be seen, given that VJTI’s work is still in the lab. Still, their approach is worth a look.
The VJTI node software monitors parameters on those devices, looking for any lag in response time, an unbalanced output-to-input data traffic ratio, or other signs of a possible bot infection. The software was written so that over time it will “learn” how a particular node is used—the typical addresses of inbound and outbound data, for example—so that it can more accurately spot unusual activity.
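The node-level idea described above—learn a baseline for each machine, then flag response lag or a lopsided outbound-to-inbound traffic ratio—can be illustrated with a minimal sketch. Everything here (class name, window size, tolerance, the statistical test) is an assumption for illustration, not the actual VJTI algorithm:

```python
# Illustrative node-level anomaly monitor: learns what "normal" looks like
# for a node, then flags samples that deviate sharply from that baseline.
from collections import deque

class NodeMonitor:
    """Hypothetical baseline-learning monitor for a single network node."""

    def __init__(self, window=100, tolerance=3.0):
        self.response_times = deque(maxlen=window)  # recent latency samples (ms)
        self.traffic_ratios = deque(maxlen=window)  # recent outbound/inbound byte ratios
        self.tolerance = tolerance                  # allowed deviation, in std deviations

    def _is_outlier(self, history, value):
        if len(history) < 10:            # too little data to judge; keep learning
            history.append(value)
            return False
        mean = sum(history) / len(history)
        var = sum((x - mean) ** 2 for x in history) / len(history)
        std = var ** 0.5
        # Small floor so perfectly steady baselines can still detect spikes.
        threshold = self.tolerance * std if std > 0 else 1e-6
        outlier = abs(value - mean) > threshold
        if not outlier:
            history.append(value)        # only "learn" from normal-looking samples
        return outlier

    def observe(self, response_ms, bytes_out, bytes_in):
        """Return True if this sample looks suspicious for this node."""
        ratio = bytes_out / max(bytes_in, 1)
        lag = self._is_outlier(self.response_times, response_ms)
        skew = self._is_outlier(self.traffic_ratios, ratio)
        return lag or skew
```

A bot quietly flooding outbound traffic would push the outbound-to-inbound ratio far outside the learned baseline, tripping the monitor even if latency looks normal.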
If the node software detects something suspicious, it will trigger the network security software, which analyzes information being transferred to and from the network as a whole, in search of known malware being passed around or patterns of activity indicative of a botnet.
“The chances of false positives are reduced because of the two-pronged strategy, since a system alarm is raised only if both the standalone node and network algorithms detect an anomaly in system usage and network traffic flow,” says Manoj Thakur, a former VJTI computer science student who participated in the research before graduating in 2009.
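The two-pronged gating Thakur describes amounts to a logical AND between the two lines of defense. The sketch below is a stand-in to show that structure only; the detector internals and the blocklist address are hypothetical, not the VJTI algorithms:

```python
# Illustrative "two-pronged" alarm: an alert fires only when BOTH the
# node-level and network-level checks agree, reducing false positives.

def node_detector(sample):
    """Stand-in node-level check: flags an extreme outbound/inbound ratio."""
    return sample["bytes_out"] / max(sample["bytes_in"], 1) > 10

def network_detector(sample):
    """Stand-in network-level check: flags traffic to a known-bad address."""
    blocklist = {"203.0.113.7"}  # documentation-range IP, purely hypothetical
    return sample["dest"] in blocklist

def raise_alarm(sample):
    """Alarm only if both lines of defense flag the sample."""
    return node_detector(sample) and network_detector(sample)
```

A sample that trips only one detector—say, a legitimate backup job with heavy outbound traffic to a trusted server—raises no alarm, which is the false-positive reduction the quote claims.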
Such an approach is designed to be used across a wide variety of devices, Thakur says. The node-level algorithms would have to be adapted to run on either Macs or PCs, he adds, but the network-level algorithm would work the same way regardless of whether the network has PCs, Macs, or other computers.
The VJTI researchers are now trying to determine whether their software will be effective in large networks with massive volumes of data traffic and can adapt to new types of bots and botnets as they emerge. For their work to have an impact, it will also have to function in real time and provide some way of quarantining potentially dangerous data traffic for further inspection.
“The results of the simulations performed on a limited scale look promising so far,” Thakur says. “The effectiveness of this approach will ultimately be determined by how this technique performs for larger networks with high network traffic volumes.”
For computer users trying to defend their precious data from cyber poachers, such research can’t come out of the lab quickly enough.
Image courtesy of Jitalia17, via iStockphoto.com