
Software behaving badly: Machine learning could resolve issues raised by multicore processors

What personal computers have gained in speed from multicore processors that split up workloads, they may be losing in reliability. Software applications are written to execute actions in a specific order, and when different pieces of code are processed out of order (a side effect of the multicore division of labor), the program can crash, leaving office workers, researchers, students, gamers and other users staring at a frozen screen.
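
Why does order matter? A minimal C++ sketch (a hypothetical illustration, not a program discussed in the article) shows two threads updating a shared counter without coordinating with each other:

    #include <iostream>
    #include <thread>

    int counter = 0;  // shared by both threads, with no synchronization

    void increment_many() {
        // This read-modify-write is not atomic: both threads can read the
        // same old value, each add 1, and one of the updates is lost.
        for (int i = 0; i < 1000000; ++i)
            ++counter;
    }

    int main() {
        std::thread a(increment_many);
        std::thread b(increment_many);
        a.join();
        b.join();
        // The intended total is 2000000, but on a multicore machine the
        // printed value typically varies from run to run.
        std::cout << counter << '\n';
    }

Run the same program with the same input several times and the total will usually differ each time, because the two threads' reads and writes interleave differently on every run.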

Whereas such parallel processing began in supercomputing environments as a way to crunch scientific data more quickly, the technology is now common on personal computers, which can run dual-core or even quad-core processors (meaning the chip contains two or four processing cores among which work can be divided to speed things up). Even as you type an e-mail, your computer is working in the background to check for new messages, an action that relies on this kind of concurrent processing.
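
As a rough sketch of that division of labor (hypothetical C++ code, not drawn from the article), a dual-core machine can sum a large array by handing each half to its own thread:

    #include <iostream>
    #include <numeric>
    #include <thread>
    #include <vector>

    int main() {
        std::vector<long> data(10000000, 1);
        long left = 0, right = 0;
        auto mid = data.begin() + data.size() / 2;

        // Each core can work on its half of the data at the same time.
        std::thread t1([&] { left  = std::accumulate(data.begin(), mid, 0L); });
        std::thread t2([&] { right = std::accumulate(mid, data.end(), 0L); });
        t1.join();
        t2.join();

        // The halves are combined only after both threads have finished,
        // so this version stays correct while exploiting both cores.
        std::cout << left + right << '\n';
    }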


This issue was less of a problem with older, single-processor systems that worked through instructions sequentially, says Luis Ceze, an assistant professor of computer science and engineering at the University of Washington in Seattle. "Now programmers have to worry about parallelism in order to take advantage of the processors that are going in everything from computers to cell phones," he says. "If you have a program and you give the same input to different computers at the same time, you may get different results. This means people get less reliable games, software and operating systems."

Writing software to run across multiple processors is harder than writing it for sequential systems, because programmers must now account for several workloads executing at the same time. The speed at which information travels between them can be affected by tiny physical variations, such as the distance between components or even the temperature of the wires, so pieces of data can arrive out of order.

The key to solving this problem is writing software that can properly coordinate work across multiple processors at once. Ceze and his colleagues are exploring ways to use machine learning to teach computers to recognize when a program has executed improperly and to flag these situations so that programmers can analyze the software for bugs. "My goal is to make writing for multicore systems as easy and reliable as it is to write for sequential systems," he says.
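
One way to picture the idea (a toy C++ sketch with invented features and thresholds, not the researchers' actual system) is to summarize each run of a program as a small vector of features, such as counts of particular shared-memory access patterns, learn what correct runs look like, and flag runs that deviate sharply:

    #include <cmath>
    #include <iostream>
    #include <vector>

    // One vector per observed run; each entry is a hypothetical feature,
    // e.g. a count of some shared-memory access pattern.
    using Run = std::vector<double>;

    // Flag a run if any feature lies more than k standard deviations from
    // the mean of runs known to be correct -- a deliberately simple
    // stand-in for a learned model.
    bool looks_anomalous(const std::vector<Run>& good, const Run& candidate,
                         double k = 3.0) {
        for (size_t f = 0; f < candidate.size(); ++f) {
            double mean = 0.0;
            for (const Run& r : good) mean += r[f];
            mean /= good.size();

            double var = 0.0;
            for (const Run& r : good) var += (r[f] - mean) * (r[f] - mean);
            double sd = std::sqrt(var / good.size());

            if (sd > 0 && std::abs(candidate[f] - mean) > k * sd)
                return true;  // far outside normal behavior: worth a look
        }
        return false;
    }

    int main() {
        // Feature vectors gathered from runs known to behave correctly.
        std::vector<Run> good = {{100, 3}, {98, 2}, {102, 4}, {101, 3}};

        std::cout << looks_anomalous(good, {99, 3}) << '\n';    // 0: normal
        std::cout << looks_anomalous(good, {150, 40}) << '\n';  // 1: flagged
    }

A flagged run would not prove a bug on its own; it would simply point a programmer at an execution worth examining, which is the role Ceze describes.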

They have developed a way to get modern multiprocessor computers to behave predictably, by automatically parceling instructions into groups and assigning each group to a specific core. Ceze and several colleagues from the university's Safe MultiProcessing Architectures (SAMPA) group are presenting their proposed fix for this parallel-processing problem next week at the International Conference on Architectural Support for Programming Languages and Operating Systems in Pittsburgh.
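
One way to picture such predictable execution (a hypothetical token-passing sketch in C++, not the SAMPA group's actual design) is to let each thread run a fixed-size parcel of work only when it holds a token, so the interleaving, and therefore the result, is identical on every run:

    #include <condition_variable>
    #include <iostream>
    #include <mutex>
    #include <thread>

    std::mutex m;
    std::condition_variable cv;
    int turn = 0;                  // which thread may run its next parcel
    const int kThreads = 2;
    const int kParcel = 1000;      // units of work per parcel
    long counter = 0;              // shared state, updated in a fixed order

    void worker(int id, int total) {
        for (int done = 0; done < total; done += kParcel) {
            std::unique_lock<std::mutex> lock(m);
            cv.wait(lock, [id] { return turn == id; });  // wait for the token
            for (int i = 0; i < kParcel; ++i) ++counter; // run one parcel
            turn = (turn + 1) % kThreads;                // pass the token on
            cv.notify_all();
        }
    }

    int main() {
        std::thread a(worker, 0, 1000000);
        std::thread b(worker, 1, 1000000);
        a.join();
        b.join();
        // The parcels interleave identically on every run, so the program
        // always prints 2000000.
        std::cout << counter << '\n';
    }

Enforcing a fixed order costs some speed, but it restores the run-to-run repeatability that sequential programs always had.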


Larry Greenemeier is the associate editor of technology for Scientific American, covering a variety of tech-related topics, including biotech, computers, military tech, nanotech and robots.
