Observations

Opinion, arguments & analyses from the editors of Scientific American

Software behaving badly: Machine learning could resolve issues raised by multicore processors

The views expressed are those of the author and are not necessarily those of Scientific American.


What personal computers have gained in speed from multicore processors, which split up workloads, they may be losing in reliability. Software applications are written to execute actions in a specific order; when pieces of code are processed out of order (thanks to the multicore chip’s division of labor), the computer may crash, leaving office workers, researchers, students, gamers and other users staring at a frozen screen.

Whereas such parallel processing began in supercomputing environments as a way to crunch scientific data more quickly, the technology is now common in personal computers, which may contain dual-core or even quad-core processors (meaning the chip has two or four processing cores among which work can be divided to speed things up). Even as you type an e-mail, your computer is working in the background to check for new messages, an action that relies on parallel processing.

This was less of a problem with older, single-processor systems that worked sequentially, as opposed to today’s parallel-processing systems, says Luis Ceze, assistant professor of computer science and engineering at the University of Washington in Seattle. "Now programmers have to worry about parallelism in order to take advantage of the processors that are going in everything from computers to cell phones," he says. "If you have a program and you give the same input to different computers at the same time, you may get different results. This means people get less reliable games, software and operating systems."

Writing software to run across multiple processors is more difficult than writing for sequential systems because programmers now have to account for multiple workloads executing at the same time. The speed at which information travels between them can be affected by tiny changes, such as the distance between components inside the computer or even the temperature of the wires, causing pieces of data to arrive out of order.
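The classic symptom of this kind of timing-dependent bug is a data race, where two threads update shared state without coordination. The sketch below (an illustration, not code from the researchers) shows a shared counter guarded by a lock in Python; without the lock, each increment is a separate read-modify-write, and on a multicore machine two threads can read the same old value so that one update is lost.

```python
import threading

counter = 0
lock = threading.Lock()

def add_many(n):
    """Increment the shared counter n times."""
    global counter
    for _ in range(n):
        # Each increment is read-modify-write; without the lock, two
        # cores could read the same old value and one update is lost.
        with lock:
            counter += 1

threads = [threading.Thread(target=add_many, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # with the lock this is always 200000
```

Dropping the `with lock:` line makes the final total depend on how the threads happen to interleave, which is exactly the nondeterminism the article describes.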

The key to solving this problem is writing software that can properly coordinate multiple processors running simultaneously. Ceze and his colleagues are exploring ways to use machine learning to teach computers to recognize when a program has executed improperly and to flag these situations so that programmers can analyze the software for bugs. "My goal is to make writing for multicore systems as easy and reliable as it is to write for sequential systems," he says.

They have developed a way to get modern multiprocessor computers to behave in predictable ways, by automatically parceling sets of commands into groups and assigning them to specific processors. Ceze and several colleagues from the university’s Safe MultiProcessing Architectures (SAMPA) group are presenting their proposed fix for this parallel-processing problem next week at the International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS) in Pittsburgh.

Image © Andrey Volodin



Comments (12)

  1. jtdwyer 5:37 pm 03/12/2010

    I think I remember this problem being addressed by several alternative strategies in mainframe computers and operating systems in the late 1970s. I also recall something about those who do not learn from history are doomed to repeat it (faster and faster). These modern computers are really something.

  2. mikecimerian 11:08 pm 03/12/2010

    Faster and faster also means entering time dilation relativistic lag with the computer. Determinism is lost in the process … this could create a "critter hole", some binary soup from which could emerge proto AIs. :-)

  3. tulcak 1:27 pm 03/14/2010

    duh, really? what a freaking BORING article. why don’t you talk about how bad it was driving to work and what idiots all the other drivers are or how "rush hour" sucks. the U.S. is stuck in the "do nothing" duldrums. nothing is going to change because all people (like the author) want to do is COMPLAIN instead of taking ACTION. the U.S. is on its way down brother…

  4. MrPeach 6:26 pm 03/15/2010

    I program multithreaded applications and it’s not f-ing rocket science. We have mutexes and signals that make it all work right, if you program correctly.
    So is this guy working on something that can make poorly written programs work correctly? If so, I have a bridge to sell you.

  5. gmale 6:50 pm 03/15/2010

Interesting. I believe the problem described is a race condition.

  6. jtdwyer 9:02 pm 03/15/2010

    Most existing software is single threaded: originally designed for, written and tested on systems with a single processor. Those programs expected processing events to occur in the sequence in which they were specified by the program.

    Not knowing exactly how this ‘multicore’ capability is implemented (it wasn’t explained – I’m guessing that it is multiple processors on a single chip), portions of an individual, single threaded process may be simultaneously executed, leading to processing errors.

    If the application (and especially the operating system) is not in control of process dispatch serialization this could be a very difficult problem to solve. If this is the case I’d say it was the result of an invalid presumption on the part of processor designers. But I’m just guessing, since it wasn’t fully explained.

  7. Erik Witz 8:14 am 03/16/2010

    UNIX has been multi-threaded for decades

  8. jtdwyer 10:32 am 03/16/2010

    Erik Witz – Unix was originally a single tasking port of the Burroughs mainframe Multics multitasking operating system. You’re correct that multitasking support was added back around 2-3 decades ago.

    A task is a unit of work that is independently dispatchable by an operating system. A multitasking operating system can manage the concurrent execution of multiple tasks or jobs.

    Threads refer to the number of independent processes a task or job spawns. The benefit of a multithreaded task is that processing of its independent threads may continue while some threads are delayed waiting for the completion of a long-duration operation, such as waiting for user input or for a file access to complete (which may require a physical disc operation, for example).

    Each thread is independently dispatchable to a processor for instruction execution by a multi-tasking operating system, which may or may not be able to support multiple processors. On a single processor system, despite all the many tasks that may be concurrently dispatchable by the operating system, only one task is actually executing instructions at any time. The operating system is just one of those tasks.

    Operating system management of multiple concurrent instruction executions on multiple processors is more difficult than supporting multitasking of jobs.

    The article mentioned parallel processing "supercomputers". This is quite another beast, as even a single task may cause concurrent execution of a vectored array instruction on multiple processors. In addition, for many years mainframes have been leveraging parallelism in serial instruction execution by prefetching instructions and data, etc. If, by some chance, the "multicore" processor architecture is allowing parallelism in the execution of a task beyond that supported by the operating system, unexpected results may occur.

    Could even have something to do with Toyota’s accelerator problem…

  9. mikecimerian 11:10 pm 03/16/2010

    Object-oriented programming languages are not really adapted to parallelism. I can’t see how object hierarchies can work with the "rendez-vous" concept used in languages like Ada, for instance. The current state of things seems to point either to BLOB or Thread. Both in the same context does not compute.

  10. eco-steve 1:41 pm 03/17/2010

    The screen usually freezes because of spurious clicking where multitasking programs get confused when the wrong job gets the click. The solution is to click on the start button, then close the session, then click on restart as owner and wait until the desk-top gets reloaded. Then all the jammed-up code will have been cleared out of memory for a fresh start. This avoids having to shut down Windows. Try to be user-friendly to your operating system!

  11. Quinn the Eskimo 8:01 pm 03/17/2010

    A good start to error-free processing is getting a good computer–and avoiding anything by Microsoft.

    Consider: 99.999% of all malware is aimed at Microsoft products.

    Get M$ Free! And Live Free! And…

    Google "Dancing Monkey Boy"

  12. asokasb 5:48 pm 03/26/2010

    This is where Linux comes into play unlike games in Microsoft.

    Linux was built on a solid base and processor management unlike Microsoft.

    Unfortunately the real weakness in Linux is that there aren’t enough games built for Linux.

    It is high time Linux Gamers come out and beat Microsoft in speed, and functionality.

    They should storm the market now and sell them fast.

    Do not give the games in Linux free!

