
It's Your Virtual Assistant, Doc. Who Is Watson?

This article was published in Scientific American’s former blog network and reflects the views of the author, not necessarily those of Scientific American.


Ever since the IBM supercomputer Watson beat Jeopardy! champions Brad Rutter and Ken Jennings, there has been a lot of talk about putting the computer’s question-answering capabilities to use in real applications.

In addition to consuming massive amounts of information, the supercomputer has been trained to understand literary references, interpret linguistic nuance, generate hypotheses, perform analysis, and score its own answers for likelihood of accuracy. All of these abilities enable Watson to make reasoned judgments, a skill hitherto attributed exclusively to human beings.

One of the most talked-about potential applications for Watson is in the area of healthcare, where this ability to make a reasonable conclusion based on vast amounts of evidence becomes particularly useful. How many times have you walked into a doctor’s office and been given more than one possible diagnosis for a set of symptoms?




Watson’s "deep question answering" ability, or DeepQA, is born of marrying traditional knowledge-based AI with natural language processing. Indeed, what Watson has above and beyond most computers that exist today is its ability to understand natural language, the language in which everything in the human world, including medical records, textbooks, academic journals and research papers, is written.

"One of the key problems with understanding natural language [for a computer] is to figure out exactly what a particular word means in the context in which it’s being used," explains Eric Brown, one of the lead researchers of the Watson project who is involved with the design and implementation of DeepQA architecture. "Another challenge is piecing together the elements of a sentence because what we really want to get at is the semantics."

"I actually relate this very much to what you learn in middle school grammar—where you basically diagram sentences," Brown continues. "That’s one of the first steps that the computer program has to do: not only break the sentence down to the tokens, but also diagram the sentence into the subject, the verb, the object, the preposition, and so on."

After this basic grammatical processing yields a deeper understanding of what the sentence means, the computer goes on to identify named entities in the sentence, such as persons, places, and organizations. In the case of medical applications, these entities could be the names of diseases, symptoms, drugs or therapies. Finally, the relationships between these entities and their interactions with the verb and the object are determined. The semantic meaning of a sentence is thus derived, and analytics then use this information to evaluate possible answers to questions.
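To make these steps concrete, here is a minimal sketch using the open-source spaCy library rather than Watson’s own proprietary components; the example sentence and the model name are illustrative assumptions, not anything drawn from the Watson project.

```python
# A toy illustration (not Watson's pipeline) of the steps Brown describes:
# tokenize a sentence, "diagram" it with a dependency parse, and pull out
# named entities.
# Assumes: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The patient was prescribed metformin for type 2 diabetes in Boston.")

# Tokens with their grammatical roles (subject, verb, object, preposition, ...)
for token in doc:
    print(token.text, token.pos_, token.dep_, token.head.text)

# Generic named entities; a medical system would instead recognize diseases,
# symptoms, drugs and therapies.
for ent in doc.ents:
    print(ent.text, ent.label_)
```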

A Google search box does not understand such plain language, as illustrated by the millions of irrelevant items that show up in our search results, no matter how many quotation marks and special characters we use to narrow them.

This capability allows Watson to analyze large volumes of unstructured content and text from multiple sources, extracting value and drawing conclusions from data that has so far remained largely unconnected and, on its own, of limited use. "Rather than try and curate that data, manually structure it, and do a lot of knowledge engineering on it, we prefer to leverage it the way it naturally occurs in its text form," says Brown. "Building the system on top of unstructured data was really an important design decision because it allows the system to leverage information the way humans actually communicate and report it."

In addition to these unique abilities, Watson has the strength of a traditional supercomputer: it can store and access large amounts of data and stay constantly updated, backed by 10 racks of servers and more than two thousand processor cores. That capacity is especially useful in a field like medicine, where new research renders previous knowledge obsolete every day.

While Watson was restricted to its self-contained data for the purposes of Jeopardy! (where its human counterparts had only their anthropomorphic hard drives to rely on), for real-world applications, the supercomputer could draw additional data from the Internet, making its repertoire of knowledge virtually limitless.

With the World Wide Web at its disposal, there is also a lot of flexibility in exploring different data and content sources. Herbert Chase, a professor of Clinical Medicine at Columbia University and consultant to the Watson project, has suggested the possible use of blogs and social media to supplement the computer’s resources, likening such anecdotal information to doctor’s office conversations in the real world.

With so much to choose from, how would a computer distinguish between reliable and unreliable information? “A benefit of gathering as much data as possible is that if you do come across a source that is inaccurate, hopefully you have enough information from other sources that counters it,” says Brown. “Another strength of the overall Watson approach is its machine learning: the ability of the system to learn the importance of different kinds of evidence and evidence sources, and to automatically determine how reliable one source is over another.”

Watson learns much the way human beings do. "Just like you or I would take a practice test and figure out this is how to answer these kinds of questions," explains David Gondek, who leads the Watson strategy team and works on developing machine-learning algorithms and infrastructure. During its Jeopardy! training, Watson learned from the correct response to each practice question and adjusted how it weighed its evidence accordingly.

Similarly, the computer would learn to weigh its evidence sources. When evaluating new information from a source, for example, it would compare that information against existing content representative of the source as a whole. As Brown explains, "If it’s a medical journal, we would use past issues for the system to learn how to interpret that content. So even if a new issue came out, it would have some background knowledge on how to read it."
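To illustrate how such source weighting might be learned, here is a toy sketch, not IBM’s actual DeepQA training setup: a logistic-regression model is fit on practice questions, where each candidate answer is described by scores from different evidence sources and the label records whether the answer was correct. The source names and numbers are invented for the example.

```python
# Toy sketch of learning evidence-source weights from practice questions.
# Not Watson's real training code; sources, scores and labels are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row holds [journal_match, textbook_match, blog_match] evidence scores
# for one candidate answer; label 1 means the answer turned out to be correct.
X = np.array([
    [0.9, 0.8, 0.2],
    [0.1, 0.3, 0.9],
    [0.8, 0.6, 0.1],
    [0.2, 0.1, 0.7],
    [0.7, 0.9, 0.3],
    [0.3, 0.2, 0.8],
])
y = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# The learned coefficients act as source-reliability weights: evidence types
# that tend to accompany correct answers receive larger weights.
for source, weight in zip(["journal", "textbook", "blog"], model.coef_[0]):
    print(f"{source}: {weight:+.2f}")
```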

Because of this reliability measure, Watson doesn’t just answer a question; it also tells you how confident it is in its answer. It generates a large number of possible answers to a single question, each accompanied by a score and a confidence level based on a series of reasoning methods. This is quite unlike search engines or regular databases, which simply spit out a list of relevant (or irrelevant) responses.
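The difference can be sketched in a few lines (again a hypothetical example, not Watson’s output format): instead of returning a flat list, each candidate answer is reported with an explicit confidence. The candidates and scores below are made up for illustration.

```python
# Toy sketch: rank candidate diagnoses with explicit confidences rather than
# returning an unranked list. Candidates and scores are invented.
import math

# Aggregated evidence scores per candidate (e.g. from a model such as the
# logistic regression sketched above).
candidates = {"influenza": 2.1, "common cold": 1.4, "strep throat": -0.3}

# Turn scores into confidences that sum to 1 (a softmax), then rank.
total = sum(math.exp(score) for score in candidates.values())
for name, score in sorted(candidates.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {math.exp(score) / total:.0%} confidence")
```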

"The problem with a lot of database systems is that they just give you a large set of possibilities, which you then have to go through. They don’t say, ‘Here’s my answer and I’m 95% sure or 5% sure’ [as Watson would]," says Gondek. "And in medicine, that’s really important, because if I type in any three symptoms, I am going to get back hundreds of possible diseases."

Another advantage is the computer’s open mind. According to Chase, by evaluating multiple possibilities Watson can offer many alternatives, enabling better differential diagnoses and countering the “anchoring” tendency physicians often display when they become too attached to an initial diagnosis.

This series of likely answers, when presented to a thinking, reasoning human being, would enable a computer-to-human dialogue, allowing a more comprehensive and thorough analysis than either the computer or the human could arrive at alone. "We want to keep Watson doing what computer systems are good at, which is going through a large amount of information—more than anything a person can hold in their head," says Gondek. "And the doctor is part of the process that the human being is good at: deep understanding, experience, and intuition."

"We very much think of this as a decision support technology so Watson is not making decisions, but is providing information so that humans make better decisions," adds Brown. "A physician shouldn’t just be satisfied by looking at the answer, but should go into the reasoning and underlying evidence [that the computer provides] to support that answer."

A related application of the technology would be to recommend treatments tailored to an individual, based on patient history, allergies, and other conditions. In addition, it could track treatment and progress, constantly updating recommendations as the patient’s condition changes, or as new information becomes available.

IBM has partnered with Nuance Communications, whose speech recognition and Clinical Language Understanding package can complement Watson’s analytical capabilities for the purposes of diagnosis and treatment. Speech recognition technology could give Watson the added ability to understand a spoken question before giving concise answers. Another capability being considered is image recognition, which would allow the supercomputer to perform diagnoses based on images and scans.

The underlying platform used to build Watson is the UIMA (Unstructured Information Management Architecture) framework, which allows multi-modal, or plug-and-play, analytics. In other words, it can operate on arbitrary data types. "So far the focus has been on text," says Brown, owing to the applications that Watson has been involved with. "But the overall architecture makes it fairly easy to integrate analytics to understand different data formats like images – and integrate image analysis into that."
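The plug-and-play idea can be sketched generically (this is an illustration in Python, not the real UIMA API, which is Java-based, and every class and field name below is invented): analysis engines share a common interface, each adds its own annotations to a shared document object, and engines for new data types simply slot into the same pipeline.

```python
# Generic sketch of a plug-and-play analytics pipeline in the spirit of UIMA.
# Not the real UIMA API; every name here is illustrative.
from dataclasses import dataclass, field

@dataclass
class Document:
    text: str = ""
    image_bytes: bytes = b""
    annotations: list = field(default_factory=list)

class EntityAnnotator:
    """A text analytic: tags a symptom mention."""
    def process(self, doc: Document) -> None:
        if "fever" in doc.text:
            doc.annotations.append(("symptom", "fever"))

class ImageAnnotator:
    """An image analytic: records that an attached scan was analyzed."""
    def process(self, doc: Document) -> None:
        if doc.image_bytes:
            doc.annotations.append(("image", f"{len(doc.image_bytes)} bytes analyzed"))

def run_pipeline(doc: Document, annotators: list) -> Document:
    # Each engine adds annotations; supporting a new data type just means
    # plugging in another engine. The pipeline itself does not change.
    for annotator in annotators:
        annotator.process(doc)
    return doc

doc = run_pipeline(Document(text="Patient presents with fever.", image_bytes=b"scan"),
                   [EntityAnnotator(), ImageAnnotator()])
print(doc.annotations)
```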

Researchers at Columbia University Medical Center and the University of Maryland School of Medicine are studying the best possible uses of the Watson technology in healthcare.

"We do believe it’s important to work directly with healthcare providers and understand the problems they’re dealing with, including the workflows that would work for them," says Brown. "Ultimately we can deploy the technology in a way that they can successfully leverage."

Regardless of how it is eventually implemented, the possibilities for Watson’s role in medicine seem, even at this early stage, endless.

Photo credit for the Watson image: IBM via AFP/Getty Images

About the author: Karthika Muthukumaraswamy is a technology writer and blogger based in Philadelphia. She writes on various aspects of math, science, new media, journalism and technology. She handles communications and public relations for the Society for Industrial and Applied Mathematics and is a regular contributor to the Huffington Post.

The views expressed are those of the author and are not necessarily those of Scientific American.

 
