As smart as IBM’s Watson supercomputer may have seemed while defeating two former Jeopardy champions, it wouldn’t be able to hold a conversation with, or speak intelligently to, the attendees at its own conference, according to artificial intelligence (A.I.) experts who spoke at MIT Monday.
“Although Watson is a tremendous engineering achievement, there are some things it can’t do,” said Patrick Henry Winston, a professor and former director of the Massachusetts Institute of Technology’s (MIT) Artificial Intelligence Laboratory. “For example, if there was a conference about Watson, Watson couldn’t attend. It would have nothing to say about itself. It can’t participate in discussions about how it works.”
Winston was among dozens of researchers who spoke at MIT’s Computation and the Transformation of Practically Everything symposium, part of the school’s 150th-anniversary celebration this year. The symposium continues today.
Winston pointed out that after computer scientists, such as James Slagle, began producing A.I. programs in the early 1960s, the scientific community and the public believed computers would have general intelligence within a few years. That didn’t happen.
“Apparently what we forgot or overlooked is the idea that it’s much harder to produce programs that have common sense than it is to produce programs that behave at expert levels in very narrow technical domains,” he said.
IBM’s Watson computer can answer questions posed in natural language in near real time. Unlike mainframe-style supercomputers of the past, Watson is made up of 90 IBM Power 750 Express servers, each powered by four eight-core processors, for a total of 32 cores per machine. The servers are virtualized using a Kernel-based Virtual Machine (KVM) implementation, creating a server cluster with a total processing capacity of 80 teraflops. A teraflop is one trillion floating-point operations per second.
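A quick back-of-the-envelope check of those figures (the per-core number is a naive division of the quoted cluster total, not an IBM specification):

```python
# Sanity-check the Watson cluster numbers quoted above.
servers = 90              # IBM Power 750 Express servers
procs_per_server = 4      # eight-core processors in each machine
cores_per_proc = 8

cores_per_server = procs_per_server * cores_per_proc
total_cores = servers * cores_per_server

print(cores_per_server)   # 32 cores per server
print(total_cores)        # 2,880 cores across the cluster

# Dividing the quoted 80 teraflops evenly across the cluster:
flops_per_core = 80e12 / total_cores
print(round(flops_per_core / 1e9, 1))  # roughly 27.8 gigaflops per core
```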
However, what Watson lacks is the ability to connect life experiences to form cohesive thoughts, which is what gives humans their cognitive ability, Winston explained.
Ed Lazowska, who holds the Bill & Melinda Gates Chair in Computer Science & Engineering at the University of Washington, also took a shot at Watson.
Lazowska noted that after Watson’s initial victory in February on “Jeopardy,” the machine was handily defeated by Rep. Rush Holt (D-N.J.) during a technology demonstration on Capitol Hill. Holt, a nuclear physicist and five-time “Jeopardy” winner, beat the computer with a score of $8,600 to $6,200.
“It shows we need more physicists in Congress. Rush is the only one,” Lazowska quipped.
While Watson may not be able to have an intelligent conversation, its appearance on “Jeopardy” heralded a sea change in A.I. brought about by multicore processors, clustered computing and sophisticated computer management software.
The computational power that got man to the moon in the late 1960s is now “embodied in Furby,” Lazowska said. “Admittedly, not the best use of that computational power.”
Ten years ago, one IT administrator was needed to manage 250 servers. Today, that person can manage thousands of servers. For example, Microsoft’s Azure cloud computing platform requires only 12 support people for 35,000 servers divided between two continents, Lazowska said.
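The scale of that shift is worth spelling out (a rough calculation from the figures Lazowska cited, nothing more):

```python
# Compare the server-to-administrator ratios cited above.
servers_then, admins_then = 250, 1        # "ten years ago"
azure_servers, azure_admins = 35_000, 12  # Microsoft Azure figure cited

ratio_then = servers_then / admins_then
ratio_now = azure_servers / azure_admins

print(round(ratio_now))                   # ~2,917 servers per admin
print(round(ratio_now / ratio_then, 1))   # ~11.7x more servers per admin
```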
The exponential power behind computing has allowed the Internet to have a dramatic impact on our lives, more so than anything else in the past 40 years, he said. Over the next several years, consumers will see that power used to create smart homes, smart healthcare, smart robots and smart cars capable of reactive decision making, Lazowska said.
The key to future computer development is “system building,” where instead of technologists developing technology within their fields of expertise, they work in teams that include a variety of disciplines.
“When speech recognition and vision people get together, they’re able to build a system that provides dramatically greater capabilities than they could do on their own,” he said.
Anant Agarwal, a professor in the MIT Electrical Engineering and Computer Sciences Department, said computers need to be more like humans if they’re going to be able to take advantage of engineering advances.
Agarwal said his department’s vision is to build a processor with hundreds or even thousands of cores, something that could be a reality in as little as four years.
One major obstacle to building multi-thousand-core processors is controlling heat generation from the circuitry. One way to control it is to keep the cores as close to the DRAM memory as possible, shortening the wiring and reducing the energy dissipated, and therefore the heat generated, in moving data. Another way is to balance application performance with hardware capability.
“We need to rethink compilers, operating systems, architectures, how we program computers,” Agarwal said. “The first thing you want to do is to have applications be able to communicate their goals to the operating system. What we need is self-aware computation.”
That would be like having an application “tell” an operating system what it needs in terms of processing power, and then having the OS balance that need with the needs of other applications running at the same time.
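One way to picture that idea is an application reporting a performance goal and its measured progress, while a scheduler shifts cores toward the applications falling behind. The sketch below is purely illustrative; the `Goal` and `rebalance` names are hypothetical and do not correspond to any real OS interface or to Agarwal’s actual work:

```python
# Hypothetical sketch of "self-aware computation": each app states a target
# rate and its observed rate; the scheduler moves one core from the app most
# ahead of its goal to the app most behind. Illustrative only, not a real API.
from dataclasses import dataclass

@dataclass
class Goal:
    target_rate: float    # desired work items per second
    observed_rate: float  # rate the application actually measured

def rebalance(allocations: dict, goals: dict) -> dict:
    """Shift one core from the most over-performing app to the most under-performing."""
    def slack(name):  # positive means the app is exceeding its goal
        g = goals[name]
        return g.observed_rate - g.target_rate

    donor = max(allocations, key=slack)
    needy = min(allocations, key=slack)
    if donor != needy and allocations[donor] > 1 and slack(donor) > 0 > slack(needy):
        allocations[donor] -= 1
        allocations[needy] += 1
    return allocations

allocs = {"video": 4, "indexer": 4}
goals = {"video": Goal(target_rate=30.0, observed_rate=24.0),     # behind its goal
         "indexer": Goal(target_rate=100.0, observed_rate=140.0)}  # ahead of its goal
print(rebalance(allocs, goals))  # {'video': 5, 'indexer': 3}
```

In a real system the operating system would run this loop continuously, letting applications’ self-reported goals, rather than static priorities, drive resource allocation.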
Agarwal noted that a human heart can communicate exactly what the body needs at any given point in time to function optimally.
“If you’re a good runner, your body temperature actually goes down the longer you run,” he said. “If you’re a computer, the longer you run the higher the temperature goes. So the question is, why can’t computers become more like humans?”
Lucas Mearian covers storage, disaster recovery and business continuity, financial services infrastructure and health care IT for Computerworld. Follow Lucas on Twitter at @lucasmearian or subscribe to Lucas’s RSS feed. His e-mail address is email@example.com.