For predictions on robotics and AI for 2014, I consulted with a former colleague, David Dodds, who now resides in British Columbia. He has completed extensive work in these fields and provided a number of predictions for 2014. With his permission, I decided to share these with you. My comments appear interspersed in blue italics.

Other than extensive discussions with David, the closest I got to AI was a paper I delivered in 2006, when I was contemplating the implications of Searle’s Chinese Room thought experiment:

Chatfield, W. Hugh, “Understanding Instruction Books and Programs,” Proceedings of the 2006 International Conference on Artificial Intelligence (ICAI'06), ed. Hamid R. Arabnia, CSREA Press, Las Vegas, USA, 2006, pp. 621–627. ISBN 1-932415-98-X.

Robotics and AI Predictions

Since the Boston Dynamics BigDog robot has successfully shown that it can run as well as walk and bound, carry 400 lbs of supplies, see and follow humans in the field (including through bushes), and climb steep, slippery embankments, I expect that further refinement of its balance system and its motor and sensor programs will allow it to distinguish objects in its environment even more subtly and to exhibit additional interesting physical capabilities. I think it would be really interesting if they were to add an articulated arm-and-hand unit.

Google seems to be buying up robotics companies. Boston Dynamics is its eighth purchase in the last six months. One might wonder what they have up their sleeve.

Adding the (fairly new) cognitive-processing-based processors to its already decent navigation and control capability will allow it to generate more complex “plans” and likely to think ahead a few steps beyond what it is doing, or is “supposed to be doing”, according to its internal plans. Metaphorically, this would move it from a checkers-playing system to a chess-playing system, with all the benefits of being able to step up its ‘game’. It is quite possible that trials of speech input from humans to command GENERAL actions will occur in 2014. (As I have written to the author of the book “Our Final Invention”, it would be interesting to use a herd of BigDogs as light tanks, equipped with mine-detection equipment and light weaponry. They could be dropped by paraglider from aircraft and touch down on rooftops without a big “here we come” clamour.)

The book David refers to is “Our Final Invention: Artificial Intelligence and the End of the Human Era” by James Barrat.

Since Microsoft has further refined its 3D-space game sensor (a camera- and sound-based multi-sensor system now able to detect facial gestures and gaze), others outside Microsoft will immediately start adding new ‘skills’ (uses) to the sensor unit by writing additional software and hooking it up to preexisting cloud-based complex AI software.

If someone adds an updated thermal view to the visible-light camera sensing, the MS sensor would be able to easily see a human’s heartbeat, perhaps sense blood pressure without physical contact, and check pupil size, a giveaway of attention and stress. It wouldn’t surprise me at all if a version of BigDog is being tested with this sensor or one very like it.
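The basic idea behind camera-based heartbeat detection is simpler than it sounds: skin brightness varies very slightly with each pulse, so counting the peaks in the averaged pixel signal over time yields a heart-rate estimate. The sketch below is a simulation of that principle only; the signal values are made up, and it is not code for any real sensor:

```python
import math

# Simulation of the remote-pulse idea: count peaks in a periodic
# brightness signal to estimate beats per minute. All values here
# are synthetic (assumed), not real sensor data.

FPS = 30         # camera frames per second (assumed)
PULSE_HZ = 1.2   # simulated heartbeat: 72 beats per minute

# Simulated per-frame average skin brightness over 10 seconds.
signal = [100.0 + 0.5 * math.sin(2 * math.pi * PULSE_HZ * t / FPS)
          for t in range(10 * FPS)]

# Count local maxima to estimate beats, then convert to BPM.
peaks = sum(1 for i in range(1, len(signal) - 1)
            if signal[i - 1] < signal[i] > signal[i + 1])
bpm = peaks * 60 / (len(signal) / FPS)
print(round(bpm))  # -> 72
```

A real system would of course need filtering and motion compensation; the point is only that the measurement reduces to signal processing on ordinary camera frames.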

The original SIRI system, not the stripped-down one that has become known to the public, will likely be enhanced in the lab, although public announcements about it in 2014 might not happen. The military is quietly (not particularly publicly) working away to further enhance the abilities of the original SIRI. If IBM or the milboys combine Watson technology with the most advanced current version of the original SIRI, the result is a machine system that takes a considerable step towards Artificial General Intelligence (AGI), a.k.a. human-level intelligence.

No, it hasn’t actually reached AGI yet. How do we know? Any sufficiently intelligent factotum would make its presence known in the world, regardless of what Draconian, intricate secret security measures its builders take.

I was intrigued watching the trailer for the movie “Her”, which tells the tale of a relationship between a man and his personal instantiation of one of the first AI operating systems on his phone – sort of a super SIRI. The notion of possible relationships between humans and the various artificial life forms they create is certainly something we have little experience with so far.

Coincidentally, I had just read the novel “Do Androids Dream of Electric Sheep?” by Philip K. Dick, later reissued under the title “Blade Runner” after the film. The novel is quite different from the movie, as it explores android/human relations in some detail.

Haptic imaging sensors will be tested in 2014. There are already quite dexterous human-hand factotums here and there in labs; also take a look at the evolving physical capabilities of the NASA humanoid robot. Far beyond the simple microswitch-based ‘touch’ sensors on fingers today, there are now flexible circuit boards and haptic sensors with elements small enough to be packaged as a matrix, so that a haptic ‘image’ can be formed.
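To make the "haptic image" idea concrete, here is a small sketch of my own (the grid size, pressure values, and helper names are all assumptions, not any lab's actual hardware): a matrix of tactile elements is read out as a 2-D array and processed exactly like a grayscale image, for example to locate the centroid of a contact patch:

```python
# Hypothetical sketch: a grid of tactile ("taxel") pressure readings
# treated as an image, then reduced to a contact centroid.

GRID = 8  # an assumed 8x8 matrix of pressure-sensing elements

def read_taxels():
    """Stand-in for a hardware readout; simulates a press near (3, 4)."""
    frame = [[0.0] * GRID for _ in range(GRID)]
    for r in range(GRID):
        for c in range(GRID):
            frame[r][c] = max(0.0, 5.0 - ((r - 3) ** 2 + (c - 4) ** 2))
    return frame

def contact_centroid(frame, threshold=1.0):
    """Pressure-weighted centroid of all taxels above the threshold."""
    total = sum_r = sum_c = 0.0
    for r, row in enumerate(frame):
        for c, p in enumerate(row):
            if p >= threshold:
                total += p
                sum_r += p * r
                sum_c += p * c
    if total == 0:
        return None  # no contact detected
    return (sum_r / total, sum_c / total)

print(contact_centroid(read_taxels()))  # -> (3.0, 4.0), where the press is
```

Once the readings are in image form, the whole toolbox of image processing (edge detection, blob tracking, texture analysis) applies to touch.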

There is also the possibility of refining time-of-flight laser range-imaging sensors with femtosecond and faster clocks, so that a transparent but flexible surface can be placed in very light contact with real-world objects and a fine-grained texture obtained from such sensors in the fingers of those highly articulated hands. The real problem to overcome is developing the inverse kinematics needed to operate such a complex articulation programmatically.
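To show what "inverse kinematics" means at its very simplest, here is a toy two-link planar arm (my own illustrative sketch; link lengths are assumed): given a target point, the law of cosines gives the elbow angle, and the shoulder angle follows. A multi-fingered hand has many more joints and usually no closed-form solution, which is exactly the difficulty noted above:

```python
import math

# Toy inverse kinematics for a 2-link planar arm: solve joint angles
# so the fingertip reaches a target (x, y). Link lengths are assumed.

L1, L2 = 1.0, 1.0  # link lengths (arbitrary units)

def ik_two_link(x, y):
    """Return (shoulder, elbow) angles in radians (one of two solutions)."""
    d2 = x * x + y * y
    # Law of cosines gives the elbow bend.
    cos_elbow = (d2 - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    if not -1.0 <= cos_elbow <= 1.0:
        raise ValueError("target out of reach")
    elbow = math.acos(cos_elbow)
    shoulder = math.atan2(y, x) - math.atan2(L2 * math.sin(elbow),
                                             L1 + L2 * math.cos(elbow))
    return shoulder, elbow

def fk_two_link(shoulder, elbow):
    """Forward kinematics: fingertip position from the joint angles."""
    x = L1 * math.cos(shoulder) + L2 * math.cos(shoulder + elbow)
    y = L1 * math.sin(shoulder) + L2 * math.sin(shoulder + elbow)
    return x, y

s, e = ik_two_link(1.2, 0.8)
print(fk_two_link(s, e))  # recovers (1.2, 0.8) up to rounding
```

For a hand with dozens of joints, such closed-form solutions generally do not exist, and iterative or learned methods must be used instead.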

Watson-type technology is being experimentally enhanced so that it can read books – read them, not just scan them in. This would be a bootstrapping cognitive system. It might not be advertised to the public in 2014, but it is already in development in the lab.

And then there is actual genetic hardware, which learns the way biological systems do. This is not the overstated ‘learning’ talked about in so-called learning machines, which are little more than fixed algorithms doing math, with no understanding of what they are doing or of mathematics itself. That is why a Watson machine reading algebra and geometry textbooks is really quite interesting. If Watson technology is combined with late-model genetic processing hardware, the combined system should be able to teach itself and develop a genuine understanding of mathematics. Then the galaxy is its oyster.

I am reminded of Ray Kurzweil’s prediction that by 2045 we will have created a non-biological intelligence one billion times more powerful than all human intelligence today. I am also reminded of Robert J. Sawyer’s fascinating WWW trilogy about the spontaneous emergence of an intelligence in the web and its possible relationship with humans. Will such powerful intelligences treat humans as irrelevant?

Gary Marcus, professor of psychology at N.Y.U. and visiting cognitive scientist at the new Allen Institute for Artificial Intelligence, recently wrote an essay suggesting we still have some way to go with “deep learning computers”.

An objective-awareness machine/factotum will be announced and demoed in 2014. (I demoed the beginnings of one half a decade ago at a conference.)

3D ICs with adaptive capabilities will be announced. Already a French company is fabricating actual 3D circuits for sale in the real world. Further improvements in masks, connectivity concepts and materials processing will allow those devices to become manufactured with recovery and adaptation capabilities.

Dodds can be found in many AI forums on LinkedIn.
