Cyber carers
- 2 February 2006
Colin Jervis
Director and Healthcare Consultant
Kinetic Consulting Ltd
My previous article ended by reassuring healthcare workers that robots, though developing fast, were not about to take over their jobs. Some robots can now walk, but they must still learn to talk, to think and to be accepted before they can challenge humans as carers.
ASIMO, Honda’s prototype ‘walking’ robot, is still being developed for this eventual purpose. “Honda’s vision is to develop ASIMO to a level where it can truly help people,” says Honda’s William de Braekeleer. “Helping the elderly and infirm is part of this dream, but there is still much to do to reach our objective. Our focus is now on developing its intelligence capabilities.”
In his 1950 paper ‘Computing Machinery and Intelligence’, Alan Turing argued that one day machines would think like humans. Turing suggested that we could consider a machine intelligent if its responses were indistinguishable from those of a human. This was the origin of the Turing Test.
Thinking and communicating
Unfortunately, even the most powerful supercomputers are no match for the human brain and are a long way off passing the Turing Test. Many have predicted that increases in processor power would eventually allow the brain’s interconnections to be replicated, citing Moore’s Law, which has held historically and states that the number of transistors on a chip doubles roughly every 18 months to two years. Others counter with Wirth’s Law, which states that software gets slower more rapidly than hardware gets faster – just because hardware is becoming more advanced, it doesn’t mean that work is being done more quickly.
Artificial Intelligence research has not lived up to its promise of creating a conscious machine. While chess-playing computers can beat humans, at the game of Go—which demands more than the ability to calculate the best of a range of responses—even medium-strength human players still beat machines.
The development of natural language to allow robots to converse is similarly challenging. I asked one such AI-based chat program, www.intellibuddy.com, how the New Year was: “I like to meet new people. The new year is the same as ever,” she said. I asked her in what way it was the same: “Yes, it’s much the same,” she replied. The rest of the conversation continued in much the same way. Talks with other chatbots end up at similar dead ends.
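Why such conversations go round in circles becomes clear from how these programs typically work. The sketch below is a deliberately minimal keyword-matching chatbot in the spirit of ELIZA; it is an illustration of the general technique, not the actual implementation behind intellibuddy.com or any other product, and the rules and responses are invented for the example.

```python
# A minimal keyword-matching chatbot sketch (hypothetical rules).
# Each rule pairs a keyword with a canned response; there is no
# model of meaning, context or conversation history.
RULES = [
    ("new year", "The new year is the same as ever."),
    ("same", "Yes, it's much the same."),
]

DEFAULT = "That's interesting. Tell me more."

def reply(utterance: str) -> str:
    """Return the canned response for the first matching keyword."""
    text = utterance.lower()
    for keyword, response in RULES:
        if keyword in text:
            return response
    return DEFAULT

# A follow-up question that repeats a keyword simply triggers the
# same canned answer again -- the dead end described above.
print(reply("How was the new year?"))
print(reply("In what way is it the same?"))
print(reply("Why do you say that?"))
```

Because the program only matches surface keywords, any question containing ‘same’ draws the same reply, however the question is phrased; there is no understanding to build on.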
Computers have difficulty dealing with the multiple levels of meaning and nuance that even a small child might understand. This has led researchers to argue that robots need to develop by experience, and the Cog and Cyc projects illustrate two different approaches to bringing that about.
Cog and Cyc
Doug Lenat began Cyc in 1984. It is not a robot, but rather a knowledge base of fundamental information: facts, rules of thumb and heuristics for reasoning about objects and everyday life. It currently contains about 200,000 terms. Cyc is intended to normalise and minimise the amount of knowledge needed to develop common sense. When Cyc has attained this basic level, it will be ready to tackle more complex problems, such as the processing of natural language.
Rodney Brooks of the Massachusetts Institute of Technology’s Artificial Intelligence lab has taken a different approach to machine learning. His robot, Cog, learns about the world through its senses, using eye, head and hand. By creating a humanoid robot and allowing it human experiences, researchers believe robots are more likely to develop brains and capabilities similar to ours.
Social, legal and moral consequences
In one of my favourite books, ‘I, Robot’, Isaac Asimov conceived of machines with morals. His robots like nothing better than to sit and analyse the ethical implications of their actions. Could robots achieve this level of moral sophistication? Or would we constantly be in fear of them?
Westerners have a negative preconception of robots, perhaps because of the way Hollywood has portrayed them. Interestingly, to most Japanese people, robots are friendly and benign; there are several robots on the market in Japan designed to help look after and comfort the elderly, for instance.
Even if robots succeed in improving their poor image in the West, their widespread deployment in healthcare would have many legal and moral consequences. For example, who would be responsible if they went wrong—the manufacturer or the NHS? Perhaps the robot would be like any other tool and the doctor or hospital would be responsible for its use.
Ever since the Czech writer Karel Capek introduced the term ‘robot’ (from robota, the Czech word for forced labour) in his 1920 play ‘Rossum’s Universal Robots’, mankind has been fascinated by the prospect of technology with a human form. Many problems remain to be addressed (some think too many) but is that a good reason to stop trying?
The prospect of sending a man to the moon was once a similarly intractable problem, but mankind succeeded and made great technological leaps as a consequence. In striving to construct caring machines we may learn more about what makes us human, and, as a result, become better carers ourselves.