They learn to speak, write, and do arithmetic. They have a phenomenal memory. If one read them the Encyclopedia Britannica they could repeat everything back in order, but they never think up anything original. They’d make fine university professors.
R.U.R. (1920), Karel Čapek
My first experience with robots came through popular culture and literature when I was a little girl. I was fascinated with the first computers, with space and with robots: Star Wars and R2-D2 (a first indication of my geekiness), watching Blade Runner many times and dreaming about it, reading short stories by Isaac Asimov. Later on, during college, courses on information systems and cybernetics caught my attention, from cybernetic communication models to cybernetic organisms described as cyborgs and the larger networks of communication. I was interested in Donna Haraway's techno-science and feminist cyborg studies, and in Sherry Turkle's analysis of robot sociability, her studies of the intimate bonds we form with our artifacts (robots and computers) and how they shape who we are. Finally, with the expansion of the Internet, my interests shifted to information and communication technologies and computer-mediated communication, networked and learning systems.
Then, last December at TEDWomen, I reached a “robotic moment” watching Cynthia Breazeal, a roboticist from MIT, talk about robots as communication technologies: mobile, expressive, performing collaborative tasks and socially engaging, something that connected with my Internet studies and research on communication in different contexts.
People interact with robots much as they do with their computers: they trust them and engage with them emotionally. To find out more about the possibilities of robots and their proliferation in society (in learning, medicine, space and everyday life), as well as about the European robotics scene, I talked with cognitive robotics researchers Sasa Bodiroza and his colleague Guido Schillaci from the Cognitive Robotics Group at the Humboldt University of Berlin.
DR: Welcome to the Scientific American blog. Would you please tell my readers a little bit more about yourselves? What is your scientific background?
SB: My name is Sasa Bodiroza and I am a PhD student in the Cognitive Robotics Group at the Institute of Informatics, Humboldt University of Berlin. I work together with my colleague Guido Schillaci, under the supervision of Prof. Verena Hafner.
I finished my BSc and MSc studies in the Department of Computer Science and Engineering at the School of Electrical Engineering, University of Belgrade. My bachelor’s and master’s theses were in the area of fuzzy logic, where I developed a system for student learning.
GS: Hello, my name is Guido Schillaci and I’m from Palermo, Italy. I’m a PhD student at the Humboldt University of Berlin, where I’m a member, together with Sasa Bodiroza, of the Cognitive Robotics Group supervised by Prof. Verena Hafner. We’re involved in the international research network INTRO (INTeractive RObotics), funded by the EU.
I have a bachelor’s degree in Computer Engineering and a master’s degree in Computer Engineering for Intelligent Systems, both from the University of Palermo. I also studied for one year at the School of Computer Engineering (ETSIIT) in Granada, Spain. My thesis dealt with machine learning techniques for robotics.
DR: What’s your PhD research about?
SB: My research is part of the international research network INTRO (INTeractive RObotics), a project in the EU 7th Framework Programme (FP7). The network consists of four university partners – Umeå University, Humboldt University of Berlin, Ben-Gurion University of the Negev and the Bristol Robotics Laboratory – and two industry partners, Robosoft and Space Applications Services.
I am interested in the use of gestures in human-robot interaction. I focus on dynamic gesture analysis, which includes the recognition, learning and synthesis of gestures. Another important aspect is methods for determining human gesture vocabularies, as well as sets of gestures suited to a particular robot morphology.
The goal of my research is to develop a system that allows a robot to understand certain gestures and learn new ones. The robot will be able to perform gestures that fit its morphology. I hope this will help achieve natural and intuitive interaction between robots and people.
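(The interview does not describe Bodiroza’s actual algorithms, but to give readers a feel for what dynamic gesture recognition involves, here is a minimal, hypothetical sketch using dynamic time warping, a classic technique for comparing gesture trajectories of different speeds and lengths. All names and the template data are invented for illustration.)

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two trajectories.

    a, b: arrays of shape (T, D), e.g. 2-D hand positions over time.
    DTW aligns the two sequences in time, so a slow wave and a fast
    wave still match.
    """
    n, m = len(a), len(b)
    # cost[i, j] = best alignment cost of a[:i] against b[:j]
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # skip a frame of a
                                 cost[i, j - 1],      # skip a frame of b
                                 cost[i - 1, j - 1])  # match frames
    return cost[n, m]

def classify(gesture, templates):
    """Nearest-template classification: return the label whose stored
    trajectory is closest (in DTW distance) to the observed gesture."""
    return min(templates, key=lambda label: dtw_distance(gesture, templates[label]))

# Toy templates: a "wave" oscillates, a "point" moves in a straight line.
t = np.linspace(0.0, 1.0, 20)
templates = {
    "wave": np.stack([np.sin(6 * t), np.zeros_like(t)], axis=1),
    "point": np.stack([t, t], axis=1),
}
```

A noisy or time-shifted version of the wave trajectory would still be classified as `"wave"`; learning a new gesture amounts to adding a new labeled template.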
I am also interested in attention manipulation and the development of attentional models for robots.
GS: Human-robot interaction, in particular behaviour and intention recognition for human-robot learning. My efforts are focused on applying theories from cognitive science, neuroscience and developmental science to the development of cognitive skills for robots, in order to increase the intuitiveness and efficacy of their interaction with humans.
DR: There are over one million household robots and 1.1 million industrial robots operating worldwide. Do you see robotic systems that understand and perform gestures being further developed and applied in everyday life, such as in learning, child care or health care? Or perhaps in other human-robot interaction applications?
SB: Certainly. Current computer and robotic systems require a person to go through user manuals and learn how to use them. Developing a robot that can recognize and perform gestures will at least partially solve this problem, enabling more natural and intuitive interaction.
However, gesture recognition and synthesis in robots have broader applications. A PhD student in our group, Siham Al-Rikabi, is doing research in natural language processing. She is looking at the use of sign language by robots and at translation between spoken or written language and sign language.
There are other applications – gesture recognition and synthesis will make daily interaction between humans and robots easier, more intuitive and more natural. For example, we could have a robot waiter serving us drinks at a bar (by the way, check out Roboexotica, a festival of cocktail robotics held annually in Vienna), a service robot at home, or entertainment and nursing robots in retirement homes and with children. We shouldn’t have to learn how to operate new systems – in this case robots – rather, we should adapt them to the way that is most intuitive for us.
GS: Gestures are important non-verbal communicative tools. The ability to understand and perform gestures would definitely improve the quality and complexity of the interaction. At that level, a robot could be present in everyday life and useful for several applications: health care, as you said, collaborative tasks, service applications and so on.
DR: You’ve mentioned neuroscience and developmental science theories being implemented in your project. How much do your research area, and cognitive robotics in general, overlap with other scientific areas, e.g. the social sciences, cybernetics, AI and psychology?
SB: Cognitive robotics is a highly interdisciplinary area. Unlike in traditional robotics, researchers come from many different backgrounds: computer science and artificial intelligence, computer engineering and robotics, but also usability, psychology and other fields.
GS: Cognitive robotics is becoming more and more interdisciplinary. I constantly try to ground and justify my research in cognitive science and neuroscience studies.
DR: What are you currently working on? What are your current plans and projects?
SB: Recently we have been working on the implementation of an attentional mechanism in the humanoid robot Aldebaran Nao, as a prerequisite for joint attention. The mechanism enables Nao to have simple interactions with people: detecting them, showing interest in objects by pointing at them, and losing interest after a while. Interested readers can find our paper on this in the Proceedings of Humanoids ’11 (to appear around October 2011). The next planned addition is a gesture recognition and learning system. I will be going to Israel soon, where I will stay for one semester working with Prof. Yael Edan.
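(Their paper is not quoted here, but the “showing interest, then losing interest” behaviour Bodiroza describes is a form of habituation, and the general idea can be sketched in a few lines. This is a toy model invented for illustration, not their implementation: each stimulus has a salience score that decays while attended and slowly recovers while ignored, so attention naturally shifts rather than fixating.)

```python
class AttentionModel:
    """Toy habituation-based attention: salience decays while a stimulus
    is attended and recovers while it is ignored."""

    def __init__(self, stimuli, decay=0.7, recovery=0.1):
        self.salience = {s: 1.0 for s in stimuli}
        self.decay = decay        # fraction of salience lost per step while attended
        self.recovery = recovery  # salience regained per step while unattended

    def step(self):
        # Attend to the currently most salient stimulus...
        target = max(self.salience, key=self.salience.get)
        for s in self.salience:
            if s == target:
                self.salience[s] *= (1.0 - self.decay)  # habituate to it
            else:
                # ...while interest in the others slowly recovers.
                self.salience[s] = min(1.0, self.salience[s] + self.recovery)
        return target

# Run the model: attention cycles among the objects instead of fixating.
model = AttentionModel(["person", "ball", "cup"])
targets = [model.step() for _ in range(10)]
```

On a real robot, the attended target would drive behaviours such as gaze and pointing; here the decay and recovery rates are arbitrary constants chosen for the sketch.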
GS: I’m working with Sasa and Verena on a system providing a robot with visual attention and attention manipulation skills, two important prerequisites for joint attention. The ability to share the focus of our attention with other individuals is a fundamental social tool that lets us communicate and share mental states, a characteristic that differentiates human beings from other animal species.
My future work will focus on providing robots with skills for understanding the behaviours and intentions of interaction partners.
DR: Can you tell our readers more about the Aldebaran Nao robot? Does it serve human purposes? Is it a humanoid?
SB: The Nao is a humanoid robot, around 60 cm tall. It is a research platform used by research groups around the world. Perhaps its most prominent appearance is at RoboCup, a robot football competition, where it is used in the Standard Platform League. Nao can also be used to teach students the fundamentals of robotics.
It is equipped with various input devices and sensors (cameras, microphones, ultrasonic rangefinders, inertial measurement units and pressure sensors on the feet) and actuators in its joints. It can also reproduce sound, synthesize speech, and turn LEDs in the eyes, ears, chest and feet on and off. It can reproduce gestures using its arms; however, it cannot reproduce hand gestures, since the academic version only has grippers with three fingers.
DR: What do you think about the cognitive robotics research and development scene today? How do you see it developing over the next ten years? Will robots assist humans in many areas? Go to space, like NASA’s Robonaut 2?
SB: Cognitive robotics is a relatively young area, which began to develop in the early ’90s. Since then many research groups in cognitive robotics have formed. There is still a long way to go before we reach fully autonomous, interactive and social robots capable of joint attention. I think research will continue in the same directions, focusing on the embodiment of the robot and not just on control systems, followed by the development of new areas, such as neural control for robots, which is currently pursued by only a few groups in the world.
As we learn more about robots and make them safer for direct interaction with humans, the number of use cases and places where a robot can assist us will definitely increase. Take Robonaut 2, which you mentioned: it is currently being tested in space.
GS: We’re still far from having robots that can interact naturally and intuitively with people in a real environment. However, much of the scientific community’s effort is focused on that: several prerequisites and possible paths have already been identified, and progress in some robotics fields is quite fast.
It’s difficult to predict where robotics will be in ten years. Some opinions can seem visionary (e.g. the ultimate goal of the RoboCup Federation is that by 2050 an autonomous robot soccer team will win a game against the winner of the most recent World Cup), but, why not, that’s the right mood for approaching the problem!
DR: I know that besides your department there is also the Cognitive Robotics Lab at the Technical University of Munich. How developed is the European cognitive robotics network? Do you collaborate with the European Robotics Research Network (EURON)?
SB: There is an active robotics scene in Europe. Besides EURON, there is also EUCogII, a large network connecting researchers in artificial cognitive systems and robotics.
Also, at the recent HRI 2011 conference in Lausanne, the published papers were roughly equally divided among Europe, Japan and the USA, which shows that research in this field has a strong presence in Europe.
In our INTRO project, we collaborate with research groups at three other universities and with two industry partners, as mentioned before.
GS: The European scientific community in cognitive robotics is large and very strong. There are many research networks promoted by the EU; the project we’re involved in is one of them, with four universities and two industrial partners, as we mentioned before.
DR: Are you familiar with what your peers in the United States are doing, e.g. at MIT and other departments? Do you follow their work, and is there anything in particular that interests you?
SB: I follow their research. I am particularly interested in the work of Prof. Cynthia Breazeal of MIT, director of the Personal Robots Group, such as her work on how we can interact with robots.
GS: Yes. In particular, I’m interested in the work of researchers on social robots, e.g. B. Scassellati at Yale University and C. Breazeal at MIT.
DR: Recently the first humanoid astronaut robot, Robonaut 2, started tweeting from space, suggesting that the power of the Social Web is in some ways limitless. Since I am a social media researcher, I’m curious about your use of the Social Web. How do social media, blogging and social networks figure in your work?
SB: I think the social web provides some good tools for researchers. I use it to follow other people’s research (Academia.edu) and to find recent posts and news about new developments in the area, for example using Sparks on Google+, which has been an amazing service so far. Services like Twitter, Google+ and Facebook can also make communication between researchers easier. I have also started relying on the cloud for storage and actively use Dropbox and Ubuntu One. And there are some robotics blogs, like IEEE Spectrum’s Automaton, which I read regularly.
GS: Social media are powerful tools for exchanging information and ideas. Some of them (like Academia.edu or LinkedIn) can be very useful for staying up to date on other scientists’ latest research and publications; however, most researchers tend to update their personal or lab websites more often, so I always end up checking those, or scientific digital libraries, instead of their social network profiles.