Takako Yoshida - Exploring the harmony of humans, robots, and AI

FACES: Tokyo Tech Researchers, Issue 31

Associate Professor Takako Yoshida

School of Engineering, Department of Mechanical Engineering

Advances in robotics and AI have led to an integration and expansion of interactions between humans and robots. With this, concerns regarding safety, usability, controllability and liability have also arisen. Takako Yoshida studies these issues from the perspectives of psychology and mechanical engineering.

Evolution of human-machine integration

Associate Professor Takako Yoshida

With the rapid development of information technology, robotics, and artificial intelligence (AI), the integration of human and machine continues to evolve. This is seen in advances such as virtual reality (VR) avatars1, remote-control robotics, wearable computers, and semi-automated self-driving vehicles.

But how do we achieve a sense of embodiment, presence, and immersion with machines? How does interacting with machines affect a human user's body and brain? What is the ideal integration of human and machine? Takako Yoshida focuses on these and other important questions.

"I majored in cognitive and brain science as an undergraduate and later took my doctorate in psychology. I chose psychology because I saw the proliferation of digital devices such as ATM touch panels and thought that if we were to apply concepts from cognitive and brain science, we could create user interfaces2 with even greater functionality."

Yoshida looked back on the path she followed and talked about what led her to her current area of interest. At the time, robotics research was a mechanical engineering pursuit, and AI was a specialty of information scientists. However, as society moves toward coexistence with robots and AI, a wider range of experts, including those from cognitive science, brain science, philosophy, law, and medicine must sit down at the same table to discuss the kind of future we wish to create together. According to Yoshida, "that time has arrived."

She also explains that, to address these issues, research needs to be conducted on three senses: sight, touch, and somatic sense, or proprioception, which includes eye and hand movement. Without these senses, it is difficult to create a sense of embodiment and presence. In mechanical engineering and information science, this approach is known as multimodal3 or crossmodal4 interfaces.

Yoshida has been studying human vision and computer vision with collaborators, examining both the human eye and the robot and computer systems designed to mimic its functions, in order to determine optimal visual function (i.e., image processing and recognition).

In fact, humans are only capable of perceiving color and shape accurately in an extremely narrow range at the center of their vision. However, we feel that we can clearly see in the periphery. This is because our brain supplements the image with movements of the head and eyes, and associates these with the intention of perceiving an object. "We can take advantage of this fact," she states, "to provide an improved sense of embodiment and presence, thereby creating more natural interactions."

A delay between a remote command and the movement of an avatar causes an abrupt reduction in the sense of embodiment and presence. In internet streaming, methods are being developed to reduce latency, and thereby preserve the sense of embodiment and presence, by providing high resolution and full color only for the range the human eye can actually resolve, and low resolution and low accuracy elsewhere.
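The gaze-contingent streaming idea described above can be sketched in a few lines of Python. This is an illustrative toy model, not the researchers' actual method: the acuity falloff formula and the constants below (the half-acuity eccentricity `e2` and the peripheral resolution floor) are assumptions chosen for demonstration.

```python
def acuity_scale(eccentricity_deg, e2=2.3):
    """Relative visual acuity at an angular distance from the gaze point,
    using a simple 1 / (1 + e / e2) falloff; e2 is the eccentricity (in
    degrees) at which acuity drops to half its foveal value."""
    return 1.0 / (1.0 + eccentricity_deg / e2)

def pick_resolution(eccentricity_deg, full_res=1080, min_res=120):
    """Choose a streaming resolution (in lines) for an image region:
    full quality near the fovea, scaled down with acuity toward the
    periphery, with a low floor for far-peripheral regions."""
    res = int(full_res * acuity_scale(eccentricity_deg))
    return max(res, min_res)

if __name__ == "__main__":
    # Resolution budget at increasing distances from the gaze point.
    for e in (0, 2.3, 10, 40):
        print(f"{e:5.1f} deg -> {pick_resolution(e)} lines")
```

Bandwidth is spent where the viewer can perceive detail, which is one way to keep latency low without a noticeable loss of image quality at the point of gaze.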

Subject’s view

A human eye function experiment utilizing eye tracking. Humans can only recognize words when their eyes focus on them; words outside the area of focus are not seen as precisely as subjects believe. In this experiment, text was shown within or outside this area to measure how much the subject could read, and how much they missed or perceived imprecisely in their peripheral vision.

Using psychology to create a sense of embodiment with robots

Yoshida is also involved in the area of power support robotics through the Psychiatric & Neurological Disorders Program organized by the Japan Agency for Medical Research and Development.

Power support robotics are wearable devices that support body movement for individuals with physiological or neurological dysfunctions. Currently, movement is computer controlled; however, control by brain or muscle signals is also possible. Examples include brain-machine interfaces5 and myoelectric upper limb prostheses.

"The biggest difference between our design and that of other wearable robotics is that our power support robot can be worn while undergoing real-time measurement of cerebral blood flow using functional magnetic resonance imaging (fMRI). This allows us to see the difference in brain state when the robot moves according to and contrary to the intention of the wearer. This further provides a means to quantify the influence robots have on the brain and senses."

Measuring brain state while wearing a power support robot

fMRI of an individual wearing a power support robot. Images show the difference in brain activity when the wearer feels and does not feel that the robot moves according to their intention.

This unique power support robot employs an artificial muscle system developed by Professor Koichi Suzumori of the School of Engineering. Typical robotic actuators are made of metal; however, this makes them unusable in MRI due to the magnetic field generated by the scanner. Replacing the metal actuators with soft actuators made from silicone fibers solves this problem.

Soft actuators made from silicone fibers containing thin tubes. Limb flexion and extension are achieved by injecting air into or expelling air from the tubes, which expands or contracts the soft actuator.

Yoshida's strength is her ability to quantify human sense and perception through psychological methods and the analysis of data obtained by fMRI.

"I use psychophysical methods, and the correspondence between physics and psychology allows us to understand the physical information required to generate a certain type of sensation."

This approach has revealed that a mere 300-millisecond delay in robotic movement decreases the user's sense of embodiment with the robot.

"To achieve effective integration of human and robot, the device must provide safe and comfortable control, and the information the user wants to convey must be transmitted accurately. Therefore, our group focuses on robotic devices from the perspective of cognitive and brain science. Through this approach, we hope to develop power support robots that function as if they were a part of the wearer's body."

On coexistence with AI

Associate Professor Takako Yoshida

Coexistence with robots and AI in human society is another important area of research for Yoshida. As a principal investigator in the Human-Information Technology Ecosystem project funded by the Japan Science and Technology Agency (JST), she engages in joint research with experts in law and philosophy.

"In this project, we explore the question of upon whom responsibility should fall when an AI, or a robot equipped with AI, causes an accident — the manufacturer, the user, or the AI itself. Insisting that manufacturers bear responsibility would cause a scaling down of the AI and robotics industries as companies seek to reduce risk. In addition, AIs learn and change through interaction with users; therefore, some argue that it would be unfair to hold manufacturers completely liable for mishaps."

Furthermore, wearable robotics and vehicle autopilot systems create circumstances where humans and AI share control over the same functions. The decision made by a human operator and that made by an AI may not always match, and this can change the focus of responsibility in the event of an accident.

Fundamentally, we might ask, is it even sensible to hold an AI or robot responsible for a mishap?

"Many matters regarding AI and robots have been discussed. How far would an AI or robot have to develop to be considered legally equal to humans? Is it possible for AIs and robots to exhibit common sense and emotion? Would we even want them to have such abilities? Some argue that determining responsibility is itself a relatively new idea from Western society, and that it may be possible to resolve these new social issues involving AI and robots using methods that do not depend on identifying a responsible party."

With the term singularity6 being used so often recently, many issues have emerged regarding the integration of humans and machines, and coexistence with robots and AIs.

"Addressing these issues," says Yoshida, "requires not only the viewpoints of law, philosophy, and sociology experts, but also a scientific approach to clarify the influences on the human brain and psychology."

At the conclusion of the interview, Yoshida had some advice for students and early-career researchers. "Many young people tend to worry about which field they should enter, liberal arts or science, engineering or physics. I would advise them not to worry about this. Rather, pursue what you are really interested in, what you want to do for the rest of your life. Careers in science are not the only way this goal can be achieved, especially in the area of human sense and perception that I am engaged in. Many graduates have chosen not to pursue careers in science, but have started their own businesses, or become media artists and writers. I would encourage young people to follow their own paths to their goals. Don't limit yourself to small frameworks that narrow your possibilities."

Members of Yoshida Laboratory

1 Avatar

A graphic representation of the user or some variation thereof.

2 User interface

A mechanism that allows users to interact with computers, software, and systems.

3 Multimodal interface

A concept of combining multiple senses such as vision, hearing, touch, smell, and somatic sense (sense of equilibrium, sense of space, etc.).

4 Crossmodal interface

A concept that involves interactions between two or more different sensory modalities such as vision and hearing, vision and touch, and taste and touch.

5 Brain-machine interface

Interface technology that supports communication between the brain and external devices, such as computers, through the detection and extraction of brainwaves or other brain information and the provision of stimulus to the brain.

6 Singularity

The hypothetical point at which AI exceeds human intelligence (the technological singularity), bringing about profound changes in the world.

Associate Professor Takako Yoshida

Takako Yoshida


  • 2016 Associate Professor, Department of Mechanical Engineering, School of Engineering, Tokyo Institute of Technology
  • 2012 Associate Professor, Department of Mechanical Sciences and Engineering, Graduate School of Science and Engineering, Tokyo Institute of Technology
  • 2010 Post-Doctoral Fellow, Department of Experimental Psychology, University of Oxford
  • 2005 Post-Doctoral Fellow, Vision Science Laboratory, Harvard University
  • 2004 Doctor of Letters, Department of Psychology, Division of Behavioral Studies, Graduate School of Letters, Kyoto University

School of Engineering

—Creating New Industries and Advancing Civilization—

Information on the School of Engineering, inaugurated in April 2016



The Special Topics component of the Tokyo Tech Website shines a spotlight on recent developments in research and education, achievements of its community members, and special events and news from the Institute.

Past features can be viewed in the Special Topics Gallery.

Published: June 2018