Prospective Students

Hello New World! A new era of technology and entertainment

(left) Motoi Ishibashi, (right) Shoichi Hasegawa

What kind of world will we see when the real and virtual merge?

The Rio 2016 closing ceremony is still fresh in our minds

Motoi Ishibashi, from the creative division Rhizomatiks Research, has worked with a wide variety of creators and artists on collaborative works and performing arts, including the production of the Tokyo 2020 Flag Handover Ceremony performance, and is involved in every stage of his projects, from the development of globally successful hardware and software through to operation. Associate Professor Shoichi Hasegawa aims to use virtual reality (VR) and simulation to create characters that move like humans. These two, each a creator of things no one has seen before, sat down to discuss the new possibilities of technology and expression.

(Held on June 19, 2019 at Suzukakedai Campus)

The power of live expression

Ishibashi: Since I was invited here, I decided to walk through Tokyo Tech's campus for the first time in a while. It has changed a lot since I was a student. I noticed there are many female students in this new generation.

Hasegawa: There are definitely a lot more female students than before. Ishibashi-san, since graduating you've worked on many installations*1 and exhibitions, such as particles and proportion, which are permanently exhibited at the Ars Electronica Center*2. But recently, we are seeing more performances such as dance. Do you feel that what you do has changed?

Ishibashi: Technically, installations and stage performances are the same; the difference is whether or not the audience can experience the work directly. We think of dance performances as something you experience, not just something you watch, because the system itself is created together with the dancers. Recently, though, I've come to think that even a work that can't be experienced directly should be presented in the best possible way to watch. I think this is why our stage performances have increased.

Shoichi Hasegawa

Hasegawa: Video is also quite effective. Maybe it's the influence of YouTube, but I think video can convey a lot.

Ishibashi: We are very conscious of this, and that's why we've tried to create ways for the video itself to be viewed, to some extent, as a work of art.

Hasegawa: I don't think it's possible to plan everything from start to finish. You don't just create a sketch and then have people dance according to it, right?

Ishibashi: There are many ways a piece can be made. For example, you can have a dancer move while holding a drone and record that movement with motion capture. If you then fly the drone according to the recorded movement, it looks as if the dancer is making it move and dancing with the drone.
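As a rough illustration of the idea, rather than Rhizomatiks' actual pipeline, the Python sketch below resamples a recorded motion-capture trajectory into fixed-rate waypoints that a drone controller could follow; the data, rates, and the send_waypoint step are all hypothetical.

```python
import numpy as np

def resample_trajectory(times, positions, rate_hz=10.0):
    """Resample a recorded mocap trajectory (time in s, xyz in m)
    onto fixed-rate waypoints that a drone controller can track."""
    t_new = np.arange(times[0], times[-1], 1.0 / rate_hz)
    xyz = np.column_stack(
        [np.interp(t_new, times, positions[:, i]) for i in range(3)]
    )
    return t_new, xyz

# Stand-in for a real capture: a dancer raising a drone along a gentle arc.
t = np.linspace(0.0, 5.0, 600)                        # 120 Hz capture
pos = np.column_stack([np.sin(t), np.cos(t), 0.3 * t])

t_wp, p_wp = resample_trajectory(t, pos, rate_hz=10.0)
for ti, pi in zip(t_wp[:3], p_wp[:3]):
    # send_waypoint(ti, pi) would go to the drone's own control API
    print(f"t={ti:4.1f}s  target={pi.round(2)}")
```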

Hasegawa: It's amazing how you have been able to create theater and dance. I find techniques that work on the spot, in real time, fascinating, because animation can only reproduce movements that were made in advance. A VR character, for example, needs to react to our movements, and it is difficult to make it respond in real time, changing facial expressions and lines of sight as people approach. So I'm amazed that you have been able to realize this as a kind of improvised stage play. I'm also impressed by the way you present it as video. I think the way you combine real-time performance and video is incredible.

Illumination installation entitled particles. A flashing light source floats in the air creating an illusory afterimage. Grand Prix runner-up at the 2011 Ars Electronica, Interactive Art Category.

In proportion, images are projected onto a moving model operated by a robotic arm and laser pico projector. Installation for the "Shonen Yo Ware ni Kaere" music video by the Yakushimaru Etsuko Metro Orchestra.

Ishibashi: In our case, our performances are live. It depends on the case, but for Perfume's performance at the 2018 NHK Kohaku Uta Gassen, we knew there would be about 50 million people watching on TV, not just the 1,500 or so people at the venue. The broadcast is live for everyone, including the TV audience. People know it is happening in real time and has not been pre-recorded or edited, which makes it more impressive. Live events have power.

Hasegawa: That's true. And you also have to design for the accidents that can arise in a live performance.

In border, the movement of the personal mobility device WHILL and an omni-wheel cart is controlled with precision. Five dancers and ten spectators wearing AR/VR goggles present a completely new performance in perfect harmony.

Ishibashi: If something happens, we try to fix it on the spot, but we also have a backup plan. Because it's real time, our creators are also nervous. And since certain technologies can only be handled by a particular person, the absence of just one person can really affect the outcome.

Hasegawa: In our lab as well, often only one person knows how to handle a certain device. Even so, every time I look, the sheer number of measurement and control channels you handle is amazing. I'm impressed that all that communication happens without any mistakes. Even a failure rate of 0.01% wouldn't work, so your figure must be even lower, right?

Ishibashi: I haven't measured it. But since everything is linked together and works, the rate must be at least that low.

How do you move the things that move the heart?

24 drones, a performance with 24 drones and 3 dancers. Machine learning is incorporated to make the drones' movement efficient and safe, adding beauty to the overall composition.

Hasegawa: Performances that until now centered on inorganic materials have started to incorporate dancers, allowing the works to take on new qualities. Is there something you are pursuing in regard to movement or expression?

Ishibashi: For example, if I myself move, there is little variation in the expression, and the possibilities cannot be fully drawn out. Only professionals can move in a way that looks enjoyable and pleasing. When the director and choreographer MIKIKO composes the choreography and gives direction to the dancers, it becomes clear that "if the movement is like this, the picture will look like this," and the output is enriched.

Hasegawa: When human movements are created in the VR world, animators do that work. They know that "if it moves like this, viewers will feel like this." Actors and dancers express emotions with movement in the same way. There is a phenomenon called biological motion, where points of light are attached to the body, the body is filmed in the dark, and emotion is conveyed by the motion of the lights alone. For virtual characters, however, we have to implement this ourselves.

Ishibashi: Are animators still making such movements by hand?

Hasegawa: Yes. 3D motion is usually key frame*3 animation, where the joint angles and the way the hand opens are adjusted little by little to create a movement. However, the character cannot suddenly produce a movement that was not prepared. While the quality of movement comes from animators, we also want the character to react in real time to the user's movements. My laboratory is working on how to achieve both.
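To make the two ingredients concrete, here is a minimal Python sketch (my own illustration, not the laboratory's code): a joint angle interpolated between hand-authored keyframes, blended with a target that can change in real time, for example one derived from the user's tracked movement. All values are hypothetical.

```python
import numpy as np

# Hand-authored keyframes: (time in s, joint angle in degrees).
keyframes = [(0.0, 0.0), (0.5, 45.0), (1.0, 30.0), (2.0, 0.0)]

def keyframed_angle(t):
    """Linear interpolation between keyframes (real tools use splines)."""
    times, angles = zip(*keyframes)
    return float(np.interp(t, times, angles))

def blended_angle(t, reactive_target, weight):
    """Blend the authored animation with a real-time target,
    e.g. an angle derived from the user's tracked hand position."""
    return (1.0 - weight) * keyframed_angle(t) + weight * reactive_target

for t in np.arange(0.0, 2.0, 0.25):
    user_target = 60.0            # would come from tracking in a real system
    print(f"t={t:4.2f}  keyframe={keyframed_angle(t):5.1f}  "
          f"blended={blended_angle(t, user_target, 0.3):5.1f}")
```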

Ishibashi: We also create things like cart movements and light patterns, and we use programming to produce movements and timings that look beautiful. There are also times when we add finishing touches bit by bit, as you would by hand in animation. For example, if I have a dancer walk with a motion capture marker on her head and then move a cart along that trajectory, slightly hesitant movements appear. Such noise and fluctuation can be very moving.

Hasegawa: If you observe the odd movements of a robot, you can see the mechanisms behind its control system. The creator's hard work and intention come through, and you can tell what the robot is seeing and judging, which makes it endearing.

Ishibashi: On the other hand, when you take full control of the cart and arrange its movements precisely, it feels good. Those are the kinds of joys creators experience.

Hasegawa: When the control is optimal, you can tell the work was done by someone who has really found the right answers.

Ishibashi: In one of our works, we flew 24 drones. During a test, things went awry because the data transfer was slow. The movement went out of control and became something we could never have created ourselves. We were supposed to stop it, but for a while we were just mesmerized by the movement (laughing).

Hasegawa: It suddenly began to move like a group of living animals (laughs).

Ishibashi: How do you make characters move?

Hasegawa: There are many ways, but it is not like creating CG. We use physical simulation for control, so it is closer to a robot. Real robots have limitations such as torque and range of motion, but in VR there is no cost no matter how many simulations you run, and the power is infinite. However, if the power is infinite it is no longer human, so first of all we developed a simulator that makes it easy to set human-like parameters. But this covers only the physical body.
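A minimal sketch of that kind of setup, assuming a single joint and illustrative constants: a PD controller drives the joint toward a target angle, and clamping the torque gives the character a human-like strength limit instead of infinite power.

```python
import numpy as np

def pd_torque(theta, omega, target, kp=40.0, kd=6.0, max_torque=20.0):
    """PD control toward a target angle; the clamp imposes a
    human-like strength limit instead of infinite actuator power."""
    tau = kp * (target - theta) - kd * omega
    return float(np.clip(tau, -max_torque, max_torque))

# Integrate a single joint (inertia I) under the controller.
theta, omega, inertia, dt = 0.0, 0.0, 0.5, 0.01
for step in range(300):
    tau = pd_torque(theta, omega, target=np.pi / 2)
    omega += (tau / inertia) * dt          # semi-implicit Euler
    theta += omega * dt
print(f"final angle: {np.degrees(theta):.1f} deg")  # approaches 90 deg
```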

Ishibashi: So, from there, you want to implement controls so that it moves like a human.

Hasegawa: I am working on simulating common human behaviors such as sensing the outside world, deciding the purpose of an action, and then moving. If these are implemented as controls, the character comes to life. I also imitate things known from psychology, such as how fast the eyes move, so I'm trying to create a simulator that moves like a human, half based on science and half on our staff's intuition. First, just finding something of interest and reaching out for it creates characteristics of living things. We are producing a demonstration that includes this mechanism, and I think that if the character can move its line of sight in response to our movement, look back when we wave a hand, and engage in mutual exchange, it will be possible to evoke an emotional response from the viewer.
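As a toy illustration of that loop of noticing something and looking at it (an assumption-laden sketch, not the lab's simulator), the gaze angle below chases a point of interest but is capped at a maximum angular speed, so the turn of the eyes stays plausibly human.

```python
import math

def update_gaze(gaze, target, dt, max_speed=math.radians(300)):
    """Rotate the gaze angle toward the target, but never faster than
    max_speed (human eye saccades are fast but not instantaneous)."""
    error = target - gaze
    step = max(-max_speed * dt, min(max_speed * dt, error))
    return gaze + step

gaze, dt = 0.0, 1.0 / 60.0
for frame in range(30):
    # The "point of interest" jumps when, say, the viewer waves a hand.
    target = math.radians(40.0) if frame > 10 else 0.0
    gaze = update_gaze(gaze, target, dt)
print(f"gaze after 0.5 s: {math.degrees(gaze):.1f} deg")
```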

Into an era when humans themselves change

Ishibashi: Recently, some artists have been making promotional videos using VR. If there were a character that could react to our line of sight, for example, the experience would be totally different. Say someone watching claps their hands, and one of the singing characters reacts.

Hasegawa: Humans can't ignore it when lines of sight and timing match, so it's interesting in terms of creating a sense of presence. Psychological distance also changes depending on whether the eyes move or not. If a character is able to look at us, we start to respect its personal space and recognize it as a person, not an object. When human-like characters appear in the future, the VR industry will become more active. Virtual YouTubers (VTubers) have been gaining popularity since last year, and I find this very interesting. Even without a performer, I think it's now possible to create characters with some intelligence, but their movements still need technical improvement.

Ishibashi: In the future, the real and the virtual will merge. When that happens, we will have to keep improving presentation technology in the entertainment world as well. But as technology evolves, new methods of presentation and expression will keep emerging, so I think the hurdles creators face will get lower. Today, if you have an idea, it is much easier to create video and music than it was in the past.

Display technology that prevents the "Mona Lisa effect," in which eye contact appears to be made from every direction. By placing only the eyes on a separate front layer and duplicating them, the eye movement can be varied with the viewpoint, clearly indicating which of several people is being spoken to and enabling natural conversation.

Development of technology in which the VTuber itself reacts automatically. In addition to the performer's movement, natural reactions such as shaking when hit by an object or receiving a thrown item are generated.

Hasegawa: Dissemination and understanding have become much faster. When VR becomes more common in the future, in other words when media evolves further, the information we receive will be closer to the actual experience, and it will be communicated more quickly and more widely. According to the media scholar Marshall McLuhan, people who had communicated orally began to write, and came to think about things in ordered, sentence-like sequences. Later, television restored a way of thinking in which people directly feel things through video and audio. When McLuhan saw children who had grown up in the television era, he realized that they were different from himself. I think we must recognize that generations raised in the internet and VR worlds are educated and think in ways different from our own.

By grasping and moving a force sensor, the user can operate two points in VR space to grab an object. Hasegawa was the first to conduct research on presenting this sense of reaction force.
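As a rough, hypothetical illustration of the underlying principle, penalty-based force rendering rather than necessarily the exact method used here, the sketch below computes the reaction force to send to a force-feedback device when a fingertip point penetrates a virtual sphere.

```python
import numpy as np

def reaction_force(finger_pos, center, radius, stiffness=500.0):
    """Penalty-based haptic rendering: when the fingertip penetrates the
    virtual sphere, push back along the surface normal in proportion to
    the penetration depth (force in newtons, positions in metres)."""
    offset = finger_pos - center
    dist = np.linalg.norm(offset)
    penetration = radius - dist
    if penetration <= 0.0 or dist == 0.0:
        return np.zeros(3)                 # no contact, no force
    normal = offset / dist
    return stiffness * penetration * normal

# Fingertip 5 mm inside a 5 cm sphere centred at the origin.
force = reaction_force(np.array([0.0, 0.0, 0.045]),
                       np.array([0.0, 0.0, 0.0]), radius=0.05)
print(force)   # about 2.5 N pushing the finger back out along +z
```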

Ishibashi: There are definitely some amazing people in this generation. Our company holds workshops for middle and high school students. We assigned roles such as sound, dance, programming, and writing based on their interests, formed teams, and had them give presentations. They picked things up quickly and handled the assignments well. Developing such talent is essential for the future.

Hasegawa: I think our way of communicating will continue to change. As with VTubers, in VR the body can be replaced, and different kinds of communication become possible when you use a body other than your own in a different world. If such an era arrives, it will be interesting to see how humans change. I also think it would be better to shape this world by our own will rather than being controlled by a system. With social media, for example, you can configure settings so that you only receive the information you want to see, and the same will be possible in VR and AR. So if there were a mechanism by which the optimal settings for each person could be understood and controlled, and it were operated properly, I think it would have great potential and could make people happier.

Using a physics engine, computer-generated characters controlled by VTubers can realistically interact with their environment. When a character is hit with an object, it performs a natural teetering motion, then gradually re-assumes the operator's posture, as if regaining its balance.

Using a physics engine, computer-generated characters controlled by VTubers can realistically interact with each other. When objects or bodies collide, their actions deviate from the remote commands, reacting naturally according to physical dynamics and without penetrating, then gradually returning to remote control.

Left: Current method  Right: New method (Does not penetrate)

Computer-generated characters controlled by Oculus
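One simple way to realize the recovery behavior described in the captions above, offered as a hedged sketch rather than the actual implementation: after a collision the pose is driven by physics, and each frame it is blended back toward the operator's commanded pose with a weight that grows until remote control fully takes over again.

```python
def blend_back(sim_pose, operator_pose, t_since_hit, recover_time=1.0):
    """Blend the physics-driven pose back toward the operator's pose.
    Right after a hit the simulation dominates; after recover_time
    seconds the character is fully back under remote control."""
    w = min(t_since_hit / recover_time, 1.0)   # 0 -> physics, 1 -> operator
    return [(1.0 - w) * s + w * o for s, o in zip(sim_pose, operator_pose)]

# Illustrative joint angles (degrees): knocked-off pose vs. commanded pose.
sim_pose = [25.0, -10.0, 40.0]
operator_pose = [0.0, 5.0, 30.0]
for t in (0.0, 0.25, 0.5, 1.0):
    print(t, [round(a, 1) for a in blend_back(sim_pose, operator_pose, t)])
```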

Your world comes from what you "like"

Hasegawa: Ishibashi-san, why did you decide to follow your current path?

Ishibashi: There was a Robocon class in the old Department of Control and Systems Engineering (now part of the School of Engineering), and it was fun to cut and file aluminum to make robots at a time when there were no 3D printers. I've always liked making things with my hands.

Motoi Ishibashi

Hasegawa: Me too! I also took that class! [laughing]

Ishibashi: Really? That feeling motivated me to go on to the International Academy of Media Arts and Sciences (IAMAS) after graduation. It was a vocational school that taught both technology and expression, mostly through hands-on practice. From there, I began creating things that could be shown to people.

Hasegawa: There are many people at Tokyo Tech who really love both technology and "monotsukuri" (creating things). Because such students and faculty gather here, new and interesting things are always being created. After coming to Tokyo Tech, I experienced the joy of thinking something I was interested in through from start to finish and then creating it with my own hands. I definitely want younger generations and students to have the same experience. Instead of just being handed an assignment, I want them to know the joy of creating something from their own motivation through to completion. Tokyo Tech is a place where people can do that.

Ishibashi: Since the way we communicate has changed, you can now take action yourself. For example, it's possible for a high school student to send an email directly to you, right? That was impossible in my day, so I think now is a great time, and if you are interested in something, it is best to just try it. By the way, after I got involved in dance performances through my work, I also started learning dance with my colleagues in my private life. [laughing] I think it's good to try different things, discover new worlds, and get closer to them.

*1 installations

Contemporary art technique and work where objects and equipment are installed in the exhibition space such that the entire space can be experienced as an artwork.

*2 Ars Electronica Center

Venue for a global media art festival in Linz, Austria.

*3 key frame

CG animation technique in which points of change in the shape and position of an object are specified every few frames, and the video is created by interpolating between them.

Shoichi Hasegawa


Associate Professor, Institute of Innovative Research, Laboratory for Future Interdisciplinary Research in Science and Technology, Tokyo Institute of Technology

1997: Graduated from Tokyo Institute of Technology, School of Engineering, Department of Electrical and Electronic Engineering. 1999: Completed master's degree at the Department of Computational Intelligence and Systems Science, Interdisciplinary Graduate School of Science and Engineering, and joined Sony Corporation in the same year. 2000: Became a research assistant at Tokyo Institute of Technology, Precision and Intelligence Laboratory. 2006: Received his Doctor of Engineering degree. 2007: Became an associate professor at The University of Electro-Communications, Department of Mechanical Engineering and Intelligent Systems. 2010: Became an associate professor at Tokyo Institute of Technology, Precision and Intelligence Laboratory, and has been in his current position since 2016. Conducts research on virtual humans, physical simulation, force haptics, human interface robots, and entertainment engineering. Recipient of the Eurographics 2004 Best Paper Award, the Virtual Reality Society of Japan (VRSJ) Outstanding Paper Award and Contribution Award, and other honors. Supervises undergraduate and graduate majors in Information and Communications Engineering at the School of Engineering.

Hasegawa Laboratory

Researches virtual reality (VR), augmented reality (AR), simulation, human interfaces, and human-computer interaction (HI, HCI), with the aim of creating an information environment in which humans can lead natural, happy, and creative lives. Research is conducted with new applications such as entertainment, games, and media art in mind. "Hapbeat," which conveys the sound of music to the whole body through strings wrapped around the body, and "NUIBOT," which turns stuffed animals into robots through sewing and programming, are also creations of the Hasegawa Laboratory.


Shoichi Hasegawa × Motoi Ishibashi

Motoi Ishibashi


Rhizomatiks Research Director

1999: Graduated from the Tokyo Institute of Technology, Department of Control and Systems Engineering. 2001: Graduated from the International Academy of Media Arts and Sciences (IAMAS). Since 2015: Has served, together with Daito Manabe, as co-president of the R&D and Art divisions of Rhizomatiks Research. He is involved in various fields including advertising projects, artworks, workshops, music videos, and installations, in addition to his main focus on device and hardware production. He has provided hardware and technical support for many collaborations with artists in Japan and overseas, including dance performances using industrial robots. He has received numerous awards, including at Ars Electronica, Cannes Lions, and the Japan Media Arts Festival.

SPECIAL TOPICS

The Special Topics component of the Tokyo Tech Website shines a spotlight on recent developments in research and education, achievements of its community members, and special events and news from the Institute.

Past features can be viewed in the Special Topics Gallery.

Published: 2019

Contact

Public Relations Division, Tokyo Institute of Technology

Email pr@jim.titech.ac.jp