Speakers and Summaries

 


Naoko Abe

Postdoctoral Fellow, LAAS-CNRS in Toulouse, France

On the use of Kinetography Laban with humanoid robots

(Joint talk with Paolo Salaris)

In this talk we discuss the implementation, on a simulated humanoid platform, of a simple "Tutting Dance" (involving only arm movements) described in Kinetography Laban. This notation can describe complex human movements as a sequence of symbols, which makes it suitable as a high-level language for humanoid robot programming and control. However, its implementation on a humanoid robot is not straightforward, mainly because the "rules" of the notation are based on human movements rather than humanoid ones. For example, one basic rule of the notation states that the displacement of a body part occurs along the "shortest way", which does not necessarily mean the shortest path. Humans simply execute their own "natural" motion, but from a robotic point of view one question is unavoidable: what principle, if any, underlies this rule? During this talk we point out what is missing in Kinetography Laban for humanoid robots, but also the richness of the notation as a description of human movement.

Naoko Abe received her Bachelor's and Master's degrees in Sociology from Paris Descartes University. She obtained a PhD in Sociology from the École des Hautes Études en Sciences Sociales (School for Advanced Studies in Social Sciences) in Paris in 2012. Her PhD research was carried out in collaboration with RATP (the Parisian public transportation authority) from 2008 to 2012 on subway users' behavior, combining ethological, sociological, and choreographic methods of analysis. She obtained an advanced teaching and notation certificate in Kinetography Laban in 2011 from the Conservatoire de Paris. Since April 2014 she has been a Postdoctoral Fellow at LAAS (Laboratory for Analysis and Architecture of Systems) in Toulouse. Her research interests include dance notation systems and their computation, motion segmentation, and emotional and social aspects of human motion.


 


Emilia Barakova 

Assistant professor, Eindhoven University of Technology, Netherlands

Robots with an Attitude: Games and interaction scenarios with robots enabled by Laban movement analysis 

We present a framework for human-robot interaction that includes the expression and interpretation of emotions and mental states between humans and robots. Games and everyday interaction scenarios have been used for the design and testing of these interactions.

In each interaction scenario, either the expressive state sensed in human movements is extracted and recognized, or emotional, submissive, and dominant behaviors are modeled on a robot or another object with expressive capabilities, and the resulting human behavior is analyzed. The outcomes of the experiments are analyzed by independent certified movement analysts (CMAs), with the aim of automating this process, i.e. making it accessible to robots. We argue that LMA-based computer analysis can serve as a common language for expressing and interpreting emotions in movements between robots and humans; in that way it resembles the common coding principle between action and perception that exists in humans and primates and is embodied by the mirror neuron system. This could be the basis of a simulated mirroring mechanism, which might enable robots to engage in complex forms of social learning.

Emilia I. Barakova is affiliated with the Department of Industrial Design at the Eindhoven University of Technology, The Netherlands. She holds an M.Sc. in Electronics and Automation from the Technical University of Sofia (Bulgaria) and a PhD in Mathematics and Physics from Groningen University (The Netherlands, 1999). She has worked at the RIKEN Brain Science Institute (Japan), the GMD-Japan Research Laboratory (Japan), Groningen University (The Netherlands), the Eindhoven University of Technology (The Netherlands), Starlab (Belgium), and the Bulgarian Academy of Sciences. Barakova is an associate editor of the Journal of Integrative Neuroscience and of Personal and Ubiquitous Computing, has organized international conferences, and has served as a program chair of IEEE and ACM conferences.

She is the author of over 100 scientific papers and conference proceedings and one book. Barakova has expertise in modeling social behavior, social robotics, functional brain modeling for applications in robotics, learning methods, and human-centered interaction design. Her recent research is on modeling social and emotional behavior for applications in social robotics and robots for the social training of autistic children.


 


Sarah Jane Burton

Professor, Sheridan College in Ontario, Canada

Laban Notation and Affective Movement for Robots and other Near-living creatures 

(Joint talk with Dana Kulić)

Human beings are movers, even before birth. We move to discover the world, to try to understand it, to reach out to communicate with our environment, and to express our feelings. Over the past several years, we’ve been working as an interdisciplinary team of engineers, actors, and dancer/choreographers to explore whether we can determine what it is about particular movements that conveys specific emotions, and whether we can use this knowledge to recognize affective expressions from movement and generate recognizable affective movement on mechanisms with varying embodiments. In this talk we will describe our efforts, using Laban Movement Analysis, to generate compact and informative representations of movement to facilitate the analysis.

We will first describe our approach to correlating Laban motif notation with quantitative measures of movement obtained from motion capture. Within a motion capture environment, a professional actor reproduced prescribed motions, imbuing them with different emotions. These data were analyzed in two ways: novel machine learning techniques were developed to reduce the high-dimensional data to a much smaller number of salient features, and Laban coding by a CMA was compared with automated quantification of relevant Laban dimensions. The results suggest that machine learning can identify a greatly reduced subset of the motion capture data that is sufficient to determine accurately which emotion the actor was conveying, and hence which part of the data contains the emotional content. There was also a strong correlation between the automatic Laban quantification and the CMA-generated Laban quantification of the movements. Based on these results, we will present our recent work on developing approaches to systematically identify the movement features most salient to affective expressions and to exploit these features in designing computational models for the automatic recognition and generation of affective movements. We describe a comprehensive framework for understanding which features of movement convey affective expressions, automatically recognizing affective expressions encoded in movements, and adapting pre-defined motion paths to overlay affective content. The proposed framework is being validated through cross-validation and perceptual user studies. As we continue to make progress in answering the broader questions, we see great potential for these results in fields including robotics, interactive art, animation, and dance/acting training.

Sarah Jane Burton, a professor in the Department of Visual and Performing Arts at Sheridan College in Ontario, Canada, received her B.A. in Dance from Butler University, Indianapolis, IN, her M.A.L.S. in Movement and Dance from Wesleyan University, CT, and her certification as a Laban Movement Analyst in New York, NY, in 1994. As a movement and dance specialist, she choreographs and coaches for both live and filmed productions, and has taught and performed in France and Ghana. Previously, Ms. Burton was an actor/dancer on Broadway and with various professional dance and theatre companies in the U.S.A. Her research interests focus on the inner/outer connection of intention and expression. She is currently involved in an ongoing project investigating the motion characteristics used to convey affect, which will form a database of exemplar movements for motion generation.


 


Tom Calvert

Emeritus Professor, Simon Fraser University in Surrey, Canada

The challenges of translating dance notation into human figure animation or robotic movement

Since the early 1970s there has been keen interest in using notation systems, such as Labanotation, to specify the movement of articulated figures. While it is quite easy to translate simple gestures from notation into human figure animation or robotic movement, it is remarkably challenging to translate complex movement where changes in support are involved. These challenges will be illustrated by describing the design and development of the LabanDancer prototype, which translates a Labanotation score into a keyframed animation of a human figure. The prototype was developed in collaboration with the New York-based Dance Notation Bureau.

Tom Calvert is Emeritus Professor in the School of Interactive Arts and Technology at Simon Fraser University in Surrey, BC, Canada. He has degrees in electrical engineering from University College London (B.Sc.), Wayne State University (MSEE) and Carnegie Mellon University (Ph.D.). He has been on the faculty at SFU since 1972, with appointments in Computing Science, Engineering Science and Kinesiology, and has held various administrative positions including Dean and VP Research. His research interests centre on digital media, human-computer interaction, human figure animation and computer systems for dance choreography. His work on computer animation resulted in the Life Forms system for dance choreography, which the SFU spin-off company Credo Interactive Inc. develops and markets.


 


Worawat Choensawat 

Assistant professor, Bangkok University, Thailand

Autonomous Dance Avatar for Generating Stylized Dance Motion from Simple Dance Notations

(Joint talk with Kozaburo Hachimura & Minako Nakamura)

When producing an animation of body motion from dance notation, dance knowledge is key to achieving high-quality movement. This knowledge enables the dancer to know how to perform the correct movement from a movement notation score. This talk presents an approach for automatically generating CG animation from Labanotation scores. We achieve this goal by integrating CG animation with a dance-style interpretation module called an autonomous dance avatar. In our experiment, we implemented an autonomous dance avatar to perform the stylized movements of traditional Japanese Noh plays. The results show that the autonomous dance avatar can reproduce Noh plays satisfactorily from Labanotation after being trained with the Noh style.

Worawat Choensawat received his Dr. Eng. degree from the School of Science and Engineering, Ritsumeikan University, in 2012. During his stay at Ritsumeikan University, he was a research assistant in the Global COE program of the Digital Humanities Center for Japanese Arts and Cultures. He is currently an assistant professor in the School of Science and Technology, Bangkok University, Thailand. His current research area is multimedia applications for human body movement analysis and dance.


 


Henner Drewes

Dancer and Scholar, Folkwang University of the Arts in Essen, Germany

MovEngine – Developing a Movement Language for 3D Visualization and Composition of Dance

The MOVement-oriented animation Engine (MovEngine) is a 3D animation tool originally developed within a research project at Salzburg University (Austria) from 2008 until early 2013. One objective of this project was to create a computer application that aids research in re‑constructing and re‑composing dance through animated movement sequences, utilizing a movement language based on existing systems of movement notation. In addition to the original focus on historical research, the technical approach employed may also be applied in a variety of other contexts, e.g. in creating learning tools and visual aids for choreography.

While the implementation of MovEngine is still in progress, currently continued within the M.A. program in Movement Notation/Movement Analysis at the Folkwang University of the Arts in Essen (Germany), new and exciting insights are emerging on the differences between traditional systems of movement notation and the requirements for a comprehensive language for visualizing dance movement.

Henner Drewes is a dancer and scholar specializing in representation methods for movement and dance. Following his studies of Eshkol-Wachman Movement Notation and Kinetography Laban, he obtained a PhD at the University of Leipzig with his dissertation "Transformations – movement in notation systems and digital processing". Since 1994 Henner Drewes has taught notation and movement at several universities in Europe and Israel. In 2006 he was awarded the Dance Sciences Award NRW for his proposed project "From Notation to Computer Generated 3D Animation". Together with Claudia Jeschke, he initiated a research project in the Department of Dance Studies at Salzburg University in 2008, developing the MovEngine 3D animation tool. He currently teaches Kinetography Laban and coordinates the M.A. Movement Notation/Movement Analysis study programme at the Folkwang University of the Arts in Essen.


 


Ganesh Gowrishankar 

Senior Researcher, CNRS-AIST in Tsukuba, Japan

From watching to understanding: how humans predict consequences of observed action kinematics

Our social skills are critically determined by our ability to understand and respond appropriately to actions performed by others. It is widely believed that action understanding is a hierarchical process that enables humans to infer the intention and emotion behind, and predict the goal and consequences of, actions they observe in others, with each process able to cascade onto the others. In this study we examined one component of action understanding, outcome prediction, in order to understand the computational mechanisms that enable humans to predict the outcome of observed actions. First, using a novel behavioral paradigm with sports professionals, we demonstrate a causal relation between outcome predictability and motor performance: an increase in a professional's ability to predict actions observed in another performer leads to a progressive deterioration of the professional's own motor performance. Next, we analyze these motor deteriorations to show that while the throwing controller (skill) of the professional remains unchanged, the outcome prediction of his own action (the forward predictor) is affected by watching others. The results indicate that outcome prediction of observed actions relies on implicit motor simulation, reusing the same neural mechanisms that predict the outcome of one's own actions. This suggests that what we infer from another's action is what we would have done ourselves.

Ganesh Gowrishankar received his Bachelor of Engineering (first-class, Hons.) degree from the Delhi College of Engineering, India, in 2002 and his Master of Engineering from the National University of Singapore in 2005, both in Mechanical Engineering. He received his Ph.D. in Bioengineering from Imperial College London, U.K., in 2010 with the thesis titled 'Mechanism of motor learning: by humans, for robots'. He worked as an intern researcher with the Computational Neuroscience Laboratories, Advanced Telecommunication Research (ATR), Kyoto, Japan, from 2004 through his PhD. Following his PhD he worked at the National Institute of Information and Communications Technology as a specialist researcher until December 2013. In January 2014 he joined the Centre National de la Recherche Scientifique (CNRS) as a CR1 Researcher, and he is currently located at the CNRS-AIST joint robotics lab (JRL) in Tsukuba, Japan. He is a visiting researcher at the National Institute of Advanced Industrial Science and Technology (AIST) in Tsukuba, the Centre for Information and Neural Networks (CINET) in Osaka, ATR in Kyoto, and the Laboratoire d'Informatique, de Robotique et de Microélectronique de Montpellier (LIRMM) in Montpellier. His research interests include human sensorimotor control and learning, robot control, social neuroscience, and robot-human interaction.


 


Kozaburo Hachimura

Professor, Ritsumeikan University in Kyoto, Japan

Autonomous Dance Avatar for Generating Stylized Dance Motion from Simple Dance Notations

(Joint talk with Worawat Choensawat & Minako Nakamura)

When producing an animation of body motion from dance notation, dance knowledge is key to achieving high-quality movement. This knowledge enables the dancer to know how to perform the correct movement from a movement notation score. This talk presents an approach for automatically generating CG animation from Labanotation scores. We achieve this goal by integrating CG animation with a dance-style interpretation module called an autonomous dance avatar. In our experiment, we implemented an autonomous dance avatar to perform the stylized movements of traditional Japanese Noh plays. The results show that the autonomous dance avatar can reproduce Noh plays satisfactorily from Labanotation after being trained with the Noh style.

Kozaburo Hachimura received his BS, MS, and PhD degrees in Electrical Engineering from Kyoto University in 1971, 1973, and 1979, respectively. He was a research assistant at the National Museum of Ethnology, Osaka, from 1978 to 1983, and an associate professor at Kyoto University from 1984 to 1994. He is currently a Professor in the Department of Media Technology, College of Information Science and Engineering, Ritsumeikan University. His current interests include image databases, graphics systems for human body movement, and virtual reality.


 


HRP2-14 

Humanoid Robot 

HRP-2 is the robotic platform for the Japanese Humanoid Robotics Project (HRP). The robot was designed and integrated by Kawada Industries, Inc. together with the Humanoid Research Group of the National Institute of Advanced Industrial Science and Technology (AIST). The external appearance of HRP-2 was designed by Mr. Yutaka Izubuchi, a mechanical animation designer. HRP-2’s height is 154 cm and its mass is 58 kg, including batteries. It has 30 degrees of freedom (DOF). About twenty copies of the robot exist today. Only one (HRP2-14) left Japan to serve as a research platform for the robotics teams at LAAS-CNRS in Toulouse. CNRS acquired HRP2-14 in 2005 in the framework of the joint French-Japanese laboratory JRL. 


 


Dana Kulić

Associate Professor, University of Waterloo, Canada

Laban Notation and Affective Movement for Robots and other Near-living creatures

(Joint talk with Sarah Jane Burton)

Human beings are movers, even before birth. We move to discover the world, to try to understand it, to reach out to communicate with our environment, and to express our feelings. Over the past several years, we’ve been working as an interdisciplinary team of engineers, actors, and dancer/choreographers to explore whether we can determine what it is about particular movements that conveys specific emotions, and whether we can use this knowledge to recognize affective expressions from movement and generate recognizable affective movement on mechanisms with varying embodiments.  In this talk we will describe our efforts, using Laban Movement Analysis, to generate compact and informative representations of movement to facilitate the analysis.

We will first describe our approach to correlating Laban motif notation with quantitative measures of movement obtained from motion capture. Within a motion capture environment, a professional actor reproduced prescribed motions, imbuing them with different emotions. These data were analyzed in two ways: novel machine learning techniques were developed to reduce the high-dimensional data to a much smaller number of salient features, and Laban coding by a CMA was compared with automated quantification of relevant Laban dimensions. The results suggest that machine learning can identify a greatly reduced subset of the motion capture data that is sufficient to determine accurately which emotion the actor was conveying, and hence which part of the data contains the emotional content. There was also a strong correlation between the automatic Laban quantification and the CMA-generated Laban quantification of the movements. Based on these results, we will present our recent work on developing approaches to systematically identify the movement features most salient to affective expressions and to exploit these features in designing computational models for the automatic recognition and generation of affective movements. We describe a comprehensive framework for understanding which features of movement convey affective expressions, automatically recognizing affective expressions encoded in movements, and adapting pre-defined motion paths to overlay affective content. The proposed framework is being validated through cross-validation and perceptual user studies. As we continue to make progress in answering the broader questions, we see great potential for these results in fields including robotics, interactive art, animation, and dance/acting training.

Dana Kulić received combined B.A.Sc. and M.Eng. degrees in electro-mechanical engineering, and a Ph.D. degree in mechanical engineering, from the University of British Columbia, Vancouver, BC, Canada, in 1998 and 2005, respectively. From 2002 to 2006 she was a Ph.D. student and then a post-doctoral researcher in the CARIS Lab at the University of British Columbia, developing human-robot interaction strategies to quantify and maximize safety during interaction. From 2006 to 2009, she was a JSPS Post-doctoral Fellow and a Project Assistant Professor in the Nakamura Laboratory at the University of Tokyo, Tokyo, Japan, working on algorithms for incremental learning of human motion patterns for humanoid robots. She is currently an Associate Professor in the Electrical and Computer Engineering Department at the University of Waterloo, Waterloo, ON, Canada. Her research interests include human motion analysis, robot learning, humanoid robots, and human-machine interaction.


 


Jean-Paul Laumond

Directeur de Recherche, LAAS-CNRS in Toulouse, France

Dance Notations and Robot Motion

Robots move to act. Actions are defined and operate in physical space, while motions originate in the robot's motor control space. How can actions be expressed in terms of robot motions? This is the fundamental robotics issue of inversion. Motion in dance is a form of nonverbal communication aiming at expressing emotions or social interactions. How can emotions be expressed in terms of body motions? Roboticists and choreographers ask similar questions, and both communities are developing their own approaches to describing body motion. What common foundations underlie these descriptions in the quest to « write » a motion? Answering this question first requires learning from each other. This is precisely the aim of the workshop.

Jean-Paul Laumond, IEEE Fellow, is a roboticist. He is Directeur de Recherche at LAAS-CNRS (team Gepetto) in Toulouse, France. His research is devoted to robot motion.

In the 1990s, he coordinated two European Esprit projects, PROMotion (Planning RObot Motion) and MOLOG (Motion for Logistics), both dedicated to robot motion planning and control. In the early 2000s he created and managed Kineo CAM, a spin-off company from LAAS-CNRS devoted to developing and marketing motion planning technology. Kineo CAM was awarded the French Research Ministry prize for innovation and enterprise in 2000 and the third IEEE-IFR prize for Innovation and Entrepreneurship in Robotics and Automation in 2005. Siemens acquired Kineo CAM in 2012. In 2006 he launched the research team Gepetto, dedicated to human motion studies from three perspectives: artificial motion for humanoid robots, virtual motion for digital actors and mannequins, and natural motion of human beings.

He teaches Robotics at the École Normale Supérieure in Paris. He has edited three books and published more than 150 papers in international journals and conferences in robotics, computer science, automatic control and, recently, neuroscience. He was the 2011-2012 recipient of the Chaire Innovation technologique Liliane Bettencourt at the Collège de France in Paris. His current project, Actanthrope (ERC-ADG 340050), is devoted to the computational foundations of anthropomorphic action.


 


Amy LaViers

Assistant Professor, University of Virginia, USA

Abstractions for Design-by-Humans of Heterogeneous Autonomous Behaviors

How do you get a robot to do the disco? Or perform a cheerleading routine? These acts require a quantitative understanding of two distinct movement behaviors and pose new problems for the high-level control of humanoid robots. This talk will discuss the use of movement observation, taxonomy, and expert knowledge, for example as found in Laban/Bartenieff Movement Studies, to facilitate the production of diverse robotic behaviors. A `behavior' will be defined by a set of movement primitives that are scaled and sequenced differently in different behaviors. Methods for extracting such primitives automatically will be discussed, as well as methods that allow for different scaling and sequencing schemes. These methods will be applied to real robotic platforms, motivating the fundamental value of high-level abstractions that produce a wide array of behaviors.

Amy LaViers is an Assistant Professor in Systems and Information Engineering at the University of Virginia. She studies movement and dance through the lens of robotics and control theory -- and vice versa. She earned a Ph.D. in Electrical and Computer Engineering from Georgia Institute of Technology in 2013 and a B.S.E. in Mechanical and Aerospace Engineering and Certificate in Dance from Princeton University in 2009. She is currently enrolled in the Laban/Bartenieff Institute for Movement Studies' Certification in Movement Analysis (CMA) program and is also a dancer and choreographer, regularly producing work related to her technical research.


 


Gabriel A. D. Lopes

Assistant Professor, Delft University of Technology, Netherlands

Natural dynamics for motion generation with emotional content

(Joint talk with Suzanne Weller)

Animal and human motion arises through the combination of body morphology and the environment that embeds it. Energy is needed to counter both the inertial forces intrinsic to the body and the environmental forces the body is subjected to. Our hypothesis is that, in robot motion control, changing the energy used to generate a motion alters our perception of a robot's level of valence, arousal, and dominance. To verify this hypothesis, we design model-based controllers whose resulting dynamics involve different energy properties. We will present user studies in which people were asked to rate the levels of valence, arousal, and dominance of the motion of various robotic platforms.

Gabriel A. D. Lopes received his M.Sc. and Ph.D. degrees in electrical engineering and computer science from the University of Michigan, Ann Arbor, in 2003 and 2007, respectively. From 2005 to 2008, he was with the GRASP laboratory, University of Pennsylvania, Philadelphia. He is currently an assistant professor at the Delft Center for Systems and Control, Delft University of Technology, The Netherlands. His research interests include nonlinear control, discrete-event systems, machine learning, and their applications to robotics.


 


Angela Loureiro de Souza

Dancer and Laban/Bartenieff practitioner (CMA), France

Laban Movement Analysis – scaffolding human movement

Rudolf Laban (1879-1958) and his colleagues, drawing on their experience across different creative fields, proposed a comprehensive approach to human movement that values its process of transformation and the plasticity of human beings. What is now called Laban Movement Analysis comprises five main, interdependent perspectives on movement:

  • « Effort » deals with qualitative changes in the use of Time, Space, Weight and Flow, each considered as a gradation between polarities;

  • « Shape » deals with the capacity to change body shapes, an aspect deeply linked to the adaptive nature of movement;

  • « Body » deals with gesture and posture, as well as their merging, and with developmental movement patterns;

  • « Space » deals with the spatial scaffolding of movement: dimensions, planes and diagonals, explored in relation to the octahedron, the icosahedron and the cube;

  • « Phrasing » deals with specific ways of connecting, in time progression, the qualitative aspects of movement. It is deeply linked to moving style.

Between Laban Movement Analysis and Kinetography there is a great continuity: both consider human movement as a complex process of transformation, with different aspects interacting and informing each other.

Angela Loureiro de Souza's artistic and pedagogical experience has been inspired by R. Laban's approach to movement since 1978, first within the contemporary dance company Atores e Bailarinos do Rio de Janeiro. She graduated in Laban Movement Analysis from the Laban/Bartenieff Institute of Movement Studies (1995) and in Kinetography Laban from the Conservatoire National Supérieur de Musique et de Danse de Paris (1999). Angela Loureiro has lived in France since 1988, where she works with different population groups in private and public institutions. She is the author, with Jacqueline Challet-Haas, of « Exercices Fondamentaux de Bartenieff – une approche par la notation Laban » and of « Effort – l'alternance dynamique du mouvement ». Her next project, supported by the Centre National de la Danse, deals with the diagonals of body and space.


 


Nicolas Mansard

Senior Researcher, LAAS-CNRS in Toulouse, France

Dynamic Whole Body Motion Generation for the Dance of a Humanoid Robot

(Joint talk with Olivier Stasse)

In October 2012, the humanoid robot HRP-2 was presented in a live demonstration, performing finely balanced dance movements with a human performer in front of more than 1000 people. This success was made possible by the systematic use of operational-space inverse dynamics to compute dynamically consistent movements following a motion-capture pattern demonstrated by a human choreographer. The first goal of this presentation is to give an overview of the efficient inverse-dynamics method used to generate the dance motion. Beyond the methodological description, the second and main goal of this talk is to present the robot dance as the first successful real-size implementation of inverse dynamics for humanoid-robot movement generation. This provides a proof of concept of the interest of inverse dynamics, which is more expressive than inverse kinematics and more computationally tractable than model-predictive control. It is, in our opinion, the method of choice for humanoid whole-body movement generation. The real-size demonstration also gave us some insight into current methodological limits and the developments needed in the future. Tayeb Benarama will give some insights into his artistic methodology in designing the motion for HRP-2.

Nicolas Mansard has been a permanent researcher at LAAS-CNRS, Toulouse, since October 2008. He is a member of the Gepetto research group, together with Philippe Souères, Florent Lamiraux, Olivier Stasse and Jean-Paul Laumond.

His research activities concern sensor-based control, and more specifically the integration of sensor-based schemes into humanoid-robot applications. It is an exciting research topic at the intersection of robotics, automatic control, signal processing and numerical mathematics. His main application field is currently humanoid robotics, as it poses serious challenges that are representative of many other robotics domains.

Before joining the Gepetto group, he was a post-doc in Tsukuba, and then a post-doc at the AI Lab of Stanford University, in the group led by Prof. O. Khatib. Nicolas Mansard did his PhD with François Chaumette in the Lagadic group at INRIA Rennes. He was a student of the École Normale Supérieure de Bretagne and received both his MSc (Robotics) and BEng (CS) from INP Grenoble (ENSIMAG).


 

Eliane_Mizabekiantz.jpg

Eliane Mirzabekiantz

Dancer and Benesh choreologist, Conservatoire de Paris, France

Benesh Movement Notation for humanoid robots

Benesh Movement Notation (BMN) is a system for analyzing and recording human movement. It was invented in the late 1940s in London by Rudolf Benesh, a mathematician who also studied music and painting. At the public launch of Benesh Movement Notation in 1955, Rudolf Benesh defined it as an "aesthetic and scientific study of all forms of human movement by movement notation".

BMN was recognised as an important scientific tool when it was included among the technological and scientific discoveries in the British Government pavilion at the Brussels Universal Exhibition in 1958. Though BMN was first, and is still mainly, used for recording the skilled movements of dance, the effectiveness of the system as a movement-recording tool has been shown by research in fields such as cerebral palsy, seating, back pain in industry, and fundamental movement abilities.

I will present the concepts of the system as well as the various attempts to use Benesh Movement Notation in the service of new technologies.

Eliane Mirzabekiantz is a dancer and Benesh choreologist, a member of the technical committee of the Benesh Institute in London, and a co-founding member of the “Centre Benesh”, an association for the promotion of movement notation in France. She has conducted the professional Benesh Notation course at the Conservatoire de Paris since 1995.

She was awarded the title of Fellow of the Benesh Institute in recognition of her achievements in France. She is the author of “Grammaire de la notation Benesh”, published by the Centre National de la Danse in 2000. She was made a Chevalier de l’Ordre des Arts et des Lettres in 2008 by the French Government.


 

Minako_Nakamura.jpg

Minako Nakamura

Associate professor, Ochanomizu University in Tokyo, Japan

Autonomous Dance Avatar for Generating Stylized Dance Motion from Simple Dance Notations

(Joint talk with Worawat Choensawat and Kozaburo Hachimura)

When producing an animation of body motion from dance notation, dance knowledge is key to achieving high-quality movement. This knowledge enables a dancer to perform the correct movement from a movement notation score. This talk presents an approach for automatically generating CG animation from Labanotation scores. We achieve this goal by integrating CG animation with a dance-style interpretation module called an autonomous dance avatar. In our experiment, we implemented an autonomous dance avatar to perform the stylized Japanese traditional dance of Noh plays. The results show that the autonomous dance avatar can reproduce Noh plays satisfactorily from Labanotation after it has been trained with the style of Noh.

Minako Nakamura is an associate professor in the Graduate School of Humanities and Sciences (Department of Dance), Ochanomizu University, Tokyo, Japan. She is also a guest researcher at the Art Research Center of Ritsumeikan University, Kyoto, Japan. She studies the dance technique and structure of Balinese (Indonesian) dance, as well as dance and technology: motion capture and the development of “Laban (Labanotation) XML” and the “Laban (Labanotation) Editor.”


 

Yoshihiko_Nakamura.jpg

Yoshihiko Nakamura

Professor, University of Tokyo, Japan

In Between Realism and Symbolism

Symbol systems are shared by animals and humans. Compression is a species’ natural survival strategy for handling the full information of the world. Information for a species is hierarchical and concentric, with its body at the origin. If we introduce a polar coordinate of symbolization, human language lies at a distance from the origin. Mathematics and robotics can serve as methods for the experimental study of symbol systems. The challenges are (1) whether language and the origin of the body are functionally connected, and (2) whether the hierarchy of realism can be functionally engineered. This talk discusses the concepts, frameworks, and current status of the study of symbol systems.

Yoshihiko Nakamura is Professor at the Department of Mechano-Informatics, University of Tokyo. He received his Doctor of Engineering degree from Kyoto University. Humanoid robotics, cognitive robotics, neuro-musculoskeletal human modeling, biomedical systems, and their computational algorithms are his current fields of research. He is a Fellow of the JSME, the RSJ, the IEEE, and the WAAS. Dr. Nakamura serves as President of IFToMM (2012–2015). He is a Foreign Member of the Academy of Engineering Sciences of Serbia and a TUM Distinguished Affiliated Professor of the Technische Universität München.


 

Shelly_Saint_Smith_1.jpg

Shelly Saint-Smith

Lecturer, Royal Academy of Dance in London, UK

I’m a Dancer, not a Mimic! The Limitations of Imitation.

This presentation will consider the differences between learning a dance through imitation (from another dancer or from video) and learning a dance from notation. It will examine both the advantages and limitations of imitation as a dominant practice, and explore the distinct benefits of Labanotation in analysing and understanding movement.  Discussion will draw upon perspectives on what ‘the dance’ is and where it resides, and will scrutinise the Royal Academy of Dance’s (RAD, UK) professional and educational use of dance notation as part of dance and dance teacher training.

Shelly Saint-Smith, MFA, BA (Hons), is Lecturer in Dance Studies and Programme Manager of the Master of Teaching (Dance) at the Royal Academy of Dance (RAD) in London. She studied Labanotation and directing from score at the University of Birmingham, UK, and The Ohio State University. Shelly teaches notation and Laban studies to undergraduate and postgraduate students, and directs excerpts from dance works for undergraduate modules in performance. In 2010 she was awarded funding to begin the process of documenting and preserving the RAD’s Karsavina Syllabus and has recently contributed to a UK-based research project exploring the value of Laban’s work in 21st century dance education. Shelly is a Fellow of the International Council of Kinetography Laban/Labanotation (ICKL) and was Chair of the ICKL Research Panel from 2008 to 2011.


 

Paolo_Salaris.jpg

Paolo Salaris 

Postdoctoral Fellow, LAAS-CNRS in Toulouse, France

On the use of Kinetography Laban with humanoid robots

(Joint talk with Naoko Abe)

In this talk we discuss the implementation, on a simulated humanoid platform, of a simple "Tutting Dance" (involving only arm movements) described in Kinetography Laban. This notation can describe complex human movements as a sequence of symbols. As a consequence, it is suitable as a high-level language for humanoid-robot programming and control. However, its implementation on a humanoid robot is not straightforward, mainly because the "rules" of the notation are based on human movements rather than humanoid ones. For example, one basic rule of the notation says that the displacement of a body part occurs along the "shortest way", which does not necessarily mean the shortest path. Of course, humans execute their own "natural" motion, but from a robotic point of view the following question is unavoidable: what is the principle underlying this rule, if any exists? During this talk we point out what Kinetography Laban is missing for humanoid robots, but also the richness of the notation in terms of human movement description.

Paolo Salaris was born in Siena (Tuscany, Italy) in 1979. He received his degree in Electrical Engineering from the University of Pisa in 2007 and his doctoral degree in Robotics, Automation and Bioengineering from the Research Center "E. Piaggio" of the University of Pisa in June 2011. He was a Visiting Scholar at the Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign (US-IL), from March to October 2009, and a post-doc at the Research Center "E. Piaggio" from 2011 to 2013. He is currently a post-doc at LAAS-CNRS in Toulouse, working on motion segmentation and generation for humanoid robots.


 

Olivier_Stasse.jpg

Olivier Stasse

Senior Researcher, LAAS-CNRS in Toulouse, France

Dynamic Whole Body Motion Generation for the Dance of a Humanoid Robot

(Joint talk with Nicolas Mansard)

In October 2012, the humanoid robot HRP-2 performed fine-balanced dance movements with a human performer in a live demonstration in front of more than 1000 people. This success was made possible by the systematic use of operational-space inverse dynamics to compute dynamically consistent movements following a motion-capture pattern demonstrated by a human choreographer. The first goal of this presentation is to give an overview of the efficient inverse-dynamics method used to generate the dance motion. Beyond the methodological description, the second and main goal of this talk is to present the robot dance as the first successful real-size implementation of inverse dynamics for humanoid-robot movement generation. This gives a proof of concept of the value of inverse dynamics, which is more expressive than inverse kinematics and more computationally tractable than model-predictive control. It is, in our opinion, currently the method of choice for humanoid whole-body movement generation. The real-size demonstration also gave us insight into the current methodological limits and the developments they call for. Tayeb Benarama will give some insight into his artistic methodology in designing the motion for HRP-2.

Olivier Stasse is a senior researcher (CR-1) at LAAS-CNRS, Toulouse. He was previously an assistant professor in computer science at the University of Paris 13. He received a Ph.D. in intelligent systems (2000) from the University of Paris 6. His research interests include humanoid robots, and more specifically motion generation motivated by vision. From 2003 to 2011, he was with the Joint French-Japanese Robotics Laboratory (JRL) in Tsukuba, Japan. He was a finalist for the Best Paper Award at ICAR 2007 and for the Best Video Award at ICRA 2007, and received the Best Paper Award at ICMA 2006.


 

Gentiane_Venture_1.jpg

Gentiane Venture

Associate Professor, Tokyo University of Agriculture and Technology, Japan

Using Dynamics to Recognize Human Motion

In this presentation we explore the importance of the dynamics of motion, and how it can be used to develop and personalize intelligent systems and robots. We propose a framework that uses not only the kinematic information of movements but also the dynamics.

We use direct measurements of the dynamics when available; when they are not, we propose to compute the dynamics from the kinematics and use it to understand human motions. Finally, we discuss some developments and concrete applications in the field of motion analysis.

Gentiane Venture completed an Engineer's degree in Robotics and Automation at the École Centrale de Nantes (France) in 2000 and an MSc in Robotics at the University of Nantes (France). In 2003, she obtained her PhD from the University of Nantes. In 2004 she joined the French Nuclear Agency (Paris, France) to work on the control of a tele-operated micro-manipulator. Later in 2004 she joined Prof. Yoshihiko Nakamura's lab at the University of Tokyo (Japan) with the support of the JSPS. In 2006, still with Prof. Nakamura, she joined the IRT project as a Project Assistant Professor. In March 2009, she became an Associate Professor and started a new lab at the Tokyo University of Agriculture and Technology (Japan). Her main research interests include non-verbal communication, human behavior understanding from motion, human body modelling, dynamics identification, control of robots for human/robot interaction, and human affect recognition.


 

Suzanne_Weller.jpg

Suzanne Weller

MSc student, Delft University of Technology, Netherlands

Natural dynamics for motion generation with emotional content

(Joint talk with Gabriel A.D. Lopes)

Animal and human motion arises through the combination of body morphology and the environment that embeds it. Energy is needed to counter both the inertial forces intrinsic to the body and the environmental forces the body is subjected to. Our hypothesis is that, in robot motion control, changing the energy used to generate a motion alters our perception of a robot’s levels of valence, arousal and dominance. To verify this hypothesis, we design model-based controllers whose resulting dynamics involve different energy properties. We will present user studies in which people were asked to rate the levels of valence, arousal and dominance of the motion of various robotic platforms.

Suzanne Weller is an MSc student in the BioMechanical Engineering department at the Delft University of Technology. Her research interests include dynamical systems, human movement control, and robotics, with a focus on humanoid robots.


 

Katsu_Yamane.jpg

Katsu Yamane

Senior Research Scientist, Disney Research in Pittsburgh, USA

Trajectory Imitation with Humanoid Robots

Imitating human motions with humanoid robots is a difficult problem for various reasons. The robot's kinematics never match those of the human performer, in terms of both dimensions and degrees of freedom. The physical limitations of the robot are usually much more severe than the performer's. If the robot is free-standing, i.e., has to balance on its legs, the problem becomes even more challenging because we also have to consider the dynamics. In this talk, I introduce a few projects aimed at enabling a robot to imitate captured performances as closely as possible without any abstraction, simplification, or editing.

Katsu Yamane is a Senior Research Scientist at Disney Research, Pittsburgh and an Adjunct Associate Professor at the Robotics Institute, Carnegie Mellon University. He received his B.S., M.S., and Ph.D. degrees in Mechanical Engineering in 1997, 1999, and 2002 respectively from the University of Tokyo, Japan. His research interests include humanoid robot control and motion synthesis, human-robot interaction, character animation, and human motion simulation. 
