19th ACM International Conference on Multimodal Interaction (ICMI 2017), November 13–17, 2017, Glasgow, UK

ICMI 2017 – Proceedings

Frontmatter

Title Page
Message from the Chairs
ICMI 2017 Organization
Supporters and Sponsors

Invited Talks

Gastrophysics: Using Technology to Enhance the Experience of Food and Drink (Keynote)
Charles Spence
(University of Oxford, UK)
Collaborative Robots: From Action and Interaction to Collaboration (Keynote)
Danica Kragic
(KTH, Sweden)
Situated Conceptualization: A Framework for Multimodal Interaction (Keynote)
Lawrence Barsalou
(University of Glasgow, UK)
Steps towards Collaborative Multimodal Dialogue (Sustained Contribution Award)
Phil Cohen
(Voicebox Technologies, USA)

Main Track

Oral Session 1: Children and Interaction

Tablets, Tabletops, and Smartphones: Cross-Platform Comparisons of Children’s Touchscreen Interactions
Julia Woodward, Alex Shaw, Aishat Aloba, Ayushi Jain, Jaime Ruiz, and Lisa Anthony
(University of Florida, USA)
Toward an Efficient Body Expression Recognition Based on the Synthesis of a Neutral Movement
Arthur Crenn, Alexandre Meyer, Rizwan Ahmed Khan, Hubert Konik, and Saida Bouakaz
(University of Lyon, France; University of Saint-Etienne, France)
Interactive Narration with a Child: Impact of Prosody and Facial Expressions
Ovidiu Șerban, Mukesh Barange, Sahba Zojaji, Alexandre Pauchet, Adeline Richard, and Emilie Chanoni
(Normandy University, France; University of Rouen, France)
Comparing Human and Machine Recognition of Children’s Touchscreen Stroke Gestures
Alex Shaw, Jaime Ruiz, and Lisa Anthony
(University of Florida, USA)

Oral Session 2: Understanding Human Behaviour

Virtual Debate Coach Design: Assessing Multimodal Argumentation Performance
Volha Petukhova, Tobias Mayer, Andrei Malchanau, and Harry Bunt
(Saarland University, Germany; Tilburg University, Netherlands)
Predicting the Distribution of Emotion Perception: Capturing Inter-rater Variability
Biqiao Zhang, Georg Essl, and Emily Mower Provost
(University of Michigan, USA; University of Wisconsin-Milwaukee, USA)
Automatically Predicting Human Knowledgeability through Non-verbal Cues
Abdelwahab Bourai, Tadas Baltrušaitis, and Louis-Philippe Morency
(Carnegie Mellon University, USA)
Pooling Acoustic and Lexical Features for the Prediction of Valence
Zakaria Aldeneh, Soheil Khorram, Dimitrios Dimitriadis, and Emily Mower Provost
(University of Michigan, USA; IBM Research, USA)

Oral Session 3: Touch and Gesture

Hand-to-Hand: An Intermanual Illusion of Movement
Dario Pittera, Marianna Obrist, and Ali Israr
(Disney Research, USA; University of Sussex, UK)
An Investigation of Dynamic Crossmodal Instantiation in TUIs
Feng Feng and Tony Stockman
(Queen Mary University of London, UK)
“Stop over There”: Natural Gesture and Speech Interaction for Non-critical Spontaneous Intervention in Autonomous Driving
Robert Tscharn, Marc Erich Latoschik, Diana Löffler, and Jörn Hurtienne
(University of Würzburg, Germany)
Pre-touch Proxemics: Moving the Design Space of Touch Targets from Still Graphics towards Proxemic Behaviors
Ilhan Aslan and Elisabeth André
(University of Augsburg, Germany)
Freehand Grasping in Mixed Reality: Analysing Variation during Transition Phase of Interaction
Maadh Al-Kalbani, Maite Frutos-Pascual, and Ian Williams
(Birmingham City University, UK)
Rhythmic Micro-Gestures: Discreet Interaction On-the-Go
Euan Freeman, Gareth Griffiths, and Stephen A. Brewster
(University of Glasgow, UK)

Oral Session 4: Sound and Interaction

Evaluation of Psychoacoustic Sound Parameters for Sonification
Jamie Ferguson and Stephen A. Brewster
(University of Glasgow, UK)
Utilising Natural Cross-Modal Mappings for Visual Control of Feature-Based Sound Synthesis
Augoustinos Tsiros and Grégory Leplâtre
(Edinburgh Napier University, UK)

Oral Session 5: Methodology

Automatic Classification of Auto-correction Errors in Predictive Text Entry Based on EEG and Context Information
Felix Putze, Maik Schünemann, Tanja Schultz, and Wolfgang Stuerzlinger
(University of Bremen, Germany; Simon Fraser University, Canada)
Cumulative Attributes for Pain Intensity Estimation
Joy O. Egede and Michel Valstar
(University of Nottingham at Ningbo, China; University of Nottingham, UK)
Towards the Use of Social Interaction Conventions as Prior for Gaze Model Adaptation
Rémy Siegfried, Yu Yu, and Jean-Marc Odobez
(Idiap, Switzerland; EPFL, Switzerland)
Multimodal Sentiment Analysis with Word-Level Fusion and Reinforcement Learning
Minghai Chen, Sen Wang, Paul Pu Liang, Tadas Baltrušaitis, Amir Zadeh, and Louis-Philippe Morency
(Carnegie Mellon University, USA)
IntelliPrompter: Speech-Based Dynamic Note Display Interface for Oral Presentations
Reza Asadi, Ha Trinh, Harriet J. Fell, and Timothy W. Bickmore
(Northeastern University, USA)

Oral Session 6: Artificial Agents and Wearable Sensors

Head and Shoulders: Automatic Error Detection in Human-Robot Interaction
Pauline Trung, Manuel Giuliani, Michael Miksch, Gerald Stollnberger, Susanne Stadler, Nicole Mirnig, and Manfred Tscheligi
(University of Salzburg, Austria; University of the West of England, UK; Austrian Institute of Technology, Austria)
The Reliability of Non-verbal Cues for Situated Reference Resolution and Their Interplay with Language: Implications for Human Robot Interaction
Stephanie Gross, Brigitte Krenn, and Matthias Scheutz
(Austrian Research Institute for Artificial Intelligence, Austria; Tufts University, USA)
Do You Speak to a Human or a Virtual Agent? Automatic Analysis of User’s Social Cues during Mediated Communication
Magalie Ochs, Nathan Libermann, Axel Boidin, and Thierry Chaminade
(Aix-Marseille University, France; University of Toulon, France; Picxel, France)
Estimating Verbal Expressions of Task and Social Cohesion in Meetings by Quantifying Paralinguistic Mimicry
Marjolein C. Nanninga, Yanxia Zhang, Nale Lehmann-Willenbrock, Zoltán Szlávik, and Hayley Hung
(Delft University of Technology, Netherlands; University of Amsterdam, Netherlands; VU University Amsterdam, Netherlands)
Data Augmentation of Wearable Sensor Data for Parkinson’s Disease Monitoring using Convolutional Neural Networks
Terry T. Um, Franz M. J. Pfister, Daniel Pichler, Satoshi Endo, Muriel Lang, Sandra Hirche, Urban Fietzek, and Dana Kulić
(University of Waterloo, Canada; LMU Munich, Germany; TU Munich, Germany; Schön Klinik München Schwabing, Germany)

Poster Session 1

Automatic Assessment of Communication Skill in Non-conventional Interview Settings: A Comparative Study
Pooja Rao S. B., Sowmya Rasipuram, Rahul Das, and Dinesh Babu Jayagopi
(IIIT Bangalore, India)
Low-Intrusive Recognition of Expressive Movement Qualities
Radoslaw Niewiadomski, Maurizio Mancini, Stefano Piana, Paolo Alborno, Gualtiero Volpe, and Antonio Camurri
(University of Genoa, Italy)
Digitising a Medical Clerking System with Multimodal Interaction Support
Harrison South, Martin Taylor, Huseyin Dogan, and Nan Jiang
(Bournemouth University, UK; Royal Bournemouth and Christchurch Hospital, UK)
GazeTap: Towards Hands-Free Interaction in the Operating Room
Benjamin Hatscher, Maria Luz, Lennart E. Nacke, Norbert Elkmann, Veit Müller, and Christian Hansen
(University of Magdeburg, Germany; University of Waterloo, Canada; Fraunhofer IFF, Germany)
Boxer: A Multimodal Collision Technique for Virtual Objects
Byungjoo Lee, Qiao Deng, Eve Hoggan, and Antti Oulasvirta
(Aalto University, Finland; KAIST, South Korea; Aarhus University, Denmark)
Trust Triggers for Multimodal Command and Control Interfaces
Helen Hastie, Xingkun Liu, and Pedro Patron
(Heriot-Watt University, UK; SeeByte, UK)
TouchScope: A Hybrid Multitouch Oscilloscope Interface
Matthew Heinz, Sven Bertel, and Florian Echtler
(Bauhaus-Universität Weimar, Germany)
A Multimodal System to Characterise Melancholia: Cascaded Bag of Words Approach
Shalini Bhatia, Munawar Hayat, and Roland Goecke
(University of Canberra, Australia)
Crowdsourcing Ratings of Caller Engagement in Thin-Slice Videos of Human-Machine Dialog: Benefits and Pitfalls
Vikram Ramanarayanan, Chee Wee Leong, David Suendermann-Oeft, and Keelan Evanini
(ETS at San Francisco, USA; ETS at Princeton, USA)
Modelling Fusion of Modalities in Multimodal Interactive Systems with MMMM
Bruno Dumas, Jonathan Pirau, and Denis Lalanne
(University of Namur, Belgium; University of Fribourg, Switzerland)
Temporal Alignment using the Incremental Unit Framework
Casey Kennington, Ting Han, and David Schlangen
(Boise State University, USA; Bielefeld University, Germany)
Multimodal Gender Detection
Mohamed Abouelenien, Verónica Pérez-Rosas, Rada Mihalcea, and Mihai Burzo
(University of Michigan, USA)
How May I Help You? Behavior and Impressions in Hospitality Service Encounters
Skanda Muralidhar, Marianne Schmid Mast, and Daniel Gatica-Perez
(Idiap, Switzerland; EPFL, Switzerland; University of Lausanne, Switzerland)
Tracking Liking State in Brain Activity while Watching Multiple Movies
Naoto Terasawa, Hiroki Tanaka, Sakriani Sakti, and Satoshi Nakamura
(NAIST, Japan)
Does Serial Memory of Locations Benefit from Spatially Congruent Audiovisual Stimuli? Investigating the Effect of Adding Spatial Sound to Visuospatial Sequences
Benjamin Stahl and Georgios Marentakis
(Graz University of Technology, Austria)
ZSGL: Zero Shot Gestural Learning
Naveen Madapana and Juan Wachs
(Purdue University, USA)
Markov Reward Models for Analyzing Group Interaction
Gabriel Murray
(University of the Fraser Valley, Canada)
Analyzing First Impressions of Warmth and Competence from Observable Nonverbal Cues in Expert-Novice Interactions
Beatrice Biancardi, Angelo Cafaro, and Catherine Pelachaud
(CNRS, France; UPMC, France)
The NoXi Database: Multimodal Recordings of Mediated Novice-Expert Interactions
Angelo Cafaro, Johannes Wagner, Tobias Baur, Soumia Dermouche, Mercedes Torres Torres, Catherine Pelachaud, Elisabeth André, and Michel Valstar
(CNRS, France; UPMC, France; University of Augsburg, Germany; University of Nottingham, UK)
Head-Mounted Displays as Opera Glasses: Using Mixed-Reality to Deliver an Egalitarian User Experience during Live Events
Carl Bishop, Augusto Esteves, and Iain McGregor
(Edinburgh Napier University, UK)

Poster Session 2

Analyzing Gaze Behavior during Turn-Taking for Estimating Empathy Skill Level
Ryo Ishii, Shiro Kumano, and Kazuhiro Otsuka
(NTT, Japan)
Text Based User Comments as a Signal for Automatic Language Identification of Online Videos
A. Seza Doğruöz, Natalia Ponomareva, Sertan Girgin, Reshu Jain, and Christoph Oehler
(Xoogler, Turkey; Google, USA; Google, France; Google, Switzerland)
Gender and Emotion Recognition with Implicit User Signals
Maneesh Bilalpur, Seyed Mostafa Kia, Manisha Chawla, Tat-Seng Chua, and Ramanathan Subramanian
(IIIT Hyderabad, India; Radboud University, Netherlands; IIT Gandhinagar, India; National University of Singapore, Singapore; University of Glasgow, UK; Advanced Digital Sciences Center, Singapore)
Animating the Adelino Robot with ERIK: The Expressive Robotics Inverse Kinematics
Tiago Ribeiro and Ana Paiva
(INESC-ID, Portugal; University of Lisbon, Portugal)
Automatic Detection of Pain from Spontaneous Facial Expressions
Fatma Meawad, Su-Yin Yang, and Fong Ling Loy
(University of Glasgow, UK; Tan Tock Seng Hospital, Singapore)
Evaluating Content-Centric vs. User-Centric Ad Affect Recognition
Abhinav Shukla, Shruti Shriya Gullapuram, Harish Katti, Karthik Yadati, Mohan Kankanhalli, and Ramanathan Subramanian
(IIIT Hyderabad, India; Indian Institute of Science, India; Delft University of Technology, Netherlands; National University of Singapore, Singapore; University of Glasgow at Singapore, Singapore)
A Domain Adaptation Approach to Improve Speaker Turn Embedding using Face Representation
Nam Le and Jean-Marc Odobez
(Idiap, Switzerland)
Computer Vision Based Fall Detection by a Convolutional Neural Network
Miao Yu, Liyun Gong, and Stefanos Kollias
(University of Lincoln, UK)
Predicting Meeting Extracts in Group Discussions using Multimodal Convolutional Neural Networks
Fumio Nihei, Yukiko I. Nakano, and Yutaka Takase
(Seikei University, Japan)
The Relationship between Task-Induced Stress, Vocal Changes, and Physiological State during a Dyadic Team Task
Catherine Neubauer, Mathieu Chollet, Sharon Mozgai, Mark Dennison, Peter Khooshabeh, and Stefan Scherer
(Army Research Lab at Playa Vista, USA; University of Southern California, USA)
Meyendtris: A Hands-Free, Multimodal Tetris Clone using Eye Tracking and Passive BCI for Intuitive Neuroadaptive Gaming
Laurens R. Krol, Sarah-Christin Freytag, and Thorsten O. Zander
(TU Berlin, Germany)
AMHUSE: A Multimodal dataset for HUmour SEnsing
Giuseppe Boccignone, Donatello Conte, Vittorio Cuculo, and Raffaella Lanzarotti
(University of Milan, Italy; University of Tours, France)
GazeTouchPIN: Protecting Sensitive Data on Mobile Devices using Secure Multimodal Authentication
Mohamed Khamis, Mariam Hassib, Emanuel von Zezschwitz, Andreas Bulling, and Florian Alt
(LMU Munich, Germany; Max Planck Institute for Informatics, Germany)
Multi-task Learning of Social Psychology Assessments and Nonverbal Features for Automatic Leadership Identification
Cigdem Beyan, Francesca Capozzi, Cristina Becchio, and Vittorio Murino
(IIT Genoa, Italy; McGill University, Canada; University of Turin, Italy; University of Verona, Italy)
Multimodal Analysis of Vocal Collaborative Search: A Public Corpus and Results
Daniel McDuff, Paul Thomas, Mary Czerwinski, and Nick Craswell
(Microsoft Research, USA; Microsoft Research, Australia)
UE-HRI: A New Dataset for the Study of User Engagement in Spontaneous Human-Robot Interactions
Atef Ben-Youssef, Chloé Clavel, Slim Essid, Miriam Bilac, Marine Chamoux, and Angelica Lim
(Telecom ParisTech, France; University of Paris-Saclay, France; SoftBank Robotics, France)
Mining a Multimodal Corpus of Doctor’s Training for Virtual Patient’s Feedbacks
Chris Porhet, Magalie Ochs, Jorane Saubesty, Grégoire de Montcheuil, and Roxane Bertrand
(Aix-Marseille University, France; CNRS, France; ENSAM, France; University of Toulon, France)
Multimodal Affect Recognition in an Interactive Gaming Environment using Eye Tracking and Speech Signals
Ashwaq Alhargan, Neil Cooke, and Tareq Binjammaz
(University of Birmingham, UK; De Montfort University, UK)

Demonstrations

Demonstrations 1

Multimodal Interaction in Classrooms: Implementation of Tangibles in Integrated Music and Math Lessons
Jennifer Müller, Uwe Oestermeier, and Peter Gerjets
(University of Tübingen, Germany; Leibniz-Institut für Wissensmedien, Germany)
Web-Based Interactive Media Authoring System with Multimodal Interaction
Bok Deuk Song, Yeon Jun Choi, and Jong Hyun Park
(ETRI, South Korea)
Textured Surfaces for Ultrasound Haptic Displays
Euan Freeman, Ross Anderson, Julie Williamson, Graham Wilson, and Stephen A. Brewster
(University of Glasgow, UK)
Rapid Development of Multimodal Interactive Systems: A Demonstration of Platform for Situated Intelligence
Dan Bohus, Sean Andrist, and Mihai Jalobeanu
(Microsoft, USA; Microsoft Research, USA)
MIRIAM: A Multimodal Chat-Based Interface for Autonomous Systems
Helen Hastie, Francisco Javier Chiyah Garcia, David A. Robb, Pedro Patron, and Atanas Laskov
(Heriot-Watt University, UK; SeeByte, UK)
SAM: The School Attachment Monitor
Dong-Bach Vo, Mohammad Tayarani, Maki Rooksby, Rui Huan, Alessandro Vinciarelli, Helen Minnis, and Stephen A. Brewster
(University of Glasgow, UK)
The Boston Massacre History Experience
David Novick, Laura Rodriguez, Aaron Pacheco, Aaron Rodriguez, Laura Hinojos, Brad Cartwright, Marco Cardiel, Ivan Gris Sepulveda, Olivia Rodriguez-Herrera, and Enrique Ponce
(University of Texas at El Paso, USA; Black Portal Productions, USA)
Demonstrating TouchScope: A Hybrid Multitouch Oscilloscope Interface
Matthew Heinz, Sven Bertel, and Florian Echtler
(Bauhaus-Universität Weimar, Germany)
The MULTISIMO Multimodal Corpus of Collaborative Interactions
Maria Koutsombogera and Carl Vogel
(Trinity College Dublin, Ireland)
Using Mobile Virtual Reality to Empower People with Hidden Disabilities to Overcome Their Barriers
Matthieu Poyade, Glyn Morris, Ian Taylor, and Victor Portela
(Glasgow School of Art, UK; Friendly Access, UK; Crag3D, UK)

Demonstrations 2

Bot or Not: Exploring the Fine Line between Cyber and Human Identity
Mirjam Wester, Matthew P. Aylett, and David A. Braude
(CereProc, UK)
Modulating the Non-verbal Social Signals of a Humanoid Robot
Amol Deshmukh, Bart Craenen, Alessandro Vinciarelli, and Mary Ellen Foster
(University of Glasgow, UK)
Thermal In-Car Interaction for Navigation
Patrizia Di Campli San Vito, Stephen A. Brewster, Frank Pollick, and Stuart White
(University of Glasgow, UK; Jaguar Land Rover, UK)
AQUBE: An Interactive Music Reproduction System for Aquariums
Daisuke Sasaki, Musashi Nakajima, and Yoshihiro Kanno
(Waseda University, Japan; Tokyo Polytechnic University, Japan)
Real-Time Mixed-Reality Telepresence via 3D Reconstruction with HoloLens and Commodity Depth Sensors
Michal Joachimczak, Juan Liu, and Hiroshi Ando
(National Institute of Information and Communications Technology, Japan; Osaka University, Japan)
Evaluating Robot Facial Expressions
Ruth Aylett, Frank Broz, Ayan Ghosh, Peter McKenna, Gnanathusharan Rajendran, Mary Ellen Foster, Giorgio Roffo, and Alessandro Vinciarelli
(Heriot-Watt University, UK; University of Glasgow, UK)
Bimodal Feedback for In-Car Mid-Air Gesture Interaction
Gözel Shakeri, John H. Williamson, and Stephen A. Brewster
(University of Glasgow, UK)
A Modular, Multimodal Open-Source Virtual Interviewer Dialog Agent
Kirby Cofino, Vikram Ramanarayanan, Patrick Lange, David Pautler, David Suendermann-Oeft, and Keelan Evanini
(American University, USA; ETS at San Francisco, USA; ETS at Princeton, USA)
Wearable Interactive Display for the Local Positioning System (LPS)
Daniel M. Lofaro, Christopher Taylor, Ryan Tse, and Donald Sofge
(US Naval Research Lab, USA; George Mason University, USA; Thomas Jefferson High School for Science and Technology, USA)

Grand Challenge

From Individual to Group-Level Emotion Recognition: EmotiW 5.0
Abhinav Dhall, Roland Goecke, Shreya Ghosh, Jyoti Joshi, Jesse Hoey, and Tom Gedeon
(IIT Ropar, India; University of Canberra, Australia; University of Waterloo, Canada; Australian National University, Australia)
Multi-modal Emotion Recognition using Semi-supervised Learning and Multiple Neural Networks in the Wild
Dae Ha Kim, Min Kyu Lee, Dong Yoon Choi, and Byung Cheol Song
(Inha University, South Korea)
Modeling Multimodal Cues in a Deep Learning-Based Framework for Emotion Recognition in the Wild
Stefano Pini, Olfa Ben Ahmed, Marcella Cornia, Lorenzo Baraldi, Rita Cucchiara, and Benoit Huet
(University of Modena and Reggio Emilia, Italy; EURECOM, France)
Group-Level Emotion Recognition using Transfer Learning from Face Identification
Alexandr Rassadin, Alexey Gruzdev, and Andrey Savchenko
(National Research University Higher School of Economics, Russia)
Group Emotion Recognition with Individual Facial Emotion CNNs and Global Image Based CNNs
Lianzhi Tan, Kaipeng Zhang, Kai Wang, Xiaoxing Zeng, Xiaojiang Peng, and Yu Qiao
(SIAT at Chinese Academy of Sciences, China; National Taiwan University, Taiwan)
Learning Supervised Scoring Ensemble for Emotion Recognition in the Wild
Ping Hu, Dongqi Cai, Shandong Wang, Anbang Yao, and Yurong Chen
(Intel Labs, China)
Group Emotion Recognition in the Wild by Combining Deep Neural Networks for Facial Expression Classification and Scene-Context Analysis
Asad Abbas and Stephan K. Chalup
(University of Newcastle, Australia)
Temporal Multimodal Fusion for Video Emotion Classification in the Wild
Valentin Vielzeuf, Stéphane Pateux, and Frédéric Jurie
(Orange Labs, France; Normandy University, France; CNRS, France)
Audio-Visual Emotion Recognition using Deep Transfer Learning and Multiple Temporal Models
Xi Ouyang, Shigenori Kawaai, Ester Gue Hua Goh, Shengmei Shen, Wan Ding, Huaiping Ming, and Dong-Yan Huang
(Panasonic R&D Center, Singapore; Central China Normal University, China; Institute for Infocomm Research at A*STAR, Singapore)
Multi-Level Feature Fusion for Group-Level Emotion Recognition
B. Balaji and V. Ramana Murthy Oruganti
(Amrita University at Coimbatore, India)
A New Deep-Learning Framework for Group Emotion Recognition
Qinglan Wei, Yijia Zhao, Qihua Xu, Liandong Li, Jun He, Lejun Yu, and Bo Sun
(Beijing Normal University, China)
Emotion Recognition in the Wild using Deep Neural Networks and Bayesian Classifiers
Luca Surace, Massimiliano Patacchiola, Elena Battini Sönmez, William Spataro, and Angelo Cangelosi
(University of Calabria, Italy; Plymouth University, UK; Istanbul Bilgi University, Turkey)
Emotion Recognition with Multimodal Features and Temporal Models
Shuai Wang, Wenxuan Wang, Jinming Zhao, Shizhe Chen, Qin Jin, Shilei Zhang, and Yong Qin
(Renmin University of China, China; IBM Research Lab, China)
Group-Level Emotion Recognition using Deep Models on Image Scene, Faces, and Skeletons
Xin Guo, Luisa F. Polanía, and Kenneth E. Barner
(University of Delaware, USA; American Family Mutual Insurance Company, USA)

Doctoral Consortium

Towards Designing Speech Technology Based Assistive Interfaces for Children's Speech Therapy
Revathy Nayar
(University of Strathclyde, UK)
Social Robots for Motivation and Engagement in Therapy
Katie Winkle
(Bristol Robotics Laboratory, UK)
Immersive Virtual Eating and Conditioned Food Responses
Nikita Mae B. Tuanquin
(University of Canterbury, New Zealand)
Towards Edible Interfaces: Designing Interactions with Food
Tom Gayler
(Lancaster University, UK)
Towards a Computational Model for First Impressions Generation
Beatrice Biancardi
(CNRS, France; UPMC, France)
A Decentralised Multimodal Integration of Social Signals: A Bio-Inspired Approach
Esma Mansouri-Benssassi
(University of St. Andrews, UK)
Human-Centered Recognition of Children's Touchscreen Gestures
Alex Shaw
(University of Florida, USA)
Cross-Modality Interaction between EEG Signals and Facial Expression
Soheil Rayatdoost
(University of Geneva, Switzerland)
Hybrid Models for Opinion Analysis in Speech Interactions
Valentin Barriere
(Telecom ParisTech, France; University of Paris-Saclay, France)
Evaluating Engagement in Digital Narratives from Facial Data
Rui Huan
(University of Glasgow, UK)
Social Signal Extraction from Egocentric Photo-Streams
Maedeh Aghaei
(University of Barcelona, Spain)
Multimodal Language Grounding for Improved Human-Robot Collaboration: Exploring Spatial Semantic Representations in the Shared Space of Attention
Dimosthenis Kontogiorgos
(KTH, Sweden)

Workshop Summaries

ISIAA 2017: 1st International Workshop on Investigating Social Interactions with Artificial Agents (Workshop Summary)
Thierry Chaminade, Fabrice Lefèvre, Noël Nguyen, and Magalie Ochs
(Aix-Marseille University, France; CNRS, France; University of Avignon, France)
WOCCI 2017: 6th International Workshop on Child Computer Interaction (Workshop Summary)
Keelan Evanini, Maryam Najafian, Saeid Safavi, and Kay Berkling
(ETS at Princeton, USA; Massachusetts Institute of Technology, USA; University of Hertfordshire, UK; DHBW Karlsruhe, Germany)
MIE 2017: 1st International Workshop on Multimodal Interaction for Education (Workshop Summary)
Gualtiero Volpe, Monica Gori, Nadia Bianchi-Berthouze, Gabriel Baud-Bovy, Paolo Alborno, and Erica Volta
(University of Genoa, Italy; IIT Genoa, Italy; University College London, UK)
Playlab: Telling Stories with Technology (Workshop Summary)
Julie Williamson, Tom Flint, and Chris Speed
(University of Glasgow, UK; Edinburgh Napier University, UK; Edinburgh College of Art, UK)
MHFI 2017: 2nd International Workshop on Multisensorial Approaches to Human-Food Interaction (Workshop Summary)
Carlos Velasco, Anton Nijholt, Marianna Obrist, Katsunori Okajima, Rick Schifferstein, and Charles Spence
(BI Norwegian Business School, Norway; University of Twente, Netherlands; University of Sussex, UK; Yokohama National University, Japan; TU Delft, Netherlands; University of Oxford, UK)