Workshop MIE 2017 – Author Index
Alborno, Paolo
MIE '17: "What Cognitive and Affective ..."
What Cognitive and Affective States Should Technology Monitor to Support Learning?
Temitayo Olugbade, Luigi Cuturi, Giulia Cappagli, Erica Volta, Paolo Alborno, Joseph Newbold, Nadia Bianchi-Berthouze, Gabriel Baud-Bovy, Gualtiero Volpe, and Monica Gori (University College London, UK; IIT Genoa, Italy; University of Genoa, Italy)
This paper discusses self-efficacy, curiosity, and reflectivity as cognitive and affective states that are critical to learning but are overlooked in the context of affect-aware technology for learning. The discussion sits within the opportunities offered by the weDRAW project, which takes an embodied approach to the design of technology to support the exploration and learning of mathematical concepts. We first review existing literature to clarify how the three states facilitate learning and how, if not supported, they may instead hinder learning. We then review the literature to understand how bodily expressions communicate these states and how technology could be used to monitor them. We conclude by presenting initial movement cues currently explored in the context of weDRAW.
@InProceedings{MIE17p22,
  author = {Temitayo Olugbade and Luigi Cuturi and Giulia Cappagli and Erica Volta and Paolo Alborno and Joseph Newbold and Nadia Bianchi-Berthouze and Gabriel Baud-Bovy and Gualtiero Volpe and Monica Gori},
  title = {What Cognitive and Affective States Should Technology Monitor to Support Learning?},
  booktitle = {Proc.\ MIE},
  publisher = {ACM},
  pages = {22--25},
  doi = {10.1145/3139513.3139522},
  year = {2017},
}
MIE '17: "An Open Platform for Full-Body ..."
An Open Platform for Full-Body Multisensory Serious-Games to Teach Geometry in Primary School
Simone Ghisio, Erica Volta, Paolo Alborno, Monica Gori, and Gualtiero Volpe (University of Genoa, Italy; IIT Genoa, Italy)
Recent results from psychophysics and developmental psychology show that children have a preferential sensory channel for learning specific concepts. In this work, we explore the possibility of developing and evaluating novel multisensory technologies for deeper learning of arithmetic and geometry. The main novelty of these technologies comes from a renewed understanding of the role of communication between sensory modalities during development, namely that specific sensory systems have specific roles in learning specific concepts. This understanding suggests that it is possible to open a new teaching/learning channel, personalized for each student based on the child’s sensory skills. We present and discuss multisensory interactive technologies that exploit full-body movement interaction, including a hardware and software platform supporting this approach. The platform is part of a more general framework developed in the context of the EU-ICT-H2020 weDRAW Project, which aims to develop new multimodal technologies for multisensory serious-games to teach mathematical concepts in primary school.
@InProceedings{MIE17p49,
  author = {Simone Ghisio and Erica Volta and Paolo Alborno and Monica Gori and Gualtiero Volpe},
  title = {An Open Platform for Full-Body Multisensory Serious-Games to Teach Geometry in Primary School},
  booktitle = {Proc.\ MIE},
  publisher = {ACM},
  pages = {49--52},
  doi = {10.1145/3139513.3139523},
  year = {2017},
}
MIE '17: "A Multimodal Serious-Game ..."
A Multimodal Serious-Game to Teach Fractions in Primary School
Simone Ghisio, Paolo Alborno, Erica Volta, Monica Gori, and Gualtiero Volpe (University of Genoa, Italy; IIT Genoa, Italy)
Multisensory learning has long been considered a relevant pedagogical framework for education, and several authors support the use of a multisensory and kinesthetic approach in children’s learning. Moreover, results from psychophysics and developmental psychology show that children have a preferential sensory channel for learning specific concepts (spatial and/or temporal), providing further evidence of the need for a multisensory approach. In this work, we present an example of a serious game for learning a particularly complicated mathematical concept: fractions. The main novelty of our proposal lies in the role played by communication between sensory modalities, in particular movement, vision, and sound. The game has been developed in the context of the EU-ICT-H2020 weDRAW Project, which aims to develop new multimodal technologies for multisensory serious-games on mathematical concepts for primary school children.
@InProceedings{MIE17p67,
  author = {Simone Ghisio and Paolo Alborno and Erica Volta and Monica Gori and Gualtiero Volpe},
  title = {A Multimodal Serious-Game to Teach Fractions in Primary School},
  booktitle = {Proc.\ MIE},
  publisher = {ACM},
  pages = {67--70},
  doi = {10.1145/3139513.3139524},
  year = {2017},
}
Alyuz, Nese
MIE '17: "An Unobtrusive and Multimodal ..."
An Unobtrusive and Multimodal Approach for Behavioral Engagement Detection of Students
Nese Alyuz, Eda Okur, Utku Genc, Sinem Aslan, Cagri Tanriover, and Asli Arslan Esme (Intel, USA)
In this paper, we investigate the detection of students’ behavioral engagement states (On-Task vs. Off-Task) in authentic classroom settings. We propose a multimodal detection approach based on three unobtrusive modalities readily available in a 1:1 learning scenario where learning technologies are incorporated. These modalities are: (1) Appearance: upper-body video captured using a camera; (2) Context-Performance: students’ interaction and performance data related to the learning content; and (3) Mouse: data related to mouse movements during the learning process. For each modality, separate unimodal classifiers were trained, and decision-level fusion was applied to obtain the final behavioral engagement states. We also analyzed each modality separately for Instructional and Assessment sections (i.e., Instructional, where a student is reading an article or watching an instructional video, vs. Assessment, where a student is solving exercises on the digital learning platform). We carried out various experiments on a dataset collected in an authentic classroom, where students used camera-equipped laptops and consumed Math learning content on a digital learning platform. The dataset included multimodal data of 17 students who attended a Math course for 13 sessions (40 minutes each). The results indicate that it is beneficial to have separate classification pipelines for Instructional and Assessment sections: for Instructional, using the Appearance modality alone yields an F1-measure of 0.74, compared to a fused performance of 0.70; for Assessment, fusing all three modalities (F1-measure of 0.89) provides a marked improvement over the best-performing single modality (0.81 for Appearance).
@InProceedings{MIE17p26,
  author = {Nese Alyuz and Eda Okur and Utku Genc and Sinem Aslan and Cagri Tanriover and Asli Arslan Esme},
  title = {An Unobtrusive and Multimodal Approach for Behavioral Engagement Detection of Students},
  booktitle = {Proc.\ MIE},
  publisher = {ACM},
  pages = {26--32},
  doi = {10.1145/3139513.3139521},
  year = {2017},
}
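The abstract describes training one classifier per modality and then applying decision-level fusion, but it does not name the classifiers or the fusion rule. The following is a minimal sketch of one plausible realization in Python (weighted probability voting over three modality classifiers; the decision tree, the weights, and the synthetic data are all assumptions, not the paper's published pipeline):

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def train_unimodal(X, y):
    # One classifier per modality (Appearance, Context-Performance, Mouse).
    # The paper does not name its classifiers; a decision tree is a stand-in.
    return DecisionTreeClassifier(max_depth=5).fit(X, y)

def fuse(classifiers, modality_features, weights):
    # Decision-level fusion: weighted average of per-modality On-Task
    # probabilities, thresholded at 0.5.
    votes = sum(w * clf.predict_proba(X)[:, 1]
                for clf, X, w in zip(classifiers, modality_features, weights))
    return votes / sum(weights) > 0.5  # True = On-Task

# Illustrative synthetic data: 100 windows, three modalities, binary labels.
rng = np.random.default_rng(0)
features = [rng.normal(size=(100, d)) for d in (16, 8, 4)]
labels = rng.integers(0, 2, size=100)
classifiers = [train_unimodal(X, labels) for X in features]
print(fuse(classifiers, features, weights=[1.0, 1.0, 1.0])[:10])

Separate pipelines per section type (Instructional vs. Assessment), as the paper recommends, would simply train two such ensembles on the corresponding subsets of the data.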
Aslan, Sinem
MIE '17: "An Unobtrusive and Multimodal ..."
An Unobtrusive and Multimodal Approach for Behavioral Engagement Detection of Students
Nese Alyuz, Eda Okur, Utku Genc, Sinem Aslan, Cagri Tanriover, and Asli Arslan Esme (Intel, USA)
In this paper, we investigate the detection of students’ behavioral engagement states (On-Task vs. Off-Task) in authentic classroom settings. We propose a multimodal detection approach based on three unobtrusive modalities readily available in a 1:1 learning scenario where learning technologies are incorporated. These modalities are: (1) Appearance: upper-body video captured using a camera; (2) Context-Performance: students’ interaction and performance data related to the learning content; and (3) Mouse: data related to mouse movements during the learning process. For each modality, separate unimodal classifiers were trained, and decision-level fusion was applied to obtain the final behavioral engagement states. We also analyzed each modality separately for Instructional and Assessment sections (i.e., Instructional, where a student is reading an article or watching an instructional video, vs. Assessment, where a student is solving exercises on the digital learning platform). We carried out various experiments on a dataset collected in an authentic classroom, where students used camera-equipped laptops and consumed Math learning content on a digital learning platform. The dataset included multimodal data of 17 students who attended a Math course for 13 sessions (40 minutes each). The results indicate that it is beneficial to have separate classification pipelines for Instructional and Assessment sections: for Instructional, using the Appearance modality alone yields an F1-measure of 0.74, compared to a fused performance of 0.70; for Assessment, fusing all three modalities (F1-measure of 0.89) provides a marked improvement over the best-performing single modality (0.81 for Appearance).
@InProceedings{MIE17p26,
  author = {Nese Alyuz and Eda Okur and Utku Genc and Sinem Aslan and Cagri Tanriover and Asli Arslan Esme},
  title = {An Unobtrusive and Multimodal Approach for Behavioral Engagement Detection of Students},
  booktitle = {Proc.\ MIE},
  publisher = {ACM},
  pages = {26--32},
  doi = {10.1145/3139513.3139521},
  year = {2017},
}
Balzarotti, Nicolò
MIE '17: "Using Force-Feedback Devices ..."
Using Force-Feedback Devices in Educational Settings: A Short Review
Gabriel Baud-Bovy and Nicolò Balzarotti (IIT Genoa, Italy; Vita-Salute San Raffaele University, Italy; University of Genoa, Italy)
In this short review, we aim to provide an update on recent research on force-feedback devices in educational settings, with a particular focus on primary school teaching. The review describes haptic devices and educational virtual environments before entering into the details of domain-specific applications of this technology in schools. Currently, the number of studies that have investigated the potential of haptic devices in educational settings is limited, in particular for primary schools. The absence of longitudinal studies makes it difficult to reach any strong conclusion about the learning outcomes of this technology. Additional research is needed on how this technology might contribute to teaching specific concepts at different ages. In particular, we point out the need for more research on how to combine haptic feedback with visual and audio information, which seems important both on the basis of the results of previous studies and of recent research in neuroscience demonstrating the importance of multimodal integration in the cognitive development of children. Demonstrating the potential benefit of haptic devices in a learning environment is also important to create demand for this technology, which might lead to the commercialization of haptic devices at prices affordable for schools.
@InProceedings{MIE17p14,
  author = {Gabriel Baud-Bovy and Nicolò Balzarotti},
  title = {Using Force-Feedback Devices in Educational Settings: A Short Review},
  booktitle = {Proc.\ MIE},
  publisher = {ACM},
  pages = {14--21},
  doi = {10.1145/3139513.3139518},
  year = {2017},
}
Baud-Bovy, Gabriel
MIE '17: "Using Force-Feedback Devices ..."
Using Force-Feedback Devices in Educational Settings: A Short Review
Gabriel Baud-Bovy and Nicolò Balzarotti (IIT Genoa, Italy; Vita-Salute San Raffaele University, Italy; University of Genoa, Italy)
In this short review, we aim to provide an update on recent research on force-feedback devices in educational settings, with a particular focus on primary school teaching. The review describes haptic devices and educational virtual environments before entering into the details of domain-specific applications of this technology in schools. Currently, the number of studies that have investigated the potential of haptic devices in educational settings is limited, in particular for primary schools. The absence of longitudinal studies makes it difficult to reach any strong conclusion about the learning outcomes of this technology. Additional research is needed on how this technology might contribute to teaching specific concepts at different ages. In particular, we point out the need for more research on how to combine haptic feedback with visual and audio information, which seems important both on the basis of the results of previous studies and of recent research in neuroscience demonstrating the importance of multimodal integration in the cognitive development of children. Demonstrating the potential benefit of haptic devices in a learning environment is also important to create demand for this technology, which might lead to the commercialization of haptic devices at prices affordable for schools.
@InProceedings{MIE17p14,
  author = {Gabriel Baud-Bovy and Nicolò Balzarotti},
  title = {Using Force-Feedback Devices in Educational Settings: A Short Review},
  booktitle = {Proc.\ MIE},
  publisher = {ACM},
  pages = {14--21},
  doi = {10.1145/3139513.3139518},
  year = {2017},
}
MIE '17: "What Cognitive and Affective ..."
What Cognitive and Affective States Should Technology Monitor to Support Learning?
Temitayo Olugbade, Luigi Cuturi, Giulia Cappagli, Erica Volta, Paolo Alborno, Joseph Newbold, Nadia Bianchi-Berthouze, Gabriel Baud-Bovy, Gualtiero Volpe, and Monica Gori (University College London, UK; IIT Genoa, Italy; University of Genoa, Italy)
This paper discusses self-efficacy, curiosity, and reflectivity as cognitive and affective states that are critical to learning but are overlooked in the context of affect-aware technology for learning. The discussion sits within the opportunities offered by the weDRAW project, which takes an embodied approach to the design of technology to support the exploration and learning of mathematical concepts. We first review existing literature to clarify how the three states facilitate learning and how, if not supported, they may instead hinder learning. We then review the literature to understand how bodily expressions communicate these states and how technology could be used to monitor them. We conclude by presenting initial movement cues currently explored in the context of weDRAW.
@InProceedings{MIE17p22,
  author = {Temitayo Olugbade and Luigi Cuturi and Giulia Cappagli and Erica Volta and Paolo Alborno and Joseph Newbold and Nadia Bianchi-Berthouze and Gabriel Baud-Bovy and Gualtiero Volpe and Monica Gori},
  title = {What Cognitive and Affective States Should Technology Monitor to Support Learning?},
  booktitle = {Proc.\ MIE},
  publisher = {ACM},
  pages = {22--25},
  doi = {10.1145/3139513.3139522},
  year = {2017},
}
Bianchi-Berthouze, Nadia
MIE '17: "What Cognitive and Affective ..."
What Cognitive and Affective States Should Technology Monitor to Support Learning?
Temitayo Olugbade, Luigi Cuturi, Giulia Cappagli, Erica Volta, Paolo Alborno, Joseph Newbold, Nadia Bianchi-Berthouze, Gabriel Baud-Bovy, Gualtiero Volpe, and Monica Gori (University College London, UK; IIT Genoa, Italy; University of Genoa, Italy)
This paper discusses self-efficacy, curiosity, and reflectivity as cognitive and affective states that are critical to learning but are overlooked in the context of affect-aware technology for learning. The discussion sits within the opportunities offered by the weDRAW project, which takes an embodied approach to the design of technology to support the exploration and learning of mathematical concepts. We first review existing literature to clarify how the three states facilitate learning and how, if not supported, they may instead hinder learning. We then review the literature to understand how bodily expressions communicate these states and how technology could be used to monitor them. We conclude by presenting initial movement cues currently explored in the context of weDRAW.
@InProceedings{MIE17p22,
  author = {Temitayo Olugbade and Luigi Cuturi and Giulia Cappagli and Erica Volta and Paolo Alborno and Joseph Newbold and Nadia Bianchi-Berthouze and Gabriel Baud-Bovy and Gualtiero Volpe and Monica Gori},
  title = {What Cognitive and Affective States Should Technology Monitor to Support Learning?},
  booktitle = {Proc.\ MIE},
  publisher = {ACM},
  pages = {22--25},
  doi = {10.1145/3139513.3139522},
  year = {2017},
}
Blanco, Angel
MIE '17: "Evaluation of Audio-Based ..."
Evaluation of Audio-Based Feedback Technologies for Bow Learning Technique in Violin Beginners
Angel Blanco and Rafael Ramirez (Pompeu Fabra University, Spain)
We present a study of the effects of feedback technologies on the learning process of novice violin students. Twenty-one subjects participated in our experiment, divided into two groups: beginners (participants with no prior violin playing experience, N=14) and experts (participants with more than 6 years of violin playing experience, N=7). The beginners were further divided into two groups: beginners learning with YouTube videos (N=7), and beginners receiving additional feedback related to the quality of their performance (N=7). Participants were asked to perform a violin exercise over 21 trials while their audio was recorded and analyzed. Three audio descriptors were extracted from each recording in order to evaluate the quality of the performance: dynamic stability, pitch stability, and aperiodicity. Beginners showed a significant improvement in the quality of the recorded sound over the session (i.e., comparing the beginning and the end of the session), while experts maintained their results. However, only the beginner group with feedback showed significant improvement between the middle and late parts of the session, while the group without feedback remained stable.
@InProceedings{MIE17p41,
  author = {Angel Blanco and Rafael Ramirez},
  title = {Evaluation of Audio-Based Feedback Technologies for Bow Learning Technique in Violin Beginners},
  booktitle = {Proc.\ MIE},
  publisher = {ACM},
  pages = {41--43},
  doi = {10.1145/3139513.3139520},
  year = {2017},
}
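The abstract does not state how the three descriptors were computed. As a rough illustration, pitch stability and dynamic stability might be estimated from a recording along the following lines using librosa; the f0 search range and the coefficient-of-variation formulation are assumptions, and aperiodicity would additionally require a periodicity measure (for instance, one derived from the voicing probabilities that pyin also returns):

import numpy as np
import librosa

def pitch_stability(path):
    # Standard deviation of the f0 track, in cents around its mean:
    # lower values indicate steadier intonation.
    y, sr = librosa.load(path)
    f0, voiced, _ = librosa.pyin(y, fmin=librosa.note_to_hz('G3'),
                                 fmax=librosa.note_to_hz('E6'), sr=sr)
    f0 = f0[voiced]  # keep voiced frames only
    cents = 1200 * np.log2(f0 / np.nanmean(f0))
    return float(np.nanstd(cents))

def dynamic_stability(path):
    # Coefficient of variation of frame-wise RMS energy:
    # lower values indicate steadier dynamics.
    y, _ = librosa.load(path)
    rms = librosa.feature.rms(y=y)[0]
    return float(np.std(rms) / (np.mean(rms) + 1e-9))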
Cappagli, Giulia
MIE '17: "Angle Discrimination by Walking ..."
Angle Discrimination by Walking in Children
Luigi Cuturi, Giulia Cappagli, and Monica Gori (IIT Genoa, Italy)
In primary school, children tend to have difficulties discriminating angles of different degrees and categorizing them as either acute or obtuse, especially at the early stages of development (6-7 y.o.). In the context of a novel approach that intends to use sensory modalities other than vision to teach geometrical concepts, we ran a psychophysical study investigating angle perception during navigation in space. Our results show that the youngest group of children tends to be more imprecise when asked to discriminate a walked angle of 90°, which is pivotal for learning to differentiate between acute and obtuse angles. These results are discussed in terms of the development of novel technological solutions aimed at integrating locomotion into the teaching of geometrical concepts.
@InProceedings{MIE17p10,
  author = {Luigi Cuturi and Giulia Cappagli and Monica Gori},
  title = {Angle Discrimination by Walking in Children},
  booktitle = {Proc.\ MIE},
  publisher = {ACM},
  pages = {10--13},
  doi = {10.1145/3139513.3139516},
  year = {2017},
}
MIE '17: "What Cognitive and Affective ..."
What Cognitive and Affective States Should Technology Monitor to Support Learning?
Temitayo Olugbade, Luigi Cuturi, Giulia Cappagli, Erica Volta, Paolo Alborno, Joseph Newbold, Nadia Bianchi-Berthouze, Gabriel Baud-Bovy, Gualtiero Volpe, and Monica Gori (University College London, UK; IIT Genoa, Italy; University of Genoa, Italy)
This paper discusses self-efficacy, curiosity, and reflectivity as cognitive and affective states that are critical to learning but are overlooked in the context of affect-aware technology for learning. The discussion sits within the opportunities offered by the weDRAW project, which takes an embodied approach to the design of technology to support the exploration and learning of mathematical concepts. We first review existing literature to clarify how the three states facilitate learning and how, if not supported, they may instead hinder learning. We then review the literature to understand how bodily expressions communicate these states and how technology could be used to monitor them. We conclude by presenting initial movement cues currently explored in the context of weDRAW.
@InProceedings{MIE17p22,
  author = {Temitayo Olugbade and Luigi Cuturi and Giulia Cappagli and Erica Volta and Paolo Alborno and Joseph Newbold and Nadia Bianchi-Berthouze and Gabriel Baud-Bovy and Gualtiero Volpe and Monica Gori},
  title = {What Cognitive and Affective States Should Technology Monitor to Support Learning?},
  booktitle = {Proc.\ MIE},
  publisher = {ACM},
  pages = {22--25},
  doi = {10.1145/3139513.3139522},
  year = {2017},
}
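In psychophysical studies of this kind, discrimination imprecision is commonly summarized by the spread of a fitted psychometric function. A small sketch of that analysis follows, with fabricated response data (the paper does not publish its data or fitting code):

import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(x, mu, sigma):
    # Cumulative Gaussian: probability of judging a walked angle "obtuse".
    return norm.cdf(x, loc=mu, scale=sigma)

angles = np.array([60.0, 75.0, 85.0, 90.0, 95.0, 105.0, 120.0])  # degrees walked
p_obtuse = np.array([0.05, 0.15, 0.35, 0.55, 0.70, 0.90, 0.98])  # fabricated
(mu, sigma), _ = curve_fit(psychometric, angles, p_obtuse, p0=[90.0, 10.0])
# mu is the point of subjective equality; sigma quantifies imprecision
# (larger in the youngest children, per the paper's finding).
print(f"PSE = {mu:.1f} deg, sigma = {sigma:.1f} deg")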
Cuturi, Luigi
MIE '17: "Angle Discrimination by Walking ..."
Angle Discrimination by Walking in Children
Luigi Cuturi, Giulia Cappagli, and Monica Gori (IIT Genoa, Italy)
In primary school, children tend to have difficulties discriminating angles of different degrees and categorizing them as either acute or obtuse, especially at the early stages of development (6-7 y.o.). In the context of a novel approach that intends to use sensory modalities other than vision to teach geometrical concepts, we ran a psychophysical study investigating angle perception during navigation in space. Our results show that the youngest group of children tends to be more imprecise when asked to discriminate a walked angle of 90°, which is pivotal for learning to differentiate between acute and obtuse angles. These results are discussed in terms of the development of novel technological solutions aimed at integrating locomotion into the teaching of geometrical concepts.
@InProceedings{MIE17p10,
  author = {Luigi Cuturi and Giulia Cappagli and Monica Gori},
  title = {Angle Discrimination by Walking in Children},
  booktitle = {Proc.\ MIE},
  publisher = {ACM},
  pages = {10--13},
  doi = {10.1145/3139513.3139516},
  year = {2017},
}
MIE '17: "What Cognitive and Affective ..."
What Cognitive and Affective States Should Technology Monitor to Support Learning?
Temitayo Olugbade, Luigi Cuturi, Giulia Cappagli, Erica Volta, Paolo Alborno, Joseph Newbold, Nadia Bianchi-Berthouze, Gabriel Baud-Bovy, Gualtiero Volpe, and Monica Gori (University College London, UK; IIT Genoa, Italy; University of Genoa, Italy)
This paper discusses self-efficacy, curiosity, and reflectivity as cognitive and affective states that are critical to learning but are overlooked in the context of affect-aware technology for learning. The discussion sits within the opportunities offered by the weDRAW project, which takes an embodied approach to the design of technology to support the exploration and learning of mathematical concepts. We first review existing literature to clarify how the three states facilitate learning and how, if not supported, they may instead hinder learning. We then review the literature to understand how bodily expressions communicate these states and how technology could be used to monitor them. We conclude by presenting initial movement cues currently explored in the context of weDRAW.
@InProceedings{MIE17p22,
  author = {Temitayo Olugbade and Luigi Cuturi and Giulia Cappagli and Erica Volta and Paolo Alborno and Joseph Newbold and Nadia Bianchi-Berthouze and Gabriel Baud-Bovy and Gualtiero Volpe and Monica Gori},
  title = {What Cognitive and Affective States Should Technology Monitor to Support Learning?},
  booktitle = {Proc.\ MIE},
  publisher = {ACM},
  pages = {22--25},
  doi = {10.1145/3139513.3139522},
  year = {2017},
}
Dalmazzo, David
MIE '17: "Air Violin: A Machine Learning ..."
Air Violin: A Machine Learning Approach to Fingering Gesture Recognition
David Dalmazzo and Rafael Ramirez (Pompeu Fabra University, Spain)
We train and evaluate two machine learning models for predicting fingering in violin performances using the motion and EMG sensors integrated in the Myo device. Our aim is twofold: first, to provide a fingering recognition model in the context of a gamified virtual violin application in which we measure both right-hand (i.e., bow) and left-hand (i.e., fingering) gestures; and second, to implement a tracking system for a computer-assisted pedagogical tool for self-regulated learners in high-level music education. Our approach is based on the principle of mapping-by-demonstration, in which the model is trained by the performer. We evaluated a model based on decision trees and compared it with a hidden Markov model.
@InProceedings{MIE17p63,
  author = {David Dalmazzo and Rafael Ramirez},
  title = {Air Violin: A Machine Learning Approach to Fingering Gesture Recognition},
  booktitle = {Proc.\ MIE},
  publisher = {ACM},
  pages = {63--66},
  doi = {10.1145/3139513.3139526},
  year = {2017},
}
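As a toy illustration of the decision-tree half of this comparison, fingering classification from windowed EMG might look as follows; the window length, the feature set, the number of finger classes, and the synthetic data are assumptions (the Myo streams eight EMG channels):

import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

def emg_features(window):
    # window: (n_samples, 8) raw EMG from the Myo's eight electrodes.
    # Mean absolute value and standard deviation per channel, a simple
    # and common EMG feature set.
    return np.concatenate([np.abs(window).mean(axis=0), window.std(axis=0)])

# Mapping-by-demonstration: the performer first records labeled examples;
# here those examples are synthetic stand-ins.
rng = np.random.default_rng(0)
X = np.vstack([emg_features(rng.normal(size=(50, 8))) for _ in range(200)])
y = rng.integers(0, 4, size=200)  # four left-hand finger classes (assumed)
print(cross_val_score(DecisionTreeClassifier(max_depth=8), X, y, cv=5).mean())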
Das, Rahul
MIE '17: "Automatic Generation of Actionable ..."
Automatic Generation of Actionable Feedback towards Improving Social Competency in Job Interviews
Shruthi Kukal Nambiar, Rahul Das, Sowmya Rasipuram, and Dinesh Babu Jayagopi (IIIT Bangalore, India)
Soft skill assessment is a vital aspect of the job interview process, as these qualities are indicative of candidates’ compatibility in the work environment, their negotiation skills, client interaction prowess, and leadership flair, among other factors. The rise in popularity of asynchronous video-based job interviews has created the need for a scalable solution to gauge candidate performance, and hence we turn to automation. In this research, we aim to build a system that automatically provides summative feedback to candidates at the end of an interview. Most feedback systems predict values of social indicators and communication cues, leaving the interpretation open to the user. Our system directly predicts actionable feedback that leaves the candidate with a tangible takeaway at the end of the interview. We approached placement trainers, compiled a list of the most common feedback items given during training, and attempt to predict these directly. To this end, we captured data from over 145 participants in an interview-like environment. Designing intelligent training environments for job interview preparation using a video data corpus is a demanding task due to its complex correlations and multimodal interactions. We used several state-of-the-art machine learning algorithms with manual annotation as ground truth. The predictive models were built with a focus on nonverbal communication cues so as to sidestep the challenges of spoken language understanding and task modelling. We extracted audio and lexical features, and our findings indicate a stronger correlation with audio and prosodic features in candidate assessment. Our best results gave an accuracy of 95% against a baseline accuracy of 77%.
@InProceedings{MIE17p53,
  author = {Shruthi Kukal Nambiar and Rahul Das and Sowmya Rasipuram and Dinesh Babu Jayagopi},
  title = {Automatic Generation of Actionable Feedback towards Improving Social Competency in Job Interviews},
  booktitle = {Proc.\ MIE},
  publisher = {ACM},
  pages = {53--59},
  doi = {10.1145/3139513.3139515},
  year = {2017},
}
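A compact sketch of the overall recipe the abstract describes, predicting discrete feedback items directly from prosodic features; the feature list, the feedback labels, and the random-forest model are illustrative stand-ins, not the paper's published pipeline:

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.multioutput import MultiOutputClassifier

# Rows: interview responses. Columns: prosodic features such as pitch mean,
# pitch range, speaking rate, pause ratio, and energy (stand-ins).
rng = np.random.default_rng(1)
X = rng.normal(size=(145, 5))
# Each output column is one trainer feedback item, e.g. "speak louder" or
# "reduce filler pauses" (hypothetical labels annotated as ground truth).
Y = rng.integers(0, 2, size=(145, 3))

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)
model = MultiOutputClassifier(RandomForestClassifier(n_estimators=100))
model.fit(X_tr, Y_tr)
print(model.score(X_te, Y_te))  # exact-match accuracy across feedback items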
Duffy, Sam
MIE '17: "Developing a Pedagogical Framework ..."
Developing a Pedagogical Framework for Designing a Multisensory Serious Gaming Environment
Sara Price, Sam Duffy, and Monica Gori (University College London, UK; IIT Genoa, Italy)
The importance of multisensory interaction for learning has increased with improved understanding of children’s sensory development and a flourishing interest in embodied cognition. The potential to foster new forms of multisensory interaction through various sensor, mobile, and haptic technologies is promising in providing new ways for young children to engage with key mathematical concepts. However, designing effective learning environments for real-world classrooms is challenging and requires a pedagogically, rather than technologically, driven approach to design. This paper describes initial work underpinning the development of a pedagogical framework intended to inform the design of a multisensory serious gaming environment. It identifies the theoretical basis of the framework, illustrates how this informs teaching strategies, and outlines key perspectives and considerations from technology research that are important for informing design. We provide an initial table mapping mathematical concepts to design, a framework of considerations for design, and a process model of how the framework will continue to be developed across the design process.
@InProceedings{MIE17p1,
  author = {Sara Price and Sam Duffy and Monica Gori},
  title = {Developing a Pedagogical Framework for Designing a Multisensory Serious Gaming Environment},
  booktitle = {Proc.\ MIE},
  publisher = {ACM},
  pages = {1--9},
  doi = {10.1145/3139513.3139517},
  year = {2017},
}
Esme, Asli Arslan
MIE '17: "An Unobtrusive and Multimodal ..."
An Unobtrusive and Multimodal Approach for Behavioral Engagement Detection of Students
Nese Alyuz, Eda Okur, Utku Genc, Sinem Aslan, Cagri Tanriover, and Asli Arslan Esme (Intel, USA)
In this paper, we investigate the detection of students’ behavioral engagement states (On-Task vs. Off-Task) in authentic classroom settings. We propose a multimodal detection approach based on three unobtrusive modalities readily available in a 1:1 learning scenario where learning technologies are incorporated. These modalities are: (1) Appearance: upper-body video captured using a camera; (2) Context-Performance: students’ interaction and performance data related to the learning content; and (3) Mouse: data related to mouse movements during the learning process. For each modality, separate unimodal classifiers were trained, and decision-level fusion was applied to obtain the final behavioral engagement states. We also analyzed each modality separately for Instructional and Assessment sections (i.e., Instructional, where a student is reading an article or watching an instructional video, vs. Assessment, where a student is solving exercises on the digital learning platform). We carried out various experiments on a dataset collected in an authentic classroom, where students used camera-equipped laptops and consumed Math learning content on a digital learning platform. The dataset included multimodal data of 17 students who attended a Math course for 13 sessions (40 minutes each). The results indicate that it is beneficial to have separate classification pipelines for Instructional and Assessment sections: for Instructional, using the Appearance modality alone yields an F1-measure of 0.74, compared to a fused performance of 0.70; for Assessment, fusing all three modalities (F1-measure of 0.89) provides a marked improvement over the best-performing single modality (0.81 for Appearance).
@InProceedings{MIE17p26,
  author = {Nese Alyuz and Eda Okur and Utku Genc and Sinem Aslan and Cagri Tanriover and Asli Arslan Esme},
  title = {An Unobtrusive and Multimodal Approach for Behavioral Engagement Detection of Students},
  booktitle = {Proc.\ MIE},
  publisher = {ACM},
  pages = {26--32},
  doi = {10.1145/3139513.3139521},
  year = {2017},
}
Genc, Utku
MIE '17: "An Unobtrusive and Multimodal ..."
An Unobtrusive and Multimodal Approach for Behavioral Engagement Detection of Students
Nese Alyuz, Eda Okur, Utku Genc, Sinem Aslan, Cagri Tanriover, and Asli Arslan Esme (Intel, USA)
In this paper, we investigate the detection of students’ behavioral engagement states (On-Task vs. Off-Task) in authentic classroom settings. We propose a multimodal detection approach based on three unobtrusive modalities readily available in a 1:1 learning scenario where learning technologies are incorporated. These modalities are: (1) Appearance: upper-body video captured using a camera; (2) Context-Performance: students’ interaction and performance data related to the learning content; and (3) Mouse: data related to mouse movements during the learning process. For each modality, separate unimodal classifiers were trained, and decision-level fusion was applied to obtain the final behavioral engagement states. We also analyzed each modality separately for Instructional and Assessment sections (i.e., Instructional, where a student is reading an article or watching an instructional video, vs. Assessment, where a student is solving exercises on the digital learning platform). We carried out various experiments on a dataset collected in an authentic classroom, where students used camera-equipped laptops and consumed Math learning content on a digital learning platform. The dataset included multimodal data of 17 students who attended a Math course for 13 sessions (40 minutes each). The results indicate that it is beneficial to have separate classification pipelines for Instructional and Assessment sections: for Instructional, using the Appearance modality alone yields an F1-measure of 0.74, compared to a fused performance of 0.70; for Assessment, fusing all three modalities (F1-measure of 0.89) provides a marked improvement over the best-performing single modality (0.81 for Appearance).
@InProceedings{MIE17p26,
  author = {Nese Alyuz and Eda Okur and Utku Genc and Sinem Aslan and Cagri Tanriover and Asli Arslan Esme},
  title = {An Unobtrusive and Multimodal Approach for Behavioral Engagement Detection of Students},
  booktitle = {Proc.\ MIE},
  publisher = {ACM},
  pages = {26--32},
  doi = {10.1145/3139513.3139521},
  year = {2017},
}
Ghisio, Simone
MIE '17: "An Open Platform for Full-Body ..."
An Open Platform for Full-Body Multisensory Serious-Games to Teach Geometry in Primary School
Simone Ghisio, Erica Volta, Paolo Alborno, Monica Gori, and Gualtiero Volpe (University of Genoa, Italy; IIT Genoa, Italy)
Recent results from psychophysics and developmental psychology show that children have a preferential sensory channel for learning specific concepts. In this work, we explore the possibility of developing and evaluating novel multisensory technologies for deeper learning of arithmetic and geometry. The main novelty of these technologies comes from a renewed understanding of the role of communication between sensory modalities during development, namely that specific sensory systems have specific roles in learning specific concepts. This understanding suggests that it is possible to open a new teaching/learning channel, personalized for each student based on the child’s sensory skills. We present and discuss multisensory interactive technologies that exploit full-body movement interaction, including a hardware and software platform supporting this approach. The platform is part of a more general framework developed in the context of the EU-ICT-H2020 weDRAW Project, which aims to develop new multimodal technologies for multisensory serious-games to teach mathematical concepts in primary school.
@InProceedings{MIE17p49,
  author = {Simone Ghisio and Erica Volta and Paolo Alborno and Monica Gori and Gualtiero Volpe},
  title = {An Open Platform for Full-Body Multisensory Serious-Games to Teach Geometry in Primary School},
  booktitle = {Proc.\ MIE},
  publisher = {ACM},
  pages = {49--52},
  doi = {10.1145/3139513.3139523},
  year = {2017},
}
MIE '17: "A Multimodal Serious-Game ..."
A Multimodal Serious-Game to Teach Fractions in Primary School
Simone Ghisio, Paolo Alborno, Erica Volta, Monica Gori, and Gualtiero Volpe (University of Genoa, Italy; IIT Genoa, Italy)
Multisensory learning has long been considered a relevant pedagogical framework for education, and several authors support the use of a multisensory and kinesthetic approach in children’s learning. Moreover, results from psychophysics and developmental psychology show that children have a preferential sensory channel for learning specific concepts (spatial and/or temporal), providing further evidence of the need for a multisensory approach. In this work, we present an example of a serious game for learning a particularly complicated mathematical concept: fractions. The main novelty of our proposal lies in the role played by communication between sensory modalities, in particular movement, vision, and sound. The game has been developed in the context of the EU-ICT-H2020 weDRAW Project, which aims to develop new multimodal technologies for multisensory serious-games on mathematical concepts for primary school children.
@InProceedings{MIE17p67,
  author = {Simone Ghisio and Paolo Alborno and Erica Volta and Monica Gori and Gualtiero Volpe},
  title = {A Multimodal Serious-Game to Teach Fractions in Primary School},
  booktitle = {Proc.\ MIE},
  publisher = {ACM},
  pages = {67--70},
  doi = {10.1145/3139513.3139524},
  year = {2017},
}
Giraldo, Sergio I.
MIE '17: "Bowing Modeling for Violin ..."
Bowing Modeling for Violin Students Assistance
Fabio J. M. Ortega, Sergio I. Giraldo, and Rafael Ramirez (Pompeu Fabra University, Spain)
Though musicians tend to agree on the importance of practicing expressivity in performance, not many tools and techniques are available for the task. We propose a machine learning model for predicting bowing velocity during performances of violin pieces. Our aim is to provide feedback to violin students in a technology-enhanced learning setting. Predictions are generated for musical phrases in a score by matching them to melodically and rhythmically similar phrases in performances by experts and adapting the bow velocity curve measured in the experts' performances. Results show that the mean error in velocity predictions and the bowing direction classification accuracy outperform our baseline when reference phrases similar to the predicted ones are available.
@InProceedings{MIE17p60,
  author = {Fabio J. M. Ortega and Sergio I. Giraldo and Rafael Ramirez},
  title = {Bowing Modeling for Violin Students Assistance},
  booktitle = {Proc.\ MIE},
  publisher = {ACM},
  pages = {60--62},
  doi = {10.1145/3139513.3139525},
  year = {2017},
}
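The core mechanism the abstract outlines, retrieving a similar expert phrase and adapting its measured bow-velocity curve, can be illustrated as nearest-neighbour retrieval plus time normalization; the phrase encoding, the distance, and the sampling density below are assumptions rather than the paper's definitions:

import numpy as np

def phrase_distance(a, b):
    # a, b: (n_notes, 2) arrays of (pitch interval in semitones, duration
    # ratio), an assumed melodic/rhythmic encoding; compare the overlap.
    n = min(len(a), len(b))
    return float(np.linalg.norm(a[:n] - b[:n]))

def predict_bow_velocity(student_phrase, expert_phrases, expert_curves):
    # Retrieve the most similar expert phrase, then resample its measured
    # bow-velocity curve to the length of the student's phrase.
    i = int(np.argmin([phrase_distance(student_phrase, e)
                       for e in expert_phrases]))
    v = expert_curves[i]
    t_old = np.linspace(0.0, 1.0, len(v))
    t_new = np.linspace(0.0, 1.0, 4 * len(student_phrase))  # 4 samples/note, assumed
    return np.interp(t_new, t_old, v)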
Gori, Monica
MIE '17: "Angle Discrimination by Walking ..."
Angle Discrimination by Walking in Children
Luigi Cuturi, Giulia Cappagli, and Monica Gori (IIT Genoa, Italy)
In primary school, children tend to have difficulties discriminating angles of different degrees and categorizing them as either acute or obtuse, especially at the early stages of development (6-7 y.o.). In the context of a novel approach that intends to use sensory modalities other than vision to teach geometrical concepts, we ran a psychophysical study investigating angle perception during navigation in space. Our results show that the youngest group of children tends to be more imprecise when asked to discriminate a walked angle of 90°, which is pivotal for learning to differentiate between acute and obtuse angles. These results are discussed in terms of the development of novel technological solutions aimed at integrating locomotion into the teaching of geometrical concepts.
@InProceedings{MIE17p10,
  author = {Luigi Cuturi and Giulia Cappagli and Monica Gori},
  title = {Angle Discrimination by Walking in Children},
  booktitle = {Proc.\ MIE},
  publisher = {ACM},
  pages = {10--13},
  doi = {10.1145/3139513.3139516},
  year = {2017},
}
MIE '17: "What Cognitive and Affective ..."
What Cognitive and Affective States Should Technology Monitor to Support Learning?
Temitayo Olugbade, Luigi Cuturi, Giulia Cappagli, Erica Volta, Paolo Alborno, Joseph Newbold, Nadia Bianchi-Berthouze, Gabriel Baud-Bovy, Gualtiero Volpe, and Monica Gori (University College London, UK; IIT Genoa, Italy; University of Genoa, Italy)
This paper discusses self-efficacy, curiosity, and reflectivity as cognitive and affective states that are critical to learning but are overlooked in the context of affect-aware technology for learning. The discussion sits within the opportunities offered by the weDRAW project, which takes an embodied approach to the design of technology to support the exploration and learning of mathematical concepts. We first review existing literature to clarify how the three states facilitate learning and how, if not supported, they may instead hinder learning. We then review the literature to understand how bodily expressions communicate these states and how technology could be used to monitor them. We conclude by presenting initial movement cues currently explored in the context of weDRAW.
@InProceedings{MIE17p22,
  author = {Temitayo Olugbade and Luigi Cuturi and Giulia Cappagli and Erica Volta and Paolo Alborno and Joseph Newbold and Nadia Bianchi-Berthouze and Gabriel Baud-Bovy and Gualtiero Volpe and Monica Gori},
  title = {What Cognitive and Affective States Should Technology Monitor to Support Learning?},
  booktitle = {Proc.\ MIE},
  publisher = {ACM},
  pages = {22--25},
  doi = {10.1145/3139513.3139522},
  year = {2017},
}
MIE '17: "Developing a Pedagogical Framework ..."
Developing a Pedagogical Framework for Designing a Multisensory Serious Gaming Environment
Sara Price, Sam Duffy, and Monica Gori (University College London, UK; IIT Genoa, Italy)
The importance of multisensory interaction for learning has increased with improved understanding of children’s sensory development and a flourishing interest in embodied cognition. The potential to foster new forms of multisensory interaction through various sensor, mobile, and haptic technologies is promising in providing new ways for young children to engage with key mathematical concepts. However, designing effective learning environments for real-world classrooms is challenging and requires a pedagogically, rather than technologically, driven approach to design. This paper describes initial work underpinning the development of a pedagogical framework intended to inform the design of a multisensory serious gaming environment. It identifies the theoretical basis of the framework, illustrates how this informs teaching strategies, and outlines key perspectives and considerations from technology research that are important for informing design. We provide an initial table mapping mathematical concepts to design, a framework of considerations for design, and a process model of how the framework will continue to be developed across the design process.
@InProceedings{MIE17p1,
  author = {Sara Price and Sam Duffy and Monica Gori},
  title = {Developing a Pedagogical Framework for Designing a Multisensory Serious Gaming Environment},
  booktitle = {Proc.\ MIE},
  publisher = {ACM},
  pages = {1--9},
  doi = {10.1145/3139513.3139517},
  year = {2017},
}
MIE '17: "An Open Platform for Full-Body ..."
An Open Platform for Full-Body Multisensory Serious-Games to Teach Geometry in Primary School
Simone Ghisio, Erica Volta, Paolo Alborno, Monica Gori, and Gualtiero Volpe (University of Genoa, Italy; IIT Genoa, Italy)
Recent results from psychophysics and developmental psychology show that children have a preferential sensory channel for learning specific concepts. In this work, we explore the possibility of developing and evaluating novel multisensory technologies for deeper learning of arithmetic and geometry. The main novelty of these technologies comes from a renewed understanding of the role of communication between sensory modalities during development, namely that specific sensory systems have specific roles in learning specific concepts. This understanding suggests that it is possible to open a new teaching/learning channel, personalized for each student based on the child’s sensory skills. We present and discuss multisensory interactive technologies that exploit full-body movement interaction, including a hardware and software platform supporting this approach. The platform is part of a more general framework developed in the context of the EU-ICT-H2020 weDRAW Project, which aims to develop new multimodal technologies for multisensory serious-games to teach mathematical concepts in primary school.
@InProceedings{MIE17p49,
  author = {Simone Ghisio and Erica Volta and Paolo Alborno and Monica Gori and Gualtiero Volpe},
  title = {An Open Platform for Full-Body Multisensory Serious-Games to Teach Geometry in Primary School},
  booktitle = {Proc.\ MIE},
  publisher = {ACM},
  pages = {49--52},
  doi = {10.1145/3139513.3139523},
  year = {2017},
}
MIE '17: "A Multimodal Serious-Game ..."
A Multimodal Serious-Game to Teach Fractions in Primary School
Simone Ghisio, Paolo Alborno, Erica Volta, Monica Gori, and Gualtiero Volpe (University of Genoa, Italy; IIT Genoa, Italy)
Multisensory learning has long been considered a relevant pedagogical framework for education, and several authors support the use of a multisensory and kinesthetic approach in children’s learning. Moreover, results from psychophysics and developmental psychology show that children have a preferential sensory channel for learning specific concepts (spatial and/or temporal), providing further evidence of the need for a multisensory approach. In this work, we present an example of a serious game for learning a particularly complicated mathematical concept: fractions. The main novelty of our proposal lies in the role played by communication between sensory modalities, in particular movement, vision, and sound. The game has been developed in the context of the EU-ICT-H2020 weDRAW Project, which aims to develop new multimodal technologies for multisensory serious-games on mathematical concepts for primary school children.
@InProceedings{MIE17p67,
  author = {Simone Ghisio and Paolo Alborno and Erica Volta and Monica Gori and Gualtiero Volpe},
  title = {A Multimodal Serious-Game to Teach Fractions in Primary School},
  booktitle = {Proc.\ MIE},
  publisher = {ACM},
  pages = {67--70},
  doi = {10.1145/3139513.3139524},
  year = {2017},
}
Jayagopi, Dinesh Babu
MIE '17: "Automatic Generation of Actionable ..."
Automatic Generation of Actionable Feedback towards Improving Social Competency in Job Interviews
Shruthi Kukal Nambiar, Rahul Das, Sowmya Rasipuram, and Dinesh Babu Jayagopi (IIIT Bangalore, India)
Soft skill assessment is a vital aspect of the job interview process, as these qualities are indicative of candidates’ compatibility in the work environment, their negotiation skills, client interaction prowess, and leadership flair, among other factors. The rise in popularity of asynchronous video-based job interviews has created the need for a scalable solution to gauge candidate performance, and hence we turn to automation. In this research, we aim to build a system that automatically provides summative feedback to candidates at the end of an interview. Most feedback systems predict values of social indicators and communication cues, leaving the interpretation open to the user. Our system directly predicts actionable feedback that leaves the candidate with a tangible takeaway at the end of the interview. We approached placement trainers, compiled a list of the most common feedback items given during training, and attempt to predict these directly. To this end, we captured data from over 145 participants in an interview-like environment. Designing intelligent training environments for job interview preparation using a video data corpus is a demanding task due to its complex correlations and multimodal interactions. We used several state-of-the-art machine learning algorithms with manual annotation as ground truth. The predictive models were built with a focus on nonverbal communication cues so as to sidestep the challenges of spoken language understanding and task modelling. We extracted audio and lexical features, and our findings indicate a stronger correlation with audio and prosodic features in candidate assessment. Our best results gave an accuracy of 95% against a baseline accuracy of 77%.
@InProceedings{MIE17p53,
  author = {Shruthi Kukal Nambiar and Rahul Das and Sowmya Rasipuram and Dinesh Babu Jayagopi},
  title = {Automatic Generation of Actionable Feedback towards Improving Social Competency in Job Interviews},
  booktitle = {Proc.\ MIE},
  publisher = {ACM},
  pages = {53--59},
  doi = {10.1145/3139513.3139515},
  year = {2017},
}
MIE '17: "Predicting Student Engagement ..."
Predicting Student Engagement in Classrooms using Facial Behavioral Cues
Chinchu Thomas and Dinesh Babu Jayagopi (IIIT Bangalore, India)
Student engagement is the key to successful classroom learning, and measuring or analyzing students’ engagement is very important for improving learning as well as teaching. In this work, we analyze the engagement or attention level of students from their facial expressions, head pose, and eye gaze using computer vision techniques, and a decision is taken using machine learning algorithms. Since human observers can readily distinguish attention levels from a student’s facial expressions, head pose, and eye gaze, we assume that a machine will also be able to learn this behavior automatically. Engagement level is analyzed on 10-second video clips. The performance of the algorithm is better than the baseline results; our best accuracy results are 10% better than the baseline. The paper also gives a detailed review of work related to the analysis of student engagement in classrooms using vision-based techniques.
@InProceedings{MIE17p33,
  author = {Chinchu Thomas and Dinesh Babu Jayagopi},
  title = {Predicting Student Engagement in Classrooms using Facial Behavioral Cues},
  booktitle = {Proc.\ MIE},
  publisher = {ACM},
  pages = {33--40},
  doi = {10.1145/3139513.3139514},
  year = {2017},
}
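A schematic of this kind of clip-level pipeline: aggregate per-frame facial cues over a 10-second window, then classify the clip. The per-frame cue extractor, the aggregation statistics, the SVM, and the synthetic data are assumptions, not the paper's stated implementation:

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def clip_features(frames):
    # frames: (n_frames, d) per-frame cues such as head-pose angles, gaze
    # direction, and facial-expression intensities; summarize the clip
    # with per-dimension mean and standard deviation.
    return np.concatenate([frames.mean(axis=0), frames.std(axis=0)])

# Synthetic stand-in: 120 clips of 250 frames (25 fps x 10 s), 8 cues each.
rng = np.random.default_rng(3)
X = np.vstack([clip_features(rng.normal(size=(250, 8))) for _ in range(120)])
y = rng.integers(0, 2, size=120)  # engaged vs. not engaged (assumed labels)
print(cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean())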
Lai, Song
MIE '17: "Differences of Online Learning ..."
Differences of Online Learning Behaviors and Eye-Movement between Students Having Different Personality Traits
Bo Sun, Song Lai, Congcong Xu, Rong Xiao, Yungang Wei, and Yongkang Xiao (Beijing Normal University, China)
Information technologies are now integrated into education, making large-scale data available that reflects each action students take in online environments. Numerous studies have exploited these data for learning analytics. In this paper, we aim to display personalized indicators for students with different personality traits on a learning analytics dashboard (LAD) and present preliminary results. First, we employ learning behavior engagement (LBE) to describe students' learning behaviors and to analyze significant differences among students with different personality traits. In our experiments, fifteen behavioral indicators are tested, and the results show significant differences in some behavioral indicators across personality traits. Second, some of these behavioral indicators are presented on the LAD, distributed across areas of interest (AOIs), so that students can view the behavioral data they care about at any time during the learning process. Through analysis of eye movements, including fixation duration, fixation count, heat maps, and track maps, we find significant differences in some visual indicators within AOIs, partly consistent with the results for the behavioral indicators.
@InProceedings{MIE17p71,
  author = {Bo Sun and Song Lai and Congcong Xu and Rong Xiao and Yungang Wei and Yongkang Xiao},
  title = {Differences of Online Learning Behaviors and Eye-Movement between Students Having Different Personality Traits},
  booktitle = {Proc.\ MIE},
  publisher = {ACM},
  pages = {71--75},
  doi = {10.1145/3139513.3139527},
  year = {2017},
}
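Testing whether a behavioral indicator differs significantly across personality-trait groups, as the abstract reports, is classically done with a one-way ANOVA per indicator. A minimal sketch with fabricated LBE values (the paper tests fifteen such indicators):

import numpy as np
from scipy import stats

# One LBE indicator, grouped by personality-trait level (fabricated values).
rng = np.random.default_rng(4)
groups = [rng.normal(loc=m, scale=0.3, size=30) for m in (1.0, 1.1, 1.4)]
f_stat, p_value = stats.f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # p < .05: significant difference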
Ludovico, Luca Andrea
MIE '17: "A Multimodal LEGO®-Based ..."
A Multimodal LEGO®-Based Learning Activity Mixing Musical Notation and Computer Programming
Luca Andrea Ludovico, Dario Malchiodi, and Luisa Zecca (University of Milan, Italy; University of Milano-Bicocca, Italy)
This paper discusses a multimodal learning activity based on LEGO® bricks in which elements from the domains of music and informatics are mixed. The activity addresses preschool children and primary school students in order to convey some basic aspects of computational thinking. The learning methodology is organized in two phases, where construction blocks are employed as a physical tool and as a metaphor for music notation, respectively. The goal is to foster in young students abilities such as analysis and re-synthesis, problem solving, abstraction, and adaptive reasoning. A web application to support this approach and to provide prompt feedback on user actions is under development; its design principles and key characteristics are presented.
@InProceedings{MIE17p44,
  author = {Luca Andrea Ludovico and Dario Malchiodi and Luisa Zecca},
  title = {A Multimodal LEGO®-Based Learning Activity Mixing Musical Notation and Computer Programming},
  booktitle = {Proc.\ MIE},
  publisher = {ACM},
  pages = {44--48},
  doi = {10.1145/3139513.3139519},
  year = {2017},
}
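To make the bricks-as-notation metaphor concrete, one hypothetical encoding maps brick colour to pitch and brick length (in studs) to note duration; this mapping is invented for illustration and is not the paper's actual scheme:

# Hypothetical brick-to-notation encoding (illustrative only).
PITCH = {"red": "C4", "yellow": "D4", "green": "E4", "blue": "G4"}
BEATS = {1: 0.25, 2: 0.5, 4: 1.0, 8: 2.0}  # brick length in studs -> beats

def bricks_to_notes(bricks):
    """Translate a left-to-right row of (colour, studs) bricks into
    (pitch, duration-in-beats) pairs, i.e. a tiny melody."""
    return [(PITCH[colour], BEATS[studs]) for colour, studs in bricks]

print(bricks_to_notes([("red", 4), ("green", 2), ("green", 2), ("blue", 8)]))
# [('C4', 1.0), ('E4', 0.5), ('E4', 0.5), ('G4', 2.0)]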
Malchiodi, Dario
MIE '17: "A Multimodal LEGO®-Based ..."
A Multimodal LEGO®-Based Learning Activity Mixing Musical Notation and Computer Programming
Luca Andrea Ludovico, Dario Malchiodi, and Luisa Zecca (University of Milan, Italy; University of Milano-Bicocca, Italy)
This paper discusses a multimodal learning activity based on LEGO® bricks in which elements from the domains of music and informatics are mixed. The activity addresses preschool children and primary school students in order to convey some basic aspects of computational thinking. The learning methodology is organized in two phases, where construction blocks are employed as a physical tool and as a metaphor for music notation, respectively. The goal is to foster in young students abilities such as analysis and re-synthesis, problem solving, abstraction, and adaptive reasoning. A web application to support this approach and to provide prompt feedback on user actions is under development; its design principles and key characteristics are presented.
@InProceedings{MIE17p44,
  author = {Luca Andrea Ludovico and Dario Malchiodi and Luisa Zecca},
  title = {A Multimodal LEGO®-Based Learning Activity Mixing Musical Notation and Computer Programming},
  booktitle = {Proc.\ MIE},
  publisher = {ACM},
  pages = {44--48},
  doi = {10.1145/3139513.3139519},
  year = {2017},
}
Nambiar, Shruthi Kukal
MIE '17: "Automatic Generation of Actionable ..."
Automatic Generation of Actionable Feedback towards Improving Social Competency in Job Interviews
Shruthi Kukal Nambiar, Rahul Das, Sowmya Rasipuram, and Dinesh Babu Jayagopi (IIIT Bangalore, India)
Soft skill assessment is a vital aspect of the job interview process, as these qualities are indicative of candidates’ compatibility in the work environment, their negotiation skills, client interaction prowess, and leadership flair, among other factors. The rise in popularity of asynchronous video-based job interviews has created the need for a scalable solution to gauge candidate performance, and hence we turn to automation. In this research, we aim to build a system that automatically provides summative feedback to candidates at the end of an interview. Most feedback systems predict values of social indicators and communication cues, leaving the interpretation open to the user. Our system directly predicts actionable feedback that leaves the candidate with a tangible takeaway at the end of the interview. We approached placement trainers, compiled a list of the most common feedback items given during training, and attempt to predict these directly. To this end, we captured data from over 145 participants in an interview-like environment. Designing intelligent training environments for job interview preparation using a video data corpus is a demanding task due to its complex correlations and multimodal interactions. We used several state-of-the-art machine learning algorithms with manual annotation as ground truth. The predictive models were built with a focus on nonverbal communication cues so as to sidestep the challenges of spoken language understanding and task modelling. We extracted audio and lexical features, and our findings indicate a stronger correlation with audio and prosodic features in candidate assessment. Our best results gave an accuracy of 95% against a baseline accuracy of 77%.
@InProceedings{MIE17p53,
  author = {Shruthi Kukal Nambiar and Rahul Das and Sowmya Rasipuram and Dinesh Babu Jayagopi},
  title = {Automatic Generation of Actionable Feedback towards Improving Social Competency in Job Interviews},
  booktitle = {Proc.\ MIE},
  publisher = {ACM},
  pages = {53--59},
  doi = {10.1145/3139513.3139515},
  year = {2017},
}
|
Newbold, Joseph |
MIE '17: "What Cognitive and Affective ..."
What Cognitive and Affective States Should Technology Monitor to Support Learning?
Temitayo Olugbade, Luigi Cuturi, Giulia Cappagli, Erica Volta, Paolo Alborno, Joseph Newbold, Nadia Bianchi-Berthouze, Gabriel Baud-Bovy, Gualtiero Volpe, and Monica Gori (University College London, UK; IIT Genoa, Italy; University of Genoa, Italy) This paper discusses self-efficacy, curiosity, and reflectivity as cognitive and affective states that are critical to learning but are overlooked in the context of affect-aware technology for learning. This discussion sits within the opportunities offered by the weDRAW project aiming at an embodied approach to the design of technology to support exploration and learning of mathematical concepts. We first review existing literature to clarify how the three states facilitate learning and how, if not supported, they may instead hinder learning. We then review the literature to understand how bodily expressions communicate these states and how technology could be used to monitor them. We conclude by presenting initial movement cues currently explored in the context of weDRAW. @InProceedings{MIE17p22, author = {Temitayo Olugbade and Luigi Cuturi and Giulia Cappagli and Erica Volta and Paolo Alborno and Joseph Newbold and Nadia Bianchi-Berthouze and Gabriel Baud-Bovy and Gualtiero Volpe and Monica Gori}, title = {What Cognitive and Affective States Should Technology Monitor to Support Learning?}, booktitle = {Proc.\ MIE}, publisher = {ACM}, pages = {22--25}, doi = {10.1145/3139513.3139522}, year = {2017}, } Publisher's Version |
|
Okur, Eda |
MIE '17: "An Unobtrusive and Multimodal ..."
An Unobtrusive and Multimodal Approach for Behavioral Engagement Detection of Students
Nese Alyuz, Eda Okur, Utku Genc, Sinem Aslan, Cagri Tanriover, and Asli Arslan Esme (Intel, USA) In this paper, we investigate the detection of students' behavioral engagement states (On-Task vs. Off-Task) in authentic classroom settings. We propose a multimodal detection approach based on three unobtrusive modalities readily available in a 1:1 learning scenario where learning technologies are incorporated. These modalities are: (1) Appearance: upper-body video captured using a camera; (2) Context-Performance: students' interaction and performance data related to learning content; and (3) Mouse: data related to mouse movements during the learning process. For each modality, separate unimodal classifiers were trained, and decision-level fusion was applied to obtain the final behavioral engagement states. We also analyzed each modality for Instructional and Assessment sections separately (i.e., Instructional, where a student is reading an article or watching an instructional video, vs. Assessment, where a student is solving exercises on the digital learning platform). We carried out various experiments on a dataset collected in an authentic classroom, where students used laptops equipped with cameras and consumed Math learning content on a digital learning platform. The dataset included multimodal data of 17 students who attended a Math course for 13 sessions (40 minutes each). The results indicate that it is beneficial to have separate classification pipelines for Instructional and Assessment sections: for Instructional, using only the Appearance modality yields an F1-measure of 0.74, compared to a fused performance of 0.70; for Assessment, fusing all three modalities (F1-measure of 0.89) provides a prominent improvement over the best performing unimodality (i.e., 0.81 for Appearance). @InProceedings{MIE17p26, author = {Nese Alyuz and Eda Okur and Utku Genc and Sinem Aslan and Cagri Tanriover and Asli Arslan Esme}, title = {An Unobtrusive and Multimodal Approach for Behavioral Engagement Detection of Students}, booktitle = {Proc.\ MIE}, publisher = {ACM}, pages = {26--32}, doi = {10.1145/3139513.3139521}, year = {2017}, } Publisher's Version |
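Decision-level fusion as described in this abstract, training one classifier per modality and combining their per-sample votes, is straightforward to sketch. The following is a minimal illustration under my own assumptions (synthetic features, random forests, majority vote), not the paper's actual pipeline.

# Sketch of decision-level fusion: one classifier per modality, predictions
# combined by majority vote. All data here are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
y = rng.integers(0, 2, size=200)  # On-Task (1) vs. Off-Task (0), hypothetical
modalities = {                    # hypothetical per-modality feature matrices
    "appearance": rng.normal(size=(200, 30)),
    "context_performance": rng.normal(size=(200, 10)),
    "mouse": rng.normal(size=(200, 5)),
}
train, test = slice(0, 150), slice(150, 200)
votes = []
for name, X in modalities.items():
    clf = RandomForestClassifier(random_state=0).fit(X[train], y[train])
    votes.append(clf.predict(X[test]))
fused = (np.stack(votes).sum(axis=0) >= 2).astype(int)  # majority of 3 votes
print((fused == y[test]).mean())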
|
Olugbade, Temitayo |
MIE '17: "What Cognitive and Affective ..."
What Cognitive and Affective States Should Technology Monitor to Support Learning?
Temitayo Olugbade, Luigi Cuturi, Giulia Cappagli, Erica Volta, Paolo Alborno, Joseph Newbold, Nadia Bianchi-Berthouze, Gabriel Baud-Bovy, Gualtiero Volpe, and Monica Gori (University College London, UK; IIT Genoa, Italy; University of Genoa, Italy) This paper discusses self-efficacy, curiosity, and reflectivity as cognitive and affective states that are critical to learning but are overlooked in the context of affect-aware technology for learning. This discussion sits within the opportunities offered by the weDRAW project aiming at an embodied approach to the design of technology to support exploration and learning of mathematical concepts. We first review existing literature to clarify how the three states facilitate learning and how, if not supported, they may instead hinder learning. We then review the literature to understand how bodily expressions communicate these states and how technology could be used to monitor them. We conclude by presenting initial movement cues currently explored in the context of weDRAW. @InProceedings{MIE17p22, author = {Temitayo Olugbade and Luigi Cuturi and Giulia Cappagli and Erica Volta and Paolo Alborno and Joseph Newbold and Nadia Bianchi-Berthouze and Gabriel Baud-Bovy and Gualtiero Volpe and Monica Gori}, title = {What Cognitive and Affective States Should Technology Monitor to Support Learning?}, booktitle = {Proc.\ MIE}, publisher = {ACM}, pages = {22--25}, doi = {10.1145/3139513.3139522}, year = {2017}, } Publisher's Version |
|
Ortega, Fabio J. M. |
MIE '17: "Bowing Modeling for Violin ..."
Bowing Modeling for Violin Students Assistance
Fabio J. M. Ortega, Sergio I. Giraldo, and Rafael Ramirez (Pompeu Fabra University, Spain) Though musicians tend to agree on the importance of practicing expressivity in performance, not many tools and techniques are available for the task. A machine learning model is proposed for predicting bowing velocity during performances of violin pieces. Our aim is to provide feedback to violin students in a technology-enhanced learning setting. Predictions are generated for musical phrases in a score by matching them to melodically and rhythmically similar phrases in performances by experts and adapting the bow velocity curve measured in the experts' performance. Results show that both the mean error of the velocity predictions and the bowing-direction classification accuracy improve on our baseline when reference phrases similar to the predicted ones are available. @InProceedings{MIE17p60, author = {Fabio J. M. Ortega and Sergio I. Giraldo and Rafael Ramirez}, title = {Bowing Modeling for Violin Students Assistance}, booktitle = {Proc.\ MIE}, publisher = {ACM}, pages = {60--62}, doi = {10.1145/3139513.3139525}, year = {2017}, } Publisher's Version |
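The retrieve-and-adapt step described here, finding the expert phrase most similar to the target phrase and stretching its measured bow-velocity curve onto the target, can be sketched simply. This is my own illustration under assumed representations (fixed-length phrase descriptors, Euclidean similarity, linear resampling); the paper's actual matching and adaptation may differ.

# Sketch of retrieval plus curve adaptation (assumptions, not the authors' code).
import numpy as np

def predict_velocity(target_desc, expert_descs, expert_curves, n_out):
    # pick the expert phrase whose descriptor is closest to the target's
    dists = [np.linalg.norm(target_desc - d) for d in expert_descs]
    curve = expert_curves[int(np.argmin(dists))]
    # time-stretch the expert's bow-velocity curve onto the target length
    return np.interp(np.linspace(0, 1, n_out),
                     np.linspace(0, 1, len(curve)), curve)

# toy usage with random descriptors and curves
rng = np.random.default_rng(4)
experts = [rng.normal(size=6) for _ in range(10)]
curves = [rng.normal(size=rng.integers(40, 80)) for _ in range(10)]
print(predict_velocity(rng.normal(size=6), experts, curves, n_out=50)[:5])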
|
Price, Sara |
MIE '17: "Developing a Pedagogical Framework ..."
Developing a Pedagogical Framework for Designing a Multisensory Serious Gaming Environment
Sara Price, Sam Duffy, and Monica Gori (University College London, UK; IIT Genoa, Italy) The importance of multisensory interaction for learning has increased with improved understanding of children’s sensory development, and a flourishing interest in embodied cognition. The potential to foster new forms of multisensory interaction through various sensor, mobile and haptic technologies is promising in providing new ways for young children to engage with key mathematical concepts. However, designing effective learning environments for real world classrooms is challenging, and requires a pedagogically, rather than technologically, driven approach to design. This paper describes initial work underpinning the development of a pedagogical framework, intended to inform the design of a multisensory serious gaming environment. It identifies the theoretical basis of the framework, illustrates how this informs teaching strategies, and outlines key technology research driven perspectives and considerations important for informing design. An initial table mapping mathematical concepts to design, a framework of considerations for design, and a process model of how the framework will continue to be developed across the design process are provided. @InProceedings{MIE17p1, author = {Sara Price and Sam Duffy and Monica Gori}, title = {Developing a Pedagogical Framework for Designing a Multisensory Serious Gaming Environment}, booktitle = {Proc.\ MIE}, publisher = {ACM}, pages = {1--9}, doi = {10.1145/3139513.3139517}, year = {2017}, } Publisher's Version |
|
Ramirez, Rafael |
MIE '17: "Bowing Modeling for Violin ..."
Bowing Modeling for Violin Students Assistance
Fabio J. M. Ortega, Sergio I. Giraldo, and Rafael Ramirez (Pompeu Fabra University, Spain) Though musicians tend to agree on the importance of practicing expressivity in performance, not many tools and techniques are available for the task. A machine learning model is proposed for predicting bowing velocity during performances of violin pieces. Our aim is to provide feedback to violin students in a technology-enhanced learning setting. Predictions are generated for musical phrases in a score by matching them to melodically and rhythmically similar phrases in performances by experts and adapting the bow velocity curve measured in the experts' performance. Results show that both the mean error of the velocity predictions and the bowing-direction classification accuracy improve on our baseline when reference phrases similar to the predicted ones are available. @InProceedings{MIE17p60, author = {Fabio J. M. Ortega and Sergio I. Giraldo and Rafael Ramirez}, title = {Bowing Modeling for Violin Students Assistance}, booktitle = {Proc.\ MIE}, publisher = {ACM}, pages = {60--62}, doi = {10.1145/3139513.3139525}, year = {2017}, } Publisher's Version MIE '17: "Evaluation of Audio-Based ..." Evaluation of Audio-Based Feedback Technologies for Bow Learning Technique in Violin Beginners Angel Blanco and Rafael Ramirez (Pompeu Fabra University, Spain) We present a study of the effects of feedback technologies on the learning process of novice violin students. Twenty-one subjects participated in our experiment, divided into two groups: beginners (participants with no prior violin playing experience, N=14) and experts (participants with more than 6 years of violin playing experience, N=7). The beginner group was further divided into two: a group of beginners learning with YouTube videos (N=7), and a group of beginners receiving additional feedback related to the quality of their performance (N=7). Participants were asked to perform a violin exercise over 21 trials while their audio was recorded and analyzed. Three audio descriptors were extracted from each recording in order to evaluate the quality of the performance: dynamic stability, pitch stability, and aperiodicity. Beginners showed a significant improvement during the session (i.e., comparing the beginning and the end of the session) in the quality of the recorded sound, while experts maintained their results. However, only the beginner group with feedback showed significant improvement between the middle and late parts of the session, while the group without feedback remained stable. @InProceedings{MIE17p41, author = {Angel Blanco and Rafael Ramirez}, title = {Evaluation of Audio-Based Feedback Technologies for Bow Learning Technique in Violin Beginners}, booktitle = {Proc.\ MIE}, publisher = {ACM}, pages = {41--43}, doi = {10.1145/3139513.3139520}, year = {2017}, } Publisher's Version MIE '17: "Air Violin: A Machine Learning ..." Air Violin: A Machine Learning Approach to Fingering Gesture Recognition David Dalmazzo and Rafael Ramirez (Pompeu Fabra University, Spain) We train and evaluate two machine learning models for predicting fingering in violin performances using motion and EMG sensors integrated in the Myo device. Our aim is twofold: first, to provide a fingering recognition model in the context of a gamified virtual violin application where we measure both right-hand (i.e., bow) and left-hand (i.e., fingering) gestures, and second, to implement a tracking system for a computer-assisted pedagogical tool for self-regulated learners in high-level music education. Our approach is based on the principle of mapping-by-demonstration, in which the model is trained by the performer. We evaluated a model based on Decision Trees and compared it with a Hidden Markov Model. @InProceedings{MIE17p63, author = {David Dalmazzo and Rafael Ramirez}, title = {Air Violin: A Machine Learning Approach to Fingering Gesture Recognition}, booktitle = {Proc.\ MIE}, publisher = {ACM}, pages = {63--66}, doi = {10.1145/3139513.3139526}, year = {2017}, } Publisher's Version |
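The three audio descriptors named in the Blanco and Ramirez abstract above can be approximated with standard audio tooling. The sketch below uses librosa's pyin pitch tracker and RMS energy, under my own assumed definitions (coefficient of variation of energy, spread of the pitch track, mean unvoiced probability); the paper's exact formulas may differ, and the file name is hypothetical.

# Approximate the three descriptors (assumed definitions, not the paper's):
# dynamic stability, pitch stability, and aperiodicity of a recording.
import librosa
import numpy as np

def descriptors(path):
    y, sr = librosa.load(path, sr=None)
    f0, voiced_flag, voiced_prob = librosa.pyin(
        y, fmin=librosa.note_to_hz("G3"), fmax=librosa.note_to_hz("E7"), sr=sr)
    rms = librosa.feature.rms(y=y)[0]
    return {
        "dynamic_stability": float(np.std(rms) / (np.mean(rms) + 1e-9)),
        "pitch_stability": float(np.nanstd(f0)),           # Hz spread of the pitch track
        "aperiodicity": float(1.0 - np.nanmean(voiced_prob)),
    }

print(descriptors("exercise_take01.wav"))  # hypothetical file name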
|
Rasipuram, Sowmya |
MIE '17: "Automatic Generation of Actionable ..."
Automatic Generation of Actionable Feedback towards Improving Social Competency in Job Interviews
Shruthi Kukal Nambiar, Rahul Das, Sowmya Rasipuram, and Dinesh Babu Jayagopi (IIIT Bangalore, India) Soft skill assessment is a vital aspect of the job interview process, as these qualities are indicative of the candidates' compatibility in the work environment, their negotiation skills, client interaction prowess, and leadership flair, among other factors. The rise in popularity of asynchronous video-based job interviews has created the need for a scalable solution to gauge candidate performance, and hence we turn to automation. In this research, we aim to build a system that automatically provides summative feedback to candidates at the end of an interview. Most feedback systems predict values of social indicators and communication cues, leaving the interpretation open to the user. Our system directly predicts actionable feedback that leaves the candidate with a tangible takeaway at the end of the interview. We approached placement trainers, compiled a list of the most common feedback given during training, and attempt to predict it directly. To this end, we captured data from over 145 participants in an interview-like environment. Designing intelligent training environments for job interview preparation using a video data corpus is a demanding task due to its complex correlations and multimodal interactions. We used several state-of-the-art machine learning algorithms with manual annotation as ground truth. The predictive models were built with a focus on nonverbal communication cues so as to reduce the task of addressing the challenges faced in spoken language understanding and task modelling. We extracted audio and lexical features, and our findings indicate a stronger correlation to audio and prosodic features in candidate assessment. Our best results gave an accuracy of 95% when the baseline accuracy was 77%. @InProceedings{MIE17p53, author = {Shruthi Kukal Nambiar and Rahul Das and Sowmya Rasipuram and Dinesh Babu Jayagopi}, title = {Automatic Generation of Actionable Feedback towards Improving Social Competency in Job Interviews}, booktitle = {Proc.\ MIE}, publisher = {ACM}, pages = {53--59}, doi = {10.1145/3139513.3139515}, year = {2017}, } Publisher's Version |
|
Sun, Bo |
MIE '17: "Differences of Online Learning ..."
Differences of Online Learning Behaviors and Eye-Movement between Students Having Different Personality Traits
Bo Sun, Song Lai, Congcong Xu, Rong Xiao, Yungang Wei, and Yongkang Xiao (Beijing Normal University, China) Information technologies are now integrated into education, making massive amounts of data available that reflect each action students take in online environments. Numerous studies have exploited these data for learning analytics. In this paper, we aim to display personalized indicators for students with each personality trait on a learning analytics dashboard (LAD), and we present preliminary results. First, we employ learning behavior engagement (LBE) to describe students' learning behaviors and to analyze significant differences among students with different personality traits. In the experiments, fifteen behavioral indicators are tested. The experimental results show significant differences in some behavioral indicators across personality traits. Second, some of these behavioral indicators are presented on the LAD, distributed across areas of interest (AOIs). Hence, students can visualize the behavioral data they care about in the AOIs at any time during the learning process. Through the analysis of eye-movement data, including fixation duration, fixation count, heat maps, and track maps, we found significant differences in some visual indicators across AOIs. This is partly consistent with the results for the behavioral indicators. @InProceedings{MIE17p71, author = {Bo Sun and Song Lai and Congcong Xu and Rong Xiao and Yungang Wei and Yongkang Xiao}, title = {Differences of Online Learning Behaviors and Eye-Movement between Students Having Different Personality Traits}, booktitle = {Proc.\ MIE}, publisher = {ACM}, pages = {71--75}, doi = {10.1145/3139513.3139527}, year = {2017}, } Publisher's Version |
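A significance test of one behavioral indicator across personality-trait groups, of the kind this abstract reports, could look like the following. This is my own sketch with synthetic data; the abstract does not name its statistical test, so a Kruskal-Wallis test is used here as one reasonable choice.

# Sketch: does a behavioral indicator differ across personality-trait groups?
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(2)
# hypothetical indicator (e.g. video-watching time) for three trait groups
groups = [rng.normal(loc=m, scale=1.0, size=30) for m in (5.0, 5.5, 7.0)]
stat, p = kruskal(*groups)
print(f"H = {stat:.2f}, p = {p:.4f}")  # p < 0.05 suggests a significant difference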
|
Tanriover, Cagri |
MIE '17: "An Unobtrusive and Multimodal ..."
An Unobtrusive and Multimodal Approach for Behavioral Engagement Detection of Students
Nese Alyuz, Eda Okur, Utku Genc, Sinem Aslan, Cagri Tanriover, and Asli Arslan Esme (Intel, USA) In this paper, we investigate the detection of students' behavioral engagement states (On-Task vs. Off-Task) in authentic classroom settings. We propose a multimodal detection approach based on three unobtrusive modalities readily available in a 1:1 learning scenario where learning technologies are incorporated. These modalities are: (1) Appearance: upper-body video captured using a camera; (2) Context-Performance: students' interaction and performance data related to learning content; and (3) Mouse: data related to mouse movements during the learning process. For each modality, separate unimodal classifiers were trained, and decision-level fusion was applied to obtain the final behavioral engagement states. We also analyzed each modality for Instructional and Assessment sections separately (i.e., Instructional, where a student is reading an article or watching an instructional video, vs. Assessment, where a student is solving exercises on the digital learning platform). We carried out various experiments on a dataset collected in an authentic classroom, where students used laptops equipped with cameras and consumed Math learning content on a digital learning platform. The dataset included multimodal data of 17 students who attended a Math course for 13 sessions (40 minutes each). The results indicate that it is beneficial to have separate classification pipelines for Instructional and Assessment sections: for Instructional, using only the Appearance modality yields an F1-measure of 0.74, compared to a fused performance of 0.70; for Assessment, fusing all three modalities (F1-measure of 0.89) provides a prominent improvement over the best performing unimodality (i.e., 0.81 for Appearance). @InProceedings{MIE17p26, author = {Nese Alyuz and Eda Okur and Utku Genc and Sinem Aslan and Cagri Tanriover and Asli Arslan Esme}, title = {An Unobtrusive and Multimodal Approach for Behavioral Engagement Detection of Students}, booktitle = {Proc.\ MIE}, publisher = {ACM}, pages = {26--32}, doi = {10.1145/3139513.3139521}, year = {2017}, } Publisher's Version |
|
Thomas, Chinchu |
MIE '17: "Predicting Student Engagement ..."
Predicting Student Engagement in Classrooms using Facial Behavioral Cues
Chinchu Thomas and Dinesh Babu Jayagopi (IIIT Bangalore, India) Student engagement is the key to successful classroom learning. Measuring or analyzing the engagement of students is very important for improving learning as well as teaching. In this work, we analyze the engagement or attention level of students from their facial expressions, head pose, and eye gaze using computer vision techniques, and a decision is taken using machine learning algorithms. Since human observers can readily distinguish attention levels from a student's facial expressions, head pose, and eye gaze, we assume that a machine will also be able to learn this behavior automatically. The engagement level is analyzed on 10-second video clips. The performance of the algorithm is better than the baseline results; our best accuracy is 10% better than the baseline. The paper also gives a detailed review of work related to the analysis of student engagement in a classroom using vision-based techniques. @InProceedings{MIE17p33, author = {Chinchu Thomas and Dinesh Babu Jayagopi}, title = {Predicting Student Engagement in Classrooms using Facial Behavioral Cues}, booktitle = {Proc.\ MIE}, publisher = {ACM}, pages = {33--40}, doi = {10.1145/3139513.3139514}, year = {2017}, } Publisher's Version |
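One common way to realize clip-level prediction from per-frame facial cues, as this abstract describes, is to aggregate frame statistics over each 10-second clip and feed them to a classifier. The sketch below is my own assumed pipeline with synthetic data, not the authors' implementation.

# Sketch: aggregate per-frame facial cues over a 10-second clip, then classify.
import numpy as np
from sklearn.linear_model import LogisticRegression

def clip_features(frames):
    # frames: (n_frames, n_cues) per-frame cues from a face tracker
    # (e.g. head-pose angles, gaze direction, expression intensities)
    return np.concatenate([frames.mean(axis=0), frames.std(axis=0)])

rng = np.random.default_rng(3)
clips = [rng.normal(size=(250, 8)) for _ in range(120)]  # ~25 fps x 10 s, synthetic
X = np.stack([clip_features(c) for c in clips])
y = rng.integers(0, 2, size=120)                         # engaged vs. not, synthetic
clf = LogisticRegression(max_iter=1000).fit(X[:90], y[:90])
print(clf.score(X[90:], y[90:]))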
|
Volpe, Gualtiero |
MIE '17: "What Cognitive and Affective ..."
What Cognitive and Affective States Should Technology Monitor to Support Learning?
Temitayo Olugbade, Luigi Cuturi, Giulia Cappagli, Erica Volta, Paolo Alborno, Joseph Newbold, Nadia Bianchi-Berthouze, Gabriel Baud-Bovy, Gualtiero Volpe, and Monica Gori (University College London, UK; IIT Genoa, Italy; University of Genoa, Italy) This paper discusses self-efficacy, curiosity, and reflectivity as cognitive and affective states that are critical to learning but are overlooked in the context of affect-aware technology for learning. This discussion sits within the opportunities offered by the weDRAW project aiming at an embodied approach to the design of technology to support exploration and learning of mathematical concepts. We first review existing literature to clarify how the three states facilitate learning and how, if not supported, they may instead hinder learning. We then review the literature to understand how bodily expressions communicate these states and how technology could be used to monitor them. We conclude by presenting initial movement cues currently explored in the context of weDRAW. @InProceedings{MIE17p22, author = {Temitayo Olugbade and Luigi Cuturi and Giulia Cappagli and Erica Volta and Paolo Alborno and Joseph Newbold and Nadia Bianchi-Berthouze and Gabriel Baud-Bovy and Gualtiero Volpe and Monica Gori}, title = {What Cognitive and Affective States Should Technology Monitor to Support Learning?}, booktitle = {Proc.\ MIE}, publisher = {ACM}, pages = {22--25}, doi = {10.1145/3139513.3139522}, year = {2017}, } Publisher's Version MIE '17: "An Open Platform for Full-Body ..." An Open Platform for Full-Body Multisensory Serious-Games to Teach Geometry in Primary School Simone Ghisio, Erica Volta, Paolo Alborno, Monica Gori, and Gualtiero Volpe (University of Genoa, Italy; IIT Genoa, Italy) Recent results from psychophysics and developmental psychology show that children have a preferential sensory channel to learn specific concepts. In this work, we explore the possibility of developing and evaluating novel multisensory technologies for deeper learning of arithmetic and geometry. The main novelty of such new technologies comes from the renewed understanding of the role of communication between sensory modalities during development, namely that specific sensory systems have specific roles in learning specific concepts. Such understanding suggests that it is possible to open a new teaching/learning channel, personalized for each student based on the child's sensory skills. Multisensory interactive technologies exploiting full-body movement interaction and including a hardware and software platform to support this approach will be presented and discussed. The platform is part of a more general framework developed in the context of the EU-ICT-H2020 weDRAW Project that aims to develop new multimodal technologies for multisensory serious-games to teach mathematics concepts in primary school. @InProceedings{MIE17p49, author = {Simone Ghisio and Erica Volta and Paolo Alborno and Monica Gori and Gualtiero Volpe}, title = {An Open Platform for Full-Body Multisensory Serious-Games to Teach Geometry in Primary School}, booktitle = {Proc.\ MIE}, publisher = {ACM}, pages = {49--52}, doi = {10.1145/3139513.3139523}, year = {2017}, } Publisher's Version MIE '17: "A Multimodal Serious-Game ..."
A Multimodal Serious-Game to Teach Fractions in Primary School Simone Ghisio, Paolo Alborno, Erica Volta, Monica Gori, and Gualtiero Volpe (University of Genoa, Italy; IIT Genoa, Italy) Multisensory learning has long been considered a relevant pedagogical framework for education, and several authors support the use of a multisensory and kinesthetic approach in children's learning. Moreover, results from psychophysics and developmental psychology show that children have a preferential sensory channel to learn specific concepts (spatial and/or temporal), hence further evidence of the need for a multisensory approach. In this work, we present an example of a serious game for learning a particularly complicated mathematical concept: fractions. The main novelty of our proposal comes from the role played by communication between sensory modalities, in particular movement, vision, and sound. The game has been developed in the context of the EU-ICT-H2020 weDRAW Project, which aims at developing new multimodal technologies for multisensory serious-games on mathematical concepts for primary school children. @InProceedings{MIE17p67, author = {Simone Ghisio and Paolo Alborno and Erica Volta and Monica Gori and Gualtiero Volpe}, title = {A Multimodal Serious-Game to Teach Fractions in Primary School}, booktitle = {Proc.\ MIE}, publisher = {ACM}, pages = {67--70}, doi = {10.1145/3139513.3139524}, year = {2017}, } Publisher's Version |
|
Volta, Erica |
MIE '17: "What Cognitive and Affective ..."
What Cognitive and Affective States Should Technology Monitor to Support Learning?
Temitayo Olugbade, Luigi Cuturi, Giulia Cappagli, Erica Volta, Paolo Alborno, Joseph Newbold, Nadia Bianchi-Berthouze, Gabriel Baud-Bovy, Gualtiero Volpe, and Monica Gori (University College London, UK; IIT Genoa, Italy; University of Genoa, Italy) This paper discusses self-efficacy, curiosity, and reflectivity as cognitive and affective states that are critical to learning but are overlooked in the context of affect-aware technology for learning. This discussion sits within the opportunities offered by the weDRAW project aiming at an embodied approach to the design of technology to support exploration and learning of mathematical concepts. We first review existing literature to clarify how the three states facilitate learning and how, if not supported, they may instead hinder learning. We then review the literature to understand how bodily expressions communicate these states and how technology could be used to monitor them. We conclude by presenting initial movement cues currently explored in the context of weDRAW. @InProceedings{MIE17p22, author = {Temitayo Olugbade and Luigi Cuturi and Giulia Cappagli and Erica Volta and Paolo Alborno and Joseph Newbold and Nadia Bianchi-Berthouze and Gabriel Baud-Bovy and Gualtiero Volpe and Monica Gori}, title = {What Cognitive and Affective States Should Technology Monitor to Support Learning?}, booktitle = {Proc.\ MIE}, publisher = {ACM}, pages = {22--25}, doi = {10.1145/3139513.3139522}, year = {2017}, } Publisher's Version MIE '17: "An Open Platform for Full-Body ..." An Open Platform for Full-Body Multisensory Serious-Games to Teach Geometry in Primary School Simone Ghisio, Erica Volta, Paolo Alborno, Monica Gori, and Gualtiero Volpe (University of Genoa, Italy; IIT Genoa, Italy) Recent results from psychophysics and developmental psychology show that children have a preferential sensory channel to learn specific concepts. In this work, we explore the possibility of developing and evaluating novel multisensory technologies for deeper learning of arithmetic and geometry. The main novelty of such new technologies comes from the renewed understanding of the role of communication between sensory modalities during development, namely that specific sensory systems have specific roles in learning specific concepts. Such understanding suggests that it is possible to open a new teaching/learning channel, personalized for each student based on the child's sensory skills. Multisensory interactive technologies exploiting full-body movement interaction and including a hardware and software platform to support this approach will be presented and discussed. The platform is part of a more general framework developed in the context of the EU-ICT-H2020 weDRAW Project that aims to develop new multimodal technologies for multisensory serious-games to teach mathematics concepts in primary school. @InProceedings{MIE17p49, author = {Simone Ghisio and Erica Volta and Paolo Alborno and Monica Gori and Gualtiero Volpe}, title = {An Open Platform for Full-Body Multisensory Serious-Games to Teach Geometry in Primary School}, booktitle = {Proc.\ MIE}, publisher = {ACM}, pages = {49--52}, doi = {10.1145/3139513.3139523}, year = {2017}, } Publisher's Version MIE '17: "A Multimodal Serious-Game ..."
A Multimodal Serious-Game to Teach Fractions in Primary School Simone Ghisio, Paolo Alborno, Erica Volta, Monica Gori, and Gualtiero Volpe (University of Genoa, Italy; IIT Genoa, Italy) Multisensory learning has long been considered a relevant pedagogical framework for education, and several authors support the use of a multisensory and kinesthetic approach in children's learning. Moreover, results from psychophysics and developmental psychology show that children have a preferential sensory channel to learn specific concepts (spatial and/or temporal), hence further evidence of the need for a multisensory approach. In this work, we present an example of a serious game for learning a particularly complicated mathematical concept: fractions. The main novelty of our proposal comes from the role played by communication between sensory modalities, in particular movement, vision, and sound. The game has been developed in the context of the EU-ICT-H2020 weDRAW Project, which aims at developing new multimodal technologies for multisensory serious-games on mathematical concepts for primary school children. @InProceedings{MIE17p67, author = {Simone Ghisio and Paolo Alborno and Erica Volta and Monica Gori and Gualtiero Volpe}, title = {A Multimodal Serious-Game to Teach Fractions in Primary School}, booktitle = {Proc.\ MIE}, publisher = {ACM}, pages = {67--70}, doi = {10.1145/3139513.3139524}, year = {2017}, } Publisher's Version |
|
Wei, Yungang |
MIE '17: "Differences of Online Learning ..."
Differences of Online Learning Behaviors and Eye-Movement between Students Having Different Personality Traits
Bo Sun, Song Lai, Congcong Xu, Rong Xiao, Yungang Wei, and Yongkang Xiao (Beijing Normal University, China) Information technologies are now integrated into education, making massive amounts of data available that reflect each action students take in online environments. Numerous studies have exploited these data for learning analytics. In this paper, we aim to display personalized indicators for students with each personality trait on a learning analytics dashboard (LAD), and we present preliminary results. First, we employ learning behavior engagement (LBE) to describe students' learning behaviors and to analyze significant differences among students with different personality traits. In the experiments, fifteen behavioral indicators are tested. The experimental results show significant differences in some behavioral indicators across personality traits. Second, some of these behavioral indicators are presented on the LAD, distributed across areas of interest (AOIs). Hence, students can visualize the behavioral data they care about in the AOIs at any time during the learning process. Through the analysis of eye-movement data, including fixation duration, fixation count, heat maps, and track maps, we found significant differences in some visual indicators across AOIs. This is partly consistent with the results for the behavioral indicators. @InProceedings{MIE17p71, author = {Bo Sun and Song Lai and Congcong Xu and Rong Xiao and Yungang Wei and Yongkang Xiao}, title = {Differences of Online Learning Behaviors and Eye-Movement between Students Having Different Personality Traits}, booktitle = {Proc.\ MIE}, publisher = {ACM}, pages = {71--75}, doi = {10.1145/3139513.3139527}, year = {2017}, } Publisher's Version |
|
Xiao, Rong |
MIE '17: "Differences of Online Learning ..."
Differences of Online Learning Behaviors and Eye-Movement between Students Having Different Personality Traits
Bo Sun, Song Lai, Congcong Xu, Rong Xiao, Yungang Wei, and Yongkang Xiao (Beijing Normal University, China) Information technologies are now integrated into education, making massive amounts of data available that reflect each action students take in online environments. Numerous studies have exploited these data for learning analytics. In this paper, we aim to display personalized indicators for students with each personality trait on a learning analytics dashboard (LAD), and we present preliminary results. First, we employ learning behavior engagement (LBE) to describe students' learning behaviors and to analyze significant differences among students with different personality traits. In the experiments, fifteen behavioral indicators are tested. The experimental results show significant differences in some behavioral indicators across personality traits. Second, some of these behavioral indicators are presented on the LAD, distributed across areas of interest (AOIs). Hence, students can visualize the behavioral data they care about in the AOIs at any time during the learning process. Through the analysis of eye-movement data, including fixation duration, fixation count, heat maps, and track maps, we found significant differences in some visual indicators across AOIs. This is partly consistent with the results for the behavioral indicators. @InProceedings{MIE17p71, author = {Bo Sun and Song Lai and Congcong Xu and Rong Xiao and Yungang Wei and Yongkang Xiao}, title = {Differences of Online Learning Behaviors and Eye-Movement between Students Having Different Personality Traits}, booktitle = {Proc.\ MIE}, publisher = {ACM}, pages = {71--75}, doi = {10.1145/3139513.3139527}, year = {2017}, } Publisher's Version |
|
Xiao, Yongkang |
MIE '17: "Differences of Online Learning ..."
Differences of Online Learning Behaviors and Eye-Movement between Students Having Different Personality Traits
Bo Sun, Song Lai, Congcong Xu, Rong Xiao, Yungang Wei, and Yongkang Xiao (Beijing Normal University, China) Information technologies are now integrated into education, making massive amounts of data available that reflect each action students take in online environments. Numerous studies have exploited these data for learning analytics. In this paper, we aim to display personalized indicators for students with each personality trait on a learning analytics dashboard (LAD), and we present preliminary results. First, we employ learning behavior engagement (LBE) to describe students' learning behaviors and to analyze significant differences among students with different personality traits. In the experiments, fifteen behavioral indicators are tested. The experimental results show significant differences in some behavioral indicators across personality traits. Second, some of these behavioral indicators are presented on the LAD, distributed across areas of interest (AOIs). Hence, students can visualize the behavioral data they care about in the AOIs at any time during the learning process. Through the analysis of eye-movement data, including fixation duration, fixation count, heat maps, and track maps, we found significant differences in some visual indicators across AOIs. This is partly consistent with the results for the behavioral indicators. @InProceedings{MIE17p71, author = {Bo Sun and Song Lai and Congcong Xu and Rong Xiao and Yungang Wei and Yongkang Xiao}, title = {Differences of Online Learning Behaviors and Eye-Movement between Students Having Different Personality Traits}, booktitle = {Proc.\ MIE}, publisher = {ACM}, pages = {71--75}, doi = {10.1145/3139513.3139527}, year = {2017}, } Publisher's Version |
|
Xu, Congcong |
MIE '17: "Differences of Online Learning ..."
Differences of Online Learning Behaviors and Eye-Movement between Students Having Different Personality Traits
Bo Sun, Song Lai, Congcong Xu, Rong Xiao, Yungang Wei, and Yongkang Xiao (Beijing Normal University, China) Information technologies are now integrated into education, making massive amounts of data available that reflect each action students take in online environments. Numerous studies have exploited these data for learning analytics. In this paper, we aim to display personalized indicators for students with each personality trait on a learning analytics dashboard (LAD), and we present preliminary results. First, we employ learning behavior engagement (LBE) to describe students' learning behaviors and to analyze significant differences among students with different personality traits. In the experiments, fifteen behavioral indicators are tested. The experimental results show significant differences in some behavioral indicators across personality traits. Second, some of these behavioral indicators are presented on the LAD, distributed across areas of interest (AOIs). Hence, students can visualize the behavioral data they care about in the AOIs at any time during the learning process. Through the analysis of eye-movement data, including fixation duration, fixation count, heat maps, and track maps, we found significant differences in some visual indicators across AOIs. This is partly consistent with the results for the behavioral indicators. @InProceedings{MIE17p71, author = {Bo Sun and Song Lai and Congcong Xu and Rong Xiao and Yungang Wei and Yongkang Xiao}, title = {Differences of Online Learning Behaviors and Eye-Movement between Students Having Different Personality Traits}, booktitle = {Proc.\ MIE}, publisher = {ACM}, pages = {71--75}, doi = {10.1145/3139513.3139527}, year = {2017}, } Publisher's Version |
|
Zecca, Luisa |
MIE '17: "A Multimodal LEGO®-Based ..."
A Multimodal LEGO®-Based Learning Activity Mixing Musical Notation and Computer Programming
Luca Andrea Ludovico, Dario Malchiodi, and Luisa Zecca (University of Milan, Italy; University of Milano-Bicocca, Italy) This paper discusses a multimodal learning activity based on LEGO® bricks that mixes elements from the domains of music and informatics. The activity addresses preschool children and primary-school students in order to convey some basic aspects of computational thinking. The learning methodology is organized in two phases, in which construction blocks are employed as a physical tool and as a metaphor for music notation, respectively. The goal is to foster in young students abilities such as analysis and re-synthesis, problem solving, abstraction, and adaptive reasoning. A web application to support this approach and to provide prompt feedback on user actions is under development; its design principles and key characteristics will be presented. @InProceedings{MIE17p44, author = {Luca Andrea Ludovico and Dario Malchiodi and Luisa Zecca}, title = {A Multimodal LEGO®-Based Learning Activity Mixing Musical Notation and Computer Programming}, booktitle = {Proc.\ MIE}, publisher = {ACM}, pages = {44--48}, doi = {10.1145/3139513.3139519}, year = {2017}, } Publisher's Version |
39 authors