HAI Poster: Continuous Multi-Modal Interaction Causes Human-Robot Alignment

In October 2017 I attended the 5th annual International Conference on Human-Agent Interaction (HAI 2017), where I presented work done in the course Science and Technology of Human-Robot Interaction, part of my master's degree.

This included a short paper as well as a poster presentation. While I am not sure about the legality of publishing the full-length paper here (we transferred the rights to ACM), what I can share is the abstract and the poster.

Here is the poster:

Abstract: This study explores the effect of continuous interaction with a multi-modal robot on alignment in user dialogue. A game application of ‘20 Questions’ was developed for a SoftBank Robotics NAO robot with supporting gestures, and a study was carried out in which subjects played a number of games. The robot’s speech-recognition confidence was logged and used to analyse the similarity between the application’s legal dialogue and user speech. It was found that subjects significantly aligned their dialogue to the robot throughout continuous, multi-modal interaction.

(Full paper available here)
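For anyone curious how speech-recognition confidence can be captured on the NAO, here is a minimal sketch using the NAOqi Python SDK. This is not the study's actual code; the robot address, vocabulary, and subscriber name are illustrative.

```python
# Minimal sketch: log NAO speech-recognition confidence via the NAOqi Python SDK.
# Assumes a reachable robot; the IP, vocabulary, and subscriber name below are
# placeholders, not values from the study.
import time
from naoqi import ALProxy

ROBOT_IP = "nao.local"  # hypothetical address
PORT = 9559

asr = ALProxy("ALSpeechRecognition", ROBOT_IP, PORT)
memory = ALProxy("ALMemory", ROBOT_IP, PORT)

# Restrict recognition to the application's legal dialogue (illustrative words).
asr.setVocabulary(["yes", "no", "maybe", "repeat"], False)
asr.subscribe("TwentyQuestionsLogger")

try:
    while True:
        # "WordRecognized" holds pairs: [phrase1, confidence1, phrase2, ...]
        result = memory.getData("WordRecognized")
        if result and len(result) >= 2:
            word, confidence = result[0], result[1]
            print("{}\t{:.2f}".format(word, confidence))
        time.sleep(0.5)
finally:
    asr.unsubscribe("TwentyQuestionsLogger")
```

Confidence values logged this way can then be compared across successive games to see whether users drift toward the vocabulary the robot understands.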

