A dynamic field approach to goal inference and error monitoring for human-robot interaction
| Main author: | Bicho, E. |
|---|---|
| Other authors: | Louro, Luís; Hipólito, Nzoji; Erlhagen, Wolfram |
| Publication date: | 2009 |
| Document type: | Conference paper |
| Language: | eng |
| Subjects: | Human-robot interaction; Goal inference; Error monitoring; Joint action; Anticipatory behavior; Action understanding; Dynamic neural fields |
| Publisher: | Universidade do Minho |
| Published in: | DAUTENHAHN, E. (ed.), AISB Convention 2009 on Adaptive & Emergent Behaviour & Complex Systems: Proceedings of the International Symposium on New Frontiers in Human-Robot Interaction, Edinburgh, Scotland, 2009, pp. 31-37. ISBN 1902956850 |
| Funding: | Fundação para a Ciência e a Tecnologia (FCT), grants POCI/V.5/A0119/2005 and CONC-REEQ/17/2001 |
| Rights: | Open access (published version) |
| Format: | application/pdf |
| Source: | Repositórios Científicos de Acesso Aberto de Portugal (RCAAP) |
| Full text: | http://hdl.handle.net/1822/10306 |
Abstract: In this paper we present results of our ongoing research on non-verbal human-robot interaction that is heavily inspired by recent experimental findings about the neuro-cognitive mechanisms supporting joint action in humans. The robot control architecture implements the joint coordination of actions and goals as a dynamic process that integrates contextual cues, shared task knowledge and the predicted outcome of the user’s motor behavior. The architecture is formalized as a coupled system of dynamic neural fields representing a distributed network of local but connected neural populations with specific functionalities. We validate the approach in a task in which a robot and a human user jointly construct a toy 'vehicle'. We show that the context-dependent mapping from action observation onto appropriate complementary actions allows the robot to cope with dynamically changing joint action situations. This includes a basic form of error monitoring and compensation.
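In the dynamic field literature, each field in such a coupled architecture typically follows an Amari-type equation, tau * du(x,t)/dt = -u(x,t) + h + S(x,t) + ∫ w(x - x') f(u(x',t)) dx', where h is a resting level, S an external input, w a lateral interaction kernel, and f a sigmoidal output function. The sketch below is a minimal, hypothetical illustration of a single such field forming a self-stabilized activation peak in response to a localized input; the function name simulate_field and all parameter values are assumptions for illustration and are not taken from the paper or its robot control architecture.

```python
# Hypothetical minimal sketch: one Amari-type dynamic neural field forming a
# self-stabilized activation peak. Names and parameters are illustrative only.
import numpy as np

def simulate_field(steps=600, dt=0.05, tau=1.0, h=-3.0):
    x = np.arange(-50.0, 51.0, 1.0)            # field dimension (e.g. an action parameter)
    dx = 1.0
    u = np.full(x.size, h)                     # activation starts at the resting level h < 0

    # Mexican-hat interaction kernel: narrow local excitation, broader inhibition
    d = x[:, None] - x[None, :]
    w = 2.0 * np.exp(-d**2 / (2 * 4.0**2)) - 1.0 * np.exp(-d**2 / (2 * 12.0**2))

    # Localized external input, e.g. accumulated evidence for one inferred goal
    s = 5.0 * np.exp(-(x - 10.0)**2 / (2 * 5.0**2))

    for _ in range(steps):
        f = 1.0 / (1.0 + np.exp(-u))           # sigmoidal output nonlinearity
        du = -u + h + s + (w @ f) * dx         # Amari field dynamics
        u += (dt / tau) * du
    return x, u

x, u = simulate_field()
print("decision peak at x =", x[np.argmax(u)])  # the peak encodes the selected value
```

In an architecture of the kind described in the abstract, several such fields (for example, for action observation, goal inference, error monitoring, and action selection) would be coupled by feeding each field's thresholded output into the input S of the others; how this coupling is organized in the paper is described in the full text linked above.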