TY - GEN
T1 - AVR-AKG
T2 - 2021 China Automation Congress, CAC 2021
AU - Qi, Tao
AU - Zhao, Yongjia
AU - Dai, Shuling
AU - Zhang, Ning
AU - Liu, Kan
AU - Chen, Shiyu
N1 - Publisher Copyright:
© 2021 IEEE
PY - 2021
Y1 - 2021
N2 - Manipulation tasks in specific environments, such as truss assembly in space and "Hape" building-block assembly in a toy room, unfold intentionally within corresponding domains and involve a great deal of domain knowledge. Being able to process contextual knowledge in these activities over time can help us understand manipulation intentions. However, most researchers use machine learning methods to make robots understand manipulation scenarios; these are black-box models that consume substantial computing resources. Moreover, demonstrating assembly tasks in the real world is time-consuming and labor-intensive, and it is difficult to initialize the assembly scene. To overcome these limitations, we introduce AVR-AKG, an implementation framework for domain knowledge generation that can generate a dynamic knowledge graph in real time from assembly demonstrations. A combination of an Assembly Virtual Reality subsystem and an Assembly Knowledge Generation subsystem, corresponding to assembly tasks and knowledge generation respectively, is used to represent robot manipulation knowledge with Entity-Relation-Entity (E-R-E) and Entity-Attribute-Value (E-A-V) tuples. Using this framework, we present a case study in which a demonstrator completes an assembly task with "Hape" building blocks, generating knowledge about the operational context as knowledge graphs during a first-person demonstration.
AB - Manipulation tasks in specific environments, such as truss assembly in space and "Hape" building-block assembly in a toy room, unfold intentionally within corresponding domains and involve a great deal of domain knowledge. Being able to process contextual knowledge in these activities over time can help us understand manipulation intentions. However, most researchers use machine learning methods to make robots understand manipulation scenarios; these are black-box models that consume substantial computing resources. Moreover, demonstrating assembly tasks in the real world is time-consuming and labor-intensive, and it is difficult to initialize the assembly scene. To overcome these limitations, we introduce AVR-AKG, an implementation framework for domain knowledge generation that can generate a dynamic knowledge graph in real time from assembly demonstrations. A combination of an Assembly Virtual Reality subsystem and an Assembly Knowledge Generation subsystem, corresponding to assembly tasks and knowledge generation respectively, is used to represent robot manipulation knowledge with Entity-Relation-Entity (E-R-E) and Entity-Attribute-Value (E-A-V) tuples. Using this framework, we present a case study in which a demonstrator completes an assembly task with "Hape" building blocks, generating knowledge about the operational context as knowledge graphs during a first-person demonstration.
KW - domain knowledge
KW - human-computer interaction
KW - virtual reality
UR - https://www.scopus.com/pages/publications/85128108089
U2 - 10.1109/CAC53003.2021.9727698
DO - 10.1109/CAC53003.2021.9727698
M3 - Conference contribution
AN - SCOPUS:85128108089
T3 - Proceeding - 2021 China Automation Congress, CAC 2021
SP - 8084
EP - 8089
BT - Proceeding - 2021 China Automation Congress, CAC 2021
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 22 October 2021 through 24 October 2021
ER -