Abstract
In this paper, we present a novel three-stream deep feature fusion technique for recognizing interaction-level human activities from a first-person viewpoint. Specifically, the proposed approach separates human motion from camera ego-motion in order to focus on the human's movement. Features encoding both human motion and camera ego-motion information are extracted by the three-stream architecture and fused in a way that accounts for the relationship between human action and camera ego-motion. To validate the effectiveness of our approach, we perform experiments on the UTKinect-FirstPerson dataset and achieve state-of-the-art performance.
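The abstract only outlines the architecture; as an illustration, here is a minimal, hypothetical sketch of what a three-stream fusion network of this kind might look like in PyTorch. The stream roles, placeholder encoders, feature dimensions, and the multiplicative gating used to relate human motion to camera ego-motion are all assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of a three-stream fusion model for first-person
# activity recognition, assuming PyTorch. Stream roles, dimensions, and
# the fusion rule are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn


class ThreeStreamFusion(nn.Module):
    """Fuses appearance, human-motion, and camera ego-motion features."""

    def __init__(self, feat_dim: int = 256, num_classes: int = 8):
        super().__init__()

        # One small conv encoder per stream (placeholder backbones).
        def encoder() -> nn.Sequential:
            return nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1),
                nn.ReLU(inplace=True),
                nn.AdaptiveAvgPool2d(1),
                nn.Flatten(),
                nn.Linear(32, feat_dim),
            )

        self.appearance = encoder()    # RGB frame
        self.human_motion = encoder()  # e.g. ego-motion-compensated flow
        self.ego_motion = encoder()    # e.g. global camera-motion map

        # Assumed fusion: ego-motion features gate the human-motion
        # features before concatenation, modeling the relationship
        # between human action and camera ego-motion.
        self.gate = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.Sigmoid())
        self.classifier = nn.Linear(feat_dim * 3, num_classes)

    def forward(self, rgb, human_flow, ego_flow):
        f_app = self.appearance(rgb)
        f_hum = self.human_motion(human_flow)
        f_ego = self.ego_motion(ego_flow)
        f_hum = f_hum * self.gate(f_ego)  # relate human motion to ego-motion
        fused = torch.cat([f_app, f_hum, f_ego], dim=1)
        return self.classifier(fused)


if __name__ == "__main__":
    model = ThreeStreamFusion()
    x = torch.randn(2, 3, 64, 64)  # dummy batch: (N, C, H, W)
    logits = model(x, x.clone(), x.clone())
    print(logits.shape)  # torch.Size([2, 8])
```

Gating one stream with another is just one plausible way to "consider the relationship" between the two motion sources; late concatenation or attention would be equally reasonable readings of the abstract.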
Original language | English
---|---
Title of host publication | International Conference on Control, Automation and Systems
Publisher | IEEE Computer Society
Pages | 297-299
Number of pages | 3
ISBN (Electronic) | 9788993215151
Publication status | Published - 2018 Dec 10
Event | 18th International Conference on Control, Automation and Systems, ICCAS 2018, PyeongChang, Korea, Republic of; 2018 Oct 17 → 2018 Oct 20
Publication series
Name | International Conference on Control, Automation and Systems
---|---
Volume | 2018-October
ISSN (Print) | 1598-7833
Other
Other | 18th International Conference on Control, Automation and Systems, ICCAS 2018
---|---
Country/Territory | Korea, Republic of
City | PyeongChang
Period | 2018 Oct 17 → 2018 Oct 20
Bibliographical note
Funding Information: This work was supported by an Institute for Information & communications Technology Promotion (IITP) grant funded by the Korea government (MSIT) (2014-0-00059, Development of Predictive Visual Intelligence Technology).
Publisher Copyright:
© ICROS.
Keywords
- First-person activity recognition
- Human-robot interaction
- Robot surveillance
- Three-stream deep features
ASJC Scopus subject areas
- Artificial Intelligence
- Computer Science Applications
- Control and Systems Engineering
- Electrical and Electronic Engineering