Binary Dense SIFT Flow Based Position-Information Added Two-Stream CNN for Pedestrian Action Recognition

Sang Kyoo Park, Jun Ho Chung, Dong Sung Pae, Myo Taeg Lim

Research output: Contribution to journal › Article › peer-review

4 Citations (Scopus)


Pedestrian behavior recognition in the driving environment is an important technology for preventing pedestrian accidents by predicting a pedestrian's next movement. Predicting future pedestrian behavior requires recognizing current behavior; however, many studies have focused on visible human characteristics such as the face, body parts, or clothing, and few have addressed pedestrian behavior itself. Recognizing pedestrian behavior in the driving environment is challenging because the camera's field of view changes with vehicle movement and because illumination conditions vary outdoors. In this paper, to predict pedestrian behavior, we introduce a position-information added two-stream convolutional neural network (CNN) with multi-task learning that is robust to these limitations of the outdoor driving environment. The conventional two-stream CNN is the most widely used model for human-action recognition, but its optical-flow stream relies on the assumptions of brightness constancy and piecewise smoothness, which limits pedestrian behavior recognition from a moving vehicle. To address this, we adopt the binary descriptor dense scale-invariant feature transform (SIFT) flow, a feature-based matching algorithm that is robust for recognizing moving-pedestrian actions, such as walking and standing, from a moving vehicle. However, recognizing cross attributes, such as crossing or not crossing the street, remains challenging with the binary descriptor dense SIFT flow alone, because pedestrians who cross the road and those who do not perform the same walking action and differ only in their location within the image. Therefore, pedestrian position information is added to the conventional binary descriptor dense SIFT flow two-stream CNN, so that learning otherwise biased toward action attributes is balanced across both action and cross attributes.
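The role of the position information can be illustrated with a minimal sketch (hypothetical function names and feature sizes; the paper's actual network architecture and descriptor dimensions are not specified here): a normalized bounding-box position vector is concatenated with the flow-stream descriptor, so two pedestrians with identical walking motion but different image locations produce different inputs to the cross-attribute classifier.

```python
import numpy as np

def position_feature(bbox, frame_w, frame_h):
    # Normalize a pedestrian bounding box (x, y, w, h), given in pixels,
    # to [0, 1] relative coordinates so the network can learn
    # where in the frame the pedestrian is.
    x, y, w, h = bbox
    return np.array([x / frame_w, y / frame_h, w / frame_w, h / frame_h])

def fused_input(flow_descriptor, bbox, frame_w=1920, frame_h=1080):
    # Concatenate the motion (flow-stream) descriptor with the
    # normalized position so the cross-attribute head sees location.
    return np.concatenate(
        [flow_descriptor, position_feature(bbox, frame_w, frame_h)]
    )

# Two pedestrians with the same walking motion but different positions:
flow = np.ones(8)                                  # stand-in motion descriptor
center = fused_input(flow, (940, 600, 40, 120))    # near the road center
curb = fused_input(flow, (60, 600, 40, 120))       # near the image edge
assert np.allclose(center[:8], curb[:8])           # identical action cue
assert not np.allclose(center[8:], curb[8:])       # position disambiguates
```

Without the position part of the vector, the two examples above would be indistinguishable, which is exactly the crossing/not-crossing ambiguity the abstract describes.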
In addition, YOLO detection and a Siamese tracker are used instead of ground-truth bounding boxes to demonstrate the robustness of action- and cross-attribute recognition from a moving vehicle. The JAAD and PIE datasets were used for training, and only the JAAD dataset was used for testing to enable comparison with other state-of-the-art work on multi-task and single-task learning.
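The feature-based matching behind a binary-descriptor flow can be sketched in general terms (a toy example with hypothetical 8-bit descriptors, not the paper's actual implementation): each pixel's binarized descriptor is compared against candidate descriptors in the next frame using the Hamming distance, and the displacement to the best match gives the flow vector; because the bits encode relative structure rather than absolute intensities, this matching is less sensitive to the brightness changes that violate optical-flow assumptions.

```python
import numpy as np

def hamming(a, b):
    # Hamming distance between two binary descriptors (0/1 arrays):
    # the number of bit positions where they differ.
    return int(np.count_nonzero(a != b))

def match(desc_a, candidates_b):
    # Return the index of the candidate descriptor in frame B closest
    # (in Hamming distance) to a descriptor from frame A; in a dense
    # flow, the offset to that match is the flow vector at this pixel.
    return min(range(len(candidates_b)),
               key=lambda j: hamming(desc_a, candidates_b[j]))

a = np.array([1, 0, 1, 1, 0, 0, 1, 0])
candidates = [np.array([0, 1, 1, 0, 1, 1, 0, 1]),   # distant match
              np.array([1, 0, 1, 1, 0, 1, 1, 0]),   # close match
              np.array([1, 0, 1, 1, 0, 0, 1, 0])]   # identical descriptor
assert match(a, candidates) == 2
```

Hamming distance over binary descriptors also reduces to cheap XOR/popcount operations, which is one common motivation for binarizing dense SIFT descriptors.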

Original language: English
Article number: 10445
Journal: Applied Sciences (Switzerland)
Issue number: 20
Publication status: Published - 2022 Oct

Bibliographical note

Funding Information:
This work was supported by the National Research Foundation of Korea(NRF) grant funded by the Korean government(MSIT) (No. 2022R1F1A1073543).

Publisher Copyright:
© 2022 by the authors.


Keywords

  • binary descriptor dense scale-invariant feature transform (SIFT) flow
  • pedestrian-action recognition
  • position-information feature
  • two-stream convolutional neural network (CNN)

ASJC Scopus subject areas

  • General Materials Science
  • Instrumentation
  • General Engineering
  • Process Chemistry and Technology
  • Computer Science Applications
  • Fluid Flow and Transfer Processes


