IDEAS home Printed from https://ideas.repec.org/a/gam/jmathe/v11y2023i6p1451-d1099719.html

OFPI: Optical Flow Pose Image for Action Recognition

Authors

Listed:
  • Dong Chen

    (College of Computer Science and Engineering, Guangxi Normal University, Guilin 541004, China
    College of Physics and Electronic Engineering, Nanning Normal University, Nanning 530001, China)

  • Tao Zhang

    (College of Physics and Electronic Engineering, Nanning Normal University, Nanning 530001, China)

  • Peng Zhou

    (College of Computer Science and Engineering, Guangxi Normal University, Guilin 541004, China)

  • Chenyang Yan

    (Division of Electrical Engineering and Computer Science, Kanazawa University, Kakuma-machi, Kanazawa 920-1192, Japan)

  • Chuanqi Li

    (College of Computer Science and Engineering, Guangxi Normal University, Guilin 541004, China
    College of Physics and Electronic Engineering, Nanning Normal University, Nanning 530001, China)

Abstract

Most pseudo-image approaches to action recognition encode skeletal data into RGB-like image representations. Such encodings cannot fully exploit the kinematic features and structural information of human poses, and the convolutional neural network (CNN) models that process them lack a global field of view, so they cannot extract the complete set of action features from pseudo-images. In this paper, we propose a novel pose-based action representation, the Optical Flow Pose Image (OFPI), that fully capitalizes on both the spatial and the temporal information in skeletal data. Specifically, an advanced pose estimator first locates the target person and then extracts skeletal data with the help of a human tracking algorithm; the OFPI representation is obtained by aggregating these skeletal data over time. To test the merits of OFPI and to investigate the importance of a global field of view, we trained both a simple CNN model and a transformer-based model, and both achieved strong results. Owing to its global field of view, the transformer-based model in particular reached 98.3% and 94.2% accuracy with OFPI on the KTH and JHMDB datasets, respectively. Compared with other advanced pose representations and multi-stream methods, OFPI achieved state-of-the-art performance on the JHMDB dataset, indicating the utility and potential of this algorithm for skeleton-based action recognition research.
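The abstract does not spell out how OFPI itself is constructed, but the pseudo-image idea it builds on can be illustrated with a minimal sketch: pack a skeleton sequence into an RGB-like array, using normalized joint coordinates as two channels and frame-to-frame joint displacement (a crude stand-in for the motion cue that optical flow provides) as a third. Everything below is a hypothetical illustration of the general technique, not the paper's actual algorithm.

```python
import numpy as np

def skeleton_pseudo_image(joints):
    """Illustrative sketch (NOT the paper's OFPI method).

    joints: array of shape (T, J, 2) -- T frames, J joints, (x, y) coords.
    Returns a (T, J, 3) "pseudo-image": R = normalized x, G = normalized y,
    B = normalized frame-to-frame displacement magnitude (a rough motion cue).
    """
    joints = np.asarray(joints, dtype=np.float32)
    T, J, _ = joints.shape

    # Normalize coordinates to [0, 1] over the whole sequence.
    mins = joints.reshape(-1, 2).min(axis=0)
    maxs = joints.reshape(-1, 2).max(axis=0)
    norm = (joints - mins) / np.maximum(maxs - mins, 1e-6)

    # Motion channel: displacement magnitude between consecutive frames.
    vel = np.zeros((T, J), dtype=np.float32)
    vel[1:] = np.linalg.norm(np.diff(joints, axis=0), axis=-1)
    if vel.max() > 0:
        vel /= vel.max()

    # Stack into a T x J x 3 array that a CNN or transformer could consume.
    return np.stack([norm[..., 0], norm[..., 1], vel], axis=-1)
```

A model would then treat the time axis as image height and the joint axis as width; a CNN sees only local patches of this array, whereas a transformer attends across all frames and joints at once, which is the "global field of view" contrast the abstract highlights.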

Suggested Citation

  • Dong Chen & Tao Zhang & Peng Zhou & Chenyang Yan & Chuanqi Li, 2023. "OFPI: Optical Flow Pose Image for Action Recognition," Mathematics, MDPI, vol. 11(6), pages 1-23, March.
  • Handle: RePEc:gam:jmathe:v:11:y:2023:i:6:p:1451-:d:1099719

    Download full text from publisher

    File URL: https://www.mdpi.com/2227-7390/11/6/1451/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/2227-7390/11/6/1451/
    Download Restriction: no

    Citations



    Cited by:

    1. Jinyoon Park & Chulwoong Kim & Seung-Chan Kim, 2023. "Enhancing Robustness of Viewpoint Changes in 3D Skeleton-Based Human Action Recognition," Mathematics, MDPI, vol. 11(15), pages 1-17, July.


    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.