

Visual-Motion-Interaction Guided Pedestrian Intention Prediction Framework

(Figure: overall architecture of the proposed VMI framework)

This is the official Python implementation of the paper "Visual-Motion-Interaction Guided Pedestrian Intention Prediction Framework", authored by Neha Sharma, Chhavi Dhiman, and S. Indu.

The capability to comprehend the intentions of pedestrians on the road is one of the most crucial skills that current autonomous vehicles (AVs) are striving for in order to become fully autonomous. In recent years, multimodal methods that utilize information from different modalities, such as trajectory, appearance, and context, have become popular for predicting pedestrian crossing intention. However, most existing works still lack rich feature representation ability in multimodal scenarios, which restricts their performance. Moreover, little emphasis has been put on the pedestrian's interactions with the surroundings when predicting short-term pedestrian intention from the challenging ego-centric view. To address these challenges, this work proposes an efficient Visual-Motion-Interaction (VMI) guided intention prediction framework. The framework comprises three branches, namely the Visual Encoder (VE), Motion Encoder (ME), and Interaction Encoder (IE), which capture rich multimodal features of the pedestrian and its interactions with the surroundings, followed by a temporal attention and adaptive fusion module that integrates these multimodal features efficiently. The proposed framework outperforms several state-of-the-art methods on the benchmark PIE/JAAD datasets, achieving Accuracy, AUC, F1-score, Precision, and Recall of 0.92/0.89, 0.91/0.90, 0.87/0.81, 0.86/0.79, and 0.88/0.83, respectively. Furthermore, extensive experiments investigate the effect of different fusion architectures and the design parameters of each encoder. The proposed VMI framework is able to predict pedestrian crossing intention 2.5 seconds before the crossing event.
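To make the branch-and-fusion design concrete, below is a minimal PyTorch-style sketch of a three-encoder model with temporal attention and learned adaptive fusion weights. All module choices, feature dimensions, and the fusion rule here are illustrative assumptions for exposition; they are not the authors' exact implementation.

```python
# Minimal PyTorch sketch of the VMI-style three-branch design.
# All dimensions, layer choices, and the fusion rule are illustrative
# assumptions -- they are NOT taken from the authors' implementation.
import torch
import torch.nn as nn


class TemporalAttention(nn.Module):
    """Weights the per-frame features of a sequence and pools them."""

    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, dim) -> attention weights over the time axis
        w = torch.softmax(self.score(x), dim=1)         # (batch, time, 1)
        return (w * x).sum(dim=1)                       # (batch, dim)


class VMISketch(nn.Module):
    """Three encoders (visual / motion / interaction) + adaptive fusion."""

    def __init__(self, vis_dim=512, mot_dim=36, int_dim=64, hidden=128):
        super().__init__()
        # Each branch is reduced to a single GRU here; the paper's
        # encoders are richer (e.g. CNN backbones for appearance/context).
        self.visual = nn.GRU(vis_dim, hidden, batch_first=True)
        self.motion = nn.GRU(mot_dim, hidden, batch_first=True)
        self.interaction = nn.GRU(int_dim, hidden, batch_first=True)
        self.attn = nn.ModuleList([TemporalAttention(hidden) for _ in range(3)])
        # Adaptive fusion: learn a softmax weight per branch.
        self.gate = nn.Linear(3 * hidden, 3)
        self.classifier = nn.Linear(hidden, 1)  # crossing / not crossing

    def forward(self, vis, mot, inter):
        feats = []
        encoders = (self.visual, self.motion, self.interaction)
        for attn, enc, x in zip(self.attn, encoders, (vis, mot, inter)):
            h, _ = enc(x)               # (batch, time, hidden)
            feats.append(attn(h))       # (batch, hidden)
        stacked = torch.stack(feats, dim=1)              # (batch, 3, hidden)
        w = torch.softmax(self.gate(torch.cat(feats, dim=1)), dim=-1)
        fused = (w.unsqueeze(-1) * stacked).sum(dim=1)   # (batch, hidden)
        return torch.sigmoid(self.classifier(fused))     # crossing probability


# Example: a batch of 2 tracks, 16 observed frames per modality.
model = VMISketch()
p = model(torch.randn(2, 16, 512), torch.randn(2, 16, 36), torch.randn(2, 16, 64))
print(p.shape)  # torch.Size([2, 1])
```

The per-branch softmax gate is one simple way to realize "adaptive" fusion: the network learns, per sample, how much to trust each modality before the final classification.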

Datasets

The proposed framework is trained and tested on the benchmark PIE and JAAD datasets. Precomputed pose features for both datasets are provided in data/features/pie/poses/ and data/features/jaad/poses/.
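As a hedged illustration of how these precomputed features might be consumed, the snippet below reads every pose file for one dataset. The pickle file format, the .pkl extension, and the dictionary layout are assumptions, since the README does not document them; adjust to the actual files in the repository.

```python
# Illustrative loader for the precomputed pose features.
# Assumption: each file under data/features/<dataset>/poses/ is a Python
# pickle holding an array-like of per-frame pose keypoints.
import pickle
from pathlib import Path

import numpy as np


def load_pose_features(dataset: str = "pie") -> dict:
    """Read every pose file for the given dataset into a dict of arrays."""
    pose_dir = Path("data/features") / dataset / "poses"
    features = {}
    for f in sorted(pose_dir.glob("*.pkl")):
        with f.open("rb") as fh:
            features[f.stem] = np.asarray(pickle.load(fh))
    return features


if __name__ == "__main__":
    poses = load_pose_features("pie")
    print(f"loaded {len(poses)} pose files")
```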
