
ST-GREED: Space-Time Generalized Entropic Differences for Frame Rate Dependent Video Quality

Pavan C. Madhusudana, Neil Birkbeck, Yilin Wang, Balu Adsumilli and Alan C. Bovik

This is the repository for the paper ST-GREED: Space-Time Generalized Entropic Differences for Frame Rate Dependent Video Quality.

Usage

The code has been tested with Python 3.6. Please refer to requirements.txt for details. Additionally, FFmpeg must be installed.

Demo data download

For Linux and Mac users, the following commands can be used to download the sample videos for the demo code. Videos at two resolutions, 1080p and 4K, will be downloaded.

mkdir data
bash demo_data_download_1080p.sh
bash demo_data_download_4K.sh

Windows users can run the following commands

mkdir data
demo_data_download_1080p.bat
demo_data_download_4K.bat

Alternatively, the videos can be manually downloaded from HERE

Running GREED

We provide two ways to run GREED. The first method calculates the GREED score using a pretrained model. The model is trained on the entire LIVE-YT-HFR database. We provide models trained on three temporal filters: Haar, Daubechies-2 and Biorthogonal-2.2. The choice of temporal filter can be passed as an argument. The following commands compute the GREED score for 1080p and 4K resolution videos respectively.

python3 demo_score.py --ref_path data/books_crf_0_120fps.yuv --dist_path data/books_crf_28_30fps.yuv --ref_fps 120 --dist_fps 30 --height 1080 --width 1920 --bit_depth 8 --temp_filt bior22
python3 demo_score.py --ref_path data/Flips_crf_0_120fps.yuv --dist_path data/Flips_crf_48_30fps.yuv --ref_fps 120 --dist_fps 30 --height 2160 --width 3840 --bit_depth 10 --temp_filt bior22
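To score many reference/distorted pairs, the command line above can be driven programmatically. The sketch below only assembles the argv list for demo_score.py; the helper name and the batch idea are hypothetical, but the flags mirror the commands shown above.

```python
def build_score_cmd(ref_path, dist_path, ref_fps, dist_fps,
                    height, width, bit_depth, temp_filt="bior22"):
    """Return the argv list for one demo_score.py invocation."""
    return [
        "python3", "demo_score.py",
        "--ref_path", ref_path,
        "--dist_path", dist_path,
        "--ref_fps", str(ref_fps),
        "--dist_fps", str(dist_fps),
        "--height", str(height),
        "--width", str(width),
        "--bit_depth", str(bit_depth),
        "--temp_filt", temp_filt,
    ]

if __name__ == "__main__":
    cmd = build_score_cmd("data/books_crf_0_120fps.yuv",
                          "data/books_crf_28_30fps.yuv",
                          120, 30, 1080, 1920, 8)
    print(" ".join(cmd))
    # To actually execute it: subprocess.run(cmd, check=True)
```

The same helper works for demo_feat.py by swapping the script name.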

The second method computes only the GREED features. The following commands can be used to compute GREED features for 1080p and 4K resolution videos respectively.

python3 demo_feat.py --ref_path data/books_crf_0_120fps.yuv --dist_path data/books_crf_28_30fps.yuv --ref_fps 120 --dist_fps 30 --height 1080 --width 1920 --bit_depth 8 --temp_filt bior22
python3 demo_feat.py --ref_path data/Flips_crf_0_120fps.yuv --dist_path data/Flips_crf_48_30fps.yuv --ref_fps 120 --dist_fps 30 --height 2160 --width 3840 --bit_depth 10 --temp_filt bior22
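The demos read raw YUV files, so it is easy to pass inconsistent --height, --width or --bit_depth values. A quick sanity check (a sketch, assuming planar YUV 4:2:0 and 10-bit samples stored as 2 bytes each, the usual raw-video convention) is to verify that the file size is a whole number of frames:

```python
import os

def yuv420_frame_bytes(width, height, bit_depth):
    """Bytes per frame for planar YUV 4:2:0: one full-size luma plane plus
    two quarter-size chroma planes, i.e. 1.5 samples per pixel."""
    bytes_per_sample = 1 if bit_depth == 8 else 2  # >8-bit stored as 16-bit
    return (width * height * 3 // 2) * bytes_per_sample

def num_frames(path, width, height, bit_depth):
    """Frame count implied by the file size; raises if the geometry is wrong."""
    size = os.path.getsize(path)
    frame = yuv420_frame_bytes(width, height, bit_depth)
    if size % frame:
        raise ValueError("file size is not a whole number of frames; "
                         "check --width/--height/--bit_depth")
    return size // frame

# 1080p 8-bit: 1920 * 1080 * 1.5 = 3,110,400 bytes per frame
print(yuv420_frame_bytes(1920, 1080, 8))   # -> 3110400
```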

Contact

Please contact Pavan ([email protected]) if you have any questions, suggestions, or corrections regarding the above implementation.

Citation

@article{madhusudana2021st,
  title={{ST-GREED}: Space-time generalized entropic differences for frame rate dependent video quality prediction},
  author={Madhusudana, Pavan C and Birkbeck, Neil and Wang, Yilin and Adsumilli, Balu and Bovik, Alan C},
  journal={IEEE Trans. Image Process.},
  year={2021},
  publisher={IEEE}
}


Issues

Can I download other dataset? (Not Youtube-HFR)

Hi, the question is not about your code, sorry.

I'm reading your paper and your work is really great!

I'm trying to study with some datasets, but recently the older datasets like LIVE-VQA, LIVE-Mobile, and CSIQ-VQA cannot be downloaded.

If there is still a working download link somewhere, could you share it? Thanks.

Can't reproduce cross-dataset test results for LIVE VQA and CSIQ

Hello, Pavan.

I'm trying to reproduce the cross-dataset test results for the LIVE-VQA and CSIQ datasets from the article "ST-GREED: Space-Time Generalized Entropic Differences for Frame Rate Dependent Video Quality". To do this I used the code from this repo and the model pretrained on the LIVE-YT-HFR database from the repository ("model_params/bior22.model" and "model_params/bior22_params.mat").
I predicted scores and calculated SROCC for the LIVE-VQA and CSIQ datasets. Results are shown in Table 1:

Table 1. SROCC for LIVE VQA and CSIQ datasets.

                     LIVE VQA   CSIQ
ST-GREED (ours)      0.005      0.207
ST-GREED (article)   0.697      0.616

Despite using the same code and model for the cross-dataset test, our results differ noticeably from the article.
Could you help us figure out what we are doing wrong? Code, CSV files, and instructions to reproduce our results are in my repository.
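As an aside, the SROCC figures discussed here are simple to compute. Below is a minimal pure-Python sketch, assuming no tied scores (in which case Spearman's closed form 1 - 6*sum(d^2)/(n(n^2-1)) applies); in practice scipy.stats.spearmanr also handles ties.

```python
def srocc(x, y):
    """Spearman rank-order correlation, assuming no ties in x or y."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank + 1
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Perfectly monotone predictions give SROCC = 1.0
print(srocc([1.2, 3.4, 2.2, 5.0], [10, 30, 20, 50]))  # -> 1.0
```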

Different SROCC results for individual frame rates on the LIVE-YT-HFR dataset

Hello, Pavan.

I tried to reproduce the results of the GREED-bior2.2 model for individual frame rates on the LIVE-YT-HFR dataset (Table II from the article "ST-GREED: Space-Time Generalized Entropic Differences for Frame Rate Dependent Video Quality"). To do this I used the code from this repo and the model pretrained on the LIVE-YT-HFR database from the repository ("model_params/bior22.model" and "model_params/bior22_params.mat").
I predicted scores and calculated SROCC for each frame rate group; results are shown in Table 1.

Table 1. SROCC for individual frame rates on the LIVE-YT-HFR dataset

                         24      30      60      82      98      120     Overall
GREED bior2.2 (article)  0.7268  0.7018  0.7321  0.8179  0.8643  0.8881  0.8822
GREED bior2.2 (ours)     0.7516  0.7623  0.8244  0.8826  0.8924  0.9245  0.9172

According to Table 1, the model from the repository shows better results than those reported in the article. I guess this happened because I used a model trained on the full dataset. Could you please clarify which model you used (trained on the full dataset or only a part of it) to predict scores for the individual frame rate experiment?
