2D Animal Keypoint Dataset

It is recommended to symlink the dataset root to $MMPOSE/data. If your folder structure is different, you may need to change the corresponding paths in config files.

MMPose supported datasets:

Animal-Pose

@InProceedings{Cao_2019_ICCV,
    author = {Cao, Jinkun and Tang, Hongyang and Fang, Hao-Shu and Shen, Xiaoyong and Lu, Cewu and Tai, Yu-Wing},
    title = {Cross-Domain Adaptation for Animal Pose Estimation},
    booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
    month = {October},
    year = {2019}
}

For the Animal-Pose dataset, prepare the data as follows:

  1. Download the PASCAL VOC2011 images, specifically the five animal categories (dog, cat, sheep, cow, horse), which we use as the trainval set.

  2. Download the test-set images with raw annotations (1000 images, 5 categories).

  3. We have pre-processed the annotations to make them compatible with MMPose. Please download the annotation files from annotations. If you would like to generate the annotations yourself, please check our dataset parsing code.

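If you generate the annotations yourself, the conversion in step 3 boils down to reading each XML annotation file and flattening its keypoints into the COCO format. The sketch below is only an illustration: the tag and attribute names (`keypoint`, `name`, `x`, `y`, `visible`) are assumptions about the Animal-Pose XML schema, not the actual schema; refer to the dataset parsing code for the real field names.

```python
import xml.etree.ElementTree as ET

def parse_keypoints(xml_text):
    """Flatten keypoints from one (assumed) XML annotation into the
    COCO-style list [x1, y1, v1, x2, y2, v2, ...]."""
    root = ET.fromstring(xml_text)
    keypoints = []
    for kpt in root.iter("keypoint"):
        keypoints += [
            float(kpt.get("x")),
            float(kpt.get("y")),
            int(kpt.get("visible")),
        ]
    return keypoints

# Hypothetical annotation snippet, for illustration only.
example = """
<annotation>
  <keypoints>
    <keypoint name="L_Eye" x="120.5" y="88.0" visible="1"/>
    <keypoint name="R_Eye" x="150.2" y="86.5" visible="1"/>
  </keypoints>
</annotation>
"""
print(parse_keypoints(example))  # [120.5, 88.0, 1, 150.2, 86.5, 1]
```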
Extract them under {MMPose}/data, and make them look like this:

mmpose
├── mmpose
├── docs
├── tests
├── tools
├── configs
`── data
    │── animalpose
        │
        │-- VOC2011
        │   │-- Annotations
        │   │-- ImageSets
        │   │-- JPEGImages
        │   │-- SegmentationClass
        │   │-- SegmentationObject
        │
        │-- animalpose_image_part2
        │   │-- cat
        │   │-- cow
        │   │-- dog
        │   │-- horse
        │   │-- sheep
        │
        │-- annotations
        │   │-- animalpose_train.json
        │   |-- animalpose_val.json
        │   |-- animalpose_trainval.json
        │   │-- animalpose_test.json
        │
        │-- PASCAL2011_animal_annotation
        │   │-- cat
        │   │   |-- 2007_000528_1.xml
        │   │   |-- 2007_000549_1.xml
        │   │   │-- ...
        │   │-- cow
        │   │-- dog
        │   │-- horse
        │   │-- sheep
        │
        │-- annimalpose_anno2
        │   │-- cat
        │   │   |-- ca1.xml
        │   │   |-- ca2.xml
        │   │   │-- ...
        │   │-- cow
        │   │-- dog
        │   │-- horse
        │   │-- sheep

The official dataset does not provide a train/val/test split. We select the images from PASCAL VOC for train & val. In total, there are 3608 images and 5117 annotations for train+val, of which 2798 images with 4000 annotations are used for training, and 810 images with 1117 annotations are used for validation. The images from other sources (1000 images with 1000 annotations) are used for testing.
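The counts above can be sanity-checked against the downloaded annotation files. The snippet below reads a COCO-style JSON and reports its image and annotation counts; `images` and `annotations` are standard COCO fields, and the expected numbers are the ones stated above.

```python
import json

def count_coco(path):
    """Return (num_images, num_annotations) of a COCO-style JSON file."""
    with open(path) as f:
        data = json.load(f)
    return len(data["images"]), len(data["annotations"])

# e.g. count_coco("data/animalpose/annotations/animalpose_train.json")
# should give (2798, 4000) if the files match the split described above.
```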

Horse-10

@inproceedings{mathis2021pretraining,
  title={Pretraining boosts out-of-domain robustness for pose estimation},
  author={Mathis, Alexander and Biasi, Thomas and Schneider, Steffen and Yuksekgonul, Mert and Rogers, Byron and Bethge, Matthias and Mathis, Mackenzie W},
  booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision},
  pages={1859--1868},
  year={2021}
}

For the Horse-10 dataset, images can be downloaded from download. Please download the annotation files from horse10_annotations. Extract them under {MMPose}/data, and make them look like this:

mmpose
├── mmpose
├── docs
├── tests
├── tools
├── configs
`── data
    │── horse10
        │-- annotations
        │   │-- horse10-train-split1.json
        │   |-- horse10-train-split2.json
        │   |-- horse10-train-split3.json
        │   │-- horse10-test-split1.json
        │   |-- horse10-test-split2.json
        │   |-- horse10-test-split3.json
        │-- labeled-data
        │   │-- BrownHorseinShadow
        │   │-- BrownHorseintoshadow
        │   │-- ...

MacaquePose

@article{labuguen2020macaquepose,
  title={MacaquePose: A novel ‘in the wild’ macaque monkey pose dataset for markerless motion capture},
  author={Labuguen, Rollyn and Matsumoto, Jumpei and Negrete, Salvador and Nishimaru, Hiroshi and Nishijo, Hisao and Takada, Masahiko and Go, Yasuhiro and Inoue, Ken-ichi and Shibata, Tomohiro},
  journal={bioRxiv},
  year={2020},
  publisher={Cold Spring Harbor Laboratory}
}

For the MacaquePose dataset, images can be downloaded from download. Please download the annotation files from macaque_annotations. Extract them under {MMPose}/data, and make them look like this:

mmpose
├── mmpose
├── docs
├── tests
├── tools
├── configs
`── data
    │── macaque
        │-- annotations
        │   │-- macaque_train.json
        │   |-- macaque_test.json
        │-- images
        │   │-- 01418849d54b3005.jpg
        │   │-- 0142d1d1a6904a70.jpg
        │   │-- 01ef2c4c260321b7.jpg
        │   │-- 020a1c75c8c85238.jpg
        │   │-- 020b1506eef2557d.jpg
        │   │-- ...

Since the official dataset does not provide a test set, we randomly select 12500 images for training and the rest for evaluation (see code).
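A reproducible split of the kind described can be sketched as follows. This is our own illustration with an arbitrary seed, not the exact selection used to produce the released files:

```python
import random

def split_images(image_ids, num_train, seed=0):
    """Randomly pick num_train ids for training; the rest go to evaluation.
    Sorting first gives a fixed order, so the split is reproducible."""
    rng = random.Random(seed)
    ids = sorted(image_ids)
    rng.shuffle(ids)
    return ids[:num_train], ids[num_train:]

# Generic example with 100 hypothetical image ids:
train, test = split_images(range(100), num_train=80)
print(len(train), len(test))  # 80 20
```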

Vinegar Fly

@article{pereira2019fast,
  title={Fast animal pose estimation using deep neural networks},
  author={Pereira, Talmo D and Aldarondo, Diego E and Willmore, Lindsay and Kislin, Mikhail and Wang, Samuel S-H and Murthy, Mala and Shaevitz, Joshua W},
  journal={Nature methods},
  volume={16},
  number={1},
  pages={117--125},
  year={2019},
  publisher={Nature Publishing Group}
}

For the Vinegar Fly dataset, images can be downloaded from vinegar_fly_images. Please download the annotation files from vinegar_fly_annotations. Extract them under {MMPose}/data, and make them look like this:

mmpose
├── mmpose
├── docs
├── tests
├── tools
├── configs
`── data
    │── fly
        │-- annotations
        │   │-- fly_train.json
        │   |-- fly_test.json
        │-- images
        │   │-- 0.jpg
        │   │-- 1.jpg
        │   │-- 2.jpg
        │   │-- 3.jpg
        │   │-- ...

Since the official dataset does not provide a test set, we randomly select 90% of the images for training and the rest (10%) for evaluation (see code).
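A 90/10 split by image, as used here and for the following DeepPoseKit datasets, can be sketched on a COCO-style dict as below. This is a minimal illustration, not the linked selection script; annotations follow their image into the same side of the split.

```python
import random

def split_coco(coco, train_ratio=0.9, seed=0):
    """Split a COCO-style dict into (train, test) dicts by image."""
    rng = random.Random(seed)
    images = sorted(coco["images"], key=lambda im: im["id"])
    rng.shuffle(images)
    n_train = int(len(images) * train_ratio)
    parts = {}
    for name, imgs in (("train", images[:n_train]), ("test", images[n_train:])):
        keep = {im["id"] for im in imgs}
        parts[name] = {
            "images": imgs,
            # Keep only the annotations whose image landed in this part.
            "annotations": [a for a in coco["annotations"]
                            if a["image_id"] in keep],
            "categories": coco.get("categories", []),
        }
    return parts["train"], parts["test"]
```

The two returned dicts can then be written out with `json.dump` as, e.g., `fly_train.json` and `fly_test.json`.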

Desert Locust

@article{graving2019deepposekit,
  title={DeepPoseKit, a software toolkit for fast and robust animal pose estimation using deep learning},
  author={Graving, Jacob M and Chae, Daniel and Naik, Hemal and Li, Liang and Koger, Benjamin and Costelloe, Blair R and Couzin, Iain D},
  journal={Elife},
  volume={8},
  pages={e47994},
  year={2019},
  publisher={eLife Sciences Publications Limited}
}

For the Desert Locust dataset, images can be downloaded from locust_images. Please download the annotation files from locust_annotations. Extract them under {MMPose}/data, and make them look like this:

mmpose
├── mmpose
├── docs
├── tests
├── tools
├── configs
`── data
    │── locust
        │-- annotations
        │   │-- locust_train.json
        │   |-- locust_test.json
        │-- images
        │   │-- 0.jpg
        │   │-- 1.jpg
        │   │-- 2.jpg
        │   │-- 3.jpg
        │   │-- ...

Since the official dataset does not provide a test set, we randomly select 90% of the images for training and the rest (10%) for evaluation (see code).

Grévy’s Zebra

@article{graving2019deepposekit,
  title={DeepPoseKit, a software toolkit for fast and robust animal pose estimation using deep learning},
  author={Graving, Jacob M and Chae, Daniel and Naik, Hemal and Li, Liang and Koger, Benjamin and Costelloe, Blair R and Couzin, Iain D},
  journal={Elife},
  volume={8},
  pages={e47994},
  year={2019},
  publisher={eLife Sciences Publications Limited}
}

For the Grévy’s Zebra dataset, images can be downloaded from zebra_images. Please download the annotation files from zebra_annotations. Extract them under {MMPose}/data, and make them look like this:

mmpose
├── mmpose
├── docs
├── tests
├── tools
├── configs
`── data
    │── zebra
        │-- annotations
        │   │-- zebra_train.json
        │   |-- zebra_test.json
        │-- images
        │   │-- 0.jpg
        │   │-- 1.jpg
        │   │-- 2.jpg
        │   │-- 3.jpg
        │   │-- ...

Since the official dataset does not provide a test set, we randomly select 90% of the images for training and the rest (10%) for evaluation (see code).

ATRW

@inproceedings{li2020atrw,
  title={ATRW: A Benchmark for Amur Tiger Re-identification in the Wild},
  author={Li, Shuyuan and Li, Jianguo and Tang, Hanlin and Qian, Rui and Lin, Weiyao},
  booktitle={Proceedings of the 28th ACM International Conference on Multimedia},
  pages={2590--2598},
  year={2020}
}

ATRW captures images of the Amur tiger (also known as the Siberian tiger or Northeast-China tiger) in the wild. For the ATRW dataset, please download the images from Pose_train, Pose_val, and Pose_test. Note that in the official ATRW annotation files, the key “file_name” is written as “filename”. To make the files compatible with other COCO-style JSON files, we have renamed this key. Please download the modified annotation files from atrw_annotations. Extract them under {MMPose}/data, and make them look like this:

mmpose
├── mmpose
├── docs
├── tests
├── tools
├── configs
`── data
    │── atrw
        │-- annotations
        │   │-- keypoint_train.json
        │   │-- keypoint_val.json
        │   │-- keypoint_trainval.json
        │-- images
        │   │-- train
        │   │   │-- 000002.jpg
        │   │   │-- 000003.jpg
        │   │   │-- ...
        │   │-- val
        │   │   │-- 000001.jpg
        │   │   │-- 000013.jpg
        │   │   │-- ...
        │   │-- test
        │   │   │-- 000000.jpg
        │   │   │-- 000004.jpg
        │   │   │-- ...
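If you start from the official ATRW files instead of the modified ones, the key renaming described above (“filename” → “file_name”) can be reproduced with a few lines. This is a sketch; the released atrw_annotations already contain the fix.

```python
import json

def fix_atrw_keys(in_path, out_path):
    """Rename 'filename' to 'file_name' in each image record so the file
    matches the standard COCO image format."""
    with open(in_path) as f:
        data = json.load(f)
    for img in data["images"]:
        if "filename" in img:
            img["file_name"] = img.pop("filename")
    with open(out_path, "w") as f:
        json.dump(data, f)
```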