Useful Tools

Apart from training/testing scripts, we provide many useful tools under the tools/ directory.

Log Analysis

tools/analysis/analyze_logs.py plots loss and pose accuracy curves given a training log file. Run pip install seaborn first to install the dependency.

[Figure: example accuracy curve plot]

python tools/analysis/analyze_logs.py plot_curve ${JSON_LOGS} [--keys ${KEYS}] [--title ${TITLE}] [--legend ${LEGEND}] [--backend ${BACKEND}] [--style ${STYLE}] [--out ${OUT_FILE}]

Examples:

  • Plot the MSE loss of some run.

    python tools/analysis/analyze_logs.py plot_curve log.json --keys loss --legend loss
    
  • Plot the accuracy of some run, and save the figure to a PDF file.

    python tools/analysis/analyze_logs.py plot_curve log.json --keys acc_pose --out results.pdf
    
  • Compare the accuracy of two runs in the same figure.

    python tools/analysis/analyze_logs.py plot_curve log1.json log2.json --keys acc_pose --legend run1 run2
    

You can also compute the average training speed.

python tools/analysis/analyze_logs.py cal_train_time ${JSON_LOGS} [--include-outliers]

  • Compute the average training speed for a config file.

    python tools/analysis/analyze_logs.py cal_train_time log.json
    

    The output is expected to be like the following.

    -----Analyze train time of log.json-----
    slowest epoch 114, average time is 0.9662
    fastest epoch 16, average time is 0.7532
    time std over epochs is 0.0426
    average iter time: 0.8406 s/iter
    

Model Complexity (Experimental)

tools/analysis/get_flops.py is a script adapted from flops-counter.pytorch to compute the FLOPs and parameters of a given model.

python tools/analysis/get_flops.py ${CONFIG_FILE} [--shape ${INPUT_SHAPE}]
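
For example, to compute the complexity of a top-down HRNet model with the default input shape (the config path below is illustrative):

python tools/analysis/get_flops.py \
  configs/body/2d_kpt_sview_rgb_img/topdown_heatmap/coco/hrnet_w32_coco_256x192.py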

We will get a result like this:

==============================
Input shape: (1, 3, 256, 192)
Flops: 8.9 GMac
Params: 28.04 M
==============================

Note

This tool is still experimental and we do not guarantee that the number is absolutely correct.

You may use the result for simple comparisons, but double check it before you adopt it in technical reports or papers.

(1) FLOPs are related to the input shape while parameters are not. The default input shape is (1, 3, 256, 192).

(2) Some operators are not counted in FLOPs, such as GN and custom operators. Refer to mmcv.cnn.get_model_complexity_info() for details.

Model Conversion

MMPose model to ONNX (experimental)

tools/deployment/pytorch2onnx.py is a script to convert a model to ONNX format. It also supports comparing the output results between the PyTorch and ONNX models for verification. Run pip install onnx onnxruntime first to install the dependencies.

python tools/deployment/pytorch2onnx.py ${CONFIG_PATH} ${CHECKPOINT_PATH} --shape ${SHAPE} --verify
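
For example, to convert an HRNet checkpoint and verify that the PyTorch and ONNX outputs match (the config and checkpoint paths below are illustrative):

python tools/deployment/pytorch2onnx.py \
  configs/body/2d_kpt_sview_rgb_img/topdown_heatmap/coco/hrnet_w32_coco_256x192.py \
  work_dirs/hrnet_w32_coco_256x192/latest.pth \
  --verify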

Prepare a model for publishing

tools/publish_model.py helps users prepare their models for publishing.

Before you upload a model to AWS, you may want to:

(1) convert the model weights to CPU tensors,
(2) delete the optimizer states, and
(3) compute the hash of the checkpoint file and append the hash id to the filename.

python tools/publish_model.py ${INPUT_FILENAME} ${OUTPUT_FILENAME}

E.g.,

python tools/publish_model.py work_dirs/hrnet_w32_coco_256x192/latest.pth hrnet_w32_coco_256x192

The final output filename will be hrnet_w32_coco_256x192-{hash id}_{time_stamp}.pth.

Model Serving

MMPose supports model serving with TorchServe. You can serve an MMPose model via the following steps:

1. Install TorchServe

Please follow the official installation guide of TorchServe: https://github.com/pytorch/serve#install-torchserve-and-torch-model-archiver

2. Convert model from MMPose to TorchServe

python tools/deployment/mmpose2torchserve.py \
  ${CONFIG_FILE} ${CHECKPOINT_FILE} \
  --output-folder ${MODEL_STORE} \
  --model-name ${MODEL_NAME}

Note: ${MODEL_STORE} needs to be an absolute path to a folder.

A model file ${MODEL_NAME}.mar will be generated and placed in the ${MODEL_STORE} folder.
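
For example (the config, checkpoint, and model name below are illustrative):

python tools/deployment/mmpose2torchserve.py \
  configs/body/2d_kpt_sview_rgb_img/topdown_heatmap/coco/hrnet_w32_coco_256x192.py \
  checkpoints/hrnet_w32_coco_256x192.pth \
  --output-folder /home/user/model-store \
  --model-name hrnet

This should produce /home/user/model-store/hrnet.mar.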

3. Deploy model serving

We introduce the following two approaches to deploy the model serving.

Use TorchServe API

torchserve --start \
  --model-store ${MODEL_STORE} \
  --models ${MODEL_PATH1} [${MODEL_NAME}=${MODEL_PATH2} ... ]

Example:

# serve one model
torchserve --start --model-store /models --models hrnet=hrnet.mar

# serve all models in model-store
torchserve --start --model-store /models --models all

After executing the torchserve command above, TorchServe runs on your host, listening for inference requests. Check the official docs for more information.
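
To stop the server later, run:

torchserve --stop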

Use mmpose-serve docker image

Build mmpose-serve docker image:

docker build -t mmpose-serve:latest docker/serve/

Run mmpose-serve:

Check the official docs for running TorchServe with docker.

To run on GPU, you need to install nvidia-docker. To run on CPU, you can omit the --gpus argument.

Example:

docker run --rm \
--cpus 8 \
--gpus device=0 \
-p8080:8080 -p8081:8081 -p8082:8082 \
--mount type=bind,source=$MODEL_STORE,target=/home/model-server/model-store \
mmpose-serve:latest

Read the docs about the Inference (8080), Management (8081) and Metrics (8082) APIs.
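
As a quick smoke test, you can list the registered models via the management API and send an image to the inference API with curl (the model name hrnet follows the examples above):

curl http://127.0.0.1:8081/models
curl http://127.0.0.1:8080/predictions/hrnet -T tests/data/coco/000000000785.jpg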

4. Test deployment

You can use tools/deployment/test_torchserver.py to test the model serving. It will compare and visualize the results of the TorchServe and PyTorch models.

python tools/deployment/test_torchserver.py ${IMAGE_PATH} ${CONFIG_PATH} ${CHECKPOINT_PATH} ${MODEL_NAME} --out-dir ${OUT_DIR}

Example:

python tools/deployment/test_torchserver.py \
  tests/data/coco/000000000785.jpg \
  configs/body/2d_kpt_sview_rgb_img/topdown_heatmap/coco/hrnet_w48_coco_256x192.py \
  https://download.openmmlab.com/mmpose/top_down/hrnet/hrnet_w48_coco_256x192-b9e0b3ab_20200708.pth \
  hrnet \
  --out-dir vis_results
