API Documentation

mmpose.apis

mmpose.apis.extract_pose_sequence(pose_results, frame_idx, causal, seq_len, step=1)[source]

Extract the target frame from 2D pose results, and pad the sequence to a fixed length.

Parameters
  • pose_results (List[List[Dict]]) –

    Multi-frame pose detection results stored in a nested list. Each element of the outer list is the pose detection results of a single frame, and each element of the inner list is the pose information of one person, which contains:

    • keypoints (ndarray[K, 2 or 3]): x, y, [score]

    • track_id (int): unique id of each person, required when with_track_id==True

    • bbox ((4, ) or (5, )): left, top, right, bottom, [score]

  • frame_idx (int) – The index of the frame in the original video.

  • causal (bool) – If True, the target frame is the last frame in a sequence. Otherwise, the target frame is in the middle of a sequence.

  • seq_len (int) – The number of frames in the input sequence.

  • step (int) – Step size to extract frames from the video.

Returns

Multi-frame pose detection results stored in a nested list with a length of seq_len.

int: The target frame index in the padded sequence.

Return type

List[List[Dict]]
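
A minimal usage sketch (the pose dicts below are placeholder stand-ins for real detection results):

>>> from mmpose.apis import extract_pose_sequence
>>> # five frames, one (dummy) person per frame
>>> pose_results = [[{'keypoints': None, 'track_id': 0}] for _ in range(5)]
>>> # causal mode: the target frame is the last frame of the returned sequence
>>> seq = extract_pose_sequence(
...     pose_results, frame_idx=4, causal=True, seq_len=3, step=1)
>>> # seq is a nested list padded/sliced to a length of seq_len (3 here)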

mmpose.apis.get_track_id(results, results_last, next_id, min_keypoints=3, use_oks=False, tracking_thr=0.3, use_one_euro=False, fps=None)[source]

Get track id for each person instance on the current frame.

Parameters
  • results (list[dict]) – The bbox & pose results of the current frame (bbox_result, pose_result).

  • results_last (list[dict]) – The bbox & pose & track_id info of the last frame (bbox_result, pose_result, track_id).

  • next_id (int) – The track id for the new person instance.

  • min_keypoints (int) – Minimum number of keypoints for an instance to be recognized as a person. Default: 3.

  • use_oks (bool) – Flag to use OKS tracking. Default: False.

  • tracking_thr (float) – The threshold for tracking.

  • use_one_euro (bool) – Option to use the one-euro filter. Default: False.

  • fps (optional) – Frame rate of the video, used to set the d_cutoff parameter when the one-euro filter is applied to video input.

Returns

The bbox & pose & track_id info of the current frame (bbox_result, pose_result, track_id).

int: The track id for the new person instance.

Return type

list[dict]
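
A sketch of the typical per-frame tracking loop (video_pose_results is a hypothetical list of per-frame pose results):

>>> from mmpose.apis import get_track_id
>>> results_last, next_id = [], 0
>>> for pose_results in video_pose_results:
...     # assign a 'track_id' to each instance, matching against the last frame
...     pose_results, next_id = get_track_id(
...         pose_results, results_last, next_id,
...         use_oks=False, tracking_thr=0.3)
...     results_last = pose_results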

mmpose.apis.inference_bottom_up_pose_model(model, img_or_path, pose_nms_thr=0.9, return_heatmap=False, outputs=None)[source]

Run inference on a single image with a bottom-up pose model.

num_people: P, num_keypoints: K, bbox height: H, bbox width: W

Parameters
  • model (nn.Module) – The loaded pose model.

  • img_or_path (str | np.ndarray) – Image filename or loaded image.

  • pose_nms_thr (float) – Retain OKS overlap < pose_nms_thr. Default: 0.9.

  • return_heatmap (bool) – Flag to return heatmap. Default: False.

  • outputs (list(str) | tuple(str)) – Names of layers whose outputs need to be returned. Default: None.

Returns

The predicted pose info. The length of the list is the number of people (P). Each item in the list is a ndarray, containing each person’s pose (ndarray[Kx3]): x, y, score.

list[dict[np.ndarray[N, K, H, W] | torch.Tensor[N, K, H, W]]]: Output feature maps from layers specified in outputs. Includes ‘heatmap’ if return_heatmap is True.

Return type

list[ndarray]
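
A minimal usage sketch (the config, checkpoint and image paths are placeholders):

>>> from mmpose.apis import init_pose_model, inference_bottom_up_pose_model
>>> pose_model = init_pose_model('bottom_up_config.py', 'checkpoint.pth')
>>> pose_results, returned_outputs = inference_bottom_up_pose_model(
...     pose_model, 'demo.jpg')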

mmpose.apis.inference_interhand_3d_model(model, img_or_path, det_results, bbox_thr=None, format='xywh', dataset='InterHand3DDataset')[source]

Inference a single image with a list of hand bounding boxes.

num_bboxes: N, num_keypoints: K

Parameters
  • model (nn.Module) – The loaded pose model.

  • img_or_path (str | np.ndarray) – Image filename or loaded image.

  • det_results (List[dict]) –

    The 2D bbox sequences stored in a list. Each element of the list is the bbox of one person, which contains:

    • ”bbox” (ndarray[4 or 5]): The person bounding box, which contains 4 box coordinates (and score).

  • dataset (str) – Dataset name.

  • format – bbox format (‘xyxy’ | ‘xywh’). Default: ‘xywh’. ‘xyxy’ means (left, top, right, bottom), ‘xywh’ means (left, top, width, height).

Returns

3D pose inference results. Each element is the result of an instance, which contains:

  • ”keypoints_3d” (ndarray[K, 3]): predicted 3D keypoints

If there is no valid instance, an empty list will be returned.

Return type

List[dict]
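
A minimal usage sketch (model is assumed to be an InterHand3D model loaded with init_pose_model; the path and bbox are placeholders):

>>> from mmpose.apis import inference_interhand_3d_model
>>> det_results = [{'bbox': [100, 100, 200, 200]}]  # (left, top, width, height)
>>> pose_results = inference_interhand_3d_model(
...     model, 'hand.jpg', det_results, format='xywh')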

mmpose.apis.inference_pose_lifter_model(model, pose_results_2d, dataset, with_track_id=True, image_size=None, norm_pose_2d=False)[source]

Inference 3D pose from 2D pose sequences using a pose lifter model.

Parameters
  • model (nn.Module) – The loaded pose lifter model

  • pose_results_2d (List[List[dict]]) –

    The 2D pose sequences stored in a nested list. Each element of the outer list is the 2D pose results of a single frame, and each element of the inner list is the 2D pose of one person, which contains:

    • ”keypoints” (ndarray[K, 2 or 3]): x, y, [score]

    • ”track_id” (int)

  • dataset (str) – Dataset name, e.g. ‘Body3DH36MDataset’

  • with_track_id – If True, the element in pose_results_2d is expected to contain “track_id”, which will be used to gather the pose sequence of a person from multiple frames. Otherwise, the pose results in each frame are expected to have a consistent number and order of identities. Default is True.

  • image_size (Tuple|List) – image width, image height. If None, image size will not be contained in dict data.

  • norm_pose_2d (bool) – If True, scale the bbox (along with the 2D pose) to the average bbox scale of the dataset, and move the bbox (along with the 2D pose) to the average bbox center of the dataset.

Returns

3D pose inference results. Each element is the result of an instance, which contains:

  • ”keypoints_3d” (ndarray[K, 3]): predicted 3D keypoints

  • ”keypoints” (ndarray[K, 2 or 3]): from the last frame in pose_results_2d.

  • ”track_id” (int): from the last frame in pose_results_2d.

If there is no valid instance, an empty list will be returned.

Return type

List[dict]
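
A sketch of lifting a tracked 2D sequence to 3D, combined with extract_pose_sequence above (pose_results_list, the frame index i and lift_model are assumed to come from a loop over video frames):

>>> from mmpose.apis import extract_pose_sequence, inference_pose_lifter_model
>>> pose_results_2d = extract_pose_sequence(
...     pose_results_list, frame_idx=i, causal=False, seq_len=27, step=1)
>>> pose_lift_results = inference_pose_lifter_model(
...     lift_model, pose_results_2d, dataset='Body3DH36MDataset',
...     with_track_id=True)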

mmpose.apis.inference_top_down_pose_model(model, img_or_path, person_results, bbox_thr=None, format='xywh', dataset='TopDownCocoDataset', return_heatmap=False, outputs=None)[source]

Inference a single image with a list of person bounding boxes.

num_people: P, num_keypoints: K, bbox height: H, bbox width: W

Parameters
  • model (nn.Module) – The loaded pose model.

  • img_or_path (str | np.ndarray) – Image filename or loaded image.

  • person_results (List(dict)) – Each item in the list may contain ‘bbox’ and/or ‘track_id’. ‘bbox’ (4, ) or (5, ): The person bounding box, which contains 4 box coordinates (and score). ‘track_id’ (int): The unique id for each human instance.

  • bbox_thr – Threshold for bounding boxes. Only bboxes with higher scores will be fed into the pose detector. If bbox_thr is None, ignore it.

  • format – bbox format (‘xyxy’ | ‘xywh’). Default: ‘xywh’. ‘xyxy’ means (left, top, right, bottom), ‘xywh’ means (left, top, width, height).

  • dataset (str) – Dataset name, e.g. ‘TopDownCocoDataset’.

  • return_heatmap (bool) – Flag to return heatmap. Default: False.

  • outputs (list(str) | tuple(str)) – Names of layers whose outputs need to be returned. Default: None.

Returns

The bbox & pose info. Each item in the list is a dictionary, containing the bbox: (left, top, right, bottom, [score]) and the pose (ndarray[Kx3]): x, y, score.

list[dict[np.ndarray[N, K, H, W] | torch.Tensor[N, K, H, W]]]: Output feature maps from layers specified in outputs. Includes ‘heatmap’ if return_heatmap is True.

Return type

list[dict]
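
A minimal usage sketch (paths and the bbox are placeholders):

>>> from mmpose.apis import init_pose_model, inference_top_down_pose_model
>>> pose_model = init_pose_model('top_down_config.py', 'checkpoint.pth')
>>> person_results = [{'bbox': [50, 50, 150, 300]}]  # (left, top, width, height)
>>> pose_results, returned_outputs = inference_top_down_pose_model(
...     pose_model, 'demo.jpg', person_results, format='xywh',
...     dataset='TopDownCocoDataset')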

mmpose.apis.init_pose_model(config, checkpoint=None, device='cuda:0')[source]

Initialize a pose model from config file.

Parameters
  • config (str or mmcv.Config) – Config file path or the config object.

  • checkpoint (str, optional) – Checkpoint path. If left as None, the model will not load any weights.

Returns

The constructed pose model.

Return type

nn.Module
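
For example (paths are placeholders):

>>> from mmpose.apis import init_pose_model
>>> model = init_pose_model('configs/my_config.py', 'my_checkpoint.pth',
...                         device='cuda:0')
>>> # pass device='cpu' to run without a GPU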

mmpose.apis.multi_gpu_test(model, data_loader, tmpdir=None, gpu_collect=False)[source]

Test model with multiple gpus.

This method tests a model with multiple gpus and collects the results under two different modes: gpu and cpu. By setting ‘gpu_collect=True’, it encodes results to gpu tensors and uses gpu communication for result collection. In cpu mode, it saves the results on different gpus to ‘tmpdir’ and collects them by the rank 0 worker.

Parameters
  • model (nn.Module) – Model to be tested.

  • data_loader (nn.Dataloader) – Pytorch data loader.

  • tmpdir (str) – Path of directory to save the temporary results from different gpus under cpu mode.

  • gpu_collect (bool) – Option to use either gpu or cpu to collect results.

Returns

The prediction results.

Return type

list

mmpose.apis.single_gpu_test(model, data_loader)[source]

Test model with a single gpu.

This method tests model with a single gpu and displays test progress bar.

Parameters
  • model (nn.Module) – Model to be tested.

  • data_loader (nn.Dataloader) – Pytorch data loader.

Returns

The prediction results.

Return type

list
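
A testing sketch following the usage in MMPose’s test tools (model and data_loader construction omitted):

>>> from mmcv.parallel import MMDataParallel
>>> from mmpose.apis import single_gpu_test
>>> model = MMDataParallel(model, device_ids=[0])
>>> results = single_gpu_test(model, data_loader)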

mmpose.apis.train_model(model, dataset, cfg, distributed=False, validate=False, timestamp=None, meta=None)[source]

Train model entry function.

Parameters
  • model (nn.Module) – The model to be trained.

  • dataset (Dataset) – Train dataset.

  • cfg (dict) – The config dict for training.

  • distributed (bool) – Whether to use distributed training. Default: False.

  • validate (bool) – Whether to do evaluation. Default: False.

  • timestamp (str | None) – Local time for runner. Default: None.

  • meta (dict | None) – Meta dict to record some important information. Default: None.
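
A minimal training sketch, assuming cfg is a loaded mmcv.Config:

>>> from mmpose.datasets import build_dataset
>>> from mmpose.models import build_posenet
>>> from mmpose.apis import train_model
>>> model = build_posenet(cfg.model)
>>> datasets = [build_dataset(cfg.data.train)]
>>> train_model(model, datasets, cfg, distributed=False, validate=True)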

mmpose.apis.vis_3d_pose_result(model, result, img=None, dataset='Body3DH36MDataset', kpt_score_thr=0.3, radius=8, thickness=2, num_instances=-1, show=False, out_file=None)[source]

Visualize the 3D pose estimation results.

Parameters
  • model (nn.Module) – The loaded model.

  • result (list[dict]) –

mmpose.apis.vis_pose_result(model, img, result, radius=4, thickness=1, kpt_score_thr=0.3, bbox_color='green', dataset='TopDownCocoDataset', show=False, out_file=None)[source]

Visualize the detection results on the image.

Parameters
  • model (nn.Module) – The loaded detector.

  • img (str | np.ndarray) – Image filename or loaded image.

  • result (list[dict]) – The results to draw over img (bbox_result, pose_result).

  • radius (int) – Radius of circles.

  • thickness (int) – Thickness of lines.

  • kpt_score_thr (float) – The threshold to visualize the keypoints.

  • skeleton (list[tuple()]) – Default None.

  • show (bool) – Whether to show the image. Default: False.

  • out_file (str|None) – The filename of the output visualization image.
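
For example, to draw top-down results onto the input image (pose_model and pose_results as in inference_top_down_pose_model above; paths are placeholders):

>>> from mmpose.apis import vis_pose_result
>>> vis_pose_result(pose_model, 'demo.jpg', pose_results,
...                 dataset='TopDownCocoDataset', kpt_score_thr=0.3,
...                 show=False, out_file='vis_demo.jpg')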

mmpose.apis.vis_pose_tracking_result(model, img, result, radius=4, thickness=1, kpt_score_thr=0.3, dataset='TopDownCocoDataset', show=False, out_file=None)[source]

Visualize the pose tracking results on the image.

Parameters
  • model (nn.Module) – The loaded detector.

  • img (str | np.ndarray) – Image filename or loaded image.

  • result (list[dict]) – The results to draw over img (bbox_result, pose_result).

  • kpt_score_thr (float) – The threshold to visualize the keypoints.

  • skeleton (list[tuple()]) – Default None.

  • show (bool) – Whether to show the image. Default: False.

  • out_file (str|None) – The filename of the output visualization image.

mmpose.core

evaluation

class mmpose.core.evaluation.DistEvalHook(dataloader, interval=1, gpu_collect=False, save_best=True, key_indicator='AP', rule=None, **eval_kwargs)[source]

Distributed evaluation hook.

This hook will regularly perform evaluation in a given interval when performing in distributed environment.

Parameters
  • dataloader (DataLoader) – A PyTorch dataloader.

  • interval (int) – Evaluation interval (by epochs). Default: 1.

  • gpu_collect (bool) – Whether to use gpu or cpu to collect results. Default: False.

  • save_best (bool) – Whether to save the best checkpoint during evaluation. Default: True.

  • key_indicator (str | None) – Key indicator to measure the best checkpoint during evaluation when save_best is set to True. Options are the evaluation metrics of the test dataset, e.g., acc, AP, PCK. Default: AP.

  • rule (str | None) – Comparison rule for best score. If set to None, it will infer a reasonable rule. Default: None.

  • eval_kwargs (dict, optional) – Arguments for evaluation.

after_train_epoch(runner)[source]

Called after each training epoch to evaluate the model.

class mmpose.core.evaluation.EvalHook(dataloader, interval=1, gpu_collect=False, save_best=True, key_indicator='AP', rule=None, **eval_kwargs)[source]

Non-Distributed evaluation hook.

This hook will regularly perform evaluation in a given interval when performing in non-distributed environment.

Parameters
  • dataloader (DataLoader) – A PyTorch dataloader.

  • interval (int) – Evaluation interval (by epochs). Default: 1.

  • gpu_collect (bool) – Whether to use gpu or cpu to collect results. Default: False.

  • save_best (bool) – Whether to save best checkpoint during evaluation. Default: True.

  • key_indicator (str | None) – Key indicator to measure the best checkpoint during evaluation when save_best is set to True. Options are the evaluation metrics to the test dataset. e.g., acc, AP, PCK. Default: AP.

  • rule (str | None) – Comparison rule for best score. If set to None, it will infer a reasonable rule. Default: None.

  • eval_kwargs (dict, optional) – Arguments for evaluation.

after_train_epoch(runner)[source]

Called after every training epoch to evaluate the results.

evaluate(runner, results)[source]

Evaluate the results.

Parameters
  • runner (mmcv.Runner) – The underlying training runner.

  • results (list) – Output results.

mmpose.core.evaluation.aggregate_results(scale, aggregated_heatmaps, tags_list, heatmaps, tags, test_scale_factor, project2image, flip_test, align_corners=False)[source]

Aggregate multi-scale outputs.

Note

batch size: N, keypoints num: K, heatmap width: W, heatmap height: H

Parameters
  • scale (int) – current scale

  • aggregated_heatmaps (torch.Tensor | None) – Aggregated heatmaps.

  • tags_list (list(torch.Tensor)) – Tags list of previous scale.

  • heatmaps (List(torch.Tensor[NxKxWxH])) – A batch of heatmaps.

  • tags (List(torch.Tensor[NxKxWxH])) – A batch of tag maps.

  • test_scale_factor (List(int)) – Multi-scale factor for testing.

  • project2image (bool) – Option to resize to base scale.

  • flip_test (bool) – Option to use flip test.

  • align_corners (bool) – Align corners when performing interpolation.

Returns

A tuple containing aggregated results.

  • aggregated_heatmaps (torch.Tensor): Heatmaps aggregated over multiple scales.

  • tags_list (list(torch.Tensor)): Tag list over multiple scales.

Return type

tuple

mmpose.core.evaluation.compute_similarity_transform(source_points, target_points)[source]

Computes a similarity transform (sR, t) that maps a set of 3D points source_points (N x 3) as closely as possible onto a set of 3D points target_points, where R is a 3x3 rotation matrix, t a 3x1 translation vector, and s a scale factor, and returns the transformed 3D points source_points_hat (N x 3); i.e., it solves the orthogonal Procrustes problem.

Note

Points number: N

Parameters
  • source_points (np.ndarray([N, 3])) – Source point set.

  • target_points (np.ndarray([N, 3])) – Target point set.

Returns

Transformed source point set.

Return type

source_points_hat (np.ndarray([N, 3]))
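
A quick numerical check: a scaled and shifted copy of a point set should be mapped back onto the target (random data here):

>>> import numpy as np
>>> from mmpose.core.evaluation import compute_similarity_transform
>>> source = np.random.rand(17, 3)
>>> target = 2.0 * source + np.array([1.0, -2.0, 0.5])
>>> aligned = compute_similarity_transform(source, target)
>>> np.allclose(aligned, target)
True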

mmpose.core.evaluation.get_group_preds(grouped_joints, center, scale, heatmap_size, use_udp=False)[source]

Transform the grouped joints back to the image.

Parameters
  • grouped_joints (list) – Grouped person joints.

  • center (np.ndarray[2, ]) – Center of the bounding box (x, y).

  • scale (np.ndarray[2, ]) – Scale of the bounding box wrt [width, height].

  • heatmap_size (np.ndarray[2, ]) – Size of the destination heatmaps.

  • use_udp (bool) – Unbiased data processing. Paper ref: Huang et al. The Devil is in the Details: Delving into Unbiased Data Processing for Human Pose Estimation (CVPR 2020).

Returns

List of the pose result for each person.

Return type

list

mmpose.core.evaluation.get_multi_stage_outputs(outputs, outputs_flip, num_joints, with_heatmaps, with_ae, tag_per_joint=True, flip_index=None, project2image=True, size_projected=None, align_corners=False)[source]

Inference the model to get multi-stage outputs (heatmaps & tags), and resize them to base sizes.

Parameters
  • outputs (list(torch.Tensor)) – Outputs of network

  • outputs_flip (list(torch.Tensor)) – Flip outputs of network

  • num_joints (int) – Number of joints

  • with_heatmaps (list[bool]) – Option to output heatmaps for different stages.

  • with_ae (list[bool]) – Option to output ae tags for different stages.

  • tag_per_joint (bool) – Option to use one tag map per joint.

  • flip_index (list[int]) – Keypoint flip index.

  • project2image (bool) – Option to resize to base scale.

  • size_projected ([w, h]) – Base size of heatmaps.

  • align_corners (bool) – Align corners when performing interpolation.

Returns

A tuple containing multi-stage outputs.

  • outputs (list(torch.Tensor)): List of simple outputs and flip outputs.

  • heatmaps (torch.Tensor): Multi-stage heatmaps that are resized to the base size.

  • tags (torch.Tensor): Multi-stage tags that are resized to the base size.

Return type

tuple

mmpose.core.evaluation.keypoint_3d_auc(pred, gt, mask, alignment='none')[source]

Calculate the Area Under the Curve (3DAUC) computed for a range of 3DPCK thresholds.

Paper ref: Monocular 3D Human Pose Estimation In The Wild Using Improved CNN Supervision (3DV 2017).

This implementation is derived from mpii_compute_3d_pck.m, which is provided as part of the MPI-INF-3DHP test data release.

batch_size: N, num_keypoints: K, keypoint_dims: C

Parameters
  • pred (np.ndarray[N, K, C]) – Predicted keypoint location.

  • gt (np.ndarray[N, K, C]) – Groundtruth keypoint location.

  • mask (np.ndarray[N, K]) – Visibility of the target. False for invisible joints, and True for visible. Invisible joints will be ignored for accuracy calculation.

  • alignment (str, optional) –

    Method to align the prediction with the groundtruth. Supported options are:

    • 'none': no alignment will be applied

    • 'scale': align in the least-square sense in scale

    • 'procrustes': align in the least-square sense in scale, rotation and translation.

Returns

AUC computed for a range of 3DPCK thresholds.

Return type

auc

mmpose.core.evaluation.keypoint_3d_pck(pred, gt, mask, alignment='none', threshold=0.15)[source]

Calculate the Percentage of Correct Keypoints (3DPCK) w. or w/o rigid alignment.

Paper ref: Monocular 3D Human Pose Estimation In The Wild Using Improved CNN Supervision (3DV 2017).

batch_size: N, num_keypoints: K, keypoint_dims: C

Parameters
  • pred (np.ndarray[N, K, C]) – Predicted keypoint location.

  • gt (np.ndarray[N, K, C]) – Groundtruth keypoint location.

  • mask (np.ndarray[N, K]) – Visibility of the target. False for invisible joints, and True for visible. Invisible joints will be ignored for accuracy calculation.

  • alignment (str, optional) –

    Method to align the prediction with the groundtruth. Supported options are:

    • 'none': no alignment will be applied

    • 'scale': align in the least-square sense in scale

    • 'procrustes': align in the least-square sense in scale, rotation and translation.

  • threshold – If the L2 distance between the prediction and the groundtruth is less than threshold, the predicted result is considered correct. Default: 0.15 (m).

Returns

Percentage of correct keypoints.

Return type

pck
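
For instance, with random data and all joints marked visible:

>>> import numpy as np
>>> from mmpose.core.evaluation import keypoint_3d_pck
>>> pred = np.random.rand(4, 17, 3)
>>> gt = np.random.rand(4, 17, 3)
>>> mask = np.ones((4, 17), dtype=bool)
>>> pck = keypoint_3d_pck(pred, gt, mask, alignment='procrustes', threshold=0.15)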

mmpose.core.evaluation.keypoint_auc(pred, gt, mask, normalize, num_step=20)[source]

Calculate the Area Under the Curve (AUC) of keypoint PCK accuracy, computed over a range of normalized distance thresholds.

Note

batch_size: N, num_keypoints: K

Parameters
  • pred (np.ndarray[N, K, 2]) – Predicted keypoint location.

  • gt (np.ndarray[N, K, 2]) – Groundtruth keypoint location.

  • mask (np.ndarray[N, K]) – Visibility of the target. False for invisible joints, and True for visible. Invisible joints will be ignored for accuracy calculation.

  • normalize (float) – Normalization factor.

  • num_step (int) – Number of normalized distance thresholds used to compute the curve. Default: 20.

Returns

Area under curve.

Return type

float

mmpose.core.evaluation.keypoint_epe(pred, gt, mask)[source]

Calculate the end-point error.

Note

batch_size: N, num_keypoints: K

Parameters
  • pred (np.ndarray[N, K, 2]) – Predicted keypoint location.

  • gt (np.ndarray[N, K, 2]) – Groundtruth keypoint location.

  • mask (np.ndarray[N, K]) – Visibility of the target. False for invisible joints, and True for visible. Invisible joints will be ignored for accuracy calculation.

Returns

Average end-point error.

Return type

float

mmpose.core.evaluation.keypoint_mpjpe(pred, gt, mask, alignment='none')[source]

Calculate the mean per-joint position error (MPJPE) and the error after rigid alignment with the ground truth (P-MPJPE).

batch_size: N, num_keypoints: K, keypoint_dims: C

Parameters
  • pred (np.ndarray[N, K, C]) – Predicted keypoint location.

  • gt (np.ndarray[N, K, C]) – Groundtruth keypoint location.

  • mask (np.ndarray[N, K]) – Visibility of the target. False for invisible joints, and True for visible. Invisible joints will be ignored for accuracy calculation.

  • alignment (str, optional) –

    Method to align the prediction with the groundtruth. Supported options are:

    • 'none': no alignment will be applied

    • 'scale': align in the least-square sense in scale

    • 'procrustes': align in the least-square sense in scale, rotation and translation.

Returns

A tuple containing joint position errors.

  • mpjpe (float|np.ndarray[N]): mean per-joint position error.

  • p-mpjpe (float|np.ndarray[N]): mpjpe after rigid alignment with the ground truth.

Return type

tuple
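
For instance, with random data and all joints marked visible:

>>> import numpy as np
>>> from mmpose.core.evaluation import keypoint_mpjpe
>>> pred = np.random.rand(4, 17, 3)
>>> gt = np.random.rand(4, 17, 3)
>>> mask = np.ones((4, 17), dtype=bool)
>>> error = keypoint_mpjpe(pred, gt, mask, alignment='none')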

mmpose.core.evaluation.keypoint_pck_accuracy(pred, gt, mask, thr, normalize)[source]

Calculate the pose accuracy of PCK for each individual keypoint and the averaged accuracy across all keypoints for coordinates.

Note

PCK metric measures accuracy of the localization of the body joints. The distances between predicted positions and the ground-truth ones are typically normalized by the bounding box size. The threshold (thr) of the normalized distance is commonly set as 0.05, 0.1 or 0.2 etc.

batch_size: N, num_keypoints: K

Parameters
  • pred (np.ndarray[N, K, 2]) – Predicted keypoint location.

  • gt (np.ndarray[N, K, 2]) – Groundtruth keypoint location.

  • mask (np.ndarray[N, K]) – Visibility of the target. False for invisible joints, and True for visible. Invisible joints will be ignored for accuracy calculation.

  • thr (float) – Threshold of PCK calculation.

  • normalize (np.ndarray[N, 2]) – Normalization factor for H&W.

Returns

A tuple containing keypoint accuracy.

  • acc (np.ndarray[K]): Accuracy of each keypoint.

  • avg_acc (float): Averaged accuracy across all keypoints.

  • cnt (int): Number of valid keypoints.

Return type

tuple
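
For instance, normalizing by a hypothetical 48x64 bounding box size (random data, all joints visible):

>>> import numpy as np
>>> from mmpose.core.evaluation import keypoint_pck_accuracy
>>> pred = np.random.rand(2, 17, 2)
>>> gt = np.random.rand(2, 17, 2)
>>> mask = np.ones((2, 17), dtype=bool)
>>> normalize = np.tile(np.array([[48., 64.]]), (2, 1))  # per-sample (W, H)
>>> acc, avg_acc, cnt = keypoint_pck_accuracy(
...     pred, gt, mask, thr=0.05, normalize=normalize)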

mmpose.core.evaluation.keypoints_from_heatmaps(heatmaps, center, scale, unbiased=False, post_process='default', kernel=11, valid_radius_factor=0.0546875, use_udp=False, target_type='GaussianHeatmap')[source]

Get final keypoint predictions from heatmaps and transform them back to the image.

Note

batch size: N, num keypoints: K, heatmap height: H, heatmap width: W

Parameters
  • heatmaps (np.ndarray[N, K, H, W]) – model predicted heatmaps.

  • center (np.ndarray[N, 2]) – Center of the bounding box (x, y).

  • scale (np.ndarray[N, 2]) – Scale of the bounding box wrt height/width.

  • post_process (str/None) – Choice of methods to post-process heatmaps. Currently supported: None, ‘default’, ‘unbiased’, ‘megvii’.

  • unbiased (bool) – Option to use unbiased decoding. Mutually exclusive with megvii. Note: this arg is deprecated and unbiased=True can be replaced by post_process=’unbiased’ Paper ref: Zhang et al. Distribution-Aware Coordinate Representation for Human Pose Estimation (CVPR 2020).

  • kernel (int) – Gaussian kernel size (K) for modulation, which should match the heatmap gaussian sigma when training. K=17 for sigma=3 and k=11 for sigma=2.

  • valid_radius_factor (float) – The radius factor of the positive area in classification heatmap for UDP.

  • use_udp (bool) – Use unbiased data processing.

  • target_type (str) – ‘GaussianHeatmap’ or ‘CombinedTarget’. GaussianHeatmap: Classification target with gaussian distribution. CombinedTarget: The combination of classification target (response map) and regression target (offset map). Paper ref: Huang et al. The Devil is in the Details: Delving into Unbiased Data Processing for Human Pose Estimation (CVPR 2020).

Returns

A tuple containing keypoint predictions and scores.

  • preds (np.ndarray[N, K, 2]): Predicted keypoint location in images.

  • maxvals (np.ndarray[N, K, 1]): Scores (confidence) of the keypoints.

Return type

tuple
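
A minimal decoding sketch (random heatmaps; center and scale follow the bbox convention described above):

>>> import numpy as np
>>> from mmpose.core.evaluation import keypoints_from_heatmaps
>>> heatmaps = np.random.rand(1, 17, 64, 48).astype(np.float32)
>>> center = np.array([[128., 128.]])
>>> scale = np.array([[1., 1.33]])
>>> preds, maxvals = keypoints_from_heatmaps(heatmaps, center, scale)
>>> preds.shape, maxvals.shape
((1, 17, 2), (1, 17, 1))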

mmpose.core.evaluation.keypoints_from_heatmaps3d(heatmaps, center, scale)[source]

Get final keypoint predictions from 3d heatmaps and transform them back to the image.

Note

batch size: N, num keypoints: K, heatmap depth size: D, heatmap height: H, heatmap width: W

Parameters
  • heatmaps (np.ndarray[N, K, D, H, W]) – model predicted heatmaps.

  • center (np.ndarray[N, 2]) – Center of the bounding box (x, y).

  • scale (np.ndarray[N, 2]) – Scale of the bounding box wrt height/width.

Returns

A tuple containing keypoint predictions and scores.

  • preds (np.ndarray[N, K, 3]): Predicted 3d keypoint location in images.

  • maxvals (np.ndarray[N, K, 1]): Scores (confidence) of the keypoints.

Return type

tuple

mmpose.core.evaluation.keypoints_from_regression(regression_preds, center, scale, img_size)[source]

Get final keypoint predictions from regression vectors and transform them back to the image.

Note

batch_size: N, num_keypoints: K

Parameters
  • regression_preds (np.ndarray[N, K, 2]) – model prediction.

  • center (np.ndarray[N, 2]) – Center of the bounding box (x, y).

  • scale (np.ndarray[N, 2]) – Scale of the bounding box wrt height/width.

  • img_size (list(img_width, img_height)) – model input image size.

Returns

A tuple containing keypoint predictions and scores.

  • preds (np.ndarray[N, K, 2]): Predicted keypoint location in images.

  • maxvals (np.ndarray[N, K, 1]): Scores (confidence) of the keypoints.

Return type

tuple

mmpose.core.evaluation.multilabel_classification_accuracy(pred, gt, mask, thr=0.5)[source]

Get multi-label classification accuracy.

Note

batch size: N, label number: L

Parameters
  • pred (np.ndarray[N, L, 2]) – model predicted labels.

  • gt (np.ndarray[N, L, 2]) – ground-truth labels.

  • mask (np.ndarray[N, 1] or np.ndarray[N, L]) – reliability of ground-truth labels.

Returns

Multi-label classification accuracy.

Return type

acc (float)

mmpose.core.evaluation.pose_pck_accuracy(output, target, mask, thr=0.05, normalize=None)[source]

Calculate the pose accuracy of PCK for each individual keypoint and the averaged accuracy across all keypoints from heatmaps.

Note

PCK metric measures accuracy of the localization of the body joints. The distances between predicted positions and the ground-truth ones are typically normalized by the bounding box size. The threshold (thr) of the normalized distance is commonly set as 0.05, 0.1 or 0.2 etc.

batch_size: N, num_keypoints: K, heatmap height: H, heatmap width: W

Parameters
  • output (np.ndarray[N, K, H, W]) – Model output heatmaps.

  • target (np.ndarray[N, K, H, W]) – Groundtruth heatmaps.

  • mask (np.ndarray[N, K]) – Visibility of the target. False for invisible joints, and True for visible. Invisible joints will be ignored for accuracy calculation.

  • thr (float) – Threshold of PCK calculation. Default 0.05.

  • normalize (np.ndarray[N, 2]) – Normalization factor for H&W.

Returns

A tuple containing keypoint accuracy.

  • np.ndarray[K]: Accuracy of each keypoint.

  • float: Averaged accuracy across all keypoints.

  • int: Number of valid keypoints.

Return type

tuple

mmpose.core.evaluation.post_dark_udp(coords, batch_heatmaps, kernel=3)[source]

DARK post-processing. Implemented with UDP. Paper ref: Huang et al. The Devil is in the Details: Delving into Unbiased Data Processing for Human Pose Estimation (CVPR 2020). Zhang et al. Distribution-Aware Coordinate Representation for Human Pose Estimation (CVPR 2020).

Note

batch size: B, num keypoints: K, num persons: N, height of heatmaps: H, width of heatmaps: W. B=1 for the bottom-up paradigm where all persons share the same heatmaps; B=N for the top-down paradigm where each person has its own heatmaps.

Parameters
  • coords (np.ndarray[N, K, 2]) – Initial coordinates of human pose.

  • batch_heatmaps (np.ndarray[B, K, H, W]) – batch_heatmaps

  • kernel (int) – Gaussian kernel size (K) for modulation.

Returns

Refined coordinates.

Return type

res (np.ndarray[N, K, 2])
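
A minimal refinement sketch (random data; in practice, coords come from a preliminary peak detection over batch_heatmaps):

>>> import numpy as np
>>> from mmpose.core.evaluation import post_dark_udp
>>> batch_heatmaps = np.random.rand(1, 17, 64, 48).astype(np.float32)
>>> coords = np.random.rand(1, 17, 2) * np.array([48, 64])  # (x, y) in heatmap pixels
>>> refined = post_dark_udp(coords, batch_heatmaps, kernel=3)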

fp16

class mmpose.core.fp16.Fp16OptimizerHook(grad_clip=None, coalesce=True, bucket_size_mb=-1, loss_scale=512.0, distributed=True)[source]

FP16 optimizer hook.

The steps of the fp16 optimizer are as follows. 1. Scale the loss value. 2. BP in the fp16 model. 3. Copy gradients from the fp16 model to the fp32 weights. 4. Update the fp32 weights. 5. Copy the updated parameters from the fp32 weights to the fp16 model.

Refer to https://arxiv.org/abs/1710.03740 for more details.

Parameters

loss_scale (float) – Scale factor multiplied with loss.

after_train_iter(runner)[source]

Backward optimization steps for Mixed Precision Training.

  1. Scale the loss by a scale factor.

  2. Backward the loss to obtain the gradients (fp16).

  3. Copy gradients from the model to the fp32 weight copy.

  4. Scale the gradients back and update the fp32 weight copy.

  5. Copy back the params from fp32 weight copy to the fp16 model.

Parameters

runner (mmcv.Runner) – The underlying training runner.

before_run(runner)[source]

Preparing steps before Mixed Precision Training.

  1. Make a master copy of fp32 weights for optimization.

  2. Convert the main model from fp32 to fp16.

Parameters

runner (mmcv.Runner) – The underlying training runner.

static copy_grads_to_fp32(fp16_net, fp32_weights)[source]

Copy gradients from fp16 model to fp32 weight copy.

static copy_params_to_fp16(fp16_net, fp32_weights)[source]

Copy updated params from fp32 weight copy to fp16 model.
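
In an MMPose config file, this hook is usually enabled indirectly by adding an fp16 field, which the training tools pass on to Fp16OptimizerHook (a sketch of that common convention):

fp16 = dict(loss_scale=512.0)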

mmpose.core.fp16.auto_fp16(apply_to=None, out_fp32=False)[source]

Decorator to enable fp16 training automatically.

This decorator is useful when you write custom modules and want to support mixed precision training. If inputs arguments are fp32 tensors, they will be converted to fp16 automatically. Arguments other than fp32 tensors are ignored.

Parameters
  • apply_to (Iterable, optional) – The argument names to be converted. None indicates all arguments.

  • out_fp32 (bool) – Whether to convert the output back to fp32.

Examples

>>> import torch.nn as nn
>>> class MyModule1(nn.Module):
>>>
>>>     # Convert x and y to fp16
>>>     @auto_fp16()
>>>     def forward(self, x, y):
>>>         pass
>>> import torch.nn as nn
>>> class MyModule2(nn.Module):
>>>
>>>     # convert pred to fp16
>>>     @auto_fp16(apply_to=('pred', ))
>>>     def do_something(self, pred, others):
>>>         pass

mmpose.core.fp16.cast_tensor_type(inputs, src_type, dst_type)[source]

Recursively convert Tensor in inputs from src_type to dst_type.

Parameters
  • inputs – Inputs to be cast.

  • src_type (torch.dtype) – Source type.

  • dst_type (torch.dtype) – Destination type.

Returns

The same type as inputs, but all contained Tensors have been cast to dst_type.

mmpose.core.fp16.force_fp32(apply_to=None, out_fp16=False)[source]

Decorator to convert input arguments to fp32 in force.

This decorator is useful when you write custom modules and want to support mixed precision training. If there are some inputs that must be processed in fp32 mode, then this decorator can handle it. If inputs arguments are fp16 tensors, they will be converted to fp32 automatically. Arguments other than fp16 tensors are ignored.

Parameters
  • apply_to (Iterable, optional) – The argument names to be converted. None indicates all arguments.

  • out_fp16 (bool) – Whether to convert the output back to fp16.

Examples

>>> import torch.nn as nn
>>> class MyModule1(nn.Module):
>>>
>>>     # Convert x and y to fp32
>>>     @force_fp32()
>>>     def loss(self, x, y):
>>>         pass
>>> import torch.nn as nn
>>> class MyModule2(nn.Module):
>>>
>>>     # convert pred to fp32
>>>     @force_fp32(apply_to=('pred', ))
>>>     def post_process(self, pred, others):
>>>         pass

mmpose.core.fp16.wrap_fp16_model(model)[source]

Wrap the FP32 model to FP16.

  1. Convert FP32 model to FP16.

  2. Keep some necessary layers in FP32, e.g., normalization layers.

Parameters

model (nn.Module) – Model in FP32.

utils

class mmpose.core.utils.WeightNormClipHook(max_norm=1.0, module_param_names='weight')[source]

Apply weight norm clip regularization.

The module’s parameters will be clipped to a given maximum norm before each forward pass.

Parameters
  • max_norm (float) – The maximum norm of the parameter.

  • module_param_names (str|list) – The parameter name (or name list) to apply weight norm clip.

hook(module, _input)[source]

Hook function.

property hook_type

The hook type. Subclasses should override this property to return a string value in {forward, forward_pre, backward}.

mmpose.core.utils.allreduce_grads(params, coalesce=True, bucket_size_mb=-1)[source]

Allreduce gradients.

Parameters
  • params (list[torch.Parameters]) – List of parameters of a model

  • coalesce (bool, optional) – Whether allreduce parameters as a whole. Default: True.

  • bucket_size_mb (int, optional) – Size of bucket, the unit is MB. Default: -1.

post_processing

mmpose.core.post_processing.affine_transform(pt, trans_mat)[source]

Apply an affine transformation to the points.

Parameters
  • pt (np.ndarray) – a 2-dimensional point to be transformed

  • trans_mat (np.ndarray) – 2x3 matrix of an affine transform

Returns

Transformed points.

Return type

np.ndarray

mmpose.core.post_processing.flip_back(output_flipped, flip_pairs, target_type='GaussianHeatmap')[source]

Flip the flipped heatmaps back to the original form.

Note

batch_size: N, num_keypoints: K, heatmap height: H, heatmap width: W

Parameters
  • output_flipped (np.ndarray[N, K, H, W]) – The output heatmaps obtained from the flipped images.

  • flip_pairs (list[tuple()]) – Pairs of keypoints which are mirrored (for example, left ear – right ear).

  • target_type (str) – GaussianHeatmap or CombinedTarget

Returns

Heatmaps that are flipped back to the original image.

Return type

np.ndarray
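
A minimal flip-test sketch (random heatmaps; flip_pairs is a hypothetical two-pair list):

>>> import numpy as np
>>> from mmpose.core.post_processing import flip_back
>>> output_flipped = np.random.rand(1, 17, 64, 48).astype(np.float32)
>>> flip_pairs = [[1, 2], [3, 4]]
>>> heatmaps = flip_back(output_flipped, flip_pairs)
>>> heatmaps.shape
(1, 17, 64, 48)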

mmpose.core.post_processing.fliplr_joints(joints_3d, joints_3d_visible, img_width, flip_pairs)[source]

Flip human joints horizontally.

Note

num_keypoints: K

Parameters
  • joints_3d (np.ndarray([K, 3])) – Coordinates of keypoints.

  • joints_3d_visible (np.ndarray([K, 1])) – Visibility of keypoints.

  • img_width (int) – Image width.

  • flip_pairs (list[tuple()]) – Pairs of keypoints which are mirrored (for example, left ear – right ear).

Returns

Flipped human joints.

  • joints_3d_flipped (np.ndarray([K, 3])): Flipped joints.

  • joints_3d_visible_flipped (np.ndarray([K, 1])): Joint visibility.

Return type

tuple

mmpose.core.post_processing.fliplr_regression(regression, flip_pairs, center_mode='static', center_x=0.5, center_index=0)[source]

Flip human joints horizontally.

Note

batch_size: N, num_keypoint: K

Parameters
  • regression (np.ndarray([..., K, C])) –

    Coordinates of keypoints, where K is the joint number and C is the dimension. Example shapes are:

    • [N, K, C]: a batch of keypoints, where N is the batch size.

    • [N, T, K, C]: a batch of pose sequences, where T is the frame number.

  • flip_pairs (list[tuple()]) – Pairs of keypoints which are mirrored (for example, left ear – right ear).

  • center_mode (str) – The mode to set the center location on the x-axis to flip around. Options are:

    • static: use a static x value (see center_x also)

    • root: use a root joint (see center_index also)

  • center_x (float) – Set the x-axis location of the flip center. Only used when center_mode=static.

  • center_index (int) – Set the index of the root joint, whose x location will be used as the flip center. Only used when center_mode=root.

Returns

Flipped human joints.

  • regression_flipped (np.ndarray([…, K, C])): Flipped joints.

Return type

tuple

mmpose.core.post_processing.get_affine_transform(center, scale, rot, output_size, shift=(0.0, 0.0), inv=False)[source]

Get the affine transform matrix, given the center/scale/rot/output_size.

Parameters
  • center (np.ndarray[2, ]) – Center of the bounding box (x, y).

  • scale (np.ndarray[2, ]) – Scale of the bounding box wrt [width, height].

  • rot (float) – Rotation angle (degree).

  • output_size (np.ndarray[2, ] | list(2,)) – Size of the destination heatmaps.

  • shift (0-100%) – Shift translation ratio wrt the width/height. Default (0., 0.).

  • inv (bool) – Option to inverse the affine transform direction. (inv=False: src->dst or inv=True: dst->src)

Returns

The transform matrix.

Return type

np.ndarray
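
A typical use is cropping and resizing a person region with cv2.warpAffine (the center/scale values are placeholders; in MMPose, scale commonly encodes the bbox size in units of 200 pixels):

>>> import numpy as np
>>> from mmpose.core.post_processing import get_affine_transform
>>> trans = get_affine_transform(center=np.array([128., 128.]),
...                              scale=np.array([0.9, 1.2]),
...                              rot=0., output_size=np.array([192, 256]))
>>> trans.shape
(2, 3)
>>> # img_patch = cv2.warpAffine(img, trans, (192, 256))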

mmpose.core.post_processing.get_warp_matrix(theta, size_input, size_dst, size_target)[source]

Calculate the transformation matrix under the unbiased constraint. Paper ref: Huang et al. The Devil is in the Details: Delving into Unbiased Data Processing for Human Pose Estimation (CVPR 2020).

Parameters
  • theta (float) – Rotation angle in degrees.

  • size_input (np.ndarray) – Size of input image [w, h].

  • size_dst (np.ndarray) – Size of output image [w, h].

  • size_target (np.ndarray) – Size of ROI in input plane [w, h].

Returns

A matrix for transformation.

Return type

matrix (np.ndarray)

mmpose.core.post_processing.oks_iou(g, d, a_g, a_d, sigmas=None, vis_thr=None)[source]

Calculate OKS (object keypoint similarity) IoUs.

Parameters
  • g – Ground truth keypoints.

  • d – Detected keypoints.

  • a_g – Area of the ground truth object.

  • a_d – Area of the detected object.

  • sigmas – standard deviation of keypoint labelling.

  • vis_thr – threshold of the keypoint visibility.

Returns

The OKS IoUs.

Return type

list

mmpose.core.post_processing.oks_nms(kpts_db, thr, sigmas=None, vis_thr=None)[source]

OKS NMS implementations.

Parameters
  • kpts_db – keypoints.

  • thr – Retain overlap < thr.

  • sigmas – standard deviation of keypoint labelling.

  • vis_thr – threshold of the keypoint visibility.

Returns

Indexes to keep.

Return type

np.ndarray
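
A sketch of suppressing duplicate detections (each kpts_db entry is assumed to carry ‘keypoints’ (ndarray[K, 3]), ‘score’ and ‘area’ fields, following the common MMPose convention; random data here):

>>> import numpy as np
>>> from mmpose.core.post_processing import oks_nms
>>> kpts_db = [
...     dict(keypoints=np.random.rand(17, 3), score=0.9, area=1000.),
...     dict(keypoints=np.random.rand(17, 3), score=0.8, area=1000.),
... ]
>>> keep = oks_nms(kpts_db, thr=0.9)  # indexes of detections to keep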

mmpose.core.post_processing.rotate_point(pt, angle_rad)[source]

Rotate a point by an angle.

Parameters
  • pt (list[float]) – 2-dimensional point to be rotated

  • angle_rad (float) – rotation angle in radians

Returns

Rotated point.

Return type

list[float]

mmpose.core.post_processing.soft_oks_nms(kpts_db, thr, max_dets=20, sigmas=None, vis_thr=None)[source]

Soft OKS NMS implementations.

Parameters
  • kpts_db

  • thr – Retain OKS overlap < thr.

  • max_dets – max number of detections to keep.

  • sigmas – Keypoint labelling uncertainty.

Returns

Indexes to keep.

Return type

np.ndarray

mmpose.core.post_processing.transform_preds(coords, center, scale, output_size, use_udp=False)[source]

Get final keypoint predictions from heatmaps and apply scaling and translation to map them back to the image.

Note

num_keypoints: K

Parameters
  • coords (np.ndarray[K, ndims]) –

    • If ndims=2, coords are predicted keypoint locations.

    • If ndims=4, coords are composed of (x, y, scores, tags).

    • If ndims=5, coords are composed of (x, y, scores, tags, flipped_tags).

  • center (np.ndarray[2, ]) – Center of the bounding box (x, y).

  • scale (np.ndarray[2, ]) – Scale of the bounding box wrt [width, height].

  • output_size (np.ndarray[2, ] | list(2,)) – Size of the destination heatmaps.

  • use_udp (bool) – Use unbiased data processing

Returns

Predicted coordinates in the images.

Return type

np.ndarray

mmpose.core.post_processing.warp_affine_joints(joints, mat)[source]

Apply affine transformation defined by the transform matrix on the joints.

Parameters
  • joints (np.ndarray[..., 2]) – Original coordinates of joints.

  • mat (np.ndarray[3, 2]) – The affine matrix.

Returns

Result coordinates of joints.

Return type

np.ndarray[…, 2]

mmpose.models

backbones

class mmpose.models.backbones.AlexNet(num_classes=-1)[source]

AlexNet backbone.

The input for AlexNet is a 224x224 RGB image.

Parameters

num_classes (int) – number of classes for classification. The default value is -1, which uses the backbone as a feature extractor without the top classifier.

forward(x)[source]

Forward function.

Parameters

x (tensor | tuple[tensor]) – x could be a Torch.tensor or a tuple of Torch.tensor, containing input data for forward computation.
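
A usage sketch in the style of the other backbone examples:

>>> from mmpose.models import AlexNet
>>> import torch
>>> self = AlexNet(num_classes=-1)
>>> self.eval()
>>> inputs = torch.rand(1, 3, 224, 224)
>>> out = self.forward(inputs)
>>> # with num_classes=-1, the backbone acts as a feature extractor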

class mmpose.models.backbones.CPM(in_channels, out_channels, feat_channels=128, middle_channels=32, num_stages=6, norm_cfg={'requires_grad': True, 'type': 'BN'})[source]

CPM backbone.

Convolutional Pose Machines. More details can be found in the paper.

Parameters
  • in_channels (int) – The input channels of the CPM.

  • out_channels (int) – The output channels of the CPM.

  • feat_channels (int) – Feature channel of each CPM stage.

  • middle_channels (int) – Feature channel of conv after the middle stage.

  • num_stages (int) – Number of stages.

  • norm_cfg (dict) – Dictionary to construct and config norm layer.

Examples

>>> from mmpose.models import CPM
>>> import torch
>>> self = CPM(3, 17)
>>> self.eval()
>>> inputs = torch.rand(1, 3, 368, 368)
>>> level_outputs = self.forward(inputs)
>>> for level_output in level_outputs:
...     print(tuple(level_output.shape))
(1, 17, 46, 46)
(1, 17, 46, 46)
(1, 17, 46, 46)
(1, 17, 46, 46)
(1, 17, 46, 46)
(1, 17, 46, 46)

forward(x)[source]

Model forward function.

init_weights(pretrained=None)[source]

Initialize the weights in backbone.

Parameters

pretrained (str, optional) – Path to pre-trained weights. Defaults to None.

class mmpose.models.backbones.HRNet(extra, in_channels=3, conv_cfg=None, norm_cfg={'type': 'BN'}, norm_eval=False, with_cp=False, zero_init_residual=False)[source]

HRNet backbone.

High-Resolution Representations for Labeling Pixels and Regions

Parameters
  • extra (dict) – detailed configuration for each stage of HRNet.

  • in_channels (int) – Number of input image channels. Default: 3.

  • conv_cfg (dict) – dictionary to construct and config conv layer.

  • norm_cfg (dict) – dictionary to construct and config norm layer.

  • norm_eval (bool) – Whether to set norm layers to eval mode, namely, freeze running stats (mean and var). Note: Effect on Batch Norm and its variants only. Default: False

  • with_cp (bool) – Use checkpoint or not. Using checkpoint will save some memory while slowing down the training speed.

  • zero_init_residual (bool) – whether to use zero init for last norm layer in resblocks to let them behave as identity.

Examples

>>> from mmpose.models import HRNet
>>> import torch
>>> extra = dict(
>>>     stage1=dict(
>>>         num_modules=1,
>>>         num_branches=1,
>>>         block='BOTTLENECK',
>>>         num_blocks=(4, ),
>>>         num_channels=(64, )),
>>>     stage2=dict(
>>>         num_modules=1,
>>>         num_branches=2,
>>>         block='BASIC',
>>>         num_blocks=(4, 4),
>>>         num_channels=(32, 64)),
>>>     stage3=dict(
>>>         num_modules=4,
>>>         num_branches=3,
>>>         block='BASIC',
>>>         num_blocks=(4, 4, 4),
>>>         num_channels=(32, 64, 128)),
>>>     stage4=dict(
>>>         num_modules=3,
>>>         num_branches=4,
>>>         block='BASIC',
>>>         num_blocks=(4, 4, 4, 4),
>>>         num_channels=(32, 64, 128, 256)))
>>> self = HRNet(extra, in_channels=1)
>>> self.eval()
>>> inputs = torch.rand(1, 1, 32, 32)
>>> level_outputs = self.forward(inputs)
>>> for level_out in level_outputs:
...     print(tuple(level_out.shape))
(1, 32, 8, 8)
(1, 64, 4, 4)
(1, 128, 2, 2)
(1, 256, 1, 1)

forward(x)[source]

Forward function.

init_weights(pretrained=None)[source]

Initialize the weights in backbone.

Parameters

pretrained (str, optional) – Path to pre-trained weights. Defaults to None.

property norm1

the normalization layer named “norm1”

Type

nn.Module

property norm2

the normalization layer named “norm2”

Type

nn.Module

train(mode=True)[source]

Convert the model into training mode.

class mmpose.models.backbones.HourglassNet(downsample_times=5, num_stacks=2, stage_channels=(256, 256, 384, 384, 384, 512), stage_blocks=(2, 2, 2, 2, 2, 4), feat_channel=256, norm_cfg={'requires_grad': True, 'type': 'BN'})[source]

HourglassNet backbone.

Stacked Hourglass Networks for Human Pose Estimation. More details can be found in the paper.

Parameters
  • downsample_times (int) – Downsample times in a HourglassModule.

  • num_stacks (int) – Number of HourglassModule modules stacked, 1 for Hourglass-52, 2 for Hourglass-104.

  • stage_channels (list[int]) – Feature channel of each sub-module in a HourglassModule.

  • stage_blocks (list[int]) – Number of sub-modules stacked in a HourglassModule.

  • feat_channel (int) – Feature channel of conv after a HourglassModule.

  • norm_cfg (dict) – Dictionary to construct and config norm layer.

Examples

>>> from mmpose.models import HourglassNet
>>> import torch
>>> self = HourglassNet()
>>> self.eval()
>>> inputs = torch.rand(1, 3, 511, 511)
>>> level_outputs = self.forward(inputs)
>>> for level_output in level_outputs:
...     print(tuple(level_output.shape))
(1, 256, 128, 128)
(1, 256, 128, 128)

forward(x)[source]

Model forward function.

init_weights(pretrained=None)[source]

Initialize the weights in backbone.

Parameters

pretrained (str, optional) – Path to pre-trained weights. Defaults to None.

class mmpose.models.backbones.LiteHRNet(extra, in_channels=3, conv_cfg=None, norm_cfg={'type': 'BN'}, norm_eval=False, with_cp=False, zero_init_residual=False)[source]

Lite-HRNet backbone.

Lite-HRNet: A Lightweight High-Resolution Network

Code adapted from https://github.com/HRNet/Lite-HRNet/blob/hrnet/models/backbones/litehrnet.py

Parameters
  • extra (dict) – detailed configuration for each stage of HRNet.

  • in_channels (int) – Number of input image channels. Default: 3.

  • conv_cfg (dict) – dictionary to construct and config conv layer.

  • norm_cfg (dict) – dictionary to construct and config norm layer.

  • norm_eval (bool) – Whether to set norm layers to eval mode, namely, freeze running stats (mean and var). Note: Effect on Batch Norm and its variants only. Default: False

  • with_cp (bool) – Use checkpoint or not. Using checkpoint will save some memory while slowing down the training speed.

Examples

>>> from mmpose.models import LiteHRNet
>>> import torch
>>> extra=dict(
>>>    stem=dict(stem_channels=32, out_channels=32, expand_ratio=1),
>>>    num_stages=3,
>>>    stages_spec=dict(
>>>        num_modules=(2, 4, 2),
>>>        num_branches=(2, 3, 4),
>>>        num_blocks=(2, 2, 2),
>>>        module_type=('LITE', 'LITE', 'LITE'),
>>>        with_fuse=(True, True, True),
>>>        reduce_ratios=(8, 8, 8),
>>>        num_channels=(
>>>            (40, 80),
>>>            (40, 80, 160),
>>>            (40, 80, 160, 320),
>>>        )),
>>>    with_head=True)
>>> self = LiteHRNet(extra, in_channels=1)
>>> self.eval()
>>> inputs = torch.rand(1, 1, 32, 32)
>>> level_outputs = self.forward(inputs)
>>> for level_out in level_outputs:
...     print(tuple(level_out.shape))
(1, 40, 8, 8)
(1, 80, 4, 4)
(1, 160, 2, 2)
(1, 320, 1, 1)

forward(x)[source]

Forward function.

init_weights(pretrained=None)[source]

Initialize the weights in backbone.

Parameters

pretrained (str, optional) – Path to pre-trained weights. Defaults to None.

train(mode=True)[source]

Convert the model into training mode.

class mmpose.models.backbones.MSPN(unit_channels=256, num_stages=4, num_units=4, num_blocks=[2, 2, 2, 2], norm_cfg={'type': 'BN'}, res_top_channels=64)[source]

MSPN backbone. Paper ref: Li et al. “Rethinking on Multi-Stage Networks for Human Pose Estimation” (CVPR 2020).

Parameters
  • unit_channels (int) – Number of Channels in an upsample unit. Default: 256

  • num_stages (int) – Number of stages in a multi-stage MSPN. Default: 4

  • num_units (int) – Number of downsample/upsample units in a single-stage network. Default: 4 Note: Make sure num_units == len(self.num_blocks)

  • num_blocks (list) – Number of bottlenecks in each downsample unit. Default: [2, 2, 2, 2]

  • norm_cfg (dict) – dictionary to construct and config norm layer. Default: dict(type=’BN’)

  • res_top_channels (int) – Number of channels of feature from ResNetTop. Default: 64.

Examples

>>> from mmpose.models import MSPN
>>> import torch
>>> self = MSPN(num_stages=2,num_units=2,num_blocks=[2,2])
>>> self.eval()
>>> inputs = torch.rand(1, 3, 511, 511)
>>> level_outputs = self.forward(inputs)
>>> for level_output in level_outputs:
...     for feature in level_output:
...         print(tuple(feature.shape))
...
(1, 256, 64, 64)
(1, 256, 128, 128)
(1, 256, 64, 64)
(1, 256, 128, 128)

forward(x)[source]

Model forward function.

init_weights(pretrained=None)[source]

Initialize model weights.

class mmpose.models.backbones.MobileNetV2(widen_factor=1.0, out_indices=(7,), frozen_stages=-1, conv_cfg=None, norm_cfg={'type': 'BN'}, act_cfg={'type': 'ReLU6'}, norm_eval=False, with_cp=False)[source]

MobileNetV2 backbone.

Parameters
  • widen_factor (float) – Width multiplier, multiply number of channels in each layer by this amount. Default: 1.0.

  • out_indices (None or Sequence[int]) – Output from which stages. Default: (7, ).

  • frozen_stages (int) – Stages to be frozen (all param fixed). Default: -1, which means not freezing any parameters.

  • conv_cfg (dict) – Config dict for convolution layer. Default: None, which means using conv2d.

  • norm_cfg (dict) – Config dict for normalization layer. Default: dict(type=’BN’).

  • act_cfg (dict) – Config dict for activation layer. Default: dict(type=’ReLU6’).

  • norm_eval (bool) – Whether to set norm layers to eval mode, namely, freeze running stats (mean and var). Note: Effect on Batch Norm and its variants only. Default: False.

  • with_cp (bool) – Use checkpoint or not. Using checkpoint will save some memory while slowing down the training speed. Default: False.

forward(x)[source]

Forward function.

Parameters

x (tensor | tuple[tensor]) – x could be a Torch.tensor or a tuple of Torch.tensor, containing input data for forward computation.

init_weights(pretrained=None)[source]

Init backbone weights.

Parameters

pretrained (str | None) – If pretrained is a string, then it initializes backbone weights by loading the pretrained checkpoint. If pretrained is None, then it follows default initializer or customized initializer in subclasses.

make_layer(out_channels, num_blocks, stride, expand_ratio)[source]

Stack InvertedResidual blocks to build a layer for MobileNetV2.

Parameters
  • out_channels (int) – out_channels of block.

  • num_blocks (int) – number of blocks.

  • stride (int) – stride of the first block. Default: 1

  • expand_ratio (int) – Expand the number of channels of the hidden layer in InvertedResidual by this ratio. Default: 6.

train(mode=True)[source]

Sets the module in training mode.

This has an effect only on certain modules. See the documentation of particular modules for details of their behavior in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc.

Parameters

mode (bool) – whether to set training mode (True) or evaluation mode (False). Default: True.

Returns

self

Return type

Module
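
A usage sketch in the style of the other backbone examples:

>>> from mmpose.models import MobileNetV2
>>> import torch
>>> self = MobileNetV2(widen_factor=1.0, out_indices=(7, ))
>>> self.eval()
>>> inputs = torch.rand(1, 3, 224, 224)
>>> out = self.forward(inputs)
>>> # feature map(s) from the stage(s) selected by out_indices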

class mmpose.models.backbones.MobileNetV3(arch='small', conv_cfg=None, norm_cfg={'type': 'BN'}, out_indices=(10,), frozen_stages=-1, norm_eval=False, with_cp=False)[source]

MobileNetV3 backbone.

Parameters
  • arch (str) – Architecture of MobileNetV3, from {small, big}. Default: small.

  • conv_cfg (dict) – Config dict for convolution layer. Default: None, which means using conv2d.

  • norm_cfg (dict) – Config dict for normalization layer. Default: dict(type=’BN’).

  • out_indices (None or Sequence[int]) – Output from which stages. Default: (10, ), which means output tensors from final stage.

  • frozen_stages (int) – Stages to be frozen (all param fixed). Default: -1, which means not freezing any parameters.

  • norm_eval (bool) – Whether to set norm layers to eval mode, namely, freeze running stats (mean and var). Note: Effect on Batch Norm and its variants only. Default: False.

  • with_cp (bool) – Use checkpoint or not. Using checkpoint will save some memory while slowing down the training speed. Default: False.

forward(x)[source]

Forward function.

Parameters

x (tensor | tuple[tensor]) – x could be a Torch.tensor or a tuple of Torch.tensor, containing input data for forward computation.

init_weights(pretrained=None)[source]

Init backbone weights.

Parameters

pretrained (str | None) – If pretrained is a string, then it initializes backbone weights by loading the pretrained checkpoint. If pretrained is None, then it follows default initializer or customized initializer in subclasses.

train(mode=True)[source]

Sets the module in training mode.

This has an effect only on certain modules. See the documentation of particular modules for details of their behavior in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc.

Parameters

mode (bool) – whether to set training mode (True) or evaluation mode (False). Default: True.

Returns

self

Return type

Module

class mmpose.models.backbones.RSN(unit_channels=256, num_stages=4, num_units=4, num_blocks=[2, 2, 2, 2], num_steps=4, norm_cfg={'type': 'BN'}, res_top_channels=64, expand_times=26)[source]

Residual Steps Network backbone. Paper ref: Cai et al. “Learning Delicate Local Representations for Multi-Person Pose Estimation” (ECCV 2020).

Parameters
  • unit_channels (int) – Number of Channels in an upsample unit. Default: 256

  • num_stages (int) – Number of stages in a multi-stage RSN. Default: 4

  • num_units (int) – Number of downsample/upsample units in a single-stage RSN. Default: 4 Note: Make sure num_units == len(self.num_blocks)

  • num_blocks (list) – Number of RSBs (Residual Steps Block) in each downsample unit. Default: [2, 2, 2, 2]

  • num_steps (int) – Number of steps in a RSB. Default: 4

  • norm_cfg (dict) – dictionary to construct and config norm layer. Default: dict(type=’BN’)

  • res_top_channels (int) – Number of channels of feature from ResNet_top. Default: 64.

  • expand_times (int) – Times by which the in_channels are expanded in RSB. Default: 26.

Examples

>>> from mmpose.models import RSN
>>> import torch
>>> self = RSN(num_stages=2,num_units=2,num_blocks=[2,2])
>>> self.eval()
>>> inputs = torch.rand(1, 3, 511, 511)
>>> level_outputs = self.forward(inputs)
>>> for level_output in level_outputs:
...     for feature in level_output:
...         print(tuple(feature.shape))
...
(1, 256, 64, 64)
(1, 256, 128, 128)
(1, 256, 64, 64)
(1, 256, 128, 128)

forward(x)[source]

Model forward function.

init_weights(pretrained=None)[source]

Initialize model weights.

class mmpose.models.backbones.RegNet(arch, in_channels=3, stem_channels=32, base_channels=32, strides=(2, 2, 2, 2), dilations=(1, 1, 1, 1), out_indices=(3,), style='pytorch', deep_stem=False, avg_down=False, frozen_stages=-1, conv_cfg=None, norm_cfg={'requires_grad': True, 'type': 'BN'}, norm_eval=False, with_cp=False, zero_init_residual=True)[source]

RegNet backbone.

More details can be found in the paper.

Parameters
  • arch (dict) – The parameters of RegNets:

    • w0 (int): initial width

    • wa (float): slope of width

    • wm (float): quantization parameter to quantize the width

    • depth (int): depth of the backbone

    • group_w (int): width of group

    • bot_mul (float): bottleneck ratio, i.e. expansion of bottleneck.

  • strides (Sequence[int]) – Strides of the first block of each stage.

  • base_channels (int) – Base channels after stem layer.

  • in_channels (int) – Number of input image channels. Default: 3.

  • dilations (Sequence[int]) – Dilation of each stage.

  • out_indices (Sequence[int]) – Output from which stages.

  • style (str) – pytorch or caffe. If set to “pytorch”, the stride-two layer is the 3x3 conv layer, otherwise the stride-two layer is the first 1x1 conv layer. Default: “pytorch”.

  • frozen_stages (int) – Stages to be frozen (all param fixed). -1 means not freezing any parameters. Default: -1.

  • norm_cfg (dict) – dictionary to construct and config norm layer. Default: dict(type=’BN’, requires_grad=True).

  • norm_eval (bool) – Whether to set norm layers to eval mode, namely, freeze running stats (mean and var). Note: Effect on Batch Norm and its variants only. Default: False.

  • with_cp (bool) – Use checkpoint or not. Using checkpoint will save some memory while slowing down the training speed. Default: False.

  • zero_init_residual (bool) – whether to use zero init for last norm layer in resblocks to let them behave as identity. Default: True.

Examples

>>> from mmpose.models import RegNet
>>> import torch
>>> self = RegNet(
...     arch=dict(
...         w0=88,
...         wa=26.31,
...         wm=2.25,
...         group_w=48,
...         depth=25,
...         bot_mul=1.0))
>>> self.eval()
>>> inputs = torch.rand(1, 3, 32, 32)
>>> level_outputs = self.forward(inputs)
>>> for level_out in level_outputs:
...     print(tuple(level_out.shape))
(1, 96, 8, 8)
(1, 192, 4, 4)
(1, 432, 2, 2)
(1, 1008, 1, 1)

adjust_width_group(widths, bottleneck_ratio, groups)[source]

Adjusts the compatibility of widths and groups.

Parameters
  • widths (list[int]) – Width of each stage.

  • bottleneck_ratio (float) – Bottleneck ratio.

  • groups (int) – number of groups in each stage

Returns

The adjusted widths and groups of each stage.

Return type

tuple(list)

forward(x)[source]

Forward function.

static generate_regnet(initial_width, width_slope, width_parameter, depth, divisor=8)[源代码]

Generates per block width from RegNet parameters.

参数
  • initial_width (int) – Initial width of the backbone.

  • width_slope (float) – Slope of the quantized linear function.

  • width_parameter (float) – Parameter used to quantize the width.

  • depth (int) – Depth of the backbone.

  • divisor (int, optional) – The divisor of channels. Defaults to 8.

返回

A list of widths of each stage and the number of stages.

返回类型

list, int

get_stages_from_blocks(widths)[源代码]

Gets widths/stage_blocks of network at each stage.

参数

widths (list[int]) – Width in each stage.

返回

width and depth of each stage

返回类型

tuple(list)

static quantize_float(number, divisor)[源代码]

Converts a float to closest non-zero int divisible by divisor.

参数
  • number (float) – Original number to be quantized.

  • divisor (int) – Divisor used to quantize the number.

返回

quantized number that is divisible by divisor.

返回类型

int
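
To make the width arithmetic above concrete, here is a minimal sketch (illustrative only, not the library implementation) of how generate_regnet and quantize_float fit together: per-block widths follow the linear rule w_j = w0 + wa * j, are snapped to the nearest power of wm relative to w0, and are finally rounded to a non-zero multiple of divisor.

>>> import numpy as np
>>> def sketch_quantize_float(number, divisor=8):
...     # round to the closest non-zero int divisible by divisor
...     return max(divisor, int(round(number / divisor)) * divisor)
>>> def sketch_generate_regnet(w0, wa, wm, depth, divisor=8):
...     raw = w0 + wa * np.arange(depth)              # linear per-block widths
...     ks = np.round(np.log(raw / w0) / np.log(wm))  # quantization exponents
...     widths = [sketch_quantize_float(w0 * wm ** k, divisor) for k in ks]
...     return widths, len(set(widths))               # per-block widths, stage count
>>> widths, num_stages = sketch_generate_regnet(88, 26.31, 2.25, 25)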

class mmpose.models.backbones.ResNeSt(depth, groups=1, width_per_group=4, radix=2, reduction_factor=4, avg_down_stride=True, **kwargs)[源代码]

ResNeSt backbone.

Please refer to the paper for details.

参数
  • depth (int) – Network depth, from {50, 101, 152, 200}.

  • groups (int) – Groups of conv2 in Bottleneck. Default: 1.

  • width_per_group (int) – Width per group of conv2 in Bottleneck. Default: 4.

  • radix (int) – Radix of SplitAttentionConv2d. Default: 2.

  • reduction_factor (int) – Reduction factor of SplitAttentionConv2d. Default: 4.

  • avg_down_stride (bool) – Whether to use average pool for stride in Bottleneck. Default: True.

  • in_channels (int) – Number of input image channels. Default: 3.

  • stem_channels (int) – Output channels of the stem layer. Default: 64.

  • num_stages (int) – Stages of the network. Default: 4.

  • strides (Sequence[int]) – Strides of the first block of each stage. Default: (1, 2, 2, 2).

  • dilations (Sequence[int]) – Dilation of each stage. Default: (1, 1, 1, 1).

  • out_indices (Sequence[int]) – Output from which stages. If only one stage is specified, a single tensor (feature map) is returned, otherwise multiple stages are specified, a tuple of tensors will be returned. Default: (3, ).

  • style (str) – pytorch or caffe. If set to “pytorch”, the stride-two layer is the 3x3 conv layer, otherwise the stride-two layer is the first 1x1 conv layer.

  • deep_stem (bool) – Replace 7x7 conv in input stem with 3 3x3 conv. Default: False.

  • avg_down (bool) – Use AvgPool instead of stride conv when downsampling in the bottleneck. Default: False.

  • frozen_stages (int) – Stages to be frozen (stop grad and set eval mode). -1 means not freezing any parameters. Default: -1.

  • conv_cfg (dict | None) – The config dict for conv layers. Default: None.

  • norm_cfg (dict) – The config dict for norm layers.

  • norm_eval (bool) – Whether to set norm layers to eval mode, namely, freeze running stats (mean and var). Note: Effect on Batch Norm and its variants only. Default: False.

  • with_cp (bool) – Use checkpoint or not. Using checkpoint will save some memory while slowing down the training speed. Default: False.

  • zero_init_residual (bool) – Whether to use zero init for last norm layer in resblocks to let them behave as identity. Default: True.
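
示例

A usage sketch, not part of the original docstring: assuming ResNeSt-50 keeps the standard ResNet-50 stage output widths (256/512/1024/2048), a 224x224 input should produce:

>>> from mmpose.models import ResNeSt
>>> import torch
>>> self = ResNeSt(depth=50, out_indices=(0, 1, 2, 3))
>>> self.eval()
>>> inputs = torch.rand(1, 3, 224, 224)
>>> level_outputs = self.forward(inputs)
>>> for level_out in level_outputs:
...     print(tuple(level_out.shape))
(1, 256, 56, 56)
(1, 512, 28, 28)
(1, 1024, 14, 14)
(1, 2048, 7, 7)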

make_res_layer(**kwargs)[源代码]

Make a ResLayer.

class mmpose.models.backbones.ResNeXt(depth, groups=32, width_per_group=4, **kwargs)[源代码]

ResNeXt backbone.

Please refer to the paper for details.

参数
  • depth (int) – Network depth, from {50, 101, 152}.

  • groups (int) – Groups of conv2 in Bottleneck. Default: 32.

  • width_per_group (int) – Width per group of conv2 in Bottleneck. Default: 4.

  • in_channels (int) – Number of input image channels. Default: 3.

  • stem_channels (int) – Output channels of the stem layer. Default: 64.

  • num_stages (int) – Stages of the network. Default: 4.

  • strides (Sequence[int]) – Strides of the first block of each stage. Default: (1, 2, 2, 2).

  • dilations (Sequence[int]) – Dilation of each stage. Default: (1, 1, 1, 1).

  • out_indices (Sequence[int]) – Output from which stages. If only one stage is specified, a single tensor (feature map) is returned, otherwise multiple stages are specified, a tuple of tensors will be returned. Default: (3, ).

  • style (str) – pytorch or caffe. If set to “pytorch”, the stride-two layer is the 3x3 conv layer, otherwise the stride-two layer is the first 1x1 conv layer.

  • deep_stem (bool) – Replace 7x7 conv in input stem with 3 3x3 conv. Default: False.

  • avg_down (bool) – Use AvgPool instead of stride conv when downsampling in the bottleneck. Default: False.

  • frozen_stages (int) – Stages to be frozen (stop grad and set eval mode). -1 means not freezing any parameters. Default: -1.

  • conv_cfg (dict | None) – The config dict for conv layers. Default: None.

  • norm_cfg (dict) – The config dict for norm layers.

  • norm_eval (bool) – Whether to set norm layers to eval mode, namely, freeze running stats (mean and var). Note: Effect on Batch Norm and its variants only. Default: False.

  • with_cp (bool) – Use checkpoint or not. Using checkpoint will save some memory while slowing down the training speed. Default: False.

  • zero_init_residual (bool) – Whether to use zero init for last norm layer in resblocks to let them behave as identity. Default: True.
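
示例

A usage sketch, not part of the original docstring: assuming the standard ResNeXt-50 (32x4d) stage output widths, which match ResNet-50 (256/512/1024/2048), a 224x224 input should produce:

>>> from mmpose.models import ResNeXt
>>> import torch
>>> self = ResNeXt(depth=50, out_indices=(0, 1, 2, 3))
>>> self.eval()
>>> inputs = torch.rand(1, 3, 224, 224)
>>> level_outputs = self.forward(inputs)
>>> for level_out in level_outputs:
...     print(tuple(level_out.shape))
(1, 256, 56, 56)
(1, 512, 28, 28)
(1, 1024, 14, 14)
(1, 2048, 7, 7)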

make_res_layer(**kwargs)[源代码]

Make a ResLayer.

class mmpose.models.backbones.ResNet(depth, in_channels=3, stem_channels=64, base_channels=64, expansion=None, num_stages=4, strides=(1, 2, 2, 2), dilations=(1, 1, 1, 1), out_indices=(3,), style='pytorch', deep_stem=False, avg_down=False, frozen_stages=-1, conv_cfg=None, norm_cfg={'requires_grad': True, 'type': 'BN'}, norm_eval=False, with_cp=False, zero_init_residual=True)[源代码]

ResNet backbone.

Please refer to the paper for details.

参数
  • depth (int) – Network depth, from {18, 34, 50, 101, 152}.

  • in_channels (int) – Number of input image channels. Default: 3.

  • stem_channels (int) – Output channels of the stem layer. Default: 64.

  • base_channels (int) – Middle channels of the first stage. Default: 64.

  • num_stages (int) – Stages of the network. Default: 4.

  • strides (Sequence[int]) – Strides of the first block of each stage. Default: (1, 2, 2, 2).

  • dilations (Sequence[int]) – Dilation of each stage. Default: (1, 1, 1, 1).

  • out_indices (Sequence[int]) – Output from which stages. If only one stage is specified, a single tensor (feature map) is returned, otherwise multiple stages are specified, a tuple of tensors will be returned. Default: (3, ).

  • style (str) – pytorch or caffe. If set to “pytorch”, the stride-two layer is the 3x3 conv layer, otherwise the stride-two layer is the first 1x1 conv layer.

  • deep_stem (bool) – Replace 7x7 conv in input stem with 3 3x3 conv. Default: False.

  • avg_down (bool) – Use AvgPool instead of stride conv when downsampling in the bottleneck. Default: False.

  • frozen_stages (int) – Stages to be frozen (stop grad and set eval mode). -1 means not freezing any parameters. Default: -1.

  • conv_cfg (dict | None) – The config dict for conv layers. Default: None.

  • norm_cfg (dict) – The config dict for norm layers.

  • norm_eval (bool) – Whether to set norm layers to eval mode, namely, freeze running stats (mean and var). Note: Effect on Batch Norm and its variants only. Default: False.

  • with_cp (bool) – Use checkpoint or not. Using checkpoint will save some memory while slowing down the training speed. Default: False.

  • zero_init_residual (bool) – Whether to use zero init for last norm layer in resblocks to let them behave as identity. Default: True.

示例

>>> from mmpose.models import ResNet
>>> import torch
>>> self = ResNet(depth=18, out_indices=(0, 1, 2, 3))
>>> self.eval()
>>> inputs = torch.rand(1, 3, 32, 32)
>>> level_outputs = self.forward(inputs)
>>> for level_out in level_outputs:
...     print(tuple(level_out.shape))
(1, 64, 8, 8)
(1, 128, 4, 4)
(1, 256, 2, 2)
(1, 512, 1, 1)
forward(x)[源代码]

Forward function.

init_weights(pretrained=None)[源代码]

Initialize the weights in backbone.

参数

pretrained (str, optional) – Path to pre-trained weights. Defaults to None.

make_res_layer(**kwargs)[源代码]

Make a ResLayer.

property norm1

the normalization layer named “norm1”

Type

nn.Module

train(mode=True)[源代码]

Convert the model into training mode.

class mmpose.models.backbones.ResNetV1d(**kwargs)[源代码]

ResNetV1d variant described in Bag of Tricks.

Compared with default ResNet(ResNetV1b), ResNetV1d replaces the 7x7 conv in the input stem with three 3x3 convs. And in the downsampling block, a 2x2 avg_pool with stride 2 is added before conv, whose stride is changed to 1.
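
示例

A usage sketch, not part of the original docstring: ResNetV1d accepts the same keyword arguments as ResNet, and with the default out_indices=(3, ) the forward pass returns a single feature map (assuming the standard ResNet-50 output width of 2048):

>>> from mmpose.models import ResNetV1d
>>> import torch
>>> self = ResNetV1d(depth=50)
>>> self.eval()
>>> inputs = torch.rand(1, 3, 224, 224)
>>> out = self.forward(inputs)
>>> print(tuple(out.shape))
(1, 2048, 7, 7)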

class mmpose.models.backbones.SCNet(depth, **kwargs)[源代码]

SCNet backbone.

Improving Convolutional Networks with Self-Calibrated Convolutions, Jiang-Jiang Liu, Qibin Hou, Ming-Ming Cheng, Changhu Wang, Jiashi Feng, IEEE CVPR, 2020. http://mftp.mmcheng.net/Papers/20cvprSCNet.pdf

参数
  • depth (int) – Depth of scnet, from {50, 101}.

  • in_channels (int) – Number of input image channels. Normally 3.

  • base_channels (int) – Number of base channels of hidden layer.

  • num_stages (int) – SCNet stages, normally 4.

  • strides (Sequence[int]) – Strides of the first block of each stage.

  • dilations (Sequence[int]) – Dilation of each stage.

  • out_indices (Sequence[int]) – Output from which stages.

  • style (str) – pytorch or caffe. If set to “pytorch”, the stride-two layer is the 3x3 conv layer, otherwise the stride-two layer is the first 1x1 conv layer.

  • deep_stem (bool) – Replace 7x7 conv in input stem with 3 3x3 conv

  • avg_down (bool) – Use AvgPool instead of stride conv when downsampling in the bottleneck.

  • frozen_stages (int) – Stages to be frozen (stop grad and set eval mode). -1 means not freezing any parameters.

  • norm_cfg (dict) – Dictionary to construct and config norm layer.

  • norm_eval (bool) – Whether to set norm layers to eval mode, namely, freeze running stats (mean and var). Note: Effect on Batch Norm and its variants only.

  • with_cp (bool) – Use checkpoint or not. Using checkpoint will save some memory while slowing down the training speed.

  • zero_init_residual (bool) – Whether to use zero init for last norm layer in resblocks to let them behave as identity.

示例

>>> from mmpose.models import SCNet
>>> import torch
>>> self = SCNet(depth=50, out_indices=(0, 1, 2, 3))
>>> self.eval()
>>> inputs = torch.rand(1, 3, 224, 224)
>>> level_outputs = self.forward(inputs)
>>> for level_out in level_outputs:
...     print(tuple(level_out.shape))
(1, 256, 56, 56)
(1, 512, 28, 28)
(1, 1024, 14, 14)
(1, 2048, 7, 7)
class mmpose.models.backbones.SEResNeXt(depth, groups=32, width_per_group=4, **kwargs)[源代码]

SEResNeXt backbone.

Please refer to the paper for details.

参数
  • depth (int) – Network depth, from {50, 101, 152}.

  • groups (int) – Groups of conv2 in Bottleneck. Default: 32.

  • width_per_group (int) – Width per group of conv2 in Bottleneck. Default: 4.

  • se_ratio (int) – Squeeze ratio in SELayer. Default: 16.

  • in_channels (int) – Number of input image channels. Default: 3.

  • stem_channels (int) – Output channels of the stem layer. Default: 64.

  • num_stages (int) – Stages of the network. Default: 4.

  • strides (Sequence[int]) – Strides of the first block of each stage. Default: (1, 2, 2, 2).

  • dilations (Sequence[int]) – Dilation of each stage. Default: (1, 1, 1, 1).

  • out_indices (Sequence[int]) – Output from which stages. If only one stage is specified, a single tensor (feature map) is returned, otherwise multiple stages are specified, a tuple of tensors will be returned. Default: (3, ).

  • style (str) – pytorch or caffe. If set to “pytorch”, the stride-two layer is the 3x3 conv layer, otherwise the stride-two layer is the first 1x1 conv layer.

  • deep_stem (bool) – Replace 7x7 conv in input stem with 3 3x3 conv. Default: False.

  • avg_down (bool) – Use AvgPool instead of stride conv when downsampling in the bottleneck. Default: False.

  • frozen_stages (int) – Stages to be frozen (stop grad and set eval mode). -1 means not freezing any parameters. Default: -1.

  • conv_cfg (dict | None) – The config dict for conv layers. Default: None.

  • norm_cfg (dict) – The config dict for norm layers.

  • norm_eval (bool) – Whether to set norm layers to eval mode, namely, freeze running stats (mean and var). Note: Effect on Batch Norm and its variants only. Default: False.

  • with_cp (bool) – Use checkpoint or not. Using checkpoint will save some memory while slowing down the training speed. Default: False.

  • zero_init_residual (bool) – Whether to use zero init for last norm layer in resblocks to let them behave as identity. Default: True.

make_res_layer(**kwargs)[源代码]

Make a ResLayer.

class mmpose.models.backbones.SEResNet(depth, se_ratio=16, **kwargs)[源代码]

SEResNet backbone.

Please refer to the paper for details.

参数
  • depth (int) – Network depth, from {50, 101, 152}.

  • se_ratio (int) – Squeeze ratio in SELayer. Default: 16.

  • in_channels (int) – Number of input image channels. Default: 3.

  • stem_channels (int) – Output channels of the stem layer. Default: 64.

  • num_stages (int) – Stages of the network. Default: 4.

  • strides (Sequence[int]) – Strides of the first block of each stage. Default: (1, 2, 2, 2).

  • dilations (Sequence[int]) – Dilation of each stage. Default: (1, 1, 1, 1).

  • out_indices (Sequence[int]) – Output from which stages. If only one stage is specified, a single tensor (feature map) is returned, otherwise multiple stages are specified, a tuple of tensors will be returned. Default: (3, ).

  • style (str) – pytorch or caffe. If set to “pytorch”, the stride-two layer is the 3x3 conv layer, otherwise the stride-two layer is the first 1x1 conv layer.

  • deep_stem (bool) – Replace 7x7 conv in input stem with 3 3x3 conv. Default: False.

  • avg_down (bool) – Use AvgPool instead of stride conv when downsampling in the bottleneck. Default: False.

  • frozen_stages (int) – Stages to be frozen (stop grad and set eval mode). -1 means not freezing any parameters. Default: -1.

  • conv_cfg (dict | None) – The config dict for conv layers. Default: None.

  • norm_cfg (dict) – The config dict for norm layers.

  • norm_eval (bool) – Whether to set norm layers to eval mode, namely, freeze running stats (mean and var). Note: Effect on Batch Norm and its variants only. Default: False.

  • with_cp (bool) – Use checkpoint or not. Using checkpoint will save some memory while slowing down the training speed. Default: False.

  • zero_init_residual (bool) – Whether to use zero init for last norm layer in resblocks to let them behave as identity. Default: True.

示例

>>> from mmpose.models import SEResNet
>>> import torch
>>> self = SEResNet(depth=50, out_indices=(0, 1, 2, 3))
>>> self.eval()
>>> inputs = torch.rand(1, 3, 224, 224)
>>> level_outputs = self.forward(inputs)
>>> for level_out in level_outputs:
...     print(tuple(level_out.shape))
(1, 256, 56, 56)
(1, 512, 28, 28)
(1, 1024, 14, 14)
(1, 2048, 7, 7)
make_res_layer(**kwargs)[源代码]

Make a ResLayer.

class mmpose.models.backbones.ShuffleNetV1(groups=3, widen_factor=1.0, out_indices=(2,), frozen_stages=-1, conv_cfg=None, norm_cfg={'type': 'BN'}, act_cfg={'type': 'ReLU'}, norm_eval=False, with_cp=False)[源代码]

ShuffleNetV1 backbone.

参数
  • groups (int, optional) – The number of groups to be used in grouped 1x1 convolutions in each ShuffleUnit. Default: 3.

  • widen_factor (float, optional) – Width multiplier - adjusts the number of channels in each layer by this amount. Default: 1.0.

  • out_indices (Sequence[int]) – Output from which stages. Default: (2, )

  • frozen_stages (int) – Stages to be frozen (all param fixed). Default: -1, which means not freezing any parameters.

  • conv_cfg (dict) – Config dict for convolution layer. Default: None, which means using conv2d.

  • norm_cfg (dict) – Config dict for normalization layer. Default: dict(type=’BN’).

  • act_cfg (dict) – Config dict for activation layer. Default: dict(type=’ReLU’).

  • norm_eval (bool) – Whether to set norm layers to eval mode, namely, freeze running stats (mean and var). Note: Effect on Batch Norm and its variants only. Default: False.

  • with_cp (bool) – Use checkpoint or not. Using checkpoint will save some memory while slowing down the training speed. Default: False.
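
示例

A usage sketch, not part of the original docstring: with the default groups=3 and widen_factor=1.0, the last stage has 960 channels (per the ShuffleNetV1 paper), and the default out_indices=(2, ) is assumed to return that single feature map as one tensor, as with the ResNet-style backbones above:

>>> from mmpose.models import ShuffleNetV1
>>> import torch
>>> self = ShuffleNetV1(groups=3)
>>> self.eval()
>>> inputs = torch.rand(1, 3, 224, 224)
>>> out = self.forward(inputs)
>>> print(tuple(out.shape))
(1, 960, 7, 7)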

forward(x)[源代码]

Forward function.

参数

x (Tensor | tuple[Tensor]) – x could be a torch.Tensor or a tuple of torch.Tensor, containing input data for forward computation.

init_weights(pretrained=None)[源代码]

Init backbone weights.

参数

pretrained (str | None) – If pretrained is a string, then it initializes backbone weights by loading the pretrained checkpoint. If pretrained is None, then it follows default initializer or customized initializer in subclasses.

make_layer(out_channels, num_blocks, first_block=False)[源代码]

Stack ShuffleUnit blocks to make a layer.

参数
  • out_channels (int) – out_channels of the block.

  • num_blocks (int) – Number of blocks.

  • first_block (bool, optional) – Whether is the first ShuffleUnit of a sequential ShuffleUnits. Default: False, which means not using the grouped 1x1 convolution.

train(mode=True)[源代码]

Sets the module in training mode.

This has any effect only on certain modules. See documentations of particular modules for details of their behaviors in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc.

参数

mode (bool) – whether to set training mode (True) or evaluation mode (False). Default: True.

返回

self

返回类型

Module

class mmpose.models.backbones.ShuffleNetV2(widen_factor=1.0, out_indices=(3,), frozen_stages=-1, conv_cfg=None, norm_cfg={'type': 'BN'}, act_cfg={'type': 'ReLU'}, norm_eval=False, with_cp=False)[源代码]

ShuffleNetV2 backbone.

参数
  • widen_factor (float) – Width multiplier - adjusts the number of channels in each layer by this amount. Default: 1.0.

  • out_indices (Sequence[int]) – Output from which stages. Default: (3, ).

  • frozen_stages (int) – Stages to be frozen (all param fixed). Default: -1, which means not freezing any parameters.

  • conv_cfg (dict) – Config dict for convolution layer. Default: None, which means using conv2d.

  • norm_cfg (dict) – Config dict for normalization layer. Default: dict(type=’BN’).

  • act_cfg (dict) – Config dict for activation layer. Default: dict(type=’ReLU’).

  • norm_eval (bool) – Whether to set norm layers to eval mode, namely, freeze running stats (mean and var). Note: Effect on Batch Norm and its variants only. Default: False.

  • with_cp (bool) – Use checkpoint or not. Using checkpoint will save some memory while slowing down the training speed. Default: False.
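
示例

A usage sketch, not part of the original docstring: with widen_factor=1.0 the final 1x1 conv layer has 1024 channels (per the ShuffleNetV2 paper), and the default out_indices=(3, ) is assumed to return that single feature map as one tensor:

>>> from mmpose.models import ShuffleNetV2
>>> import torch
>>> self = ShuffleNetV2(widen_factor=1.0)
>>> self.eval()
>>> inputs = torch.rand(1, 3, 224, 224)
>>> out = self.forward(inputs)
>>> print(tuple(out.shape))
(1, 1024, 7, 7)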

forward(x)[源代码]

Forward function.

参数

x (Tensor | tuple[Tensor]) – x could be a torch.Tensor or a tuple of torch.Tensor, containing input data for forward computation.

init_weights(pretrained=None)[源代码]

Init backbone weights.

参数

pretrained (str | None) – If pretrained is a string, then it initializes backbone weights by loading the pretrained checkpoint. If pretrained is None, then it follows default initializer or customized initializer in subclasses.

train(mode=True)[源代码]

Sets the module in training mode.

This has any effect only on certain modules. See documentations of particular modules for details of their behaviors in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc.

参数

mode (bool) – whether to set training mode (True) or evaluation mode (False). Default: True.

返回

self

返回类型

Module

class mmpose.models.backbones.TCN(in_channels, stem_channels=1024, num_blocks=2, kernel_sizes=(3, 3, 3), dropout=0.25, causal=False, residual=True, use_stride_conv=False, conv_cfg={'type': 'Conv1d'}, norm_cfg={'type': 'BN1d'}, max_norm=None)[源代码]

TCN backbone.

Temporal Convolutional Networks. More details can be found in the paper.

参数
  • in_channels (int) – Number of input channels, which equals to num_keypoints * num_features.

  • stem_channels (int) – Number of feature channels. Default: 1024.

  • num_blocks (int) – Number of basic temporal convolutional blocks. Default: 2.

  • kernel_sizes (Sequence[int]) – Sizes of the convolving kernel of each basic block. Default: (3, 3, 3).

  • dropout (float) – Dropout rate. Default: 0.25.

  • causal (bool) – Use causal convolutions instead of symmetric convolutions (for real-time applications). Default: False.

  • residual (bool) – Use residual connection. Default: True.

  • use_stride_conv (bool) – Use TCN backbone optimized for single-frame batching, i.e. where batches have input length = receptive field, and output length = 1. This implementation replaces dilated convolutions with strided convolutions to avoid generating unused intermediate results. The weights are interchangeable with the reference implementation. Default: False

  • conv_cfg (dict) – dictionary to construct and config conv layer. Default: dict(type=’Conv1d’).

  • norm_cfg (dict) – dictionary to construct and config norm layer. Default: dict(type=’BN1d’).

  • max_norm (float|None) – if not None, the weight of convolution layers will be clipped to have a maximum norm of max_norm.

示例

>>> from mmpose.models import TCN
>>> import torch
>>> self = TCN(in_channels=34)
>>> self.eval()
>>> inputs = torch.rand(1, 34, 243)
>>> level_outputs = self.forward(inputs)
>>> for level_out in level_outputs:
...     print(tuple(level_out.shape))
(1, 1024, 235)
(1, 1024, 217)
forward(x)[源代码]

Forward function.

init_weights(pretrained=None)[源代码]

Initialize the weights.

class mmpose.models.backbones.VGG(depth, num_classes=-1, num_stages=5, dilations=(1, 1, 1, 1, 1), out_indices=None, frozen_stages=-1, conv_cfg=None, norm_cfg=None, act_cfg={'type': 'ReLU'}, norm_eval=False, ceil_mode=False, with_last_pool=True)[源代码]

VGG backbone.

参数
  • depth (int) – Depth of vgg, from {11, 13, 16, 19}.

  • with_norm (bool) – Use BatchNorm or not.

  • num_classes (int) – number of classes for classification.

  • num_stages (int) – VGG stages, normally 5.

  • dilations (Sequence[int]) – Dilation of each stage.

  • out_indices (Sequence[int]) – Output from which stages. If only one stage is specified, a single tensor (feature map) is returned, otherwise multiple stages are specified, a tuple of tensors will be returned. When it is None, the default behavior depends on whether num_classes is specified. If num_classes <= 0, the default value is (4, ), outputting the last feature map before classifier. If num_classes > 0, the default value is (5, ), outputting the classification score. Default: None.

  • frozen_stages (int) – Stages to be frozen (all param fixed). -1 means not freezing any parameters.

  • norm_eval (bool) – Whether to set norm layers to eval mode, namely, freeze running stats (mean and var). Note: Effect on Batch Norm and its variants only. Default: False.

  • ceil_mode (bool) – Whether to use ceil_mode of MaxPool. Default: False.

  • with_last_pool (bool) – Whether to keep the last pooling before classifier. Default: True.
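
示例

A usage sketch, not part of the original docstring: with depth=16 and the default num_classes=-1, out_indices falls back to (4, ), i.e. the last feature map before the classifier, which for a 224x224 input should be the standard VGG-16 512x7x7 map:

>>> from mmpose.models import VGG
>>> import torch
>>> self = VGG(depth=16)
>>> self.eval()
>>> inputs = torch.rand(1, 3, 224, 224)
>>> out = self.forward(inputs)
>>> print(tuple(out.shape))
(1, 512, 7, 7)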

forward(x)[源代码]

Forward function.

参数

x (Tensor | tuple[Tensor]) – x could be a torch.Tensor or a tuple of torch.Tensor, containing input data for forward computation.

init_weights(pretrained=None)[源代码]

Init backbone weights.

参数

pretrained (str | None) – If pretrained is a string, then it initializes backbone weights by loading the pretrained checkpoint. If pretrained is None, then it follows default initializer or customized initializer in subclasses.

train(mode=True)[源代码]

Sets the module in training mode.

This has any effect only on certain modules. See documentations of particular modules for details of their behaviors in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc.

参数

mode (bool) – whether to set training mode (True) or evaluation mode (False). Default: True.

返回

self

返回类型

Module

class mmpose.models.backbones.ViPNAS_ResNet(depth, in_channels=3, num_stages=4, strides=(1, 2, 2, 2), dilations=(1, 1, 1, 1), out_indices=(3,), style='pytorch', deep_stem=False, avg_down=False, frozen_stages=-1, conv_cfg=None, norm_cfg={'requires_grad': True, 'type': 'BN'}, norm_eval=False, with_cp=False, zero_init_residual=True, wid=[48, 80, 160, 304, 608], expan=[None, 1, 1, 1, 1], dep=[None, 4, 6, 7, 3], ks=[7, 3, 5, 5, 5], group=[None, 16, 16, 16, 16], att=[None, True, False, True, True])[源代码]

ViPNAS ResNet backbone.

ViPNAS: Efficient Video Pose Estimation via Neural Architecture Search. More details can be found in the paper.

参数
  • depth (int) – Network depth, from {18, 34, 50, 101, 152}.

  • in_channels (int) – Number of input image channels. Default: 3.

  • num_stages (int) – Stages of the network. Default: 4.

  • strides (Sequence[int]) – Strides of the first block of each stage. Default: (1, 2, 2, 2).

  • dilations (Sequence[int]) – Dilation of each stage. Default: (1, 1, 1, 1).

  • out_indices (Sequence[int]) – Output from which stages. If only one stage is specified, a single tensor (feature map) is returned, otherwise multiple stages are specified, a tuple of tensors will be returned. Default: (3, ).

  • style (str) – pytorch or caffe. If set to “pytorch”, the stride-two layer is the 3x3 conv layer, otherwise the stride-two layer is the first 1x1 conv layer.

  • deep_stem (bool) – Replace 7x7 conv in input stem with 3 3x3 conv. Default: False.

  • avg_down (bool) – Use AvgPool instead of stride conv when downsampling in the bottleneck. Default: False.

  • frozen_stages (int) – Stages to be frozen (stop grad and set eval mode). -1 means not freezing any parameters. Default: -1.

  • conv_cfg (dict | None) – The config dict for conv layers. Default: None.

  • norm_cfg (dict) – The config dict for norm layers.

  • norm_eval (bool) – Whether to set norm layers to eval mode, namely, freeze running stats (mean and var). Note: Effect on Batch Norm and its variants only. Default: False.

  • with_cp (bool) – Use checkpoint or not. Using checkpoint will save some memory while slowing down the training speed. Default: False.

  • zero_init_residual (bool) – Whether to use zero init for last norm layer in resblocks to let them behave as identity. Default: True.

  • wid (list(int)) – searched width config for each stage.

  • expan (list(int)) – searched expansion ratio config for each stage.

  • dep (list(int)) – searched depth config for each stage.

  • ks (list(int)) – searched kernel size config for each stage.

  • group (list(int)) – searched group number config for each stage.

  • att (list(bool)) – searched attention config for each stage.
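
示例

A usage sketch, not part of the original docstring: with the default searched config the last stage width is wid[-1] = 608 and expan[-1] = 1, so the default out_indices=(3, ) should yield a single 608-channel map at 1/32 resolution:

>>> from mmpose.models import ViPNAS_ResNet
>>> import torch
>>> self = ViPNAS_ResNet(depth=50)
>>> self.eval()
>>> inputs = torch.rand(1, 3, 224, 224)
>>> out = self.forward(inputs)
>>> print(tuple(out.shape))
(1, 608, 7, 7)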

forward(x)[源代码]

Forward function.

init_weights(pretrained=None)[源代码]

Init backbone weights.

参数

pretrained (str | None) – If pretrained is a string, then it initializes backbone weights by loading the pretrained checkpoint. If pretrained is None, then it follows default initializer or customized initializer in subclasses.

make_res_layer(**kwargs)[源代码]

Make a ViPNAS ResLayer.

property norm1

the normalization layer named “norm1”

Type

nn.Module

train(mode=True)[源代码]

Convert the model into training mode.

necks

class mmpose.models.necks.GlobalAveragePooling[源代码]

Global Average Pooling neck.

Note that we use view to remove extra channel after pooling. We do not use squeeze as it will also remove the batch dimension when the tensor has a batch dimension of size 1, which can lead to unexpected errors.
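
The pitfall described above can be reproduced directly in plain PyTorch:

>>> import torch
>>> x = torch.rand(1, 512, 1, 1)           # batch size 1 after pooling
>>> tuple(x.squeeze().shape)               # squeeze drops the batch dim too
(512,)
>>> tuple(x.view(x.size(0), -1).shape)     # view keeps the batch dim
(1, 512)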

forward(inputs)[源代码]

Defines the computation performed at every call.

Should be overridden by all subclasses.

注解

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

detectors

class mmpose.models.detectors.AssociativeEmbedding(backbone, keypoint_head=None, train_cfg=None, test_cfg=None, pretrained=None, loss_pose=None)[源代码]

Associative embedding pose detectors.

参数
  • backbone (dict) – Backbone modules to extract feature.

  • keypoint_head (dict) – Keypoint head to process feature.

  • train_cfg (dict) – Config for training. Default: None.

  • test_cfg (dict) – Config for testing. Default: None.

  • pretrained (str) – Path to the pretrained models.

  • loss_pose (None) – Deprecated arguments. Please use loss_keypoint for heads instead.

forward(img=None, targets=None, masks=None, joints=None, img_metas=None, return_loss=True, return_heatmap=False, **kwargs)[源代码]

Calls either forward_train or forward_test depending on whether return_loss is True.

注解

batch_size: N num_keypoints: K num_img_channel: C img_width: imgW img_height: imgH heatmaps width: W heatmaps height: H max_num_people: M

参数
  • img (torch.Tensor[NxCximgHximgW]) – Input image.

  • targets (List(torch.Tensor[NxKxHxW])) – Multi-scale target heatmaps.

  • masks (List(torch.Tensor[NxHxW])) – Masks of multi-scale target heatmaps

  • joints (List(torch.Tensor[NxMxKx2])) – Joints of multi-scale target heatmaps for ae loss

  • img_metas (dict) – Information about val&test By default this includes: - “image_file”: image path - “aug_data”: input - “test_scale_factor”: test scale factor - “base_size”: base size of input - “center”: center of image - “scale”: scale of image - “flip_index”: flip index of keypoints

  • return_loss (bool) – Option to return loss. ‘return_loss=True’ for training, ‘return_loss=False’ for validation & test.

  • return_heatmap (bool) – Option to return heatmap.

返回

if ‘return_loss’ is true, then return losses. Otherwise, return predicted poses, scores, image paths and heatmaps.

返回类型

dict|tuple

forward_dummy(img)[源代码]

Used for computing network FLOPs.

See tools/get_flops.py.

参数

img (torch.Tensor) – Input image.

返回

Outputs.

返回类型

Tensor

forward_test(img, img_metas, return_heatmap=False, **kwargs)[源代码]

Inference the bottom-up model.

注解

batch_size: N (currently only batch_size = 1 is supported) num_img_channel: C img_width: imgW img_height: imgH

参数
  • flip_index (List(int)) – Flip index of keypoints.

  • aug_data (List(Tensor[NxCximgHximgW])) – Multi-scale image

  • test_scale_factor (List(float)) – Multi-scale factor

  • base_size (Tuple(int)) – Base size of image when scale is 1

  • center (np.ndarray) – center of image

  • scale (np.ndarray) – the scale of image

forward_train(img, targets, masks, joints, img_metas, **kwargs)[源代码]

Forward the bottom-up model and calculate the loss.

注解

batch_size: N num_keypoints: K num_img_channel: C img_width: imgW img_height: imgH heatmaps width: W heatmaps height: H max_num_people: M

参数
  • img (torch.Tensor[NxCximgHximgW]) – Input image.

  • targets (List(torch.Tensor[NxKxHxW])) – Multi-scale target heatmaps.

  • masks (List(torch.Tensor[NxHxW])) – Masks of multi-scale target heatmaps

  • joints (List(torch.Tensor[NxMxKx2])) – Joints of multi-scale target heatmaps for ae loss

  • img_metas (dict) – Information about val&test By default this includes: - “image_file”: image path - “aug_data”: input - “test_scale_factor”: test scale factor - “base_size”: base size of input - “center”: center of image - “scale”: scale of image - “flip_index”: flip index of keypoints

返回

The total loss for bottom-up

返回类型

dict

init_weights(pretrained=None)[源代码]

Weight initialization for model.

show_result(img, result, skeleton=None, kpt_score_thr=0.3, bbox_color=None, pose_kpt_color=None, pose_limb_color=None, radius=4, thickness=1, font_scale=0.5, win_name='', show=False, show_keypoint_weight=False, wait_time=0, out_file=None)[源代码]

Draw result over img.

参数
  • img (str or Tensor) – The image to be displayed.

  • result (list[dict]) – The results to draw over img (bbox_result, pose_result).

  • skeleton (list[list]) – The connection of keypoints.

  • kpt_score_thr (float, optional) – Minimum score of keypoints to be shown. Default: 0.3.

  • pose_kpt_color (np.array[Nx3]) – Color of N keypoints. If None, do not draw keypoints.

  • pose_limb_color (np.array[Mx3]) – Color of M limbs. If None, do not draw limbs.

  • radius (int) – Radius of circles.

  • thickness (int) – Thickness of lines.

  • font_scale (float) – Font scales of texts.

  • win_name (str) – The window name.

  • show (bool) – Whether to show the image. Default: False.

  • show_keypoint_weight (bool) – Whether to change the transparency using the predicted confidence scores of keypoints.

  • wait_time (int) – Value of waitKey param. Default: 0.

  • out_file (str or None) – The filename to write the image. Default: None.

返回

Visualized image, only if not show or out_file.

返回类型

Tensor

property with_keypoint

Check if has keypoint_head.

class mmpose.models.detectors.Interhand3D(backbone, neck=None, keypoint_head=None, train_cfg=None, test_cfg=None, pretrained=None, loss_pose=None)[源代码]

Top-down interhand 3D pose detector. paper ref: Gyeongsik Moon. “InterHand2.6M: A Dataset and Baseline for 3D Interacting Hand Pose Estimation from a Single RGB Image”.

A child class of TopDown detector.

forward(img, target=None, target_weight=None, img_metas=None, return_loss=True, **kwargs)[源代码]

Calls either forward_train or forward_test depending on whether return_loss=True. Note this setting will change the expected inputs. When return_loss=True, img and img_meta are single-nested (i.e. Tensor and List[dict]), and when return_loss=False, img and img_meta should be double nested (i.e. List[Tensor], List[List[dict]]), with the outer list indicating test time augmentations.

注解

batch_size: N num_keypoints: K num_img_channel: C (Default: 3) img height: imgH img width: imgW heatmaps height: H heatmaps width: W

参数
  • img (torch.Tensor[NxCximgHximgW]) – Input images.

  • target (list[torch.Tensor]) – Target heatmaps, relative hand root depth and hand type.

  • target_weight (list[torch.Tensor]) – Weights for target heatmaps, relative hand root depth and hand type.

  • img_metas (list(dict)) – Information about data augmentation By default this includes: - “image_file: path to the image file - “center”: center of the bbox - “scale”: scale of the bbox - “rotation”: rotation of the bbox - “bbox_score”: score of bbox - “heatmap3d_depth_bound”: depth bound of hand keypoint 3D heatmap - “root_depth_bound”: depth bound of relative root depth 1D heatmap

  • return_loss (bool) – Option to return loss. return loss=True for training, return loss=False for validation & test.

返回

if return loss is true, then return losses.

Otherwise, return predicted poses, boxes, image paths, heatmaps, relative hand root depth and hand type.

返回类型

dict|tuple

forward_test(img, img_metas, **kwargs)[源代码]

Defines the computation performed at every call when testing.

show_result(result, img=None, skeleton=None, kpt_score_thr=0.3, radius=8, bbox_color='green', thickness=2, pose_kpt_color=None, pose_limb_color=None, vis_height=400, num_instances=-1, win_name='', show=False, wait_time=0, out_file=None)[源代码]

Visualize 3D pose estimation results.

参数
  • result (list[dict]) –

    The pose estimation results containing:

    • “keypoints_3d” ([K,4]): 3D keypoints

    • “keypoints” ([K,3] or [T,K,3]): Optional for visualizing 2D inputs. If a sequence is given, only the last frame will be used for visualization.

    • “bbox” ([4,] or [T,4]): Optional for visualizing 2D inputs.

    • “title” (str): Title for the subplot.

  • img (str or Tensor) – Optional. The image to visualize 2D inputs on.

  • skeleton (list of [idx_i,idx_j]) – Skeleton described by a list of limbs, each is a pair of joint indices.

  • kpt_score_thr (float, optional) – Minimum score of keypoints to be shown. Default: 0.3.

  • radius (int) – Radius of circles.

  • bbox_color (str or tuple or Color) – Color of bbox lines.

  • thickness (int) – Thickness of lines.

  • pose_kpt_color (np.array[Nx3]) – Color of N keypoints. If None, do not draw keypoints.

  • pose_limb_color (np.array[Mx3]) – Color of M limbs. If None, do not draw limbs.

  • vis_height (int) – The image height of the visualization. The width will be N*vis_height depending on the number of visualized items.

  • num_instances (int) – Number of instances to be shown in 3D. If smaller than 0, all the instances in the pose_result will be shown. Otherwise, pad or truncate the pose_result to a length of num_instances.

  • win_name (str) – The window name.

  • show (bool) – Whether to show the image. Default: False.

  • wait_time (int) – Value of waitKey param. Default: 0.

  • out_file (str or None) – The filename to write the image. Default: None.

返回

Visualized img, only if not show or out_file.

返回类型

Tensor

class mmpose.models.detectors.MultiTask(backbone, heads, necks=None, head2neck=None, pretrained=None)[源代码]

Multi-task detectors.

参数
  • backbone (dict) – Backbone modules to extract feature.

  • heads (List[dict]) – heads to output predictions.

  • necks (List[dict] | None) – necks to process feature.

  • head2neck (dict{int: int}) – head index to neck index (see the sketch after this list).

  • pretrained (str) – Path to the pretrained models.
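
A hypothetical sketch of the head2neck mapping (the indices refer to positions in the heads and necks lists; whether unmapped heads bypass the necks is an assumption here, not documented behavior):

>>> # head 0 reads the output of neck 0; head 1 has no neck assigned
>>> head2neck = {0: 0}
>>> head2neck.get(1) is None
True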

forward(img, target=None, target_weight=None, img_metas=None, return_loss=True, **kwargs)[源代码]

Calls either forward_train or forward_test depending on whether return_loss=True. Note this setting will change the expected inputs. When return_loss=True, img and img_meta are single-nested (i.e. Tensor and List[dict]), and when return_loss=False, img and img_meta should be double nested (i.e. List[Tensor], List[List[dict]]), with the outer list indicating test time augmentations.

注解

batch_size: N num_keypoints: K num_img_channel: C (Default: 3) img height: imgH img width: imgW heatmaps height: H heatmaps width: W

参数
  • img (torch.Tensor[NxCximgHximgW]) – Input images.

  • target (List[torch.Tensor]) – Targets.

  • target_weight (List[torch.Tensor]) – Weights.

  • img_metas (list(dict)) – Information about data augmentation By default this includes: - “image_file: path to the image file - “center”: center of the bbox - “scale”: scale of the bbox - “rotation”: rotation of the bbox - “bbox_score”: score of bbox

  • return_loss (bool) – Option to return loss. return loss=True for training, return loss=False for validation & test.

返回

if return loss is true, then return losses. Otherwise, return predicted poses, boxes, image paths and heatmaps.

返回类型

dict|tuple

forward_dummy(img)[源代码]

Used for computing network FLOPs.

See tools/get_flops.py.

参数

img (torch.Tensor) – Input image.

返回

Outputs.

返回类型

List[Tensor]

forward_test(img, img_metas, **kwargs)[源代码]

Defines the computation performed at every call when testing.

forward_train(img, target, target_weight, img_metas, **kwargs)[源代码]

Defines the computation performed at every call when training.

init_weights(pretrained=None)[源代码]

Weight initialization for model.

property with_necks

Check if has necks.

class mmpose.models.detectors.ParametricMesh(backbone, mesh_head, smpl, disc=None, loss_gan=None, loss_mesh=None, train_cfg=None, test_cfg=None, pretrained=None)[源代码]

Model-based 3D human mesh detector. Take a single color image as input and output 3D joints, SMPL parameters and camera parameters.

参数
  • backbone (dict) – Backbone modules to extract feature.

  • mesh_head (dict) – Mesh head to process feature.

  • smpl (dict) – Config for SMPL model.

  • disc (dict) – Discriminator for SMPL parameters. Default: None.

  • loss_gan (dict) – Config for adversarial loss. Default: None.

  • loss_mesh (dict) – Config for mesh loss. Default: None.

  • train_cfg (dict) – Config for training. Default: None.

  • test_cfg (dict) – Config for testing. Default: None.

  • pretrained (str) – Path to the pretrained models.

forward(img, img_metas=None, return_loss=False, **kwargs)[源代码]

Forward function.

Calls either forward_train or forward_test depending on whether return_loss=True.

注解

batch_size: N num_img_channel: C (Default: 3) img height: imgH img width: imgW

参数
  • img (torch.Tensor[N x C x imgH x imgW]) – Input images.

  • img_metas (list(dict)) – Information about data augmentation By default this includes: - “image_file: path to the image file - “center”: center of the bbox - “scale”: scale of the bbox - “rotation”: rotation of the bbox - “bbox_score”: score of bbox

  • return_loss (bool) – Option to return loss. return loss=True for training, return loss=False for validation & test.

返回

Return predicted 3D joints, SMPL parameters, boxes and image paths.

forward_dummy(img)[源代码]

Used for computing network FLOPs.

See tools/get_flops.py.

参数

img (torch.Tensor) – Input image.

返回

Outputs.

返回类型

Tensor

forward_test(img, img_metas, **kwargs)[源代码]

Defines the computation performed at every call when testing.

forward_train(*args, **kwargs)[源代码]

Forward function for training.

For ParametricMesh, we do not use this interface.

get_3d_joints_from_mesh(vertices)[源代码]

Get 3D joints from 3D mesh using predefined joints regressor.

init_weights(pretrained=None)[源代码]

Weight initialization for model.

show_result(**kwargs)[源代码]

Visualize the results.

train_step(data_batch, optimizer, **kwargs)[源代码]

Train step function.

In this function, the detector will finish the train step following the pipeline:

1. get fake and real SMPL parameters

2. optimize discriminator (if any)

3. optimize generator

If self.train_cfg.disc_step > 1, the train step will contain multiple iterations for optimizing discriminator with different input data and only one iteration for optimizing generator after disc_step iterations for discriminator.
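
A schematic of this alternating schedule in plain PyTorch (illustrative only: G, D and the loss terms below are stand-ins, not the real SMPL generator, discriminator or adversarial losses):

>>> import torch
>>> G, D = torch.nn.Linear(4, 4), torch.nn.Linear(4, 1)
>>> optimizer = {'generator': torch.optim.Adam(G.parameters(), lr=1e-4),
...              'discriminator': torch.optim.Adam(D.parameters(), lr=1e-4)}
>>> disc_step = 2
>>> for _ in range(disc_step):                 # optimize the discriminator first
...     optimizer['discriminator'].zero_grad()
...     d_loss = D(torch.rand(8, 4)).mean()    # stand-in discriminator loss
...     d_loss.backward()
...     optimizer['discriminator'].step()
>>> optimizer['generator'].zero_grad()         # then one generator update
>>> g_loss = D(G(torch.rand(8, 4))).mean()     # stand-in generator loss
>>> g_loss.backward()
>>> optimizer['generator'].step()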

参数
  • data_batch (torch.Tensor) – Batch of data as input.

  • optimizer (dict[torch.optim.Optimizer]) – Dict with optimizers for generator and discriminator (if have).

返回

Dict with loss, information for logger, the number of samples.

返回类型

outputs (dict)

val_step(data_batch, **kwargs)[源代码]

Forward function for evaluation.

参数

data_batch (dict) – Contain data for forward.

返回

Contain the results from model.

返回类型

dict

class mmpose.models.detectors.PoseLifter(backbone, neck=None, keypoint_head=None, traj_backbone=None, traj_neck=None, traj_head=None, loss_semi=None, train_cfg=None, test_cfg=None, pretrained=None)[源代码]

Pose lifter that lifts 2D pose to 3D pose.

The basic model is a pose model that predicts root-relative pose. If traj_head is not None, a trajectory model that predicts absolute root joint position is also built.

参数
  • backbone (dict) – Config for the backbone of pose model.

  • neck (dict|None) – Config for the neck of pose model.

  • keypoint_head (dict|None) – Config for the head of pose model.

  • traj_backbone (dict|None) – Config for the backbone of trajectory model. If traj_backbone is None and traj_head is not None, trajectory model will share backbone with pose model.

  • traj_neck (dict|None) – Config for the neck of trajectory model.

  • traj_head (dict|None) – Config for the head of trajectory model.

  • loss_semi (dict|None) – Config for semi-supervision loss.

  • train_cfg (dict|None) – Config for keypoint head during training.

  • test_cfg (dict|None) – Config for keypoint head during testing.

  • pretrained (str|None) – Path to pretrained weights.
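
A hypothetical config sketch (channel sizes are illustrative assumptions) wiring PoseLifter to the TCN backbone and TemporalRegressionHead documented elsewhere in this section:

>>> pose_lifter_cfg = dict(
...     backbone=dict(type='TCN', in_channels=2 * 17),  # 17 keypoints x (x, y)
...     keypoint_head=dict(
...         type='TemporalRegressionHead',
...         in_channels=1024,   # matches TCN's default stem_channels
...         num_joints=17))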

forward(input, target=None, target_weight=None, metas=None, return_loss=True, **kwargs)[源代码]

Calls either forward_train or forward_test depending on whether return_loss=True.

注解

batch_size: N num_input_keypoints: Ki input_keypoint_dim: Ci input_sequence_len: Ti num_output_keypoints: Ko output_keypoint_dim: Co output_sequence_len: To

参数
  • input (torch.Tensor[NxKixCixTi]) – Input keypoint coordinates.

  • target (torch.Tensor[NxKoxCoxTo]) – Output keypoint coordinates. Defaults to None.

  • target_weight (torch.Tensor[NxKox1]) – Weights across different joint types. Defaults to None.

  • metas (list(dict)) – Information about data augmentation

  • return_loss (bool) – Option to return loss. return loss=True for training, return loss=False for validation & test.

返回

if return_loss is true, return losses. Otherwise, return predicted poses.

返回类型

dict|Tensor

forward_dummy(input)[源代码]

Used for computing network FLOPs.

See tools/get_flops.py.

参数

input (torch.Tensor) – Input pose

返回

Model output

返回类型

Tensor

forward_test(input, metas, **kwargs)[源代码]

Defines the computation performed at every call when testing.

forward_train(input, target, target_weight, metas, **kwargs)[源代码]

Defines the computation performed at every call when training.

init_weights(pretrained=None)[源代码]

Weight initialization for model.

show_result(result, img=None, skeleton=None, pose_kpt_color=None, pose_limb_color=None, radius=8, thickness=2, vis_height=400, num_instances=-1, win_name='', show=False, wait_time=0, out_file=None)[源代码]

Visualize 3D pose estimation results.

参数
  • result (list[dict]) –

    The pose estimation results containing:

    • “keypoints_3d” ([K,4]): 3D keypoints

    • “keypoints” ([K,3] or [T,K,3]): Optional for visualizing 2D inputs. If a sequence is given, only the last frame will be used for visualization.

    • “bbox” ([4,] or [T,4]): Optional for visualizing 2D inputs.

    • “title” (str): Title for the subplot.

  • img (str or Tensor) – Optional. The image to visualize 2D inputs on.

  • skeleton (list of [idx_i,idx_j]) – Skeleton described by a list of limbs, each is a pair of joint indices.

  • pose_kpt_color (np.array[Nx3]) – Color of N keypoints. If None, do not draw keypoints.

  • pose_limb_color (np.array[Mx3]) – Color of M limbs. If None, do not draw limbs.

  • radius (int) – Radius of circles.

  • thickness (int) – Thickness of lines.

  • vis_height (int) – The image height of the visualization. The width will be N*vis_height depending on the number of visualized items.

  • num_instances (int) – Number of instances to be shown in 3D. If smaller than 0, all the instances in the pose_result will be shown. Otherwise, pad or truncate the pose_result to a length of num_instances.

  • win_name (str) – The window name.

  • wait_time (int) – Value of waitKey param. Default: 0.

  • out_file (str or None) – The filename to write the image. Default: None.

返回

Visualized img, only if not show or out_file.

返回类型

Tensor

property with_keypoint

Check if has keypoint_head.

property with_neck

Check if has keypoint_neck.

property with_traj

Check if has trajectory_head.

property with_traj_backbone

Check if has trajectory_backbone.

property with_traj_neck

Check if has trajectory_neck.

class mmpose.models.detectors.TopDown(backbone, neck=None, keypoint_head=None, train_cfg=None, test_cfg=None, pretrained=None, loss_pose=None)[源代码]

Top-down pose detectors.

参数
  • backbone (dict) – Backbone modules to extract feature.

  • keypoint_head (dict) – Keypoint head to process feature.

  • train_cfg (dict) – Config for training. Default: None.

  • test_cfg (dict) – Config for testing. Default: None.

  • pretrained (str) – Path to the pretrained models.

  • loss_pose (None) – Deprecated arguments. Please use loss_keypoint for heads instead.

forward(img, target=None, target_weight=None, img_metas=None, return_loss=True, return_heatmap=False, **kwargs)[源代码]

Calls either forward_train or forward_test depending on whether return_loss=True. Note this setting will change the expected inputs. When return_loss=True, img and img_meta are single-nested (i.e. Tensor and List[dict]), and when return_loss=False, img and img_meta should be double nested (i.e. List[Tensor], List[List[dict]]), with the outer list indicating test time augmentations.

注解

batch_size: N num_keypoints: K num_img_channel: C (Default: 3) img height: imgH img width: imgW heatmaps height: H heatmaps width: W

参数
  • img (torch.Tensor[NxCximgHximgW]) – Input images.

  • target (torch.Tensor[NxKxHxW]) – Target heatmaps.

  • target_weight (torch.Tensor[NxKx1]) – Weights across different joint types.

  • img_metas (list(dict)) – Information about data augmentation By default this includes: - “image_file: path to the image file - “center”: center of the bbox - “scale”: scale of the bbox - “rotation”: rotation of the bbox - “bbox_score”: score of bbox

  • return_loss (bool) – Option to return loss. return loss=True for training, return loss=False for validation & test.

  • return_heatmap (bool) – Option to return heatmap.

返回

if return loss is true, then return losses. Otherwise, return predicted poses, boxes, image paths and heatmaps.

返回类型

dict|tuple

forward_dummy(img)[源代码]

Used for computing network FLOPs.

See tools/get_flops.py.

参数

img (torch.Tensor) – Input image.

返回

Output heatmaps.

返回类型

Tensor

forward_test(img, img_metas, return_heatmap=False, **kwargs)[源代码]

Defines the computation performed at every call when testing.

forward_train(img, target, target_weight, img_metas, **kwargs)[源代码]

Defines the computation performed at every call when training.

init_weights(pretrained=None)[源代码]

Weight initialization for model.

show_result(img, result, skeleton=None, kpt_score_thr=0.3, bbox_color='green', pose_kpt_color=None, pose_limb_color=None, text_color='white', radius=4, thickness=1, font_scale=0.5, bbox_thickness=1, win_name='', show=False, show_keypoint_weight=False, wait_time=0, out_file=None)[源代码]

Draw result over img.

参数
  • img (str or Tensor) – The image to be displayed.

  • result (list[dict]) – The results to draw over img (bbox_result, pose_result).

  • skeleton (list[list]) – The connection of keypoints.

  • kpt_score_thr (float, optional) – Minimum score of keypoints to be shown. Default: 0.3.

  • bbox_color (str or tuple or Color) – Color of bbox lines.

  • pose_kpt_color (np.array[Nx3]) – Color of N keypoints. If None, do not draw keypoints.

  • pose_limb_color (np.array[Mx3]) – Color of M limbs. If None, do not draw limbs.

  • text_color (str or tuple or Color) – Color of texts.

  • radius (int) – Radius of circles.

  • thickness (int) – Thickness of lines.

  • font_scale (float) – Font scales of texts.

  • win_name (str) – The window name.

  • show (bool) – Whether to show the image. Default: False.

  • show_keypoint_weight (bool) – Whether to change the transparency using the predicted confidence scores of keypoints.

  • wait_time (int) – Value of waitKey param. Default: 0.

  • out_file (str or None) – The filename to write the image. Default: None.

返回

Visualized img, only if not show or out_file.

返回类型

Tensor

property with_keypoint

Check if has keypoint_head.

property with_neck

Check if has neck.

heads

class mmpose.models.heads.AEHigherResolutionHead(in_channels, num_joints, tag_per_joint=True, extra=None, num_deconv_layers=1, num_deconv_filters=(32,), num_deconv_kernels=(4,), num_basic_blocks=4, cat_output=None, with_ae_loss=None, loss_keypoint=None)[源代码]

Associative embedding with higher resolution head. paper ref: Bowen Cheng et al. “HigherHRNet: Scale-Aware Representation Learning for Bottom- Up Human Pose Estimation”.

参数
  • in_channels (int) – Number of input channels.

  • num_joints (int) – Number of joints

  • tag_per_joint (bool) – If tag_per_joint is True, the dimension of tags equals to num_joints, else the dimension of tags is 1. Default: True

  • extra

  • num_deconv_layers (int) – Number of deconv layers. num_deconv_layers should >= 0. Note that 0 means no deconv layers.

  • num_deconv_filters (list|tuple) – Number of filters. If num_deconv_layers > 0, the length of num_deconv_filters should equal num_deconv_layers.

  • num_deconv_kernels (list|tuple) – Kernel sizes.

  • cat_output (list[bool]) – Option to concat outputs.

  • with_ae_loss (list[bool]) – Option to use ae loss.

  • loss_keypoint (dict) – Config for loss. Default: None.

forward(x)[源代码]

Forward function.

get_loss(output, targets, masks, joints)[源代码]

Calculate bottom-up keypoint loss.

注解

batch_size: N num_keypoints: K num_outputs: O heatmaps height: H heatmaps width: W

参数
  • output (torch.Tensor[NxKxHxW]) – Output heatmaps.

  • targets (List(torch.Tensor[NxKxHxW])) – Multi-scale target heatmaps.

  • masks (List(torch.Tensor[NxHxW])) – Masks of multi-scale target heatmaps

  • joints (List(torch.Tensor[NxMxKx2])) – Joints of multi-scale target heatmaps for ae loss

init_weights()[源代码]

Initialize model weights.

class mmpose.models.heads.AESimpleHead(in_channels, num_joints, num_deconv_layers=3, num_deconv_filters=(256, 256, 256), num_deconv_kernels=(4, 4, 4), tag_per_joint=True, with_ae_loss=None, extra=None, loss_keypoint=None)[源代码]

Associative embedding simple head. paper ref: Alejandro Newell et al. “Associative Embedding: End-to-end Learning for Joint Detection and Grouping”

参数
  • in_channels (int) – Number of input channels.

  • num_joints (int) – Number of joints.

  • num_deconv_layers (int) – Number of deconv layers. num_deconv_layers should >= 0. Note that 0 means no deconv layers.

  • num_deconv_filters (list|tuple) – Number of filters. If num_deconv_layers > 0, the length of num_deconv_filters should equal num_deconv_layers.

  • num_deconv_kernels (list|tuple) – Kernel sizes.

  • tag_per_joint (bool) – If tag_per_joint is True, the dimension of tags equals to num_joints, else the dimension of tags is 1. Default: True

  • with_ae_loss (list[bool]) – Option to use ae loss or not.

  • loss_keypoint (dict) – Config for loss. Default: None.

forward(x)[源代码]

Forward function.

get_loss(output, targets, masks, joints)[源代码]

Calculate bottom-up keypoint loss.

注解

batch_size: N num_keypoints: K num_outputs: O heatmaps height: H heatmaps width: W

参数
  • output (torch.Tensor[NxKxHxW]) – Output heatmaps.

  • targets (List(torch.Tensor[NxKxHxW])) – Multi-scale target heatmaps.

  • masks (List(torch.Tensor[NxHxW])) – Masks of multi-scale target heatmaps

  • joints (List(torch.Tensor[NxMxKx2])) – Joints of multi-scale target heatmaps for ae loss

init_weights()[源代码]

Initialize model weights.

class mmpose.models.heads.DeepposeRegressionHead(in_channels, num_joints, loss_keypoint=None, train_cfg=None, test_cfg=None)[源代码]

Deeppose regression head with fully connected layers.

paper ref: Alexander Toshev and Christian Szegedy, “DeepPose: Human Pose Estimation via Deep Neural Networks”.

参数
  • in_channels (int) – Number of input channels

  • num_joints (int) – Number of joints

  • loss_keypoint (dict) – Config for keypoint loss. Default: None.

decode(img_metas, output, **kwargs)[源代码]

Decode the keypoints from output regression.

参数
  • img_metas (list(dict)) – Information about data augmentation By default this includes: - “image_file: path to the image file - “center”: center of the bbox - “scale”: scale of the bbox - “rotation”: rotation of the bbox - “bbox_score”: score of bbox

  • output (np.ndarray[N, K, 2]) – predicted regression vector.

  • kwargs – dict contains ‘img_size’. img_size (tuple(img_width, img_height)): input image size.

forward(x)[源代码]

Forward function.

get_accuracy(output, target, target_weight)[源代码]

Calculate accuracy for top-down keypoint loss.

注解

batch_size: N num_keypoints: K

参数
  • output (torch.Tensor[N, K, 2]) – Output keypoints.

  • target (torch.Tensor[N, K, 2]) – Target keypoints.

  • target_weight (torch.Tensor[N, K, 2]) – Weights across different joint types.

get_loss(output, target, target_weight)[源代码]

Calculate top-down keypoint loss.

注解

batch_size: N num_keypoints: K

参数
  • output (torch.Tensor[N, K, 2]) – Output keypoints.

  • target (torch.Tensor[N, K, 2]) – Target keypoints.

  • target_weight (torch.Tensor[N, K, 2]) – Weights across different joint types.

inference_model(x, flip_pairs=None)[源代码]

Inference function.

返回

Output regression.

返回类型

output_regression (np.ndarray)

参数
  • x (torch.Tensor[N, K, 2]) – Input features.

  • flip_pairs (None | list[tuple]) – Pairs of keypoints which are mirrored.

class mmpose.models.heads.HMRMeshHead(in_channels, smpl_mean_params=None, n_iter=3)[源代码]

SMPL parameters regressor head of simple baseline. paper ref: Angjoo Kanazawa. “End-to-end Recovery of Human Shape and Pose”.

参数
  • in_channels (int) – Number of input channels

  • in_res (int) – The resolution of input feature map.

  • smpl_mean_params (str) – The file name of the mean SMPL parameters.

  • n_iter (int) – The iterations of estimating delta parameters

forward(x)[源代码]

Forward function.

x is the image feature map and is expected to be in shape (batch size x channel number x height x width)

init_weights()[源代码]

Initialize model weights.

class mmpose.models.heads.Interhand3DHead(keypoint_head_cfg, root_head_cfg, hand_type_head_cfg, loss_keypoint=None, loss_root_depth=None, loss_hand_type=None, train_cfg=None, test_cfg=None)[源代码]

Interhand 3D head of paper ref: Gyeongsik Moon. “InterHand2.6M: A Dataset and Baseline for 3D Interacting Hand Pose Estimation from a Single RGB Image”.

参数
  • keypoint_head_cfg (dict) – Configs of Heatmap3DHead for hand keypoint estimation.

  • root_head_cfg (dict) – Configs of Heatmap1DHead for relative hand root depth estimation.

  • hand_type_head_cfg (dict) – Configs of MultilabelClassificationHead for hand type classification.

  • loss_keypoint (dict) – Config for keypoint loss. Default: None.

  • loss_root_depth (dict) – Config for relative root depth loss. Default: None.

  • loss_hand_type (dict) – Config for hand type classification loss. Default: None.
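A hypothetical config sketch for these arguments (the field names inside the sub-head configs follow typical InterNet-style configs and should be treated as assumptions, not the verified signatures of Heatmap3DHead, Heatmap1DHead or MultilabelClassificationHead):

>>> head_cfg = dict(
>>>     keypoint_head_cfg=dict(            # assumed fields
>>>         in_channels=2048, out_channels=21 * 64, depth_size=64,
>>>         num_deconv_layers=3, num_deconv_filters=(256, 256, 256),
>>>         num_deconv_kernels=(4, 4, 4)),
>>>     root_head_cfg=dict(                # assumed fields
>>>         in_channels=2048, heatmap_size=64, hidden_dims=(512,)),
>>>     hand_type_head_cfg=dict(           # assumed fields
>>>         in_channels=2048, num_labels=2, hidden_dims=(512,)),
>>>     loss_keypoint=dict(type='JointsMSELoss', use_target_weight=True),
>>>     loss_root_depth=dict(type='L1Loss'),
>>>     loss_hand_type=dict(type='BCELoss', use_target_weight=True))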

decode(img_metas, output, **kwargs)[源代码]

Decode hand keypoint, relative root depth and hand type.

参数
  • img_metas (list(dict)) – Information about data augmentation. By default this includes: "image_file" (path to the image file), "center" (center of the bbox), "scale" (scale of the bbox), "rotation" (rotation of the bbox), "bbox_score" (score of the bbox), "heatmap3d_depth_bound" (depth bound of the hand keypoint 3D heatmap) and "root_depth_bound" (depth bound of the relative root depth 1D heatmap).

  • output (list[np.ndarray]) – Model-predicted 3D heatmaps, relative root depth and hand type.

forward(x)[源代码]

Forward function.

get_accuracy(output, target, target_weight)[源代码]

Calculate accuracy for hand type.

参数
  • output (list[Tensor]) – A list of outputs from multiple heads.

  • target (list[Tensor]) – A list of targets for multiple heads.

  • target_weight (list[Tensor]) – A list of target weights for multiple heads.

get_loss(output, target, target_weight)[源代码]

Calculate loss for hand keypoint heatmaps, relative root depth and hand type.

参数
  • output (list[Tensor]) – A list of outputs from multiple heads.

  • target (list[Tensor]) – A list of targets for multiple heads.

  • target_weight (list[Tensor]) – A list of target weights for multiple heads.

inference_model(x, flip_pairs=None)[源代码]

Inference function.

返回

list of output hand keypoint heatmaps, relative root depth and hand type.

返回类型

output (list[np.ndarray])

参数
  • x (torch.Tensor[NxKxHxW]) – Input features.

  • flip_pairs (None | list[tuple()) – Pairs of keypoints which are mirrored.

class mmpose.models.heads.TemporalRegressionHead(in_channels, num_joints, max_norm=None, loss_keypoint=None, is_trajectory=False, train_cfg=None, test_cfg=None)[源代码]

Regression head of VideoPose3D.

Paper ref: Dario Pavllo et al., "3D human pose estimation in video with temporal convolutions and semi-supervised training".

参数
  • in_channels (int) – Number of input channels.

  • num_joints (int) – Number of joints.

  • loss_keypoint (dict) – Config for keypoint loss. Default: None.

  • max_norm (float|None) – If not None, the weight of convolution layers will be clipped to have a maximum norm of max_norm.

  • is_trajectory (bool) – If the model only predicts the root joint position, this should be set to True; in that case traj_loss will be calculated. Otherwise set it to False. Default: False.
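A minimal construction sketch (channel and joint counts are illustrative; the temporal backbone is assumed to have reduced the time axis to length 1):

>>> import torch
>>> from mmpose.models.heads import TemporalRegressionHead
>>> head = TemporalRegressionHead(
>>>     in_channels=1024,
>>>     num_joints=17,
>>>     loss_keypoint=dict(type='MPJPELoss'))
>>> feats = torch.rand(2, 1024, 1)  # [N, C, T]; T is 1 after temporal convolutions
>>> out = head(feats)               # 3D coordinates, shape [N, 17, 3]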

decode(metas, output)[源代码]

Decode the keypoints from output regression.

参数
  • metas (list(dict)) – Information about data augmentation, including: target_image_path (str, optional): path to the image file; target_mean (float, optional): normalization parameter of the target pose; target_std (float, optional): normalization parameter of the target pose; root_position (np.ndarray[3,1], optional): global position of the root joint; root_index (np.ndarray[1,], optional): original index of the root joint before root-centering.

  • output (np.ndarray[N, K, 3]) – Predicted regression vector.

forward(x)[源代码]

Forward function.

get_accuracy(output, target, target_weight, metas)[源代码]

Calculate accuracy for keypoint loss.

注解

batch_size: N num_keypoints: K

参数
  • output (torch.Tensor[N, K, 3]) – Output keypoints.

  • target (torch.Tensor[N, K, 3]) – Target keypoints.

  • target_weight (torch.Tensor[N, K, 3]) – Weights across different joint types.

  • metas (list(dict)) – Information about data augmentation, including: target_image_path (str, optional): path to the image file; target_mean (float, optional): normalization parameter of the target pose; target_std (float, optional): normalization parameter of the target pose; root_position (np.ndarray[3,1], optional): global position of the root joint; root_index (np.ndarray[1,], optional): original index of the root joint before root-centering.

get_loss(output, target, target_weight)[源代码]

Calculate keypoint loss.

注解

batch_size: N num_keypoints: K

参数
  • output (torch.Tensor[N, K, 3]) – Output keypoints.

  • target (torch.Tensor[N, K, 3]) – Target keypoints.

  • target_weight (torch.Tensor[N, K, 3]) – Weights across different joint types. If self.is_trajectory is True and target_weight is None, target_weight will be set inversely proportional to joint depth.

inference_model(x, flip_pairs=None)[源代码]

Inference function.

返回

Output regression.

返回类型

output_regression (np.ndarray)

参数
  • x (torch.Tensor[N, K, 2]) – Input features.

  • flip_pairs (None | list[tuple]) – Pairs of keypoints which are mirrored.

init_weights()[源代码]

Initialize the weights.

class mmpose.models.heads.TopdownHeatmapBaseHead[源代码]

Base class for top-down heatmap heads.

All top-down heatmap heads should subclass it. All subclasses should overwrite:

  • get_loss, to calculate the loss.

  • get_accuracy, to calculate the accuracy.

  • forward, to run the forward pass.

  • inference_model, to run inference.

decode(img_metas, output, **kwargs)[源代码]

Decode keypoints from heatmaps.

参数
  • img_metas (list(dict)) – Information about data augmentation. By default this includes: "image_file" (path to the image file), "center" (center of the bbox), "scale" (scale of the bbox), "rotation" (rotation of the bbox), "bbox_score" (score of the bbox).

  • output (np.ndarray[N, K, H, W]) – model predicted heatmaps.

abstract forward(**kwargs)[源代码]

Forward function.

abstract get_accuracy(**kwargs)[源代码]

Gets the accuracy.

abstract get_loss(**kwargs)[源代码]

Gets the loss.

abstract inference_model(**kwargs)[源代码]

Inference function.

class mmpose.models.heads.TopdownHeatmapMSMUHead(out_shape, unit_channels=256, out_channels=17, num_stages=4, num_units=4, use_prm=False, norm_cfg={'type': 'BN'}, loss_keypoint=None, train_cfg=None, test_cfg=None)[源代码]

Multi-stage multi-unit heads used in the Multi-Stage Pose estimation Network (MSPN) and Residual Steps Networks (RSN).

参数
  • unit_channels (int) – Number of input channels.

  • out_channels (int) – Number of output channels.

  • out_shape (tuple) – Shape of the output heatmap.

  • num_stages (int) – Number of stages.

  • num_units (int) – Number of units in each stage.

  • use_prm (bool) – Whether to use pose refine machine (PRM). Default: False.

  • norm_cfg (dict) – dictionary to construct and config norm layer. Default: dict(type=’BN’)

  • loss_keypoint (dict) – Config for keypoint loss. Default: None.

forward(x)[源代码]

Forward function.

返回

a list of heatmaps from multiple stages

and units.

返回类型

out (list[Tensor])

get_accuracy(output, target, target_weight)[源代码]

Calculate accuracy for top-down keypoint loss.

注解

batch_size: N num_keypoints: K heatmaps height: H heatmaps width: W

参数
  • output (torch.Tensor[NxKxHxW]) – Output heatmaps.

  • target (torch.Tensor[NxKxHxW]) – Target heatmaps.

  • target_weight (torch.Tensor[NxKx1]) – Weights across different joint types.

get_loss(output, target, target_weight)[源代码]

Calculate top-down keypoint loss.

注解

batch_size: N num_keypoints: K num_outputs: O heatmaps height: H heatmaps width: W

参数
  • output (torch.Tensor[NxOxKxHxW]) – Output heatmaps.

  • target (torch.Tensor[NxOxKxHxW]) – Target heatmaps.

  • target_weight (torch.Tensor[NxOxKx1]) – Weights across different joint types.

inference_model(x, flip_pairs=None)[源代码]

Inference function.

返回

Output heatmaps.

返回类型

output_heatmap (np.ndarray)

参数
  • x (List[torch.Tensor[NxKxHxW]]) – Input features.

  • flip_pairs (None | list[tuple()) – Pairs of keypoints which are mirrored.

init_weights()[源代码]

Initialize model weights.

class mmpose.models.heads.TopdownHeatmapMultiStageHead(in_channels=512, out_channels=17, num_stages=1, num_deconv_layers=3, num_deconv_filters=(256, 256, 256), num_deconv_kernels=(4, 4, 4), extra=None, loss_keypoint=None, train_cfg=None, test_cfg=None)[源代码]

Top-down heatmap multi-stage head.

TopdownHeatmapMultiStageHead consists of multiple branches, each of which has num_deconv_layers (>= 0) deconv layers and a simple conv2d layer.

参数
  • in_channels (int) – Number of input channels.

  • out_channels (int) – Number of output channels.

  • num_stages (int) – Number of stages.

  • num_deconv_layers (int) – Number of deconv layers. num_deconv_layers should >= 0. Note that 0 means no deconv layers.

  • num_deconv_filters (list|tuple) – Number of filters. If num_deconv_layers > 0, the length of num_deconv_filters should equal num_deconv_layers.

  • num_deconv_kernels (list|tuple) – Kernel sizes.

  • loss_keypoint (dict) – Config for keypoint loss. Default: None.

forward(x)[源代码]

Forward function.

返回

a list of heatmaps from multiple stages.

返回类型

out (list[Tensor])

get_accuracy(output, target, target_weight)[源代码]

Calculate accuracy for top-down keypoint loss.

注解

batch_size: N num_keypoints: K heatmaps height: H heatmaps width: W

参数
  • output (torch.Tensor[NxKxHxW]) – Output heatmaps.

  • target (torch.Tensor[NxKxHxW]) – Target heatmaps.

  • target_weight (torch.Tensor[NxKx1]) – Weights across different joint types.

get_loss(output, target, target_weight)[源代码]

Calculate top-down keypoint loss.

注解

batch_size: N num_keypoints: K num_outputs: O heatmaps height: H heatmaps width: W

参数
  • output (torch.Tensor[NxKxHxW]) – Output heatmaps.

  • target (torch.Tensor[NxKxHxW]) – Target heatmaps.

  • target_weight (torch.Tensor[NxKx1]) – Weights across different joint types.

inference_model(x, flip_pairs=None)[源代码]

Inference function.

返回

Output heatmaps.

返回类型

output_heatmap (np.ndarray)

参数
  • x (List[torch.Tensor[NxKxHxW]]) – Input features.

  • flip_pairs (None | list[tuple]) – Pairs of keypoints which are mirrored.

init_weights()[源代码]

Initialize model weights.

class mmpose.models.heads.TopdownHeatmapSimpleHead(in_channels, out_channels, num_deconv_layers=3, num_deconv_filters=(256, 256, 256), num_deconv_kernels=(4, 4, 4), extra=None, in_index=0, input_transform=None, align_corners=False, loss_keypoint=None, train_cfg=None, test_cfg=None)[源代码]

Top-down heatmap simple head. paper ref: Bin Xiao et al. Simple Baselines for Human Pose Estimation and Tracking.

TopdownHeatmapSimpleHead consists of (>= 0) deconv layers and a simple conv2d layer.

参数
  • in_channels (int) – Number of input channels

  • out_channels (int) – Number of output channels

  • num_deconv_layers (int) – Number of deconv layers. num_deconv_layers should >= 0. Note that 0 means no deconv layers.

  • num_deconv_filters (list|tuple) – Number of filters. If num_deconv_layers > 0, the length of num_deconv_filters should equal num_deconv_layers.

  • num_deconv_kernels (list|tuple) – Kernel sizes.

  • in_index (int|Sequence[int]) – Input feature index. Default: 0

  • input_transform (str|None) –

    Transformation type of input features. Options: 'resize_concat', 'multiple_select', None.

    'resize_concat': multiple feature maps will be resized to the same size as the first one and then concatenated together. Usually used in the FCN head of HRNet.

    'multiple_select': multiple feature maps will be bundled into a list and passed into the decode head.

    None: only one selected feature map is allowed. Default: None.

  • align_corners (bool) – align_corners argument of F.interpolate. Default: False.

  • loss_keypoint (dict) – Config for keypoint loss. Default: None.
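A minimal usage sketch (the sizes are illustrative assumptions: ResNet-style 2048-channel features for a 256x256 input, where the default three 4x4 deconv layers upsample by 8):

>>> import torch
>>> from mmpose.models.heads import TopdownHeatmapSimpleHead
>>> head = TopdownHeatmapSimpleHead(
>>>     in_channels=2048,
>>>     out_channels=17,
>>>     loss_keypoint=dict(type='JointsMSELoss', use_target_weight=True))
>>> feats = torch.rand(2, 2048, 8, 8)  # backbone output, [N, C, H, W]
>>> heatmaps = head(feats)             # [2, 17, 64, 64] after 8x deconv upsampling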

forward(x)[源代码]

Forward function.

get_accuracy(output, target, target_weight)[源代码]

Calculate accuracy for top-down keypoint loss.

注解

batch_size: N num_keypoints: K heatmaps height: H heatmaps width: W

参数
  • output (torch.Tensor[NxKxHxW]) – Output heatmaps.

  • target (torch.Tensor[NxKxHxW]) – Target heatmaps.

  • target_weight (torch.Tensor[NxKx1]) – Weights across different joint types.

get_loss(output, target, target_weight)[源代码]

Calculate top-down keypoint loss.

注解

batch_size: N num_keypoints: K heatmaps height: H heatmaps width: W

参数
  • output (torch.Tensor[NxKxHxW]) – Output heatmaps.

  • target (torch.Tensor[NxKxHxW]) – Target heatmaps.

  • target_weight (torch.Tensor[NxKx1]) – Weights across different joint types.

inference_model(x, flip_pairs=None)[源代码]

Inference function.

返回

Output heatmaps.

返回类型

output_heatmap (np.ndarray)

参数
  • x (torch.Tensor[NxKxHxW]) – Input features.

  • flip_pairs (None | list[tuple]) – Pairs of keypoints which are mirrored.

init_weights()[源代码]

Initialize model weights.

class mmpose.models.heads.ViPNASHeatmapSimpleHead(in_channels, out_channels, num_deconv_layers=3, num_deconv_filters=(144, 144, 144), num_deconv_kernels=(4, 4, 4), num_deconv_groups=(16, 16, 16), extra=None, in_index=0, input_transform=None, align_corners=False, loss_keypoint=None, train_cfg=None, test_cfg=None)[源代码]

ViPNAS heatmap simple head.

ViPNAS: Efficient Video Pose Estimation via Neural Architecture Search. More details can be found in the paper.

ViPNASHeatmapSimpleHead consists of (>= 0) deconv layers and a simple conv2d layer.

参数
  • in_channels (int) – Number of input channels

  • out_channels (int) – Number of output channels

  • num_deconv_layers (int) – Number of deconv layers. num_deconv_layers should >= 0. Note that 0 means no deconv layers.

  • num_deconv_filters (list|tuple) – Number of filters. If num_deconv_layers > 0, the length of num_deconv_filters should equal num_deconv_layers.

  • num_deconv_kernels (list|tuple) – Kernel sizes.

  • num_deconv_groups (list|tuple) – Group number.

  • in_index (int|Sequence[int]) – Input feature index. Default: 0

  • input_transform (str|None) –

    Transformation type of input features. Options: 'resize_concat', 'multiple_select', None.

    'resize_concat': multiple feature maps will be resized to the same size as the first one and then concatenated together. Usually used in the FCN head of HRNet.

    'multiple_select': multiple feature maps will be bundled into a list and passed into the decode head.

    None: only one selected feature map is allowed. Default: None.

  • align_corners (bool) – align_corners argument of F.interpolate. Default: False.

  • loss_keypoint (dict) – Config for keypoint loss. Default: None.

forward(x)[源代码]

Forward function.

get_accuracy(output, target, target_weight)[源代码]

Calculate accuracy for top-down keypoint loss.

注解

batch_size: N num_keypoints: K heatmaps height: H heatmaps width: W

参数
  • output (torch.Tensor[NxKxHxW]) – Output heatmaps.

  • target (torch.Tensor[NxKxHxW]) – Target heatmaps.

  • target_weight (torch.Tensor[NxKx1]) – Weights across different joint types.

get_loss(output, target, target_weight)[源代码]

Calculate top-down keypoint loss.

注解

batch_size: N num_keypoints: K heatmaps height: H heatmaps width: W

参数
  • output (torch.Tensor[NxKxHxW]) – Output heatmaps.

  • target (torch.Tensor[NxKxHxW]) – Target heatmaps.

  • target_weight (torch.Tensor[NxKx1]) – Weights across different joint types.

inference_model(x, flip_pairs=None)[源代码]

Inference function.

返回

Output heatmaps.

返回类型

output_heatmap (np.ndarray)

参数
  • x (torch.Tensor[NxKxHxW]) – Input features.

  • flip_pairs (None | list[tuple]) – Pairs of keypoints which are mirrored.

init_weights()[源代码]

Initialize model weights.

losses

class mmpose.models.losses.AELoss(loss_type)[源代码]

Associative Embedding loss.

Paper ref: "Associative Embedding: End-to-End Learning for Joint Detection and Grouping" (https://arxiv.org/abs/1611.05424v2).

forward(tags, joints)[源代码]

Accumulate the tag loss for each image in the batch.

注解

batch_size: N heatmaps width: W heatmaps height: H max_num_people: M num_keypoints: K

参数
  • tags (torch.Tensor[Nx(KxHxW)x1]) – tag channels of output.

  • joints (torch.Tensor[NxMxKx2]) – joints information.

singleTagLoss(pred_tag, joints)[源代码]

Associative embedding loss for one image.

注解

heatmaps width: W heatmaps height: H max_num_people: M num_keypoints: K

参数
  • pred_tag (torch.Tensor[(KxHxW)x1]) – tag of output for one image.

  • joints (torch.Tensor[MxKx2]) – joints information for one image.

class mmpose.models.losses.BCELoss(use_target_weight=False, loss_weight=1.0)[源代码]

Binary Cross Entropy loss.

forward(output, target, target_weight)[源代码]

Forward function.

注解

batch_size: N num_labels: K

参数
  • output (torch.Tensor[N, K]) – Output classification.

  • target (torch.Tensor[N, K]) – Target classification.

  • target_weight (torch.Tensor[N, K] or torch.Tensor[N]) – Weights across different labels.

class mmpose.models.losses.BoneLoss(joint_parents, use_target_weight=False, loss_weight=1.0)[源代码]

Bone length loss.

参数
  • joint_parents (list) – Indices of each joint’s parent joint.

  • use_target_weight (bool) – Option to use weighted bone loss. Different bone types may have different target weights.

  • loss_weight (float) – Weight of the loss. Default: 1.0.

forward(output, target, target_weight)[源代码]

Forward function.

注解

batch_size: N num_keypoints: K dimension of keypoints: D (D=2 or D=3)

参数
  • output (torch.Tensor[N, K, D]) – Output regression.

  • target (torch.Tensor[N, K, D]) – Target regression.

  • target_weight (torch.Tensor[N, K-1]) – Weights across different bone types.
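A usage sketch with a hypothetical 5-joint kinematic chain (joint_parents[i] is the parent index of joint i; all values here are illustrative):

>>> import torch
>>> from mmpose.models.losses import BoneLoss
>>> joint_parents = [0, 0, 1, 2, 3]  # joint 0 is its own parent (root)
>>> loss = BoneLoss(joint_parents)
>>> output = torch.rand(2, 5, 3)   # predicted 3D keypoints, [N, K, D]
>>> target = torch.rand(2, 5, 3)   # ground-truth 3D keypoints
>>> value = loss(output, target, None)  # target_weight is unused when use_target_weight=False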

class mmpose.models.losses.GANLoss(gan_type, real_label_val=1.0, fake_label_val=0.0, loss_weight=1.0)[源代码]

Define GAN loss.

参数
  • gan_type (str) – Support ‘vanilla’, ‘lsgan’, ‘wgan’, ‘hinge’.

  • real_label_val (float) – The value for real label. Default: 1.0.

  • fake_label_val (float) – The value for fake label. Default: 0.0.

  • loss_weight (float) – Loss weight. Default: 1.0. Note that loss_weight is only for generators; and it is always 1.0 for discriminators.
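A usage sketch (the shapes are illustrative discriminator logits):

>>> import torch
>>> from mmpose.models.losses import GANLoss
>>> gan_loss = GANLoss(gan_type='vanilla', loss_weight=1.0)
>>> pred = torch.randn(4, 1)                                     # discriminator output
>>> g_loss = gan_loss(pred, target_is_real=True, is_disc=False)  # generator update
>>> d_loss = gan_loss(pred, target_is_real=False, is_disc=True)  # discriminator update (loss_weight not applied)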

forward(input, target_is_real, is_disc=False)[源代码]
参数
  • input (Tensor) – The input for the loss module, i.e., the network prediction.

  • target_is_real (bool) – Whether the target is real or fake.

  • is_disc (bool) – Whether the loss for discriminators or not. Default: False.

返回

GAN loss value.

返回类型

Tensor

get_target_label(input, target_is_real)[源代码]

Get target label.

参数
  • input (Tensor) – Input tensor.

  • target_is_real (bool) – Whether the target is real or fake.

返回

Target tensor. Returns bool for 'wgan', otherwise returns a Tensor.

返回类型

(bool | Tensor)

class mmpose.models.losses.HeatmapLoss(supervise_empty=True)[源代码]

Accumulate the heatmap loss for each image in the batch.

参数

supervise_empty (bool) – Whether to supervise empty channels.

forward(pred, gt, mask)[源代码]

注解

batch_size: N heatmaps width: W heatmaps height: H max_num_people: M num_keypoints: K

参数
  • pred (torch.Tensor[NxKxHxW]) – heatmap of output.

  • gt (torch.Tensor[NxKxHxW]) – target heatmap.

  • mask (torch.Tensor[NxHxW]) – mask of target.

class mmpose.models.losses.JointsMSELoss(use_target_weight=False, loss_weight=1.0)[源代码]

MSE loss for heatmaps.

参数
  • use_target_weight (bool) – Option to use weighted MSE loss. Different joint types may have different target weights.

  • loss_weight (float) – Weight of the loss. Default: 1.0.
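A usage sketch on random heatmaps (shapes follow the N x K x H x W convention used throughout this page; values are illustrative):

>>> import torch
>>> from mmpose.models.losses import JointsMSELoss
>>> loss = JointsMSELoss(use_target_weight=True)
>>> output = torch.rand(2, 17, 64, 64)    # predicted heatmaps
>>> target = torch.rand(2, 17, 64, 64)    # target heatmaps
>>> target_weight = torch.ones(2, 17, 1)  # per-joint weights (e.g. visibility)
>>> value = loss(output, target, target_weight)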

forward(output, target, target_weight)[源代码]

Forward function.

class mmpose.models.losses.JointsOHKMMSELoss(use_target_weight=False, topk=8, loss_weight=1.0)[源代码]

MSE loss with online hard keypoint mining.

参数
  • use_target_weight (bool) – Option to use weighted MSE loss. Different joint types may have different target weights.

  • topk (int) – Only top k joint losses are kept.

  • loss_weight (float) – Weight of the loss. Default: 1.0.

forward(output, target, target_weight)[源代码]

Forward function.

class mmpose.models.losses.L1Loss(use_target_weight=False, loss_weight=1.0)[源代码]

L1 loss.

forward(output, target, target_weight)[源代码]

Forward function.

注解

batch_size: N num_keypoints: K

参数
  • output (torch.Tensor[N, K, 2]) – Output regression.

  • target (torch.Tensor[N, K, 2]) – Target regression.

  • target_weight (torch.Tensor[N, K, 2]) – Weights across different joint types.

class mmpose.models.losses.MPJPELoss(use_target_weight=False, loss_weight=1.0)[源代码]

MPJPE (Mean Per Joint Position Error) loss.

参数
  • use_target_weight (bool) – Option to use weighted MSE loss. Different joint types may have different target weights.

  • loss_weight (float) – Weight of the loss. Default: 1.0.

forward(output, target, target_weight)[源代码]

Forward function.

注解

batch_size: N num_keypoints: K dimension of keypoints: D (D=2 or D=3)

参数
  • output (torch.Tensor[N, K, D]) – Output regression.

  • target (torch.Tensor[N, K, D]) – Target regression.

  • target_weight (torch.Tensor[N, K, D]) – Weights across different joint types.

class mmpose.models.losses.MSELoss(use_target_weight=False, loss_weight=1.0)[源代码]

MSE loss for coordinate regression.

forward(output, target, target_weight)[源代码]

Forward function.

注解

batch_size: N num_keypoints: K

参数
  • output (torch.Tensor[N, K, 2]) – Output regression.

  • target (torch.Tensor[N, K, 2]) – Target regression.

  • target_weight (torch.Tensor[N, K, 2]) – Weights across different joint types.

class mmpose.models.losses.MeshLoss(joints_2d_loss_weight, joints_3d_loss_weight, vertex_loss_weight, smpl_pose_loss_weight, smpl_beta_loss_weight, img_res, focal_length=5000)[源代码]

Mix loss for 3D human mesh. It is composed of loss on 2D joints, 3D joints, mesh vertices and SMPL parameters (if any).

参数
  • joints_2d_loss_weight (float) – Weight for loss on 2D joints.

  • joints_3d_loss_weight (float) – Weight for loss on 3D joints.

  • vertex_loss_weight (float) – Weight for loss on 3D vertices.

  • smpl_pose_loss_weight (float) – Weight for loss on SMPL pose parameters.

  • smpl_beta_loss_weight (float) – Weight for loss on SMPL shape parameters.

  • img_res (int) – Input image resolution.

  • focal_length (float) – Focal length of camera model. Default=5000.

forward(output, target)[源代码]

Forward function.

参数
  • output (dict) – dict of network predicted results. Keys: ‘vertices’, ‘joints_3d’, ‘camera’, ‘pose’(optional), ‘beta’(optional)

  • target (dict) – dict of ground-truth labels. Keys: ‘vertices’, ‘joints_3d’, ‘joints_3d_visible’, ‘joints_2d’, ‘joints_2d_visible’, ‘pose’, ‘beta’, ‘has_smpl’

返回

dict of losses.

返回类型

losses (dict)

joints_2d_loss(pred_joints_2d, gt_joints_2d, joints_2d_visible)[源代码]

Compute 2D reprojection loss on the joints.

The loss is weighted by joints_2d_visible.

joints_3d_loss(pred_joints_3d, gt_joints_3d, joints_3d_visible)[源代码]

Compute 3D joints loss for the examples that 3D joint annotations are available.

The loss is weighted by joints_3d_visible.

project_points(points_3d, camera)[源代码]

Perform orthographic projection of 3D points using the camera parameters, and return the projected 2D points in the image plane.

提示

batch size: B point number: N

参数
  • points_3d (Tensor([B, N, 3])) – 3D points.

  • camera (Tensor([B, 3])) – camera parameters with 3 channels: (scale, translation_x, translation_y)

返回

Projected 2D points in image space.

返回类型

points_2d (Tensor([B, N, 2]))

smpl_losses(pred_rotmat, pred_betas, gt_pose, gt_betas, has_smpl)[源代码]

Compute SMPL parameters loss for the examples that SMPL parameter annotations are available.

The loss is weighted by has_smpl.

vertex_loss(pred_vertices, gt_vertices, has_smpl)[源代码]

Compute 3D vertex loss for the examples that 3D human mesh annotations are available.

The loss is weighted by the has_smpl.

class mmpose.models.losses.MultiLossFactory(num_joints, num_stages, ae_loss_type, with_ae_loss, push_loss_factor, pull_loss_factor, with_heatmaps_loss, heatmaps_loss_factor, supervise_empty=True)[源代码]

Loss for bottom-up models.

参数
  • num_joints (int) – Number of keypoints.

  • num_stages (int) – Number of stages.

  • ae_loss_type (str) – Type of ae loss.

  • with_ae_loss (list[bool]) – Use ae loss or not in multi-heatmap.

  • push_loss_factor (list[float]) – Parameter of push loss in multi-heatmap.

  • pull_loss_factor (list[float]) – Parameter of pull loss in multi-heatmap.

  • with_heatmaps_loss (list[bool]) – Use heatmap loss or not in multi-heatmap.

  • heatmaps_loss_factor (list[float]) – Parameter of heatmap loss in multi-heatmap.

  • supervise_empty (bool) – Whether to supervise empty channels.
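A hedged config sketch (the values mirror common bottom-up, HigherHRNet-style settings and are illustrative, not the only valid choice):

>>> loss_cfg = dict(
>>>     num_joints=17,
>>>     num_stages=2,
>>>     ae_loss_type='exp',
>>>     with_ae_loss=[True, False],       # tag loss only on the low-resolution stage
>>>     push_loss_factor=[0.001, 0.001],
>>>     pull_loss_factor=[0.001, 0.001],
>>>     with_heatmaps_loss=[True, True],
>>>     heatmaps_loss_factor=[1.0, 1.0])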

forward(outputs, heatmaps, masks, joints)[源代码]

Forward function to calculate losses.

注解

batch_size: N heatmaps width: W heatmaps height: H max_num_people: M num_keypoints: K output_channel: C (C=2K if the AE loss is used, otherwise C=K)

参数
  • outputs (List(torch.Tensor[NxCxHxW])) – outputs of stages.

  • heatmaps (List(torch.Tensor[NxKxHxW])) – target of heatmaps.

  • masks (List(torch.Tensor[NxHxW])) – masks of heatmaps.

  • joints (List(torch.Tensor[NxMxKx2])) – joints of ae loss.

class mmpose.models.losses.SemiSupervisionLoss(joint_parents, projection_loss_weight=1.0, bone_loss_weight=1.0, warmup_iterations=0)[源代码]

Semi-supervision loss for unlabeled data. It is composed of projection loss and bone loss.

Paper ref: Dario Pavllo et al., "3D human pose estimation in video with temporal convolutions and semi-supervised training", CVPR'2019.

参数
  • joint_parents (list) – Indices of each joint’s parent joint.

  • projection_loss_weight (float) – Weight for projection loss.

  • bone_loss_weight (float) – Weight for bone loss.

  • warmup_iterations (int) – Number of warmup iterations. In the first warmup_iterations iterations, the model is trained only on labeled data, and the semi-supervision loss will be 0. This is a workaround since the epoch number is currently not accessible inside loss functions. Note that the number of iterations in an epoch can change with the number of GPUs in multi-GPU settings, so please set this parameter carefully: warmup_iterations = dataset_size // samples_per_gpu // gpu_num * warmup_epochs (see the worked example below).
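A worked example of the suggested formula (all numbers are hypothetical):

>>> dataset_size, samples_per_gpu, gpu_num, warmup_epochs = 60000, 64, 8, 5
>>> warmup_iterations = dataset_size // samples_per_gpu // gpu_num * warmup_epochs
>>> warmup_iterations
585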

forward(output, target)[源代码]

Defines the computation performed at every call.

Should be overridden by all subclasses.

注解

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

static project_joints(x, intrinsics)[源代码]

Project 3D joint coordinates to 2D image plane using camera intrinsic parameters.

参数
  • x (torch.Tensor[N, K, 3]) – 3D joint coordinates.

  • intrinsics (torch.Tensor[N, 4] | torch.Tensor[N, 9]) – Camera intrinsics: f (2), c (2), k (3), p (2).
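A usage sketch (project_joints is a static method, so it can be called on the class; the tensors here are illustrative):

>>> import torch
>>> from mmpose.models.losses import SemiSupervisionLoss
>>> x = torch.rand(2, 17, 3)       # 3D joint coordinates, [N, K, 3]
>>> intrinsics = torch.rand(2, 4)  # f (2) and c (2); the 9-channel form adds k (3) and p (2)
>>> joints_2d = SemiSupervisionLoss.project_joints(x, intrinsics)  # [2, 17, 2]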

class mmpose.models.losses.SmoothL1Loss(use_target_weight=False, loss_weight=1.0)[源代码]

Smooth L1 loss.

参数
  • use_target_weight (bool) – Option to use weighted MSE loss. Different joint types may have different target weights.

  • loss_weight (float) – Weight of the loss. Default: 1.0.

forward(output, target, target_weight)[源代码]

Forward function.

注解

batch_size: N num_keypoints: K dimension of keypoints: D (D=2 or D=3)

参数
  • output (torch.Tensor[N, K, D]) – Output regression.

  • target (torch.Tensor[N, K, D]) – Target regression.

  • target_weight (torch.Tensor[N, K, D]) – Weights across different joint types.

class mmpose.models.losses.WingLoss(omega=10.0, epsilon=2.0, use_target_weight=False, loss_weight=1.0)[源代码]

Wing loss. Paper ref: Feng et al., "Wing Loss for Robust Facial Landmark Localisation with Convolutional Neural Networks", CVPR'2018.

参数
  • omega (float), epsilon (float) – Hyper-parameters of the wing loss: omega sets the range of its nonlinear part and epsilon limits the curvature.

  • use_target_weight (bool) – Option to use weighted MSE loss. Different joint types may have different target weights.

  • loss_weight (float) – Weight of the loss. Default: 1.0.

criterion(pred, target)[源代码]

Criterion of the wing loss.

注解

batch_size: N num_keypoints: K dimension of keypoints: D (D=2 or D=3)

参数
  • pred (torch.Tensor[N, K, D]) – Output regression.

  • target (torch.Tensor[N, K, D]) – Target regression.
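For reference, the piecewise criterion from the paper can be sketched as follows (a standalone re-implementation under the stated defaults, not the library's exact code):

>>> import math
>>> import torch
>>> def wing_criterion(pred, target, omega=10.0, epsilon=2.0):
>>>     delta = (pred - target).abs()
>>>     # C makes the two pieces meet continuously at |x| = omega
>>>     C = omega - omega * math.log(1 + omega / epsilon)
>>>     losses = torch.where(delta < omega,
>>>                          omega * torch.log(1 + delta / epsilon),
>>>                          delta - C)
>>>     return losses.mean()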

forward(output, target, target_weight)[源代码]

Forward function.

注解

batch_size: N num_keypoints: K dimension of keypoints: D (D=2 or D=3)

参数
  • output (torch.Tensor[N, K, D]) – Output regression.

  • target (torch.Tensor[N, K, D]) – Target regression.

  • target_weight (torch.Tensor[N, K, D]) – Weights across different joint types.

misc

mmpose.datasets

class mmpose.datasets.AnimalATRWDataset(ann_file, img_prefix, data_cfg, pipeline, test_mode=False)[源代码]

ATRW dataset for animal pose estimation.

"ATRW: A Benchmark for Amur Tiger Re-identification in the Wild", ACM MM'2020. More details can be found in the paper.

The dataset loads raw features and applies specified transforms to return a dict containing the image tensors and other information.

ATRW keypoint indexes:

0: "left_ear",
1: "right_ear",
2: "nose",
3: "right_shoulder",
4: "right_front_paw",
5: "left_shoulder",
6: "left_front_paw",
7: "right_hip",
8: "right_knee",
9: "right_back_paw",
10: "left_hip",
11: "left_knee",
12: "left_back_paw",
13: "tail",
14: "center"
参数
  • ann_file (str) – Path to the annotation file.

  • img_prefix (str) – Path to a directory where images are held. Default: None.

  • data_cfg (dict) – config

  • pipeline (list[dict | callable]) – A sequence of data transforms.

  • test_mode (bool) – Store True when building test or validation dataset. Default: False.

evaluate(outputs, res_folder, metric='mAP', **kwargs)[源代码]

Evaluate coco keypoint results. The pose prediction results will be saved in ${res_folder}/result_keypoints.json.

注解

batch_size: N num_keypoints: K heatmap height: H heatmap width: W

参数
  • outputs (list(dict)) – Outputs containing the following items:

    preds (np.ndarray[N,K,3]): the first two dimensions are coordinates; the score is the third dimension of the array.

    boxes (np.ndarray[N,6]): [center[0], center[1], scale[0], scale[1], area, score].

    image_paths (list[str]): for example, ['data/coco/val2017/000000393226.jpg'].

    heatmap (np.ndarray[N, K, H, W]): model output heatmaps.

    bbox_id (list(int)): bounding box ids.

  • res_folder (str) – Path of directory to save the results.

  • metric (str | list[str]) – Metric to be performed. Defaults: ‘mAP’.

返回

Evaluation results for evaluation metric.

返回类型

dict

class mmpose.datasets.AnimalFlyDataset(ann_file, img_prefix, data_cfg, pipeline, test_mode=False)[源代码]

AnimalFlyDataset for animal pose estimation.

"Fast animal pose estimation using deep neural networks", Nature Methods'2019. More details can be found in the paper: https://www.biorxiv.org/content/biorxiv/early/2018/05/25/331181.full.pdf

The dataset loads raw features and applies specified transforms to return a dict containing the image tensors and other information.

Vinegar Fly keypoint indexes:

0: "head",
1: "eyeL",
2: "eyeR",
3: "neck",
4: "thorax",
5: "abdomen",
6: "forelegR1",
7: "forelegR2",
8: "forelegR3",
9: "forelegR4",
10: "midlegR1",
11: "midlegR2",
12: "midlegR3",
13: "midlegR4",
14: "hindlegR1",
15: "hindlegR2",
16: "hindlegR3",
17: "hindlegR4",
18: "forelegL1",
19: "forelegL2",
20: "forelegL3",
21: "forelegL4",
22: "midlegL1",
23: "midlegL2",
24: "midlegL3",
25: "midlegL4",
26: "hindlegL1",
27: "hindlegL2",
28: "hindlegL3",
29: "hindlegL4",
30: "wingL",
31: "wingR"
参数
  • ann_file (str) – Path to the annotation file.

  • img_prefix (str) – Path to a directory where images are held. Default: None.

  • data_cfg (dict) – config

  • pipeline (list[dict | callable]) – A sequence of data transforms.

  • test_mode (bool) – Store True when building test or validation dataset. Default: False.

evaluate(outputs, res_folder, metric='PCK', **kwargs)[源代码]

Evaluate Fly keypoint results. The pose prediction results will be saved in ${res_folder}/result_keypoints.json.

注解

batch_size: N num_keypoints: K heatmap height: H heatmap width: W

参数
  • outputs (list(preds, boxes, image_path, output_heatmap)) –

    preds (np.ndarray[N,K,3])

    The first two dimensions are coordinates, score is the third dimension of the array.

    boxes (np.ndarray[N,6])

    [center[0], center[1], scale[0] , scale[1],area, score]

    image_paths (list[str])

    For example, [‘Test/source/0.jpg’]

    output_heatmap (np.ndarray[N, K, H, W])

model outputs.

  • res_folder (str) – Path of directory to save the results.

  • metric (str | list[str]) – Metric to be performed. Options: ‘PCK’, ‘AUC’, ‘EPE’.

返回

Evaluation results for evaluation metric.

返回类型

dict

class mmpose.datasets.AnimalHorse10Dataset(ann_file, img_prefix, data_cfg, pipeline, test_mode=False)[源代码]

AnimalHorse10Dataset for animal pose estimation.

"Pretraining boosts out-of-domain robustness for pose estimation", WACV'2021. More details can be found in the paper.

The dataset loads raw features and applies specified transforms to return a dict containing the image tensors and other information.

Horse-10 keypoint indexes:

0: 'Nose',
1: 'Eye',
2: 'Nearknee',
3: 'Nearfrontfetlock',
4: 'Nearfrontfoot',
5: 'Offknee',
6: 'Offfrontfetlock',
7: 'Offfrontfoot',
8: 'Shoulder',
9: 'Midshoulder',
10: 'Elbow',
11: 'Girth',
12: 'Wither',
13: 'Nearhindhock',
14: 'Nearhindfetlock',
15: 'Nearhindfoot',
16: 'Hip',
17: 'Stifle',
18: 'Offhindhock',
19: 'Offhindfetlock',
20: 'Offhindfoot',
21: 'Ischium'
参数
  • ann_file (str) – Path to the annotation file.

  • img_prefix (str) – Path to a directory where images are held. Default: None.

  • data_cfg (dict) – config

  • pipeline (list[dict | callable]) – A sequence of data transforms.

  • test_mode (bool) – Store True when building test or validation dataset. Default: False.

evaluate(outputs, res_folder, metric='PCK', **kwargs)[源代码]

Evaluate horse-10 keypoint results. The pose prediction results will be saved in ${res_folder}/result_keypoints.json.

注解

batch_size: N num_keypoints: K heatmap height: H heatmap width: W

参数
  • outputs (list(preds, boxes, image_path, output_heatmap)) –

    preds (np.ndarray[N,K,3])

    The first two dimensions are coordinates, score is the third dimension of the array.

    boxes (np.ndarray[N,6])

    [center[0], center[1], scale[0] , scale[1],area, score]

    image_paths (list[str])

    For example, [‘Test/source/0.jpg’]

    output_heatmap (np.ndarray[N, K, H, W])

model outputs.

  • res_folder (str) – Path of directory to save the results.

  • metric (str | list[str]) – Metric to be performed. Options: ‘PCK’, ‘NME’.

返回

Evaluation results for evaluation metric.

返回类型

dict

class mmpose.datasets.AnimalLocustDataset(ann_file, img_prefix, data_cfg, pipeline, test_mode=False)[源代码]

AnimalLocustDataset for animal pose estimation.

"DeepPoseKit, a software toolkit for fast and robust animal pose estimation using deep learning", eLife'2019. More details can be found in the paper.

The dataset loads raw features and applies specified transforms to return a dict containing the image tensors and other information.

Desert Locust keypoint indexes:

0: "head",
1: "neck",
2: "thorax",
3: "abdomen1",
4: "abdomen2",
5: "anttipL",
6: "antbaseL",
7: "eyeL",
8: "forelegL1",
9: "forelegL2",
10: "forelegL3",
11: "forelegL4",
12: "midlegL1",
13: "midlegL2",
14: "midlegL3",
15: "midlegL4",
16: "hindlegL1",
17: "hindlegL2",
18: "hindlegL3",
19: "hindlegL4",
20: "anttipR",
21: "antbaseR",
22: "eyeR",
23: "forelegR1",
24: "forelegR2",
25: "forelegR3",
26: "forelegR4",
27: "midlegR1",
28: "midlegR2",
29: "midlegR3",
30: "midlegR4",
31: "hindlegR1",
32: "hindlegR2",
33: "hindlegR3",
34: "hindlegR4"
参数
  • ann_file (str) – Path to the annotation file.

  • img_prefix (str) – Path to a directory where images are held. Default: None.

  • data_cfg (dict) – config

  • pipeline (list[dict | callable]) – A sequence of data transforms.

  • test_mode (bool) – Store True when building test or validation dataset. Default: False.

evaluate(outputs, res_folder, metric='PCK', **kwargs)[源代码]

Evaluate Desert Locust keypoint results. The pose prediction results will be saved in ${res_folder}/result_keypoints.json.

注解

batch_size: N num_keypoints: K heatmap height: H heatmap width: W

参数
  • outputs (list(preds, boxes, image_path, output_heatmap)) –

    preds (np.ndarray[N,K,3])

    The first two dimensions are coordinates, score is the third dimension of the array.

    boxes (np.ndarray[N,6])

    [center[0], center[1], scale[0] , scale[1],area, score]

    image_paths (list[str])

    For example, [‘Test/source/0.jpg’]

    output_heatmap (np.ndarray[N, K, H, W])

model outputs.

  • res_folder (str) – Path of directory to save the results.

  • metric (str | list[str]) – Metric to be performed. Options: ‘PCK’, ‘AUC’, ‘EPE’.

返回

Evaluation results for evaluation metric.

返回类型

dict

class mmpose.datasets.AnimalMacaqueDataset(ann_file, img_prefix, data_cfg, pipeline, test_mode=False)[源代码]

MacaquePose dataset for animal pose estimation.

"MacaquePose: A novel 'in the wild' macaque monkey pose dataset for markerless motion capture", bioRxiv'2020. More details can be found in the paper.

The dataset loads raw features and applies specified transforms to return a dict containing the image tensors and other information.

Macaque keypoint indexes:

0: 'nose',
1: 'left_eye',
2: 'right_eye',
3: 'left_ear',
4: 'right_ear',
5: 'left_shoulder',
6: 'right_shoulder',
7: 'left_elbow',
8: 'right_elbow',
9: 'left_wrist',
10: 'right_wrist',
11: 'left_hip',
12: 'right_hip',
13: 'left_knee',
14: 'right_knee',
15: 'left_ankle',
16: 'right_ankle'
参数
  • ann_file (str) – Path to the annotation file.

  • img_prefix (str) – Path to a directory where images are held. Default: None.

  • data_cfg (dict) – config

  • pipeline (list[dict | callable]) – A sequence of data transforms.

  • test_mode (bool) – Store True when building test or validation dataset. Default: False.

evaluate(outputs, res_folder, metric='mAP', **kwargs)[源代码]

Evaluate coco keypoint results. The pose prediction results will be saved in ${res_folder}/result_keypoints.json.

注解

batch_size: N num_keypoints: K heatmap height: H heatmap width: W

参数
  • outputs (list(dict)) – Outputs containing the following items:

    preds (np.ndarray[N,K,3]): the first two dimensions are coordinates; the score is the third dimension of the array.

    boxes (np.ndarray[N,6]): [center[0], center[1], scale[0], scale[1], area, score].

    image_paths (list[str]): for example, ['data/coco/val2017/000000393226.jpg'].

    heatmap (np.ndarray[N, K, H, W]): model output heatmaps.

    bbox_id (list(int)): bounding box ids.

  • res_folder (str) – Path of directory to save the results.

  • metric (str | list[str]) – Metric to be performed. Defaults: ‘mAP’.

返回

Evaluation results for evaluation metric.

返回类型

dict

class mmpose.datasets.AnimalPoseDataset(ann_file, img_prefix, data_cfg, pipeline, test_mode=False)[源代码]

Animal-Pose dataset for animal pose estimation.

"Cross-domain Adaptation For Animal Pose Estimation", ICCV'2019. More details can be found in the paper.

The dataset loads raw features and applies specified transforms to return a dict containing the image tensors and other information.

Animal-Pose keypoint indexes:

0: 'L_Eye',
1: 'R_Eye',
2: 'L_EarBase',
3: 'R_EarBase',
4: 'Nose',
5: 'Throat',
6: 'TailBase',
7: 'Withers',
8: 'L_F_Elbow',
9: 'R_F_Elbow',
10: 'L_B_Elbow',
11: 'R_B_Elbow',
12: 'L_F_Knee',
13: 'R_F_Knee',
14: 'L_B_Knee',
15: 'R_B_Knee',
16: 'L_F_Paw',
17: 'R_F_Paw',
18: 'L_B_Paw',
19: 'R_B_Paw'
参数
  • ann_file (str) – Path to the annotation file.

  • img_prefix (str) – Path to a directory where images are held. Default: None.

  • data_cfg (dict) – config

  • pipeline (list[dict | callable]) – A sequence of data transforms.

  • test_mode (bool) – Store True when building test or validation dataset. Default: False.

evaluate(outputs, res_folder, metric='mAP', **kwargs)[源代码]

Evaluate coco keypoint results. The pose prediction results will be saved in ${res_folder}/result_keypoints.json.

注解

batch_size: N num_keypoints: K heatmap height: H heatmap width: W

参数
  • outputs (list(dict)) – Outputs containing the following items:

    preds (np.ndarray[N,K,3]): the first two dimensions are coordinates; the score is the third dimension of the array.

    boxes (np.ndarray[N,6]): [center[0], center[1], scale[0], scale[1], area, score].

    image_paths (list[str]): for example, ['data/coco/val2017/000000393226.jpg'].

    heatmap (np.ndarray[N, K, H, W]): model output heatmaps.

    bbox_id (list(int)): bounding box ids.

  • res_folder (str) – Path of directory to save the results.

  • metric (str | list[str]) – Metric to be performed. Defaults: ‘mAP’.

返回

Evaluation results for evaluation metric.

返回类型

dict

class mmpose.datasets.AnimalZebraDataset(ann_file, img_prefix, data_cfg, pipeline, test_mode=False)[源代码]

AnimalZebraDataset for animal pose estimation.

"DeepPoseKit, a software toolkit for fast and robust animal pose estimation using deep learning", eLife'2019. More details can be found in the paper.

The dataset loads raw features and applies specified transforms to return a dict containing the image tensors and other information.

Zebra keypoint indexes:

0: "snout",
1: "head",
2: "neck",
3: "forelegL1",
4: "forelegR1",
5: "hindlegL1",
6: "hindlegR1",
7: "tailbase",
8: "tailtip"
参数
  • ann_file (str) – Path to the annotation file.

  • img_prefix (str) – Path to a directory where images are held. Default: None.

  • data_cfg (dict) – config

  • pipeline (list[dict | callable]) – A sequence of data transforms.

  • test_mode (bool) – Store True when building test or validation dataset. Default: False.

evaluate(outputs, res_folder, metric='PCK', **kwargs)[源代码]

Evaluate zebra keypoint results. The pose prediction results will be saved in ${res_folder}/result_keypoints.json.

注解

batch_size: N num_keypoints: K heatmap height: H heatmap width: W

参数
  • outputs (list(preds, boxes, image_path, output_heatmap)) –

    preds (np.ndarray[N,K,3])

    The first two dimensions are coordinates, score is the third dimension of the array.

    boxes (np.ndarray[N,6])

    [center[0], center[1], scale[0] , scale[1],area, score]

    image_paths (list[str])

    For example, [‘Test/source/0.jpg’]

    output_heatmap (np.ndarray[N, K, H, W])

model outputs.

  • res_folder (str) – Path of directory to save the results.

  • metric (str | list[str]) – Metric to be performed. Options: ‘PCK’, ‘AUC’, ‘EPE’.

返回

Evaluation results for evaluation metric.

返回类型

dict

class mmpose.datasets.BottomUpCocoDataset(ann_file, img_prefix, data_cfg, pipeline, test_mode=False)[源代码]

COCO dataset for bottom-up pose estimation.

The dataset loads raw features and applies specified transforms to return a dict containing the image tensors and other information.

COCO keypoint indexes:

0: 'nose',
1: 'left_eye',
2: 'right_eye',
3: 'left_ear',
4: 'right_ear',
5: 'left_shoulder',
6: 'right_shoulder',
7: 'left_elbow',
8: 'right_elbow',
9: 'left_wrist',
10: 'right_wrist',
11: 'left_hip',
12: 'right_hip',
13: 'left_knee',
14: 'right_knee',
15: 'left_ankle',
16: 'right_ankle'
参数
  • ann_file (str) – Path to the annotation file.

  • img_prefix (str) – Path to a directory where images are held. Default: None.

  • data_cfg (dict) – config

  • pipeline (list[dict | callable]) – A sequence of data transforms.

  • test_mode (bool) – Store True when building test or validation dataset. Default: False.

evaluate(outputs, res_folder, metric='mAP', **kwargs)[源代码]

Evaluate coco keypoint results. The pose prediction results will be saved in ${res_folder}/result_keypoints.json.

注解

num_people: P num_keypoints: K

参数
  • outputs (list(preds, scores, image_path, heatmap)) –

    • preds (list[np.ndarray(P, K, 3+tag_num)]): Pose predictions for all people in images.

    • scores (list[P]): Scores of all people.

    • image_path (list[str]): For example, ['coco/images/val2017/000000397133.jpg'].

    • heatmap (np.ndarray[N, K, H, W]): model outputs.

  • res_folder (str) – Path of directory to save the results.

  • metric (str | list[str]) – Metric to be performed. Defaults: ‘mAP’.

返回

Evaluation results for evaluation metric.

返回类型

dict

class mmpose.datasets.BottomUpCocoWholeBodyDataset(ann_file, img_prefix, data_cfg, pipeline, test_mode=False)[源代码]

CocoWholeBodyDataset dataset for bottom-up pose estimation.

"Whole-Body Human Pose Estimation in the Wild", ECCV'2020. More details can be found in the paper.

The dataset loads raw features and applies specified transforms to return a dict containing the image tensors and other information.

In total, we have 133 keypoints for wholebody pose estimation.

COCO-WholeBody keypoint indexes:

0-16: 17 body keypoints
17-22: 6 foot keypoints
23-90: 68 face keypoints
91-132: 42 hand keypoints

参数
  • ann_file (str) – Path to the annotation file.

  • img_prefix (str) – Path to a directory where images are held. Default: None.

  • data_cfg (dict) – config

  • pipeline (list[dict | callable]) – A sequence of data transforms.

  • test_mode (bool) – Store True when building test or validation dataset. Default: False.

class mmpose.datasets.BottomUpCrowdPoseDataset(ann_file, img_prefix, data_cfg, pipeline, test_mode=False)[源代码]

CrowdPose dataset for bottom-up pose estimation.

The dataset loads raw features and applies specified transforms to return a dict containing the image tensors and other information.

CrowdPose keypoint indexes:

0: 'left_shoulder',
1: 'right_shoulder',
2: 'left_elbow',
3: 'right_elbow',
4: 'left_wrist',
5: 'right_wrist',
6: 'left_hip',
7: 'right_hip',
8: 'left_knee',
9: 'right_knee',
10: 'left_ankle',
11: 'right_ankle',
12: 'top_head',
13: 'neck'
参数
  • ann_file (str) – Path to the annotation file.

  • img_prefix (str) – Path to a directory where images are held. Default: None.

  • data_cfg (dict) – config

  • pipeline (list[dict | callable]) – A sequence of data transforms.

  • test_mode (bool) – Store True when building test or validation dataset. Default: False.

class mmpose.datasets.BottomUpMhpDataset(ann_file, img_prefix, data_cfg, pipeline, test_mode=False)[源代码]

MHPv2.0 dataset for bottom-up pose estimation.

The Multi-Human Parsing project of the Learning and Vision (LV) Group, National University of Singapore (NUS), aims to push the frontiers of fine-grained visual understanding of humans in crowd scenes (https://lv-mhp.github.io/).

The dataset loads raw features and applies specified transforms to return a dict containing the image tensors and other information.

MHP keypoint indexes:

0: "right ankle",
1: "right knee",
2: "right hip",
3: "left hip",
4: "left knee",
5: "left ankle",
6: "pelvis",
7: "thorax",
8: "upper neck",
9: "head top",
10: "right wrist",
11: "right elbow",
12: "right shoulder",
13: "left shoulder",
14: "left elbow",
15: "left wrist",
参数
  • ann_file (str) – Path to the annotation file.

  • img_prefix (str) – Path to a directory where images are held. Default: None.

  • data_cfg (dict) – config

  • pipeline (list[dict | callable]) – A sequence of data transforms.

  • test_mode (bool) – Store True when building test or validation dataset. Default: False.

class mmpose.datasets.Compose(transforms)[源代码]

Compose a data pipeline with a sequence of transforms.

参数

transforms (list[dict | callable]) – Either config dicts of transforms or transform objects.
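A minimal usage sketch (the transform names are illustrative of mmpose's registry style; check the registry of your version before relying on them):

>>> from mmpose.datasets import Compose
>>> pipeline = Compose([
>>>     dict(type='LoadImageFromFile'),  # assumed registered transform name
>>>     dict(type='ToTensor'),           # assumed registered transform name
>>> ])
>>> # Calling pipeline(results) applies each transform to the results dict in order.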

class mmpose.datasets.DeepFashionDataset(ann_file, img_prefix, subset, data_cfg, pipeline, test_mode=False)[源代码]

DeepFashion dataset (full-body clothes) for fashion landmark detection.

"DeepFashion: Powering Robust Clothes Recognition and Retrieval with Rich Annotations", CVPR'2016, and "Fashion Landmark Detection in the Wild", ECCV'2016.

The dataset loads raw features and applies specified transforms to return a dict containing the image tensors and other information.

The dataset contains 3 categories of clothes: full-body, upper-body and lower-body.

Fashion landmark indexes for upper-body clothes:

0: 'left collar',
1: 'right collar',
2: 'left sleeve',
3: 'right sleeve',
4: 'left hem',
5: 'right hem'

Fashion landmark indexes for lower-body clothes:

0: 'left waistline',
1: 'right waistline',
2: 'left hem',
3: 'right hem'

Fashion landmark indexes for full-body clothes:

0: 'left collar',
1: 'right collar',
2: 'left sleeve',
3: 'right sleeve',
4: 'left waistline',
5: 'right waistline',
6: 'left hem',
7: 'right hem'
参数
  • ann_file (str) – Path to the annotation file.

  • img_prefix (str) – Path to a directory where images are held. Default: None.

  • subset (str) – The FLD dataset has 3 subsets, ‘upper’, ‘lower’, and ‘full’, denoting different types of clothes.

  • data_cfg (dict) – config

  • pipeline (list[dict | callable]) – A sequence of data transforms.

  • test_mode (bool) – Store True when building test or validation dataset. Default: False.

evaluate(outputs, res_folder, metric='PCK', **kwargs)[源代码]

Evaluate DeepFashion keypoint results. The pose prediction results will be saved in ${res_folder}/result_keypoints.json.

注解

batch_size: N num_keypoints: K heatmap height: H heatmap width: W

参数
  • outputs (list(preds, boxes, image_path, output_heatmap)) –

    preds (np.ndarray[N,K,3])

    The first two dimensions are coordinates, score is the third dimension of the array.

    boxes (np.ndarray[N,6])

    [center[0], center[1], scale[0] , scale[1],area, score]

    image_paths (list[str])

    For example, [ ‘img_00000001.jpg’]

    output_heatmap (np.ndarray[N, K, H, W])

model outputs.

  • res_folder (str) – Path of directory to save the results.

  • metric (str | list[str]) – Metric to be performed. Options: ‘PCK’, ‘AUC’, ‘EPE’.

返回

Evaluation results for evaluation metric.

返回类型

dict

class mmpose.datasets.DistributedSampler(dataset, num_replicas=None, rank=None, shuffle=True, seed=0)[源代码]

DistributedSampler inheriting from torch.utils.data.DistributedSampler.

In older versions of PyTorch there is no shuffle argument; this child class adds one to DistributedSampler.

class mmpose.datasets.Face300WDataset(ann_file, img_prefix, data_cfg, pipeline, test_mode=False)[源代码]

Face300W dataset for top-down face keypoint localization.

300 faces In-the-wild challenge: Database and results. Image and Vision Computing (IMAVIS) 2019.

The dataset loads raw images and applies specified transforms to return a dict containing the image tensors and other information.

The landmark annotations follow the 68 points mark-up. The definition can be found in https://ibug.doc.ic.ac.uk/resources/300-W/.

参数
  • ann_file (str) – Path to the annotation file.

  • img_prefix (str) – Path to a directory where images are held. Default: None.

  • data_cfg (dict) – config

  • pipeline (list[dict | callable]) – A sequence of data transforms.

  • test_mode (bool) – Store True when building test or validation dataset. Default: False.

evaluate(outputs, res_folder, metric='NME', **kwargs)[源代码]

Evaluate 300W keypoint results. The pose prediction results will be saved in ${res_folder}/result_keypoints.json.

注解

batch_size: N num_keypoints: K heatmap height: H heatmap width: W

参数
  • outputs (list(preds, boxes, image_path, output_heatmap)) –

    preds (np.ndarray[1,K,3])

    The first two dimensions are coordinates, score is the third dimension of the array.

    boxes (np.ndarray[1,6])

    [center[0], center[1], scale[0] , scale[1],area, score]

    image_path (list[str])

For example, ['300W/ibug/image_018.jpg']

    output_heatmap (np.ndarray[N, K, H, W])

model outputs.

  • res_folder (str) – Path of directory to save the results.

  • metric (str | list[str]) – Metric to be performed. Options: ‘NME’.

返回

Evaluation results for evaluation metric.

返回类型

dict

class mmpose.datasets.FreiHandDataset(ann_file, img_prefix, data_cfg, pipeline, test_mode=False)[源代码]

FreiHand dataset for top-down hand pose estimation.

"FreiHAND: A Dataset for Markerless Capture of Hand Pose and Shape from Single RGB Images", ICCV'2019. More details can be found in the paper.

The dataset loads raw features and applies specified transforms to return a dict containing the image tensors and other information.

FreiHand keypoint indexes:

0: 'wrist',
1: 'thumb1',
2: 'thumb2',
3: 'thumb3',
4: 'thumb4',
5: 'forefinger1',
6: 'forefinger2',
7: 'forefinger3',
8: 'forefinger4',
9: 'middle_finger1',
10: 'middle_finger2',
11: 'middle_finger3',
12: 'middle_finger4',
13: 'ring_finger1',
14: 'ring_finger2',
15: 'ring_finger3',
16: 'ring_finger4',
17: 'pinky_finger1',
18: 'pinky_finger2',
19: 'pinky_finger3',
20: 'pinky_finger4'
参数
  • ann_file (str) – Path to the annotation file.

  • img_prefix (str) – Path to a directory where images are held. Default: None.

  • data_cfg (dict) – config

  • pipeline (list[dict | callable]) – A sequence of data transforms.

  • test_mode (bool) – Store True when building test or validation dataset. Default: False.

evaluate(outputs, res_folder, metric='PCK', **kwargs)[源代码]

Evaluate freihand keypoint results. The pose prediction results will be saved in ${res_folder}/result_keypoints.json.

注解

batch_size: N num_keypoints: K heatmap height: H heatmap width: W

参数
  • outputs (list(preds, boxes, image_path, output_heatmap)) –

    preds (np.ndarray[N,K,3])

    The first two dimensions are coordinates, score is the third dimension of the array.

    boxes (np.ndarray[N,6])

    [center[0], center[1], scale[0] , scale[1],area, score]

    image_paths (list[str])

    For example, [‘training/rgb/ 00031426.jpg’]

    output_heatmap (np.ndarray[N, K, H, W])

model outputs.

  • res_folder (str) – Path of directory to save the results.

  • metric (str | list[str]) – Metric to be performed. Options: ‘PCK’, ‘AUC’, ‘EPE’.

返回

Evaluation results for evaluation metric.

返回类型

dict

class mmpose.datasets.InterHand2DDataset(ann_file, camera_file, joint_file, img_prefix, data_cfg, pipeline, test_mode=False)[源代码]

InterHand2.6M 2D dataset for top-down hand pose estimation.

"InterHand2.6M: A Dataset and Baseline for 3D Interacting Hand Pose Estimation from a Single RGB Image", Moon, Gyeongsik et al., ECCV'2020. More details can be found in the paper.

The dataset loads raw features and applies specified transforms to return a dict containing the image tensors and other information.

InterHand2.6M keypoint indexes:

0: 'thumb4',
1: 'thumb3',
2: 'thumb2',
3: 'thumb1',
4: 'forefinger4',
5: 'forefinger3',
6: 'forefinger2',
7: 'forefinger1',
8: 'middle_finger4',
9: 'middle_finger3',
10: 'middle_finger2',
11: 'middle_finger1',
12: 'ring_finger4',
13: 'ring_finger3',
14: 'ring_finger2',
15: 'ring_finger1',
16: 'pinky_finger4',
17: 'pinky_finger3',
18: 'pinky_finger2',
19: 'pinky_finger1',
20: 'wrist'
参数
  • ann_file (str) – Path to the annotation file.

  • img_prefix (str) – Path to a directory where images are held. Default: None.

  • data_cfg (dict) – config

  • pipeline (list[dict | callable]) – A sequence of data transforms.

  • test_mode (bool) – Store True when building test or validation dataset. Default: False.

evaluate(outputs, res_folder, metric='PCK', **kwargs)[源代码]

Evaluate interhand2d keypoint results. The pose prediction results will be saved in ${res_folder}/result_keypoints.json.

注解

batch_size: N num_keypoints: K heatmap height: H heatmap width: W

参数
  • outputs (list(preds, boxes, image_path, output_heatmap)) –

    preds (np.ndarray[N,K,3])

    The first two dimensions are coordinates, score is the third dimension of the array.

    boxes (np.ndarray[N,6])

    [center[0], center[1], scale[0] , scale[1],area, score]

    image_paths (list[str])

For example, ['Capture12/0390_dh_touchROM/cam410209/image62434.jpg']

    output_heatmap (np.ndarray[N, K, H, W])

model outputs.

  • res_folder (str) – Path of directory to save the results.

  • metric (str | list[str]) – Metric to be performed. Options: ‘PCK’, ‘AUC’, ‘EPE’.

返回

Evaluation results for evaluation metric.

返回类型

dict

class mmpose.datasets.MeshAdversarialDataset(train_dataset, adversarial_dataset)[源代码]

Mix Dataset for the adversarial training in 3D human mesh estimation task.

The dataset combines data from two datasets and return a dict containing data from two datasets.

参数
  • train_dataset (Dataset) – Dataset for 3D human mesh estimation.

  • adversarial_dataset (Dataset) – Dataset for adversarial learning, provides real SMPL parameters.

class mmpose.datasets.MeshH36MDataset(ann_file, img_prefix, data_cfg, pipeline, test_mode=False)[源代码]

Human3.6M Dataset for 3D human mesh estimation. It inherits all functions from MeshBaseDataset and has its own evaluate function.

The dataset loads raw features and applies specified transforms to return a dict containing the image tensors and other information.

参数
  • ann_file (str) – Path to the annotation file.

  • img_prefix (str) – Path to a directory where images are held. Default: None.

  • data_cfg (dict) – config

  • pipeline (list[dict | callable]) – A sequence of data transforms.

  • test_mode (bool) – Store True when building test or validation dataset. Default: False.

evaluate(outputs, res_folder, metric='joint_error', logger=None)[source]

Evaluate 3D keypoint results.

static evaluate_kernel(pred_joints_3d, joints_3d, joints_3d_visible)[source]

Evaluate one example.

class mmpose.datasets.MeshMixDataset(configs, partition)[source]

Mix Dataset for 3D human mesh estimation.

The dataset combines data from multiple datasets (MeshBaseDataset) and samples the data from the different datasets with the provided proportions. The dataset loads raw features and applies specified transforms to return a dict containing the image tensors and other information.

Parameters
  • configs (list) – List of configs for multiple datasets.

  • partition (list) – Sampling proportions of the multiple datasets. The length of partition should be the same as that of configs. Its elements should be non-negative and need not sum to one.

Example

>>> from mmpose.datasets import MeshMixDataset
>>> data_cfg = dict(
>>>     image_size=[256, 256],
>>>     iuv_size=[64, 64],
>>>     num_joints=24,
>>>     use_IUV=True,
>>>     uv_type='BF')
>>>
>>> mix_dataset = MeshMixDataset(
>>>     configs=[
>>>         dict(
>>>             ann_file='tests/data/h36m/test_h36m.npz',
>>>             img_prefix='tests/data/h36m',
>>>             data_cfg=data_cfg,
>>>             pipeline=[]),
>>>         dict(
>>>             ann_file='tests/data/h36m/test_h36m.npz',
>>>             img_prefix='tests/data/h36m',
>>>             data_cfg=data_cfg,
>>>             pipeline=[]),
>>>     ],
>>>     partition=[0.6, 0.4])

class mmpose.datasets.MoshDataset(ann_file, pipeline, test_mode=False)[source]

Mosh Dataset for adversarial training in the 3D human mesh estimation task.

The dataset returns a dict containing real-world SMPL parameters.

Parameters
  • ann_file (str) – Path to the annotation file.

  • pipeline (list[dict | callable]) – A sequence of data transforms.

  • test_mode (bool) – Store True when building test or validation dataset. Default: False.

class mmpose.datasets.OneHand10KDataset(ann_file, img_prefix, data_cfg, pipeline, test_mode=False)[source]

OneHand10K dataset for top-down hand pose estimation.

'Mask-pose Cascaded CNN for 2D Hand Pose Estimation from Single Color Images', TCSVT'2019. More details can be found in the paper.

The dataset loads raw features and applies specified transforms to return a dict containing the image tensors and other information.

OneHand10K keypoint indexes:

0: 'wrist',
1: 'thumb1',
2: 'thumb2',
3: 'thumb3',
4: 'thumb4',
5: 'forefinger1',
6: 'forefinger2',
7: 'forefinger3',
8: 'forefinger4',
9: 'middle_finger1',
10: 'middle_finger2',
11: 'middle_finger3',
12: 'middle_finger4',
13: 'ring_finger1',
14: 'ring_finger2',
15: 'ring_finger3',
16: 'ring_finger4',
17: 'pinky_finger1',
18: 'pinky_finger2',
19: 'pinky_finger3',
20: 'pinky_finger4'
Parameters
  • ann_file (str) – Path to the annotation file.

  • img_prefix (str) – Path to a directory where images are held. Default: None.

  • data_cfg (dict) – config

  • pipeline (list[dict | callable]) – A sequence of data transforms.

  • test_mode (bool) – Store True when building test or validation dataset. Default: False.

evaluate(outputs, res_folder, metric='PCK', **kwargs)[source]

Evaluate onehand10k keypoint results. The pose prediction results will be saved in ${res_folder}/result_keypoints.json.

Note

batch_size: N num_keypoints: K heatmap height: H heatmap width: W

Parameters
  • outputs (list(preds, boxes, image_path, output_heatmap)) –

    preds (np.ndarray[N,K,3])

    The first two dimensions are coordinates; the score is the third dimension of the array.

    boxes (np.ndarray[N,6])

    [center[0], center[1], scale[0], scale[1], area, score]

    image_paths (list[str])

    For example, ['Test/source/0.jpg']

    output_heatmap (np.ndarray[N, K, H, W])

    model outputs.

  • res_folder (str) – Path of directory to save the results.

  • metric (str | list[str]) – Metric to be performed. Options: ‘PCK’, ‘AUC’, ‘EPE’.

Returns

Evaluation results for the evaluation metric.

Return type

dict

class mmpose.datasets.PanopticDataset(ann_file, img_prefix, data_cfg, pipeline, test_mode=False)[source]

Panoptic dataset for top-down hand pose estimation.

'Hand Keypoint Detection in Single Images using Multiview Bootstrapping', CVPR'2017. More details can be found in the paper.

The dataset loads raw features and applies specified transforms to return a dict containing the image tensors and other information.

Panoptic keypoint indexes:

0: 'wrist',
1: 'thumb1',
2: 'thumb2',
3: 'thumb3',
4: 'thumb4',
5: 'forefinger1',
6: 'forefinger2',
7: 'forefinger3',
8: 'forefinger4',
9: 'middle_finger1',
10: 'middle_finger2',
11: 'middle_finger3',
12: 'middle_finger4',
13: 'ring_finger1',
14: 'ring_finger2',
15: 'ring_finger3',
16: 'ring_finger4',
17: 'pinky_finger1',
18: 'pinky_finger2',
19: 'pinky_finger3',
20: 'pinky_finger4'
Parameters
  • ann_file (str) – Path to the annotation file.

  • img_prefix (str) – Path to a directory where images are held. Default: None.

  • data_cfg (dict) – config

  • pipeline (list[dict | callable]) – A sequence of data transforms.

  • test_mode (bool) – Store True when building test or validation dataset. Default: False.

evaluate(outputs, res_folder, metric='PCKh', **kwargs)[source]

Evaluate panoptic keypoint results. The pose prediction results will be saved in ${res_folder}/result_keypoints.json.

Note

batch_size: N num_keypoints: K heatmap height: H heatmap width: W

Parameters
  • outputs (list(preds, boxes, image_path, output_heatmap)) –

    preds (np.ndarray[N,K,3])

    The first two dimensions are coordinates; the score is the third dimension of the array.

    boxes (np.ndarray[N,6])

    [center[0], center[1], scale[0], scale[1], area, score]

    image_paths (list[str])

    For example, ['hand_labels/manual_test/000648952_02_l.jpg']

    output_heatmap (np.ndarray[N, K, H, W])

    model outputs.

  • res_folder (str) – Path of directory to save the results.

  • metric (str | list[str]) – Metric to be performed. Options: ‘PCKh’, ‘AUC’, ‘EPE’.

Returns

Evaluation results for the evaluation metric.

Return type

dict

class mmpose.datasets.TopDownAicDataset(ann_file, img_prefix, data_cfg, pipeline, test_mode=False)[source]

AicDataset dataset for top-down pose estimation.

AI Challenger: A Large-scale Dataset for Going Deeper in Image Understanding.

The dataset loads raw features and applies specified transforms to return a dict containing the image tensors and other information.

AIC keypoint indexes:

0: 'right_shoulder',
1: 'right_elbow',
2: 'right_wrist',
3: 'left_shoulder',
4: 'left_elbow',
5: 'left_wrist',
6: 'right_hip',
7: 'right_knee',
8: 'right_ankle',
9: 'left_hip',
10: 'left_knee',
11: 'left_ankle',
12: 'head_top',
13: 'neck'

Parameters
  • ann_file (str) – Path to the annotation file.

  • img_prefix (str) – Path to a directory where images are held. Default: None.

  • data_cfg (dict) – config

  • pipeline (list[dict | callable]) – A sequence of data transforms.

  • test_mode (bool) – Store True when building test or validation dataset. Default: False.

class mmpose.datasets.TopDownCocoDataset(ann_file, img_prefix, data_cfg, pipeline, test_mode=False)[source]

CocoDataset dataset for top-down pose estimation.

'Microsoft COCO: Common Objects in Context', ECCV'2014. More details can be found in the paper.

The dataset loads raw features and applies specified transforms to return a dict containing the image tensors and other information.

COCO keypoint indexes:

0: 'nose',
1: 'left_eye',
2: 'right_eye',
3: 'left_ear',
4: 'right_ear',
5: 'left_shoulder',
6: 'right_shoulder',
7: 'left_elbow',
8: 'right_elbow',
9: 'left_wrist',
10: 'right_wrist',
11: 'left_hip',
12: 'right_hip',
13: 'left_knee',
14: 'right_knee',
15: 'left_ankle',
16: 'right_ankle'
Parameters
  • ann_file (str) – Path to the annotation file.

  • img_prefix (str) – Path to a directory where images are held. Default: None.

  • data_cfg (dict) – config

  • pipeline (list[dict | callable]) – A sequence of data transforms.

  • test_mode (bool) – Store True when building test or validation dataset. Default: False.

evaluate(outputs, res_folder, metric='mAP', **kwargs)[source]

Evaluate coco keypoint results. The pose prediction results will be saved in ${res_folder}/result_keypoints.json.

Note

batch_size: N num_keypoints: K heatmap height: H heatmap width: W

Parameters
  • outputs (list(dict)) –

    preds (np.ndarray[N,K,3])

    The first two dimensions are coordinates; the score is the third dimension of the array.

    boxes (np.ndarray[N,6])

    [center[0], center[1], scale[0], scale[1], area, score]

    image_paths (list[str])

    For example, ['data/coco/val2017/000000393226.jpg']

    heatmap (np.ndarray[N, K, H, W])

    model output heatmap

    bbox_id (list(int))

  • res_folder (str) – Path of directory to save the results.

  • metric (str | list[str]) – Metric to be performed. Defaults: ‘mAP’.

Returns

Evaluation results for the evaluation metric.

Return type

dict
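
A minimal usage sketch (hedged: the variable names and paths are hypothetical, and outputs would normally be collected by running inference over the dataset, e.g. with a test loop):

# Sketch only: assumes `coco_val` is a TopDownCocoDataset built with
# test_mode=True, and `outputs` holds the per-image result dicts
# (preds, boxes, image_paths, bbox_ids) gathered from inference.
results = coco_val.evaluate(outputs, res_folder='work_dirs/eval', metric='mAP')
print(results)  # dict of evaluation results (AP-style keypoint scores)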

class mmpose.datasets.TopDownCocoWholeBodyDataset(ann_file, img_prefix, data_cfg, pipeline, test_mode=False)[source]

CocoWholeBodyDataset dataset for top-down pose estimation.

'Whole-Body Human Pose Estimation in the Wild', ECCV'2020. More details can be found in the paper.

The dataset loads raw features and applies specified transforms to return a dict containing the image tensors and other information.

In total, we have 133 keypoints for wholebody pose estimation.

COCO-WholeBody keypoint indexes:

0-16: 17 body keypoints
17-22: 6 foot keypoints
23-90: 68 face keypoints
91-132: 42 hand keypoints

Parameters
  • ann_file (str) – Path to the annotation file.

  • img_prefix (str) – Path to a directory where images are held. Default: None.

  • data_cfg (dict) – config

  • pipeline (list[dict | callable]) – A sequence of data transforms.

  • test_mode (bool) – Store True when building test or validation dataset. Default: False.

class mmpose.datasets.TopDownCrowdPoseDataset(ann_file, img_prefix, data_cfg, pipeline, test_mode=False)[source]

CrowdPoseDataset dataset for top-down pose estimation.

The dataset loads raw features and applies specified transforms to return a dict containing the image tensors and other information.

CrowdPose keypoint indexes:

0: 'left_shoulder',
1: 'right_shoulder',
2: 'left_elbow',
3: 'right_elbow',
4: 'left_wrist',
5: 'right_wrist',
6: 'left_hip',
7: 'right_hip',
8: 'left_knee',
9: 'right_knee',
10: 'left_ankle',
11: 'right_ankle',
12: 'top_head',
13: 'neck'
Parameters
  • ann_file (str) – Path to the annotation file.

  • img_prefix (str) – Path to a directory where images are held. Default: None.

  • data_cfg (dict) – config

  • pipeline (list[dict | callable]) – A sequence of data transforms.

  • test_mode (bool) – Store True when building test or validation dataset. Default: False.

class mmpose.datasets.TopDownFreiHandDataset(*args, **kwargs)[source]

Deprecated TopDownFreiHandDataset.

evaluate(cfg, preds, output_dir, *args, **kwargs)[source]

Evaluate keypoint results.

class mmpose.datasets.TopDownJhmdbDataset(ann_file, img_prefix, data_cfg, pipeline, test_mode=False)[source]

JhmdbDataset dataset for top-down pose estimation.

'Towards understanding action recognition', ICCV'2013. More details can be found in the paper <https://openaccess.thecvf.com/content_iccv_2013/papers/Jhuang_Towards_Understanding_Action_2013_ICCV_paper.pdf>.

The dataset loads raw features and applies specified transforms to return a dict containing the image tensors and other information.

sub-JHMDB keypoint indexes:

0: 'neck',
1: 'belly',
2: 'head',
3: 'right_shoulder',
4: 'left_shoulder',
5: 'right_hip',
6: 'left_hip',
7: 'right_elbow',
8: 'left_elbow',
9: 'right_knee',
10: 'left_knee',
11: 'right_wrist',
12: 'left_wrist',
13: 'right_ankle',
14: 'left_ankle'

Parameters
  • ann_file (str) – Path to the annotation file.

  • img_prefix (str) – Path to a directory where images are held. Default: None.

  • data_cfg (dict) – config

  • pipeline (list[dict | callable]) – A sequence of data transforms.

  • test_mode (bool) – Store True when building test or validation dataset. Default: False.

evaluate(outputs, res_folder, metric='PCK', **kwargs)[source]

Evaluate JHMDB keypoint results. The pose prediction results will be saved in ${res_folder}/result_keypoints.json.

Note

batch_size: N num_keypoints: K heatmap height: H heatmap width: W

Parameters
  • outputs (list(preds, boxes, image_path, output_heatmap)) –

    preds (np.ndarray[N,K,3])

    The first two dimensions are coordinates; the score is the third dimension of the array.

    boxes (np.ndarray[N,6])

    [center[0], center[1], scale[0], scale[1], area, score]

    image_path (list[str])

    output_heatmap (np.ndarray[N, K, H, W])

    model outputs.

  • res_folder (str) – Path of directory to save the results.

  • metric (str | list[str]) – Metric to be performed. Options: ‘PCK’, ‘tPCK’. PCK means normalized by the bounding boxes, while tPCK means normalized by the torso size.

Returns

Evaluation results for the evaluation metric.

Return type

dict

class mmpose.datasets.TopDownMhpDataset(ann_file, img_prefix, data_cfg, pipeline, test_mode=False)[source]

MHPv2.0 dataset for top-down pose estimation.

The Multi-Human Parsing project of the Learning and Vision (LV) Group, National University of Singapore (NUS), aims to push the frontiers of fine-grained visual understanding of humans in crowd scenes. <https://lv-mhp.github.io/>

Note that the evaluation metric used here is mAP (adapted from COCO), which may differ from the official evaluation code at https://github.com/ZhaoJ9014/Multi-Human-Parsing/tree/master/Evaluation/Multi-Human-Pose. Please be cautious if you use these results in papers.

The dataset loads raw features and applies specified transforms to return a dict containing the image tensors and other information.

MHP keypoint indexes:

0: "right ankle",
1: "right knee",
2: "right hip",
3: "left hip",
4: "left knee",
5: "left ankle",
6: "pelvis",
7: "thorax",
8: "upper neck",
9: "head top",
10: "right wrist",
11: "right elbow",
12: "right shoulder",
13: "left shoulder",
14: "left elbow",
15: "left wrist",
Parameters
  • ann_file (str) – Path to the annotation file.

  • img_prefix (str) – Path to a directory where images are held. Default: None.

  • data_cfg (dict) – config

  • pipeline (list[dict | callable]) – A sequence of data transforms.

  • test_mode (bool) – Store True when building test or validation dataset. Default: False.

class mmpose.datasets.TopDownMpiiDataset(ann_file, img_prefix, data_cfg, pipeline, test_mode=False)[source]

MPII Dataset for top-down pose estimation.

'2D Human Pose Estimation: New Benchmark and State of the Art Analysis', CVPR'2014. More details can be found in the paper.

The dataset loads raw features and applies specified transforms to return a dict containing the image tensors and other information.

MPII keypoint indexes:

0: 'right_ankle'
1: 'right_knee',
2: 'right_hip',
3: 'left_hip',
4: 'left_knee',
5: 'left_ankle',
6: 'pelvis',
7: 'thorax',
8: 'upper_neck',
9: 'head_top',
10: 'right_wrist',
11: 'right_elbow',
12: 'right_shoulder',
13: 'left_shoulder',
14: 'left_elbow',
15: 'left_wrist'
Parameters
  • ann_file (str) – Path to the annotation file.

  • img_prefix (str) – Path to a directory where images are held. Default: None.

  • data_cfg (dict) – config

  • pipeline (list[dict | callable]) – A sequence of data transforms.

  • test_mode (bool) – Store True when building test or validation dataset. Default: False.

evaluate(outputs, res_folder, metric='PCKh', **kwargs)[source]

Evaluate PCKh for MPII dataset. Adapted from https://github.com/leoxiaobin/deep-high-resolution-net.pytorch Copyright (c) Microsoft, under the MIT License.

Note

batch_size: N num_keypoints: K heatmap height: H heatmap width: W

Parameters
  • outputs (list(preds, boxes, image_path, heatmap)) –

    • preds (np.ndarray[N,K,3]): The first two dimensions are coordinates; the score is the third dimension of the array.

    • boxes (np.ndarray[N,6]): [center[0], center[1], scale[0], scale[1], area, score]

    • image_paths (list[str]): For example, ['/val2017/000000397133.jpg']

    • heatmap (np.ndarray[N, K, H, W]): model output heatmap.

  • res_folder (str) – Path of directory to save the results.

  • metric (str | list[str]) – Metrics to be performed. Defaults: ‘PCKh’.

Returns

PCKh for each joint.

Return type

dict

class mmpose.datasets.TopDownMpiiTrbDataset(ann_file, img_prefix, data_cfg, pipeline, test_mode=False)[source]

MPII-TRB Dataset dataset for top-down pose estimation.

'TRB: A Novel Triplet Representation for Understanding 2D Human Body', ICCV'2019. More details can be found in the paper.

The dataset loads raw features and applies specified transforms to return a dict containing the image tensors and other information.

MPII-TRB keypoint indexes:

0: 'left_shoulder'
1: 'right_shoulder'
2: 'left_elbow'
3: 'right_elbow'
4: 'left_wrist'
5: 'right_wrist'
6: 'left_hip'
7: 'right_hip'
8: 'left_knee'
9: 'right_knee'
10: 'left_ankle'
11: 'right_ankle'
12: 'head'
13: 'neck'

14: 'right_neck'
15: 'left_neck'
16: 'medial_right_shoulder'
17: 'lateral_right_shoulder'
18: 'medial_right_bow'
19: 'lateral_right_bow'
20: 'medial_right_wrist'
21: 'lateral_right_wrist'
22: 'medial_left_shoulder'
23: 'lateral_left_shoulder'
24: 'medial_left_bow'
25: 'lateral_left_bow'
26: 'medial_left_wrist'
27: 'lateral_left_wrist'
28: 'medial_right_hip'
29: 'lateral_right_hip'
30: 'medial_right_knee'
31: 'lateral_right_knee'
32: 'medial_right_ankle'
33: 'lateral_right_ankle'
34: 'medial_left_hip'
35: 'lateral_left_hip'
36: 'medial_left_knee'
37: 'lateral_left_knee'
38: 'medial_left_ankle'
39: 'lateral_left_ankle'
Parameters
  • ann_file (str) – Path to the annotation file.

  • img_prefix (str) – Path to a directory where images are held. Default: None.

  • data_cfg (dict) – config

  • pipeline (list[dict | callable]) – A sequence of data transforms.

  • test_mode (bool) – Store True when building test or validation dataset. Default: False.

evaluate(outputs, res_folder, metric='PCKh', **kwargs)[source]

Evaluate PCKh for MPII-TRB dataset.

Note

batch_size: N num_keypoints: K heatmap height: H heatmap width: W

Parameters
  • outputs (list(preds, boxes, image_paths, heatmap)) –

    • preds (np.ndarray[N,K,3]): The first two dimensions are coordinates; the score is the third dimension of the array.

    • boxes (np.ndarray[N,6]): [center[0], center[1], scale[0], scale[1], area, score]

    • image_paths (list[str]): For example, ['/val2017/000000397133.jpg']

    • heatmap (np.ndarray[N, K, H, W]): model output heatmap.

    • bbox_ids (list[str]): For example, ['27407']

  • res_folder (str) – Path of directory to save the results.

  • metric (str | list[str]) – Metrics to be performed. Defaults: ‘PCKh’.

Returns

PCKh for each joint.

Return type

dict

class mmpose.datasets.TopDownOCHumanDataset(ann_file, img_prefix, data_cfg, pipeline, test_mode=False)[source]

OChuman dataset for top-down pose estimation.

'Pose2Seg: Detection Free Human Instance Segmentation', CVPR'2019. More details can be found in the paper.

The "Occluded Human (OCHuman)" dataset contains 8110 heavily occluded human instances within 4731 images. OCHuman is designed for validation and testing: the model should be trained on the COCO training set and then evaluated on OCHuman to test its robustness to occlusion.

OCHuman keypoint indexes (same as COCO):

0: 'nose',
1: 'left_eye',
2: 'right_eye',
3: 'left_ear',
4: 'right_ear',
5: 'left_shoulder',
6: 'right_shoulder',
7: 'left_elbow',
8: 'right_elbow',
9: 'left_wrist',
10: 'right_wrist',
11: 'left_hip',
12: 'right_hip',
13: 'left_knee',
14: 'right_knee',
15: 'left_ankle',
16: 'right_ankle'
Parameters
  • ann_file (str) – Path to the annotation file.

  • img_prefix (str) – Path to a directory where images are held. Default: None.

  • data_cfg (dict) – config

  • pipeline (list[dict | callable]) – A sequence of data transforms.

  • test_mode (bool) – Store True when building test or validation dataset. Default: False.

class mmpose.datasets.TopDownOneHand10KDataset(*args, **kwargs)[source]

Deprecated TopDownOneHand10KDataset.

evaluate(cfg, preds, output_dir, *args, **kwargs)[source]

Evaluate keypoint results.

class mmpose.datasets.TopDownPanopticDataset(*args, **kwargs)[source]

Deprecated TopDownPanopticDataset.

evaluate(cfg, preds, output_dir, *args, **kwargs)[source]

Evaluate keypoint results.

class mmpose.datasets.TopDownPoseTrack18Dataset(ann_file, img_prefix, data_cfg, pipeline, test_mode=False)[source]

PoseTrack18 dataset for top-down pose estimation.

'Posetrack: A benchmark for human pose estimation and tracking', CVPR'2018. More details can be found in the paper.

The dataset loads raw features and applies specified transforms to return a dict containing the image tensors and other information.

PoseTrack2018 keypoint indexes:

0: 'nose',
1: 'head_bottom',
2: 'head_top',
3: 'left_ear',
4: 'right_ear',
5: 'left_shoulder',
6: 'right_shoulder',
7: 'left_elbow',
8: 'right_elbow',
9: 'left_wrist',
10: 'right_wrist',
11: 'left_hip',
12: 'right_hip',
13: 'left_knee',
14: 'right_knee',
15: 'left_ankle',
16: 'right_ankle'

Parameters
  • ann_file (str) – Path to the annotation file.

  • img_prefix (str) – Path to a directory where images are held. Default: None.

  • data_cfg (dict) – config

  • pipeline (list[dict | callable]) – A sequence of data transforms.

  • test_mode (bool) – Store True when building test or validation dataset. Default: False.

evaluate(outputs, res_folder, metric='mAP', **kwargs)[source]

Evaluate PoseTrack keypoint results. The pose prediction results will be saved in ${res_folder}/result_keypoints.json.

Note

num_keypoints: K

Parameters
  • outputs (list(preds, boxes, image_paths)) –

    preds (np.ndarray[N,K,3])

    The first two dimensions are coordinates; the score is the third dimension of the array.

    boxes (np.ndarray[N,6])

    [center[0], center[1], scale[0], scale[1], area, score]

    image_paths (list[str])

    For example, ['val/010016_mpii_test/000024.jpg']

    heatmap (np.ndarray[N, K, H, W])

    model output heatmap.

    bbox_id (list(int))

  • res_folder (str) – Path of directory to save the results.

  • metric (str | list[str]) – Metric to be performed. Defaults: ‘mAP’.

Returns

Evaluation results for the evaluation metric.

Return type

dict

mmpose.datasets.build_dataloader(dataset, samples_per_gpu, workers_per_gpu, num_gpus=1, dist=True, shuffle=True, seed=None, drop_last=True, pin_memory=True, **kwargs)[source]

Build PyTorch DataLoader.

In distributed training, each GPU/process has a dataloader. In non-distributed training, there is only one dataloader for all GPUs.

Parameters
  • dataset (Dataset) – A PyTorch dataset.

  • samples_per_gpu (int) – Number of training samples on each GPU, i.e., batch size of each GPU.

  • workers_per_gpu (int) – How many subprocesses to use for data loading for each GPU.

  • num_gpus (int) – Number of GPUs. Only used in non-distributed training.

  • dist (bool) – Distributed training/test or not. Default: True.

  • shuffle (bool) – Whether to shuffle the data at every epoch. Default: True.

  • drop_last (bool) – Whether to drop the last incomplete batch in epoch. Default: True

  • pin_memory (bool) – Whether to use pin_memory in DataLoader. Default: True

  • kwargs – any keyword argument to be used to initialize DataLoader

Returns

A PyTorch dataloader.

Return type

DataLoader
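
A minimal sketch of typical usage, assuming dataset was built beforehand (for example with build_dataset below); the batch size and worker counts are illustrative:

from mmpose.datasets import build_dataloader

# Non-distributed loader: batch size 16 per GPU, 2 worker processes.
data_loader = build_dataloader(
    dataset,
    samples_per_gpu=16,
    workers_per_gpu=2,
    dist=False,
    shuffle=True)
for batch in data_loader:
    ...  # feed each batch to the model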

mmpose.datasets.build_dataset(cfg, default_args=None)[source]

Build a dataset from config dict.

Parameters
  • cfg (dict) – Config dict. It should at least contain the key “type”.

  • default_args (dict, optional) – Default initialization arguments. Default: None.

Returns

The constructed dataset.

Return type

Dataset
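
A minimal sketch, assuming hypothetical annotation paths and a data_cfg dict prepared elsewhere (see the dataset classes above for the expected keys); the empty pipeline mirrors the MeshMixDataset example above:

from mmpose.datasets import build_dataset

dataset_cfg = dict(
    type='TopDownCocoDataset',
    ann_file='data/coco/annotations/person_keypoints_val2017.json',  # hypothetical path
    img_prefix='data/coco/val2017/',                                 # hypothetical path
    data_cfg=data_cfg,  # assumed to be defined elsewhere
    pipeline=[],
    test_mode=True)
dataset = build_dataset(dataset_cfg)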

datasets

class mmpose.datasets.datasets.top_down.TopDownAicDataset(ann_file, img_prefix, data_cfg, pipeline, test_mode=False)[source]

AicDataset dataset for top-down pose estimation.

AI Challenger: A Large-scale Dataset for Going Deeper in Image Understanding.

The dataset loads raw features and applies specified transforms to return a dict containing the image tensors and other information.

AIC keypoint indexes:

0: 'right_shoulder',
1: 'right_elbow',
2: 'right_wrist',
3: 'left_shoulder',
4: 'left_elbow',
5: 'left_wrist',
6: 'right_hip',
7: 'right_knee',
8: 'right_ankle',
9: 'left_hip',
10: 'left_knee',
11: 'left_ankle',
12: 'head_top',
13: 'neck'

Parameters
  • ann_file (str) – Path to the annotation file.

  • img_prefix (str) – Path to a directory where images are held. Default: None.

  • data_cfg (dict) – config

  • pipeline (list[dict | callable]) – A sequence of data transforms.

  • test_mode (bool) – Store True when building test or validation dataset. Default: False.

class mmpose.datasets.datasets.top_down.TopDownCocoDataset(ann_file, img_prefix, data_cfg, pipeline, test_mode=False)[source]

CocoDataset dataset for top-down pose estimation.

'Microsoft COCO: Common Objects in Context', ECCV'2014. More details can be found in the paper.

The dataset loads raw features and applies specified transforms to return a dict containing the image tensors and other information.

COCO keypoint indexes:

0: 'nose',
1: 'left_eye',
2: 'right_eye',
3: 'left_ear',
4: 'right_ear',
5: 'left_shoulder',
6: 'right_shoulder',
7: 'left_elbow',
8: 'right_elbow',
9: 'left_wrist',
10: 'right_wrist',
11: 'left_hip',
12: 'right_hip',
13: 'left_knee',
14: 'right_knee',
15: 'left_ankle',
16: 'right_ankle'
Parameters
  • ann_file (str) – Path to the annotation file.

  • img_prefix (str) – Path to a directory where images are held. Default: None.

  • data_cfg (dict) – config

  • pipeline (list[dict | callable]) – A sequence of data transforms.

  • test_mode (bool) – Store True when building test or validation dataset. Default: False.

evaluate(outputs, res_folder, metric='mAP', **kwargs)[source]

Evaluate coco keypoint results. The pose prediction results will be saved in ${res_folder}/result_keypoints.json.

Note

batch_size: N num_keypoints: K heatmap height: H heatmap width: W

Parameters
  • outputs (list(dict)) –

    preds (np.ndarray[N,K,3])

    The first two dimensions are coordinates; the score is the third dimension of the array.

    boxes (np.ndarray[N,6])

    [center[0], center[1], scale[0], scale[1], area, score]

    image_paths (list[str])

    For example, ['data/coco/val2017/000000393226.jpg']

    heatmap (np.ndarray[N, K, H, W])

    model output heatmap

    bbox_id (list(int))

  • res_folder (str) – Path of directory to save the results.

  • metric (str | list[str]) – Metric to be performed. Defaults: ‘mAP’.

Returns

Evaluation results for the evaluation metric.

Return type

dict

class mmpose.datasets.datasets.top_down.TopDownCocoWholeBodyDataset(ann_file, img_prefix, data_cfg, pipeline, test_mode=False)[source]

CocoWholeBodyDataset dataset for top-down pose estimation.

'Whole-Body Human Pose Estimation in the Wild', ECCV'2020. More details can be found in the paper.

The dataset loads raw features and applies specified transforms to return a dict containing the image tensors and other information.

In total, we have 133 keypoints for wholebody pose estimation.

COCO-WholeBody keypoint indexes:

0-16: 17 body keypoints
17-22: 6 foot keypoints
23-90: 68 face keypoints
91-132: 42 hand keypoints

Parameters
  • ann_file (str) – Path to the annotation file.

  • img_prefix (str) – Path to a directory where images are held. Default: None.

  • data_cfg (dict) – config

  • pipeline (list[dict | callable]) – A sequence of data transforms.

  • test_mode (bool) – Store True when building test or validation dataset. Default: False.

class mmpose.datasets.datasets.top_down.TopDownCrowdPoseDataset(ann_file, img_prefix, data_cfg, pipeline, test_mode=False)[source]

CrowdPoseDataset dataset for top-down pose estimation.

The dataset loads raw features and applies specified transforms to return a dict containing the image tensors and other information.

CrowdPose keypoint indexes:

0: 'left_shoulder',
1: 'right_shoulder',
2: 'left_elbow',
3: 'right_elbow',
4: 'left_wrist',
5: 'right_wrist',
6: 'left_hip',
7: 'right_hip',
8: 'left_knee',
9: 'right_knee',
10: 'left_ankle',
11: 'right_ankle',
12: 'top_head',
13: 'neck'
Parameters
  • ann_file (str) – Path to the annotation file.

  • img_prefix (str) – Path to a directory where images are held. Default: None.

  • data_cfg (dict) – config

  • pipeline (list[dict | callable]) – A sequence of data transforms.

  • test_mode (bool) – Store True when building test or validation dataset. Default: False.

class mmpose.datasets.datasets.top_down.TopDownH36MDataset(ann_file, img_prefix, data_cfg, pipeline, test_mode=False)[source]

Human3.6M dataset for top-down 2D pose estimation.

'Human3.6M: Large Scale Datasets and Predictive Methods for 3D Human Sensing in Natural Environments', TPAMI'2014. More details can be found in the paper.

Human3.6M keypoint indexes:

0: 'root (pelvis)',
1: 'right_hip',
2: 'right_knee',
3: 'right_foot',
4: 'left_hip',
5: 'left_knee',
6: 'left_foot',
7: 'spine',
8: 'thorax',
9: 'neck_base',
10: 'head',
11: 'left_shoulder',
12: 'left_elbow',
13: 'left_wrist',
14: 'right_shoulder',
15: 'right_elbow',
16: 'right_wrist'

Parameters
  • ann_file (str) – Path to the annotation file.

  • img_prefix (str) – Path to a directory where images are held. Default: None.

  • data_cfg (dict) – config

  • pipeline (list[dict | callable]) – A sequence of data transforms.

  • test_mode (bool) – Store True when building test or validation dataset. Default: False.

evaluate(outputs, res_folder, metric, **kwargs)[source]

Evaluate Human3.6M 2D keypoint results. The pose prediction results will be saved in ${res_folder}/result_keypoints.json.

Note

batch_size: N num_keypoints: K heatmap height: H heatmap width: W

Parameters
  • outputs (list(dict)) –

    preds (np.ndarray[N,K,3])

    The first two dimensions are coordinates; the score is the third dimension of the array.

    boxes (np.ndarray[N,6])

    [center[0], center[1], scale[0], scale[1], area, score]

    image_paths (list[str])

    For example, ['data/coco/val2017/000000393226.jpg']

    heatmap (np.ndarray[N, K, H, W])

    model output heatmap

    bbox_id (list(int))

  • res_folder (str) – Path of directory to save the results.

  • metric (str | list[str]) – Metric to be performed. Defaults: ‘mAP’.

Returns

Evaluation results for the evaluation metric.

Return type

dict

class mmpose.datasets.datasets.top_down.TopDownJhmdbDataset(ann_file, img_prefix, data_cfg, pipeline, test_mode=False)[source]

JhmdbDataset dataset for top-down pose estimation.

'Towards understanding action recognition', ICCV'2013. More details can be found in the paper <https://openaccess.thecvf.com/content_iccv_2013/papers/Jhuang_Towards_Understanding_Action_2013_ICCV_paper.pdf>.

The dataset loads raw features and applies specified transforms to return a dict containing the image tensors and other information.

sub-JHMDB keypoint indexes:

0: 'neck',
1: 'belly',
2: 'head',
3: 'right_shoulder',
4: 'left_shoulder',
5: 'right_hip',
6: 'left_hip',
7: 'right_elbow',
8: 'left_elbow',
9: 'right_knee',
10: 'left_knee',
11: 'right_wrist',
12: 'left_wrist',
13: 'right_ankle',
14: 'left_ankle'

Parameters
  • ann_file (str) – Path to the annotation file.

  • img_prefix (str) – Path to a directory where images are held. Default: None.

  • data_cfg (dict) – config

  • pipeline (list[dict | callable]) – A sequence of data transforms.

  • test_mode (bool) – Store True when building test or validation dataset. Default: False.

evaluate(outputs, res_folder, metric='PCK', **kwargs)[source]

Evaluate JHMDB keypoint results. The pose prediction results will be saved in ${res_folder}/result_keypoints.json.

Note

batch_size: N num_keypoints: K heatmap height: H heatmap width: W

Parameters
  • outputs (list(preds, boxes, image_path, output_heatmap)) –

    preds (np.ndarray[N,K,3])

    The first two dimensions are coordinates; the score is the third dimension of the array.

    boxes (np.ndarray[N,6])

    [center[0], center[1], scale[0], scale[1], area, score]

    image_path (list[str])

    output_heatmap (np.ndarray[N, K, H, W])

    model outputs.

  • res_folder (str) – Path of directory to save the results.

  • metric (str | list[str]) – Metric to be performed. Options: ‘PCK’, ‘tPCK’. PCK means normalized by the bounding boxes, while tPCK means normalized by the torso size.

Returns

Evaluation results for the evaluation metric.

Return type

dict

class mmpose.datasets.datasets.top_down.TopDownMhpDataset(ann_file, img_prefix, data_cfg, pipeline, test_mode=False)[source]

MHPv2.0 dataset for top-down pose estimation.

The Multi-Human Parsing project of the Learning and Vision (LV) Group, National University of Singapore (NUS), aims to push the frontiers of fine-grained visual understanding of humans in crowd scenes. <https://lv-mhp.github.io/>

Note that the evaluation metric used here is mAP (adapted from COCO), which may differ from the official evaluation code at https://github.com/ZhaoJ9014/Multi-Human-Parsing/tree/master/Evaluation/Multi-Human-Pose. Please be cautious if you use these results in papers.

The dataset loads raw features and applies specified transforms to return a dict containing the image tensors and other information.

MHP keypoint indexes:

0: "right ankle",
1: "right knee",
2: "right hip",
3: "left hip",
4: "left knee",
5: "left ankle",
6: "pelvis",
7: "thorax",
8: "upper neck",
9: "head top",
10: "right wrist",
11: "right elbow",
12: "right shoulder",
13: "left shoulder",
14: "left elbow",
15: "left wrist",
Parameters
  • ann_file (str) – Path to the annotation file.

  • img_prefix (str) – Path to a directory where images are held. Default: None.

  • data_cfg (dict) – config

  • pipeline (list[dict | callable]) – A sequence of data transforms.

  • test_mode (bool) – Store True when building test or validation dataset. Default: False.

class mmpose.datasets.datasets.top_down.TopDownMpiiDataset(ann_file, img_prefix, data_cfg, pipeline, test_mode=False)[source]

MPII Dataset for top-down pose estimation.

'2D Human Pose Estimation: New Benchmark and State of the Art Analysis', CVPR'2014. More details can be found in the paper.

The dataset loads raw features and applies specified transforms to return a dict containing the image tensors and other information.

MPII keypoint indexes:

0: 'right_ankle'
1: 'right_knee',
2: 'right_hip',
3: 'left_hip',
4: 'left_knee',
5: 'left_ankle',
6: 'pelvis',
7: 'thorax',
8: 'upper_neck',
9: 'head_top',
10: 'right_wrist',
11: 'right_elbow',
12: 'right_shoulder',
13: 'left_shoulder',
14: 'left_elbow',
15: 'left_wrist'
Parameters
  • ann_file (str) – Path to the annotation file.

  • img_prefix (str) – Path to a directory where images are held. Default: None.

  • data_cfg (dict) – config

  • pipeline (list[dict | callable]) – A sequence of data transforms.

  • test_mode (bool) – Store True when building test or validation dataset. Default: False.

evaluate(outputs, res_folder, metric='PCKh', **kwargs)[source]

Evaluate PCKh for MPII dataset. Adapted from https://github.com/leoxiaobin/deep-high-resolution-net.pytorch Copyright (c) Microsoft, under the MIT License.

Note

batch_size: N num_keypoints: K heatmap height: H heatmap width: W

Parameters
  • outputs (list(preds, boxes, image_path, heatmap)) –

    • preds (np.ndarray[N,K,3]): The first two dimensions are coordinates; the score is the third dimension of the array.

    • boxes (np.ndarray[N,6]): [center[0], center[1], scale[0], scale[1], area, score]

    • image_paths (list[str]): For example, ['/val2017/000000397133.jpg']

    • heatmap (np.ndarray[N, K, H, W]): model output heatmap.

  • res_folder (str) – Path of directory to save the results.

  • metric (str | list[str]) – Metrics to be performed. Defaults: ‘PCKh’.

Returns

PCKh for each joint.

Return type

dict

class mmpose.datasets.datasets.top_down.TopDownMpiiTrbDataset(ann_file, img_prefix, data_cfg, pipeline, test_mode=False)[source]

MPII-TRB Dataset dataset for top-down pose estimation.

'TRB: A Novel Triplet Representation for Understanding 2D Human Body', ICCV'2019. More details can be found in the paper.

The dataset loads raw features and applies specified transforms to return a dict containing the image tensors and other information.

MPII-TRB keypoint indexes:

0: 'left_shoulder'
1: 'right_shoulder'
2: 'left_elbow'
3: 'right_elbow'
4: 'left_wrist'
5: 'right_wrist'
6: 'left_hip'
7: 'right_hip'
8: 'left_knee'
9: 'right_knee'
10: 'left_ankle'
11: 'right_ankle'
12: 'head'
13: 'neck'

14: 'right_neck'
15: 'left_neck'
16: 'medial_right_shoulder'
17: 'lateral_right_shoulder'
18: 'medial_right_bow'
19: 'lateral_right_bow'
20: 'medial_right_wrist'
21: 'lateral_right_wrist'
22: 'medial_left_shoulder'
23: 'lateral_left_shoulder'
24: 'medial_left_bow'
25: 'lateral_left_bow'
26: 'medial_left_wrist'
27: 'lateral_left_wrist'
28: 'medial_right_hip'
29: 'lateral_right_hip'
30: 'medial_right_knee'
31: 'lateral_right_knee'
32: 'medial_right_ankle'
33: 'lateral_right_ankle'
34: 'medial_left_hip'
35: 'lateral_left_hip'
36: 'medial_left_knee'
37: 'lateral_left_knee'
38: 'medial_left_ankle'
39: 'lateral_left_ankle'
Parameters
  • ann_file (str) – Path to the annotation file.

  • img_prefix (str) – Path to a directory where images are held. Default: None.

  • data_cfg (dict) – config

  • pipeline (list[dict | callable]) – A sequence of data transforms.

  • test_mode (bool) – Store True when building test or validation dataset. Default: False.

evaluate(outputs, res_folder, metric='PCKh', **kwargs)[source]

Evaluate PCKh for MPII-TRB dataset.

Note

batch_size: N num_keypoints: K heatmap height: H heatmap width: W

Parameters
  • outputs (list(preds, boxes, image_paths, heatmap)) –

    • preds (np.ndarray[N,K,3]): The first two dimensions are coordinates; the score is the third dimension of the array.

    • boxes (np.ndarray[N,6]): [center[0], center[1], scale[0], scale[1], area, score]

    • image_paths (list[str]): For example, ['/val2017/000000397133.jpg']

    • heatmap (np.ndarray[N, K, H, W]): model output heatmap.

    • bbox_ids (list[str]): For example, ['27407']

  • res_folder (str) – Path of directory to save the results.

  • metric (str | list[str]) – Metrics to be performed. Defaults: ‘PCKh’.

Returns

PCKh for each joint.

Return type

dict

class mmpose.datasets.datasets.top_down.TopDownOCHumanDataset(ann_file, img_prefix, data_cfg, pipeline, test_mode=False)[source]

OChuman dataset for top-down pose estimation.

'Pose2Seg: Detection Free Human Instance Segmentation', CVPR'2019. More details can be found in the paper.

The "Occluded Human (OCHuman)" dataset contains 8110 heavily occluded human instances within 4731 images. OCHuman is designed for validation and testing: the model should be trained on the COCO training set and then evaluated on OCHuman to test its robustness to occlusion.

OCHuman keypoint indexes (same as COCO):

0: 'nose',
1: 'left_eye',
2: 'right_eye',
3: 'left_ear',
4: 'right_ear',
5: 'left_shoulder',
6: 'right_shoulder',
7: 'left_elbow',
8: 'right_elbow',
9: 'left_wrist',
10: 'right_wrist',
11: 'left_hip',
12: 'right_hip',
13: 'left_knee',
14: 'right_knee',
15: 'left_ankle',
16: 'right_ankle'
Parameters
  • ann_file (str) – Path to the annotation file.

  • img_prefix (str) – Path to a directory where images are held. Default: None.

  • data_cfg (dict) – config

  • pipeline (list[dict | callable]) – A sequence of data transforms.

  • test_mode (bool) – Store True when building test or validation dataset. Default: False.

class mmpose.datasets.datasets.top_down.TopDownPoseTrack18Dataset(ann_file, img_prefix, data_cfg, pipeline, test_mode=False)[source]

PoseTrack18 dataset for top-down pose estimation.

'Posetrack: A benchmark for human pose estimation and tracking', CVPR'2018. More details can be found in the paper.

The dataset loads raw features and applies specified transforms to return a dict containing the image tensors and other information.

PoseTrack2018 keypoint indexes:

0: 'nose',
1: 'head_bottom',
2: 'head_top',
3: 'left_ear',
4: 'right_ear',
5: 'left_shoulder',
6: 'right_shoulder',
7: 'left_elbow',
8: 'right_elbow',
9: 'left_wrist',
10: 'right_wrist',
11: 'left_hip',
12: 'right_hip',
13: 'left_knee',
14: 'right_knee',
15: 'left_ankle',
16: 'right_ankle'

Parameters
  • ann_file (str) – Path to the annotation file.

  • img_prefix (str) – Path to a directory where images are held. Default: None.

  • data_cfg (dict) – config

  • pipeline (list[dict | callable]) – A sequence of data transforms.

  • test_mode (bool) – Store True when building test or validation dataset. Default: False.

evaluate(outputs, res_folder, metric='mAP', **kwargs)[source]

Evaluate PoseTrack keypoint results. The pose prediction results will be saved in ${res_folder}/result_keypoints.json.

Note

num_keypoints: K

Parameters
  • outputs (list(preds, boxes, image_paths)) –

    preds (np.ndarray[N,K,3])

    The first two dimensions are coordinates; the score is the third dimension of the array.

    boxes (np.ndarray[N,6])

    [center[0], center[1], scale[0], scale[1], area, score]

    image_paths (list[str])

    For example, ['val/010016_mpii_test/000024.jpg']

    heatmap (np.ndarray[N, K, H, W])

    model output heatmap.

    bbox_id (list(int))

  • res_folder (str) – Path of directory to save the results.

  • metric (str | list[str]) – Metric to be performed. Defaults: ‘mAP’.

Returns

Evaluation results for the evaluation metric.

Return type

dict

class mmpose.datasets.datasets.bottom_up.BottomUpAicDataset(ann_file, img_prefix, data_cfg, pipeline, test_mode=False)[source]

Aic dataset for bottom-up pose estimation.

AI Challenger: A Large-scale Dataset for Going Deeper in Image Understanding.

The dataset loads raw features and applies specified transforms to return a dict containing the image tensors and other information.

AIC keypoint indexes:

0: 'right_shoulder',
1: 'right_elbow',
2: 'right_wrist',
3: 'left_shoulder',
4: 'left_elbow',
5: 'left_wrist',
6: 'right_hip',
7: 'right_knee',
8: 'right_ankle',
9: 'left_hip',
10: 'left_knee',
11: 'left_ankle',
12: 'head_top',
13: 'neck'

Parameters
  • ann_file (str) – Path to the annotation file.

  • img_prefix (str) – Path to a directory where images are held. Default: None.

  • data_cfg (dict) – config

  • pipeline (list[dict | callable]) – A sequence of data transforms.

  • test_mode (bool) – Store True when building test or validation dataset. Default: False.

class mmpose.datasets.datasets.bottom_up.BottomUpCocoDataset(ann_file, img_prefix, data_cfg, pipeline, test_mode=False)[source]

COCO dataset for bottom-up pose estimation.

The dataset loads raw features and applies specified transforms to return a dict containing the image tensors and other information.

COCO keypoint indexes:

0: 'nose',
1: 'left_eye',
2: 'right_eye',
3: 'left_ear',
4: 'right_ear',
5: 'left_shoulder',
6: 'right_shoulder',
7: 'left_elbow',
8: 'right_elbow',
9: 'left_wrist',
10: 'right_wrist',
11: 'left_hip',
12: 'right_hip',
13: 'left_knee',
14: 'right_knee',
15: 'left_ankle',
16: 'right_ankle'
Parameters
  • ann_file (str) – Path to the annotation file.

  • img_prefix (str) – Path to a directory where images are held. Default: None.

  • data_cfg (dict) – config

  • pipeline (list[dict | callable]) – A sequence of data transforms.

  • test_mode (bool) – Store True when building test or validation dataset. Default: False.

evaluate(outputs, res_folder, metric='mAP', **kwargs)[source]

Evaluate coco keypoint results. The pose prediction results will be saved in ${res_folder}/result_keypoints.json.

Note

num_people: P num_keypoints: K

Parameters
  • outputs (list(preds, scores, image_path, heatmap)) –

    • preds (list[np.ndarray(P, K, 3+tag_num)]): Pose predictions for all people in images.

    • scores (list[P]): Scores of the predicted poses.

    • image_path (list[str]): For example, ['coco/images/val2017/000000397133.jpg']

    • heatmap (np.ndarray[N, K, H, W]): model outputs.

  • res_folder (str) – Path of directory to save the results.

  • metric (str | list[str]) – Metric to be performed. Defaults: ‘mAP’.

Returns

Evaluation results for the evaluation metric.

Return type

dict

class mmpose.datasets.datasets.bottom_up.BottomUpCocoWholeBodyDataset(ann_file, img_prefix, data_cfg, pipeline, test_mode=False)[source]

CocoWholeBodyDataset dataset for bottom-up pose estimation.

'Whole-Body Human Pose Estimation in the Wild', ECCV'2020. More details can be found in the paper.

The dataset loads raw features and applies specified transforms to return a dict containing the image tensors and other information.

In total, we have 133 keypoints for wholebody pose estimation.

COCO-WholeBody keypoint indexes:

0-16: 17 body keypoints
17-22: 6 foot keypoints
23-90: 68 face keypoints
91-132: 42 hand keypoints

Parameters
  • ann_file (str) – Path to the annotation file.

  • img_prefix (str) – Path to a directory where images are held. Default: None.

  • data_cfg (dict) – config

  • pipeline (list[dict | callable]) – A sequence of data transforms.

  • test_mode (bool) – Store True when building test or validation dataset. Default: False.

class mmpose.datasets.datasets.bottom_up.BottomUpCrowdPoseDataset(ann_file, img_prefix, data_cfg, pipeline, test_mode=False)[source]

CrowdPose dataset for bottom-up pose estimation.

The dataset loads raw features and applies specified transforms to return a dict containing the image tensors and other information.

CrowdPose keypoint indexes:

0: 'left_shoulder',
1: 'right_shoulder',
2: 'left_elbow',
3: 'right_elbow',
4: 'left_wrist',
5: 'right_wrist',
6: 'left_hip',
7: 'right_hip',
8: 'left_knee',
9: 'right_knee',
10: 'left_ankle',
11: 'right_ankle',
12: 'top_head',
13: 'neck'
Parameters
  • ann_file (str) – Path to the annotation file.

  • img_prefix (str) – Path to a directory where images are held. Default: None.

  • data_cfg (dict) – config

  • pipeline (list[dict | callable]) – A sequence of data transforms.

  • test_mode (bool) – Store True when building test or validation dataset. Default: False.

class mmpose.datasets.datasets.bottom_up.BottomUpMhpDataset(ann_file, img_prefix, data_cfg, pipeline, test_mode=False)[source]

MHPv2.0 dataset for bottom-up pose estimation.

The Multi-Human Parsing project of the Learning and Vision (LV) Group, National University of Singapore (NUS), aims to push the frontiers of fine-grained visual understanding of humans in crowd scenes. <https://lv-mhp.github.io/>

The dataset loads raw features and applies specified transforms to return a dict containing the image tensors and other information.

MHP keypoint indexes:

0: "right ankle",
1: "right knee",
2: "right hip",
3: "left hip",
4: "left knee",
5: "left ankle",
6: "pelvis",
7: "thorax",
8: "upper neck",
9: "head top",
10: "right wrist",
11: "right elbow",
12: "right shoulder",
13: "left shoulder",
14: "left elbow",
15: "left wrist",
Parameters
  • ann_file (str) – Path to the annotation file.

  • img_prefix (str) – Path to a directory where images are held. Default: None.

  • data_cfg (dict) – config

  • pipeline (list[dict | callable]) – A sequence of data transforms.

  • test_mode (bool) – Store True when building test or validation dataset. Default: False.

pipelines

class mmpose.datasets.pipelines.loading.LoadImageFromFile(to_float32=False, color_type='color', channel_order='rgb')[source]

Loading image from file.

Parameters
  • color_type (str) – Flags specifying the color type of the loaded image; candidates are 'color', 'grayscale' and 'unchanged'.

  • channel_order (str) – Order of channels; candidates are 'bgr' and 'rgb'.
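
A config-style sketch of this transform as the first stage of a pipeline (the values simply spell out the defaults from the signature above):

dict(type='LoadImageFromFile', to_float32=False, color_type='color', channel_order='rgb')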

class mmpose.datasets.pipelines.shared_transform.Albumentation(transforms, keymap=None)[source]

Albumentation augmentation (pixel-level transforms only). Adds custom pixel-level transformations from the Albumentations library. Please visit https://albumentations.readthedocs.io for more information.

Note: we only support pixel-level transforms. Please visit https://github.com/albumentations-team/albumentations#pixel-level-transforms for more information about pixel-level transforms.

An example of transforms is as follows:

[
    dict(
        type='RandomBrightnessContrast',
        brightness_limit=[0.1, 0.3],
        contrast_limit=[0.1, 0.3],
        p=0.2),
    dict(type='ChannelShuffle', p=0.1),
    dict(
        type='OneOf',
        transforms=[
            dict(type='Blur', blur_limit=3, p=1.0),
            dict(type='MedianBlur', blur_limit=3, p=1.0)
        ],
        p=0.1),
]
Parameters
  • transforms (list[dict]) – A list of Albumentation transformations

  • keymap (dict) – Contains {‘input key’:’albumentation-style key’}, e.g., {‘img’: ‘image’}.

albu_builder(cfg)[source]

Import a module from albumentations.

It resembles some of the build_from_cfg() logic.

Parameters

cfg (dict) – Config dict. It should at least contain the key "type".

Returns

The constructed object.

Return type

obj

static mapper(d, keymap)[source]

Dictionary mapper. Renames keys according to the keymap provided.

Parameters

d (dict) – old dict.

keymap (dict) – {'old_key': 'new_key'}.

Returns

new dict.

Return type

dict

class mmpose.datasets.pipelines.shared_transform.Collect(keys, meta_keys, meta_name='img_metas')[source]

Collect data from the loader relevant to the specific task.

This keeps the items in keys as they are, and collects the items in meta_keys into a meta item called meta_name. This is usually the last stage of the data loader pipeline. For example, when keys='imgs', meta_keys=('filename', 'label', 'original_shape') and meta_name='img_metas', the results will be a dict with keys 'imgs' and 'img_metas', where 'img_metas' is a DataContainer of another dict with keys 'filename', 'label', 'original_shape'.

Parameters
  • keys (Sequence[str|tuple]) – Required keys to be collected. If a tuple (key, key_new) is given as an element, the item retrieved by key will be renamed as key_new in the collected data.

  • meta_name (str) – The name of the key that contains meta information. This key is always populated. Default: "img_metas".

  • meta_keys (Sequence[str|tuple]) – Keys that are collected under meta_name. The contents of the meta_name dictionary depend on meta_keys.
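
A config-style sketch (the key names in keys and meta_keys are illustrative; actual pipelines pick whichever data and metadata their task needs):

dict(
    type='Collect',
    keys=['img', 'target', 'target_weight'],
    meta_keys=['image_file', 'center', 'scale', 'rotation', 'flip_pairs'])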

class mmpose.datasets.pipelines.shared_transform.Compose(transforms)[source]

Compose a data pipeline with a sequence of transforms.

Parameters

transforms (list[dict | callable]) – Either config dicts of transforms or transform objects.
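
A minimal runnable sketch, assuming a hypothetical image path; it chains transforms documented in this section (the mean/std values are the common ImageNet statistics, used here illustratively):

from mmpose.datasets.pipelines.shared_transform import Compose

pipeline = Compose([
    dict(type='LoadImageFromFile', channel_order='rgb'),
    dict(type='ToTensor'),
    dict(type='NormalizeTensor',
         mean=[0.485, 0.456, 0.406],   # illustrative ImageNet statistics
         std=[0.229, 0.224, 0.225]),
])
# 'image_file' is the key LoadImageFromFile reads; the path is hypothetical.
results = pipeline(dict(image_file='path/to/image.jpg'))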

class mmpose.datasets.pipelines.shared_transform.MultitaskGatherTarget(pipeline_list, pipeline_indices)[source]

Gather the targets for multitask heads.

Parameters
  • pipeline_list (list[list]) – List of pipelines for all heads.

  • pipeline_indices (list[int]) – Pipeline index of each head.

class mmpose.datasets.pipelines.shared_transform.NormalizeTensor(mean, std)[source]

Normalize the Tensor image (CxHxW) with mean and std.

Required key: 'img'. Modifies key: 'img'.

Parameters
  • mean (list[float]) – Mean values of 3 channels.

  • std (list[float]) – Std values of 3 channels.

class mmpose.datasets.pipelines.shared_transform.PhotometricDistortion(brightness_delta=32, contrast_range=(0.5, 1.5), saturation_range=(0.5, 1.5), hue_delta=18)[source]

Apply photometric distortion to the image sequentially; every transformation is applied with a probability of 0.5. The random contrast step is applied either second or second-to-last.

  1. random brightness

  2. random contrast (mode 0)

  3. convert color from BGR to HSV

  4. random saturation

  5. random hue

  6. convert color from HSV to BGR

  7. random contrast (mode 1)

  8. randomly swap channels

Parameters
  • brightness_delta (int) – delta of brightness.

  • contrast_range (tuple) – range of contrast.

  • saturation_range (tuple) – range of saturation.

  • hue_delta (int) – delta of hue.

brightness(img)[source]

Brightness distortion.

contrast(img)[source]

Contrast distortion.

convert(img, alpha=1, beta=0)[source]

Multiply with alpha and add beta, with clipping.
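
A config-style sketch of this transform with the default arguments spelled out:

dict(
    type='PhotometricDistortion',
    brightness_delta=32,
    contrast_range=(0.5, 1.5),
    saturation_range=(0.5, 1.5),
    hue_delta=18)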

class mmpose.datasets.pipelines.shared_transform.RenameKeys(key_pairs)[source]

Rename the keys.

Parameters

key_pairs (Sequence[tuple]) – Required keys to be renamed. If a tuple (key_src, key_tgt) is given as an element, the item retrieved by key_src will be renamed as key_tgt.

class mmpose.datasets.pipelines.shared_transform.ToTensor[source]

Transform image to Tensor.

Required key: 'img'. Modifies key: 'img'.

Parameters

results (dict) – contains all information about training.

class mmpose.datasets.pipelines.top_down_transform.TopDownAffine(use_udp=False)[source]

Affine transform the image to make input.

Required keys: 'img', 'joints_3d', 'joints_3d_visible', 'ann_info', 'scale', 'rotation' and 'center'. Modified keys: 'img', 'joints_3d', and 'joints_3d_visible'.

Parameters

use_udp (bool) – To use unbiased data processing. Paper ref: Huang et al. The Devil is in the Details: Delving into Unbiased Data Processing for Human Pose Estimation (CVPR 2020).

class mmpose.datasets.pipelines.top_down_transform.TopDownGenerateTarget(sigma=2, kernel=(11, 11), valid_radius_factor=0.0546875, target_type='GaussianHeatmap', encoding='MSRA', unbiased_encoding=False)[source]

Generate the target heatmap.

Required keys: 'joints_3d', 'joints_3d_visible', 'ann_info'. Modified keys: 'target', and 'target_weight'.

Parameters
  • sigma – Sigma of the heatmap Gaussian for the 'MSRA' approach.

  • kernel – Kernel of the heatmap Gaussian for the 'Megvii' approach.

  • encoding (str) – Approach to generate target heatmaps. Currently supported approaches: 'MSRA', 'Megvii', 'UDP'. Default: 'MSRA'.

  • unbiased_encoding (bool) – Option to use unbiased encoding methods. Paper ref: Zhang et al. Distribution-Aware Coordinate Representation for Human Pose Estimation (CVPR 2020).

  • keypoint_pose_distance – Keypoint pose distance for UDP. Paper ref: Huang et al. The Devil is in the Details: Delving into Unbiased Data Processing for Human Pose Estimation (CVPR 2020).

  • target_type (str) – Supported targets: 'GaussianHeatmap', 'CombinedTarget'. Default: 'GaussianHeatmap'. CombinedTarget is the combination of a classification target (response map) and a regression target (offset map). Paper ref: Huang et al. The Devil is in the Details: Delving into Unbiased Data Processing for Human Pose Estimation (CVPR 2020).
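
A config-style sketch of the common 'MSRA' Gaussian-heatmap setting (sigma=2 is a typical choice, used here illustratively):

dict(type='TopDownGenerateTarget', sigma=2, encoding='MSRA', unbiased_encoding=False)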

class mmpose.datasets.pipelines.top_down_transform.TopDownGenerateTargetRegression[source]

Generate the target regression vector (coordinates).

Required keys: ‘joints_3d’, ‘joints_3d_visible’, ‘ann_info’. Modified keys: ‘target’, and ‘target_weight’.

class mmpose.datasets.pipelines.top_down_transform.TopDownGetRandomScaleRotation(rot_factor=40, scale_factor=0.5, rot_prob=0.6)[source]

Data augmentation with random scaling & rotating.

Required key: 'scale'. Modifies keys: 'scale' and 'rotation'.

Parameters
  • rot_factor (int) – Rotating to [-2*rot_factor, 2*rot_factor].

  • scale_factor (float) – Scaling to [1-scale_factor, 1+scale_factor].

  • rot_prob (float) – Probability of random rotation.

class mmpose.datasets.pipelines.top_down_transform.TopDownHalfBodyTransform(num_joints_half_body=8, prob_half_body=0.3)[source]

Data augmentation with half-body transform. Keep only the upper body or the lower body at random.

Required keys: 'joints_3d', 'joints_3d_visible', and 'ann_info'. Modifies keys: 'scale' and 'center'.

Parameters
  • num_joints_half_body (int) – Threshold for applying the half-body transform. If the body has fewer joints than num_joints_half_body, this step is skipped.

  • prob_half_body (float) – Probability of half-body transform.

static half_body_transform(cfg, joints_3d, joints_3d_visible)[source]

Get center & scale for the half-body transform.

class mmpose.datasets.pipelines.top_down_transform.TopDownRandomFlip(flip_prob=0.5)[source]

Data augmentation with random image flip.

Required keys: 'img', 'joints_3d', 'joints_3d_visible', 'center' and 'ann_info'. Modifies keys: 'img', 'joints_3d', 'joints_3d_visible', 'center' and 'flipped'.

Parameters
  • flip (bool) – Option to perform random flip.

  • flip_prob (float) – Probability of flip.

class mmpose.datasets.pipelines.top_down_transform.TopDownRandomTranslation(trans_factor=0.15, trans_prob=1.0)[source]

Data augmentation with random translation.

Required keys: 'scale' and 'center'. Modifies key: 'center'.

Hint

bbox height: H bbox width: W

Parameters
  • trans_factor (float) – Translating center to ``[-trans_factor, trans_factor] * [W, H] + center``.

  • trans_prob (float) – Probability of random translation.
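
Putting the top-down transforms above together, a sketch of a typical training pipeline (the ordering follows the usual flip → half-body → scale/rotation → affine convention; the exact list and values are illustrative, not a prescribed config):

train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='TopDownRandomFlip', flip_prob=0.5),
    dict(type='TopDownHalfBodyTransform', num_joints_half_body=8, prob_half_body=0.3),
    dict(type='TopDownGetRandomScaleRotation', rot_factor=40, scale_factor=0.5),
    dict(type='TopDownAffine'),
    dict(type='ToTensor'),
    dict(type='NormalizeTensor', mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    dict(type='TopDownGenerateTarget', sigma=2),
    dict(type='Collect', keys=['img', 'target', 'target_weight'],
         meta_keys=['image_file', 'center', 'scale', 'rotation', 'flip_pairs']),
]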

class mmpose.datasets.pipelines.bottom_up_transform.BottomUpGenerateHeatmapTarget(sigma, use_udp=False)[source]

Generate multi-scale heatmap target for bottom-up.

Parameters
  • sigma (int) – Sigma of the heatmap Gaussian.

  • max_num_people (int) – Maximum number of people in an image.

  • use_udp (bool) – To use unbiased data processing. Paper ref: Huang et al. The Devil is in the Details: Delving into Unbiased Data Processing for Human Pose Estimation (CVPR 2020).

class mmpose.datasets.pipelines.bottom_up_transform.BottomUpGeneratePAFTarget(limb_width, skeleton=None)[source]

Generate multi-scale heatmaps and part affinity fields (PAF) target for bottom-up. Paper ref: Cao et al. Realtime Multi-Person 2D Human Pose Estimation using Part Affinity Fields (CVPR 2017).

Parameters

limb_width (int) – Limb width of part affinity fields.

class mmpose.datasets.pipelines.bottom_up_transform.BottomUpGenerateTarget(sigma, max_num_people, use_udp=False)[source]

Generate multi-scale heatmap target for bottom-up.

Parameters
  • sigma (int) – Sigma of the heatmap Gaussian.

  • max_num_people (int) – Maximum number of people in an image

  • use_udp (bool) – To use unbiased data processing. Paper ref: Huang et al. The Devil is in the Details: Delving into Unbiased Data Processing for Human Pose Estimation (CVPR 2020).

class mmpose.datasets.pipelines.bottom_up_transform.BottomUpGetImgSize(test_scale_factor, current_scale=1, use_udp=False)[source]

Get multi-scale image sizes for bottom-up, including base_size and test_scale_factor. The aspect ratio is kept, and the image is resized to results[‘ann_info’][‘image_size’] × current_scale.

Parameters
  • test_scale_factor (List[float]) – Multi-scale factors for testing.

  • current_scale (int) – The current scale. Default: 1.

  • use_udp (bool) – To use unbiased data processing. Paper ref: Huang et al. The Devil is in the Details: Delving into Unbiased Data Processing for Human Pose Estimation (CVPR 2020).

class mmpose.datasets.pipelines.bottom_up_transform.BottomUpRandomAffine(rot_factor, scale_factor, scale_type, trans_factor, use_udp=False)[source]

Data augmentation with random scaling & rotating.

Parameters
  • rot_factor (int) – The rotation angle is sampled from [-rot_factor, rot_factor].

  • scale_factor (float) – The scale factor is sampled from [1-scale_factor, 1+scale_factor].

  • scale_type – Whether the scale is w.r.t. the long or the short side of the image.

  • trans_factor – Translation factor.

  • scale_aware_sigma – Option to use scale-aware sigma.

  • use_udp (bool) – To use unbiased data processing. Paper ref: Huang et al. The Devil is in the Details: Delving into Unbiased Data Processing for Human Pose Estimation (CVPR 2020).

class mmpose.datasets.pipelines.bottom_up_transform.BottomUpRandomFlip(flip_prob=0.5)[source]

Data augmentation with random image flip for bottom-up.

Parameters

flip_prob (float) – Probability of flip.

class mmpose.datasets.pipelines.bottom_up_transform.BottomUpResizeAlign(transforms, use_udp=False)[source]

Resize multi-scale size and align transform for bottom-up.

Parameters
  • transforms (list) – A list of transform configs, e.g. ToTensor and Normalize.

  • use_udp (bool) – To use unbiased data processing. Paper ref: Huang et al. The Devil is in the Details: Delving into Unbiased Data Processing for Human Pose Estimation (CVPR 2020).

class mmpose.datasets.pipelines.bottom_up_transform.HeatmapGenerator(output_size, num_joints, sigma=-1, use_udp=False)[source]

Generate heatmaps for bottom-up models.

Parameters
  • num_joints (int) – Number of keypoints.

  • output_size (int) – Size of the feature map.

  • sigma (int) – Sigma of the heatmaps (see the Gaussian sketch after this list).

  • use_udp (bool) – To use unbiased data processing. Paper ref: Huang et al. The Devil is in the Details: Delving into Unbiased Data Processing for Human Pose Estimation (CVPR 2020).
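The standard (non-UDP) encoding places a Gaussian blob at each keypoint location. A minimal, hypothetical sketch::

    import numpy as np

    def gaussian_heatmaps(keypoints, output_size, sigma=2):
        # One [S, S] map per keypoint:
        # exp(-((x - cx)**2 + (y - cy)**2) / (2 * sigma**2)).
        heatmaps = np.zeros((len(keypoints), output_size, output_size),
                            dtype=np.float32)
        ys, xs = np.mgrid[0:output_size, 0:output_size]
        for k, (cx, cy) in enumerate(keypoints):
            heatmaps[k] = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2)
                                 / (2 * sigma ** 2))
        return heatmaps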

class mmpose.datasets.pipelines.bottom_up_transform.JointsEncoder(max_num_people, num_joints, output_size, tag_per_joint)[source]

Encodes the visible joints into (coordinate, score) pairs, where both the coordinate and the score are of int type:

(idx * output_size**2 + y * output_size + x, 1) for a visible joint, or (0, 0) otherwise (see the sketch after the parameter list).

Parameters
  • max_num_people (int) – Max number of people in an image

  • num_joints (int) – Number of keypoints

  • output_size (int) – Size of feature map

  • tag_per_joint (bool) – Option to use one tag map per joint.
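The flat coordinate packs (joint index, y, x) into a single integer, assuming a square feature map with side length output_size. A small illustrative sketch of the encoding and its inverse::

    OUTPUT_SIZE = 64  # example feature map side length

    def encode_joint(idx, x, y, output_size=OUTPUT_SIZE):
        # flat = idx * output_size**2 + y * output_size + x
        return idx * output_size ** 2 + y * output_size + x

    def decode_joint(flat, output_size=OUTPUT_SIZE):
        idx, rem = divmod(flat, output_size ** 2)
        y, x = divmod(rem, output_size)
        return idx, x, y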

class mmpose.datasets.pipelines.bottom_up_transform.PAFGenerator(output_size, limb_width, skeleton)[source]

Generate part affinity fields.

Parameters
  • output_size (int) – Size of the feature map.

  • limb_width (int) – Limb width of part affinity fields.

  • skeleton (list[list]) – Connections of joints.

class mmpose.datasets.pipelines.mesh_transform.IUVToTensor[source]

Transform an IUV image into a part index mask and a UV coordinates image. The 3 channels of an IUV image are: part index, u coordinates, v coordinates.

Required keys: ‘iuv’, ‘ann_info’. Modifies keys: ‘part_index’, ‘uv_coordinates’.

Parameters

results (dict) – Contains all information about the training sample.

class mmpose.datasets.pipelines.mesh_transform.LoadIUVFromFile(to_float32=False)[source]

Load an IUV image from file.

class mmpose.datasets.pipelines.mesh_transform.MeshAffine[source]

Affine transform the image to get the model input, and apply the same affine transform to the 2D keypoints, 3D keypoints and IUV image.

Required keys: ‘img’, ‘joints_2d’, ‘joints_2d_visible’, ‘joints_3d’, ‘joints_3d_visible’, ‘pose’, ‘iuv’, ‘ann_info’, ‘scale’, ‘rotation’ and ‘center’. Modifies keys: ‘img’, ‘joints_2d’, ‘joints_2d_visible’, ‘joints_3d’, ‘pose’, ‘iuv’.

class mmpose.datasets.pipelines.mesh_transform.MeshGetRandomScaleRotation(rot_factor=30, scale_factor=0.25, rot_prob=0.6)[source]

Data augmentation with random scaling & rotating.

Required key: ‘scale’. Modifies keys: ‘scale’ and ‘rotation’.

Parameters
  • rot_factor (int) – The rotation angle is sampled from [-2*rot_factor, 2*rot_factor].

  • scale_factor (float) – The scale factor is sampled from [1-scale_factor, 1+scale_factor].

  • rot_prob (float) – Probability of random rotation.

class mmpose.datasets.pipelines.mesh_transform.MeshRandomChannelNoise(noise_factor=0.4)[source]

Data augmentation with random channel noise.

Required keys: ‘img’. Modifies key: ‘img’.

Parameters

noise_factor (float) – Multiply each channel by a random factor sampled from ``[1 - noise_factor, 1 + noise_factor]``.

class mmpose.datasets.pipelines.mesh_transform.MeshRandomFlip(flip_prob=0.5)[source]

Data augmentation with random image flip.

Required keys: ‘img’, ‘joints_2d’, ‘joints_2d_visible’, ‘joints_3d’, ‘joints_3d_visible’, ‘center’, ‘pose’, ‘iuv’ and ‘ann_info’. Modifies keys: ‘img’, ‘joints_2d’, ‘joints_2d_visible’, ‘joints_3d’, ‘joints_3d_visible’, ‘center’, ‘pose’, ‘iuv’.

Parameters

flip_prob (float) – Probability of flip.

class mmpose.datasets.pipelines.pose3d_transform.CameraProjection(item, mode, output_name=None, camera_type='SimpleCamera', camera_param=None)[source]

Apply camera projection to joint coordinates.

Parameters
  • item (str) – The name of the pose to apply camera projection to.

  • mode (str) – The type of camera projection. Supported options are world_to_camera, world_to_pixel, camera_to_world and camera_to_pixel.

  • output_name (str|None) – The name of the projected pose. If None (default) is given, the projected pose will be stored in place.

  • camera_type (str) – The camera class name (should be registered in CAMERA).

  • camera_param (dict|None) – The camera parameter dict. See the camera class definition for more details. If None is given, the camera parameter will be obtained during processing of each data sample with the key “camera_param”.

Required keys:

item, camera_param (if camera parameters are not given in initialization)

Modified keys:

output_name
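For example, a pipeline step that projects a pose stored under a hypothetical key ‘target’ from world to camera coordinates could be configured as follows::

    # Illustrative config; 'target' and 'target_camera' are example keys.
    projection = dict(
        type='CameraProjection',
        item='target',
        mode='world_to_camera',
        output_name='target_camera',
        camera_type='SimpleCamera')  # camera_param read per data sample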

class mmpose.datasets.pipelines.pose3d_transform.CollectCameraIntrinsics(camera_param=None, need_distortion=True)[source]

Store camera intrinsics in a 1-dim array, including f, c, k, p.

Parameters
  • camera_param (dict|None) – The camera parameter dict. See the camera class definition for more details. If None is given, the camera parameter will be obtained during processing of each data sample with the key “camera_param”.

  • need_distortion (bool) – Whether the distortion parameters k and p are needed. Default: True.

Required keys:

camera_param (if camera parameters are not given in initialization)

Modified keys:

intrinsics

class mmpose.datasets.pipelines.pose3d_transform.Generate3DHeatmapTarget(sigma=2, joint_indices=None, max_bound=1.0)[source]

Generate the target 3D heatmap.

Required keys: ‘joints_3d’, ‘joints_3d_visible’, ‘ann_info’. Modified keys: ‘target’ and ‘target_weight’.

Parameters
  • sigma – Sigma of the heatmap Gaussian.

  • joint_indices (list) – Indices of joints used for heatmap generation. If None (default) is given, all joints are used.

  • max_bound (float) – The maximal value of the heatmap.

class mmpose.datasets.pipelines.pose3d_transform.GetRootCenteredPose(item, root_index, visible_item=None, remove_root=False, root_name=None)[source]

Zero-center the pose around a given root joint. Optionally, the root joint can be removed from the original pose and stored as a separate item.

Note that the root-centered joints may no longer align with some annotation information (e.g. flip_pairs, num_joints, inference_channel, etc.) due to the removal of the root joint.

Parameters
  • item (str) – The name of the pose to apply root-centering to.

  • root_index (int) – Root joint index in the pose.

  • visible_item (str) – The name of the visibility item.

  • remove_root (bool) – If True, remove the root joint from the pose.

  • root_name (str) – Optional. If not None, it will be used as the key to store the root position separated from the original pose.

Required keys:

item

Modified keys:

item, visible_item, root_name
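A hypothetical sketch of the root-centering operation on a [K, C] pose array::

    import numpy as np

    def root_center(joints, root_index, remove_root=False):
        # Subtract the root joint from all joints; optionally drop the
        # root and return it separately.
        root = joints[root_index:root_index + 1, :]  # shape [1, C]
        centered = joints - root
        if remove_root:
            centered = np.delete(centered, root_index, axis=0)
        return centered, root[0]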

class mmpose.datasets.pipelines.pose3d_transform.ImageCoordinateNormalization(item, norm_camera=False, camera_param=None)[source]

Normalize the 2D joint coordinates with the image width and height. The range [0, w] is mapped to [-1, 1], while preserving the aspect ratio.

Parameters
  • item (str|list[str]) – The name of the pose to normalize.

  • norm_camera (bool) – Whether to normalize camera intrinsics. Default: False.

  • camera_param (dict|None) – The camera parameter dict. See the camera class definition for more details. If None is given, the camera parameter will be obtained during processing of each data sample with the key “camera_param”.

Required keys:

item

Modified keys:

item (and optionally camera_param)
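A minimal sketch of the mapping, assuming both axes are divided by w/2 so the aspect ratio is preserved (x moves from [0, w] to [-1, 1])::

    import numpy as np

    def normalize_image_coords(joints_2d, w, h):
        # x' = 2x/w - 1; y' = 2y/w - h/w (same scale on both axes)
        center = np.array([w / 2, h / 2], dtype=np.float32)
        return (joints_2d - center) / (w / 2)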

class mmpose.datasets.pipelines.pose3d_transform.NormalizeJointCoordinate(item, mean=None, std=None, norm_param_file=None)[source]

Normalize the joint coordinates with the given mean and std.

Parameters
  • item (str) – The name of the pose to normalize.

  • mean (array) – Mean values of joint coordinates, in shape [K, C].

  • std (array) – Std values of joint coordinates, in shape [K, C].

  • norm_param_file (str) – Optionally load a dict containing mean and std from a file using mmcv.load.

Required keys:

item

Modified keys:

item
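The normalization itself is standard whitening. A sketch, assuming mean and std arrays of shape [K, C] stored in a file loadable with mmcv.load (the file name is hypothetical)::

    import mmcv
    import numpy as np

    joints = np.zeros((17, 3), dtype=np.float32)  # example pose, K=17, C=3

    # A dict {'mean': ..., 'std': ...} produced offline (hypothetical path).
    norm_param = mmcv.load('joint_norm_param.pkl')
    normalized = (joints - norm_param['mean']) / norm_param['std']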

class mmpose.datasets.pipelines.pose3d_transform.PoseSequenceToTensor(item)[source]

Convert a pose sequence from a numpy array to a Tensor.

The original pose sequence should have a shape of [T, K, C] or [K, C], where T is the sequence length, and K and C are the keypoint number and dimension. The converted pose sequence will have a shape of [K*C, T].

Parameters

item (str) – The name of the pose sequence.

Required keys:

item

Modified keys:

item
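A quick illustration of the shape conversion, using example sizes T=27, K=17, C=2::

    import numpy as np
    import torch

    seq = np.zeros((27, 17, 2), dtype=np.float32)  # [T, K, C]
    # [T, K*C] -> transpose -> [K*C, T] = [34, 27]
    tensor = torch.from_numpy(seq.reshape(27, -1).T.copy())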

class mmpose.datasets.pipelines.pose3d_transform.RelativeJointRandomFlip(item, flip_cfg, visible_item=None, flip_prob=0.5, flip_camera=False, camera_param=None)[source]

Data augmentation with random horizontal joint flip around a root joint.

Parameters
  • item (str|list[str]) – The name of the pose to flip.

  • flip_cfg (dict|list[dict]) –

    Configurations of the fliplr_regression function. It should contain the following arguments:

    • center_mode: The mode to set the center location on the x-axis to flip around.

    • center_x or center_index: Set the x-axis location or the root joint’s index to define the flip center.

    Please refer to the docstring of the fliplr_regression function for more details.

  • visible_item (str|list[str]) – The name of the visibility item which will be flipped accordingly along with the pose.

  • flip_prob (float) – Probability of flip.

  • flip_camera (bool) – Whether to flip horizontal distortion coefficients.

  • camera_param (dict|None) – The camera parameter dict. See the camera class definition for more details. If None is given, the camera parameter will be obtained during processing of each data sample with the key “camera_param”.

Required keys:

item

Modified keys:

item (and optionally camera_param)
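A hypothetical sketch of a regression-style horizontal flip around a given center_x: mirror the x coordinate, then swap left/right joint pairs (flip_pairs is an assumed list of (left, right) index pairs)::

    import numpy as np

    def fliplr_pose(joints, flip_pairs, center_x=0.0):
        # joints: [K, C] array with x in column 0.
        flipped = joints.copy()
        flipped[:, 0] = 2 * center_x - flipped[:, 0]
        for left, right in flip_pairs:
            flipped[[left, right]] = flipped[[right, left]]
        return flipped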

samplers

class mmpose.datasets.samplers.DistributedSampler(dataset, num_replicas=None, rank=None, shuffle=True, seed=0)[source]

DistributedSampler inheriting from torch.utils.data.DistributedSampler.

In lower versions of PyTorch, there is no shuffle argument. This child class adds one to DistributedSampler.

mmpose.utils

class mmpose.utils.StopWatch(window=1)[source]

A helper class to measure the FPS and the detailed per-phase time consumption of a video processing loop or similar scenarios.

Parameters

window (int) – The sliding window size used to calculate the running average of the elapsed time.

Example::
>>> from time import sleep
>>> stop_watch = StopWatch(window=10)
>>> while True:
...     with stop_watch.timeit('total'):
...         sleep(1)
...         # 'timeit' supports nested use
...         with stop_watch.timeit('phase1'):
...             sleep(1)
...         with stop_watch.timeit('phase2'):
...             sleep(2)
...         sleep(2)
...     report = stop_watch.report()
>>> # report == {'total': 6., 'phase1': 1., 'phase2': 2.}
report()[source]

Report timing information.

Returns

The key is the timer name and the value is the corresponding average time consumption.

Return type

dict

report_strings()[source]

Report timing information as text strings.

Returns

Each element is the information string of a timed event, in the format ‘{timer_name}: {time_in_ms}’. Specially, if the timer name is ‘_FPS_’, the result will be converted to fps.

Return type

list(str)

timeit(timer_name='_FPS_')[source]

Time a code snippet with an assigned name.

Parameters

timer_name (str) – The unique name of the code snippet of interest, used to handle multiple timers and generate reports. Note that ‘_FPS_’ is a special key with which the measurement will be reported in fps instead of milliseconds. Also see report and report_strings. Default: ‘_FPS_’.

Note

This function should always be used in a with statement, as shown in the example.

mmpose.utils.get_root_logger(log_file=None, log_level=20)[source]

Use the get_logger method in mmcv to get the root logger.

The logger will be initialized if it has not been initialized. By default a StreamHandler will be added. If log_file is specified, a FileHandler will also be added. The name of the root logger is the top-level package name, e.g., “mmpose”.

Parameters
  • log_file (str | None) – The log filename. If specified, a FileHandler will be added to the root logger.

  • log_level (int) – The root logger level. Note that only the process of rank 0 is affected, while other processes will set the level to “Error” and be silent most of the time.

Returns

The root logger.

Return type

logging.Logger
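A typical usage sketch (log_level=20 corresponds to logging.INFO; the file path is illustrative)::

    import logging
    from mmpose.utils import get_root_logger

    logger = get_root_logger(log_file='work_dirs/train.log',
                             log_level=logging.INFO)
    logger.info('Workflow started')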