atlalign package

Submodules

atlalign.augmentations module

Module creating one-to-many augmentations.

class DatasetAugmenter(original_path)[source]

Bases: object

Class that does the augmentation.

original_path

Path to where the original dataset is located.

Type

str

augment(output_path, n_iter=10, anchor=True, p_reg=0.5, random_state=None, max_corrupted_pixels=500, ds_f=8, max_trials=5)[source]

Augment the original dataset and create a new one.

Note that this does not modify the original dataset.

Parameters
  • output_path (str) – Path to where the new h5 file is stored.

  • n_iter (int) – Number of augmented samples per each sample in the original dataset.

  • anchor (bool) – If True, then the dvf is anchored before being inverted.

  • p_reg (float) – Probability that we start from a registered image (rather than the moving one).

  • random_state (int or None) – Random state for reproducibility.

  • max_corrupted_pixels (int) – Maximum number of corrupted pixels allowed for a dvf; the actual number is computed as np.sum(df.jacobian < 0).

  • ds_f (int) – Downsampling factor for inverses. 1 creates the least artifacts.

  • max_trials (int) – Max number of attempts to augment before an identity displacement is used as the augmentation.
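Examples

A minimal usage sketch; the h5 paths below are hypothetical placeholders.

>>> from atlalign.augmentations import DatasetAugmenter
>>> augmenter = DatasetAugmenter('original.h5')  # hypothetical path
>>> # 10 augmented samples per original one; the original file is untouched
>>> augmenter.augment('augmented.h5', n_iter=10, random_state=42)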

static generate_mov2art(img_mov, verbose=True, radius_max=60, use_normal=True)[source]

Generate geometric augmentation and its inverse.

load_dataset_in_memory(h5_path, dataset_name)[source]

Load a dataset of an h5 file into memory.

atlalign.base module

Fundamental building blocks of the project.

Notes

This module does not import any other module except for zoo. Be careful to keep this logic in order to prevent cyclical imports.

class DisplacementField(delta_x, delta_y)[source]

Bases: object

A class representing a 2D displacement vector field.

Notes

The dtype is enforced to be single-precision (float32) since opencv’s remap function (used for warping) does not accept double-precision (float64).

delta_x

A 2D array of dtype float32 that represents the displacement field in the x coordinate (columns). Positive values move the pixel to the right, negative move it to the left.

Type

np.ndarray

delta_y

A 2D array of dtype float32 that represents the displacement field in the y coordinate (rows). Positive values move the pixel down, negative values move it up.

Type

np.ndarray
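Examples

A minimal instantiation sketch: a constant shift of 5 pixels to the right.

>>> import numpy as np
>>> from atlalign.base import DisplacementField
>>> h, w = 320, 456
>>> delta_x = np.full((h, w), 5, dtype=np.float32)  # 5 px to the right
>>> delta_y = np.zeros((h, w), dtype=np.float32)    # no vertical shift
>>> df = DisplacementField(delta_x, delta_y)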

adjust(delta_x_max=None, delta_y_max=None, force_inside_border=True)[source]

Adjust the displacement field.

Notes

Not in place, returns a modified instance.

Parameters
  • delta_x_max (float) – Maximum absolute size of delta_x. If None, no limit is imposed.

  • delta_y_max (float) – Maximum absolute size of delta_y. If None, no limit is imposed.

  • force_inside_border (bool) – If True, then all displacement vectors that would result in leaving the image are clipped.

Returns

Adjusted DisplacementField.

Return type

DisplacementField

anchor(h_kept=0.75, w_kept=0.75, ds_f=5, smooth=0)[source]

Anchor and smoothen the displacement field.

Embeds a rectangle inside of the domain and uses it as a regular subgrid to smoothen out the original displacement field via radial basis function interpolation. Additionally makes sure that the 4 corners have zero displacements.

Parameters
  • h_kept (int or float) – If int then represents the actual height of the rectangle to be embedded. If float then a fraction of the df height.

  • w_kept (int or float) – If int then represents the actual width of the rectangle to be embedded. If float then a fraction of the df width.

  • ds_f (int) – Downsampling factor. The higher it is, the quicker the interpolation, but the more the new df differs from the original.

  • smooth (float) – If 0 then performs exact interpolation - transformation values on node points are equal to the original. If >0 then starts favoring smoothness over exact interpolation. Needs to be tuned manually.

Returns

Smoothened and anchored version of the original displacement field.

Return type

DisplacementField

property average_displacement

Average displacement per pixel.

property delta_x_scaled

Scaled version of delta_x.

property delta_y_scaled

Scaled version of delta_y.

classmethod from_file(file_path)[source]

Load displacement field from a file.

Parameters

file_path (str or pathlib.Path) – Path to where the file is located.

Returns

Instance of the Displacement field.

Return type

DisplacementField

classmethod from_transform(f_x, f_y)[source]

Instantiate displacement field from actual transformations.

Parameters
  • f_x (np.array) – 2D array of shape (h, w) representing the x coordinate of the transformation.

  • f_y (np.array) – 2D array of shape (h, w) representing the y coordinate of the transformation.

Returns

Instance of the Displacement field.

Return type

DisplacementField

classmethod generate(shape, approach='identity', **kwargs)[source]

Construct different displacement vector fields (DVF) via factory method.

Parameters
  • shape (tuple) – A tuple representing the (height, width) of the displacement field. Note that if multiple channels are passed then only the height and width are extracted.

  • approach (str, {'affine', 'affine_simple', 'control_points', 'identity', 'microsoft', 'paper', 'patch_shift'}) – What approach to use for generating the DVF.

  • kwargs – Additional parameters that are passed into the given approach function.

Returns

An instance of a Displacement field.

Return type

DisplacementField
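Examples

A short sketch of the factory; the kwargs are forwarded to the corresponding generator in atlalign.zoo (here affine_simple and its rotation parameter).

>>> from atlalign.base import DisplacementField
>>> df = DisplacementField.generate(
...     (320, 456), approach='affine_simple', rotation=0.05
... )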

property is_valid

Check whether both delta_x and delta_y are finite.

property jacobian

Compute the determinant of the Jacobian for each pixel.

mask(mask_matrix, fill_value=0)[source]

Mask a displacement field.

Notes

Not in place, returns a modified instance.

Parameters
  • mask_matrix (np.array) – An array of dtype=bool where True represents a pixel that is supposed to be unchanged. False pixels are filled with fill_value.

  • fill_value (float or tuple) – Value to fill the False pixels with. If tuple, then interpreted as (fill_value_x, fill_value_y).

Returns

A new DisplacementField instance accordingly masked.

Return type

DisplacementField

property n_pixels

Count the number of pixels in the displacement field.

Notes

Number of channels is ignored.

property norm

Norm for each pixel.

property outsiders

For each pixel, determine whether it is mapped outside of the image.

Notes

An important thing to look out for since for each outsider the interpolator cannot use grid interpolation.

plot_dvf(ds_f=8, figsize=(15, 15), ax=None)[source]

Plot displacement vector field.

Notes

Still works in a weird way.

Parameters
  • ds_f (int) – Downsampling factor, i.e. if ds_f=8 then every 8th row and every 8th column is plotted.

  • figsize (tuple) – Size of the figure.

  • ax (matplotlib.Axes) – Axes upon which to plot. If None, create a new one.

Returns

ax – Axes with the visualization.

Return type

matplotlib.Axes

plot_outside(figsize=(15, 15), ax=None)[source]

Plot all pixels that are mapped outside of the image.

Parameters
  • figsize (tuple) – Size of the figure.

  • ax (matplotlib.Axes) – Axes upon which to plot. If None, create a new one.

Returns

ax – Axes with the visualization.

Return type

matplotlib.Axes

plot_ranges(freq=10, figsize=(15, 10), kwargs_domain=None, kwargs_range=None, ax=None)[source]

Plot domain and the range of the mapping.

Parameters
  • freq (int) – Take every freq-th pixel. The higher, the sparser.

  • figsize (tuple) – Size of the figure.

  • kwargs_domain (dict or None) – If dict then matplotlib kwargs to be passed into the domain scatter.

  • kwargs_range (dict or None) – If dict then matplotlib kwargs to be passed into the range scatter.

  • ax (matplotlib.Axes) – Axes upon which to plot. If None, create a new one.

Returns

ax – Axes with the visualization.

Return type

matplotlib.Axes

pseudo_inverse(ds_f=1, interpolation_method='griddata_custom', interpolator_kwargs=None)[source]

Find the displacement field of the inverse mapping.

Notes

Dangerously approximate and imprecise. Uses irregular grid interpolation.

Parameters

  • ds_f (int, optional) – Downsampling factor for all the interpolations. Note that ds_f = 1 means no downsampling. Applied both to the x and y coordinates.

  • interpolation_method ({'griddata', 'griddata_custom', 'bspline', 'rbf'}, optional) – Interpolation method to use.

  • interpolator_kwargs (dict, optional) – Additional parameters passed to the interpolator.

Returns

An instance of the DisplacementField class representing the inverse mapping.

Return type

DisplacementField
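Examples

A short sketch; a higher ds_f trades precision for speed.

>>> from atlalign.base import DisplacementField
>>> df = DisplacementField.generate(
...     (320, 456), approach='affine_simple', translation_x=10
... )
>>> df_inv = df.pseudo_inverse(ds_f=8)  # approximate inverse mapping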

resize(new_shape)[source]

Calculate a resized displacement vector field.

Goal: df_resized.warp(img) ~ resized(df.warp(img))

Parameters

new_shape (tuple) – Represents (new_height, new_width) of the resized displacement field.

Returns

New DisplacementField with a shape of new_shape.

Return type

DisplacementField

resize_constant(new_shape)[source]

Calculate the resized displacement vector field that will have the same effect on the original image.

Goal: upsampled(df.warp(img_downsampled)) ~ df_resized.warp(img).

Parameters

new_shape (tuple) – Represents (new_height, new_width) of the resized displacement field.

Returns

New DisplacementField with a shape of new_shape.

Return type

DisplacementField

Notes

Very useful when we perform registration on a lower-resolution image and then want to resize the result back to the original higher-resolution shape.

save(path)[source]

Save displacement field as a .npy file.

Notes

Can be loaded via DisplacementField.from_file class method.

Parameters

path (str or pathlib.Path) – Path to the file.

property summary

Generate a summary of the displacement field.

Returns

summary – Summary series containing the most interesting values.

Return type

pd.Series

property transformation

Output the actual transformation rather than the displacement field.

Returns

  • f_x (np.ndarray) – A 2D array of dtype float32. For each pixel in the fixed image what is the corresponding x coordinate in the moving image.

  • f_y (np.ndarray) – A 2D array of dtype float32. For each pixel in the fixed image what is the corresponding y coordinate in the moving image.
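Notes

Assuming the convention that the displacement field is the transformation minus the identity grid (as the attribute descriptions above suggest), the relation can be sketched as:

>>> import numpy as np
>>> f_x, f_y = df.transformation  # df is some DisplacementField instance
>>> h, w = f_x.shape
>>> x, y = np.meshgrid(np.arange(w), np.arange(h))  # identity grid
>>> np.allclose(f_x, x + df.delta_x)  # transformation = identity + delta
True
>>> np.allclose(f_y, y + df.delta_y)
True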

warp(img, interpolation='linear', border_mode='constant', c=0)[source]

Warp an input image based on the inner displacement field.

Parameters
  • img (np.ndarray) –

    Input image to which we will apply the transformation. Currently the only 3 supported dtypes are uint8,
    float32 and float64. The warped image gets a dtype according to the following mapping (input dtype → output dtype):
    • uint8 → uint8

    • float32 → float32

    • float64 → float32

  • interpolation (str, {'nearest', 'linear', 'cubic', 'area', 'lanczos'}) – Regular grid interpolation method to be used.

  • border_mode (str, {'constant', 'replicate', 'reflect', 'wrap', 'reflect101', 'transparent'}) – How to fill values that fall outside of the image. See references for a detailed explanation.

  • c (float) – Only used if border_mode=’constant’ and represents the fill value.

Returns

warped_img – Warped image. The dtype follows the mapping listed above (in particular, a float64 input yields a float32 output).

Return type

np.ndarray
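Examples

A minimal warping sketch with one of the supported dtypes.

>>> import numpy as np
>>> from atlalign.base import DisplacementField
>>> img = np.random.rand(320, 456).astype(np.float32)
>>> df = DisplacementField.generate(
...     img.shape, approach='affine_simple', rotation=0.05
... )
>>> warped_img = df.warp(img, interpolation='linear', border_mode='constant', c=0)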

warp_annotation(img, approach='opencv')[source]

Warp an input annotation image based on the displacement field.

If displacement falls outside of the image the logic is to replicate the border. This approach guarantees that no new labels are created.

Notes

If approach is ‘scipy’ then scipy.spatial.cKDTree is used in the background with the default Euclidean distance and exactly 1 nearest neighbor.

Parameters
  • img (np.ndarray) – Input annotation image. The allowed dtypes are currently int8, int16 and int32.

  • approach (str, {'scipy', 'opencv'}) – Approach to be used. Currently ‘opencv’ is much faster.

Returns

warped_img – Warped image.

Return type

np.ndarray

atlalign.data module

A set of function generating simple datasets.

Notes

All returned np.ndarrays should have dtype=np.float32 and intensities in range [0, 1] to prevent scaling issues within ML models.

annotation_volume(path=None)[source]

Output a dataset created from 528 consecutive coronal slice annotations.

Notes

As opposed to other datasets in this module the output ndim is 3 since we are not expecting to use this as a channel in an input.

Parameters

path (str or None or LocalPath) – An absolute path to the underlying .npy file. If not specified then a default one is used.

Returns

x_atlas – An array of shape (528, 320, 456) representing the consecutive coronal slices. The dtype is np.int32 and the numbers represent distinct classes.

Return type

np.ndarray

circles(n_samples, shape, radius, n_levels=3, random_state=None)[source]

Generate simple nested circles whose intensities gradually change.

Parameters
  • n_samples (int) – Number of samples to generate.

  • shape (tuple) – Represents the (height, width) of the output image (not the circle).

  • radius (int or tuple) – If int, then the outer circle always has the same radius. If tuple, then represents (radius_min, radius_max) and the actual radius for a given sample is sampled from a uniform distribution.

  • n_levels (int or tuple, optional) – If int, then a fixed number of levels (nested circles). If tuple, then (n_levels_min, n_levels_max) and sampled uniformly.

  • random_state (int, optional) – If int, then results are reproducible.

Returns

dataset – Of shape (n_samples, shape[0], shape[1], 1).

Return type

np.ndarray
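Examples

For instance, a small toy dataset (shapes and intensity ranges as documented in the module notes).

>>> from atlalign.data import circles
>>> dataset = circles(10, (64, 64), radius=(10, 25), n_levels=3, random_state=0)
>>> dataset.shape  # (n_samples, height, width, 1), dtype float32
(10, 64, 64, 1)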

manual_registration(path=None)[source]

Return all manual registrations done with the new labeling tool.

Parameters

path (str or None or LocalPath) – An absolute path to the underlying .h5 file. If not specified then a default one is used.

Returns

res – Dictionary where keys are corresponding dataset names. The values are numpy arrays.

Return type

dict

nissl_volume(path=None)[source]

Output a dataset created from 528 consecutive coronal slices with Nissl staining.

Parameters

path (str or None or LocalPath) – An absolute path to the underlying .npy file. If not specified then a default one is used.

Returns

x_atlas – An array of shape (528, 320, 456, 1) representing the consecutive coronal slices. The dtype is np.float32.

Return type

np.ndarray

rectangles(n_samples, shape, height, width, n_levels=3, random_state=None)[source]

Generate simple rectangles whose intensities gradually change.

Parameters
  • n_samples (int) – Number of samples to generate.

  • shape (tuple) – Represents the (height, width) of the output image (not the rectangle).

  • height (int or tuple) – If int, then fixed size. If tuple then (height_min, height_max) and sampled uniformly.

  • width (int or tuple) – If int, then fixed size. If tuple then (width_min, width_max) and sampled uniformly.

  • n_levels (int or tuple, optional) – If int, then fixed levels. If tuple, then (n_levels_min, n_levels_max) and sampled uniformly.

  • random_state (int, optional) – If int, then results are reproducible.

Returns

dataset – Of shape (n_samples, shape[0], shape[1], 1).

Return type

np.ndarray

segmentation_collapsing_labels(path=None)[source]

Segmentation collapsing tree.

Parameters

path (str or None or LocalPath) – An absolute path to the underlying .json file. If not specified then a default one is used.

Returns

json_file – Dictionary containing all the labels in a tree structure.

Return type

dict

atlalign.metrics module

A module implementing some useful metrics.

Notes

The metrics depend on what we apply them to. In general there are 4 types of metrics:
  • DVF (=multioutput regression)

  • Annotation (=per pixel classification)

  • Image

  • Keypoint metrics

These metrics use numpy and are supposed to be used locally after a forward pass.

In DVF Metrics, the word ‘combined’ denotes the fact that we are performing multioutput regression. Each of the metrics should always return a tuple of (metric_average, metric_per_output).

angular_error_of(y_true, y_pred, weighted=False)[source]

Compute angular error between two displacement fields.

Parameters
  • y_true (np.ndarray or list) – If np.ndarray then expected shape is (N, h, w, 2) and represents true samples of displacement fields. If list then elements are instances of atlalign.base.DisplacementField.

  • y_pred (np.ndarray or list) – If np.ndarray then expected shape is (N, h, w, 2) and represents predicted samples of displacement fields. If list then elements are instances of atlalign.base.DisplacementField.

  • weighted (bool) – Only applicable in cases where N=1. The norm of y_true is used to create the weights for the average.

Returns

  • angular_error_average (float) – An average angular error over all samples and pixels. If weighted=True then it is a weighted average where the weights are derived from the norm of the y_true.

  • angular_error_per_output (np.ndarray) – An np.ndarray of shape (h, w) representing an average over all samples of angular error.
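Examples

A sketch of the (metric_average, metric_per_output) convention shared by the DVF metrics; the arrays are random stand-ins.

>>> import numpy as np
>>> from atlalign.metrics import angular_error_of
>>> y_true = np.random.rand(5, 320, 456, 2)
>>> y_pred = np.random.rand(5, 320, 456, 2)
>>> avg, per_output = angular_error_of(y_true, y_pred)
>>> per_output.shape
(320, 456)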

correlation_combined(y_true, y_pred)[source]

Compute combined version of correlation.

Notes

Slow.

Parameters
  • y_true (np.ndarray or list) – If np.ndarray then expected shape is (N, h, w, 2) and represents true samples of displacement fields. If list then elements are instances of atlalign.base.DisplacementField.

  • y_pred (np.ndarray or list) – If np.ndarray then expected shape is (N, h, w, 2) and represents predicted samples of displacement fields. If list then elements are instances of atlalign.base.DisplacementField.

Returns

  • correlation_average (float) – Mean correlation.

  • correlation_per_output (np.ndarray) – An np.ndarray of shape (h, w, 2) representing individual correlation scores for each regression output.

cross_correlation_img(y_true, y_pred, mask=None)[source]

Compute the cross correlation metric between two images.

Parameters
  • y_true (np.array) – Image 1. Either (h, w) or (N, h, w). If (N, h, w), the decorator multiple_images_decorator takes care of the sample dimension.

  • y_pred (np.array) – Image 2. Either (h, w) or (N, h, w). If (N, h, w), the decorator multiple_images_decorator takes care of the sample dimension.

  • mask (np.array, optional) – Optional, can be specified to have the computation carried out on a precise area.

Returns

cc – The Cross-Correlation value. Similarity metric, the higher the more similar the images are.

Return type

float

demons_img(y_true, y_pred, mask=None)[source]

Compute the demons metric between two images.

Parameters
  • y_true (np.array) – Image 1. Either (h, w) or (N, h, w). If (N, h, w), the decorator multiple_images_decorator takes care of the sample dimension.

  • y_pred (np.array) – Image 2. Either (h, w) or (N, h, w). If (N, h, w), the decorator multiple_images_decorator takes care of the sample dimension.

  • mask (np.array, optional) – Optional, can be specified to have the computation carried out on a precise area.

Returns

demons – The demons value. Loss metric, the lower the more similar the images are.

Return type

float

dice_score(y_true, y_pred, k=0, disable_check=False, excluded_labels=None)[source]

Compute dice score of a class k equally weighted over all samples.

Notes

If the class is not present in either of the images (a ‘0/0’ type of situation), we simply skip this sample and do not consider it for the average.

Parameters
  • y_true (np.ndarray) – A np.ndarray of shape (N, h, w) such that the first dimension represents the sample. The ground truth annotation.

  • y_pred (np.ndarray) – A np.ndarray of shape (N, h, w) such that the first dimension represents the sample. The predicted annotation.

  • k (int or float or None) – A class label. If None, then averaging based on label distribution in each true image is performed.

  • disable_check (bool) – If True, then checks are disabled. Used when recursively calling the function.

  • excluded_labels (None or list) – If None then no effect. If a list of ints then they won't be used in the averaging over labels (in case k is None).

Returns

  • dice_average (float) – An average dice over all samples.

  • dice_per_sample (np.ndarray, shape = (N,)) – Dice score per each sample. Note that if the label does not occur in either of the images then it is equal to np.nan.

evaluate(y_true, y_pred, imgs_mov, img_ids, ps, dataset_ids, depths=())[source]

Evaluate all relevant metrics in a per-sample fashion.

Parameters
  • y_true (np.ndarray) – Expected shape is (N, h, w, 2); represents true samples of displacement fields.

  • y_pred (np.ndarray) – Expected shape is (N, h, w, 2); represents predicted samples of displacement fields.

  • imgs_mov (np.ndarray) – Array of shape (N, h, w) representing the moving images. If dtype=float32 then no division. If uint8 then values are divided by 255 and cast to float32.

  • img_ids (np.array) – Array of shape (N, ) representing the image ids.

  • dataset_ids (np.array) – Array of shape (N, ) representing the dataset ids.

  • depths (tuple) – Tuple of different depths to compute the intersection over union score.

Returns

  • result (pd.DataFrame) – All results even containing array entries.

  • results_viewable (pd.DataFrame) – Results without array entries.

evaluate_single(deltas_true, deltas_pred, img_mov, p=None, avol=None, collapsing_labels=None, deltas_pred_inv=None, deltas_true_inv=None, ds_f=4, depths=())[source]

Evaluate a single sample.

Parameters
  • deltas_true (DisplacementField or np.ndarray) – If np.ndarray then of shape (height, width, 2) representing deltas_xy of ground truth.

  • deltas_pred (DisplacementField or np.ndarray) – If np.ndarray then of shape (height, width, 2) representing deltas_xy of prediction.

  • img_mov (np.ndarray) – Moving image.

  • p (int) – Coronal section in microns.

  • avol (np.ndarray or None) – Annotation volume of shape (528, 320, 456). If None then loaded via annotation_volume.

  • collapsing_labels (dict or None) – Dictionary for segmentation collapsing. If None then loaded via segmentation_collapsing_labels

  • deltas_pred_inv (None or np.ndarray) – If np.ndarray then of shape (height, width, 2) representing inv_deltas_xy of the prediction. If not provided, it is computed from deltas_pred.

  • deltas_true_inv (None or np.ndarray) – If np.ndarray then of shape (height, width, 2) representing inv_deltas_xy of the truth. If not provided, it is computed from deltas_true.

  • ds_f (int) – Downsampling factor for numerical inverses.

  • depths (tuple) – Tuple of integers representing all depths to compute IOU for. If empty, no IOU computation takes place.

Returns

results – Relevant metrics.

Return type

pd.Series

improvement_kp(y_true, y_pred, y_init)[source]

Compute improvement ratio with respect to initial keypoints.

Parameters
  • y_true (np.array) – Array of shape (N, 2) where N is the number of samples. Ground truth positions in the reference space.

  • y_pred (np.array) – Array of shape (N, 2) where N is the number of samples. Predicted positions in the reference space.

  • y_init (np.array) – Array of shape (N, 2) where N is the number of samples. Initial positions in the reference space.

Returns

  • percent_improved (float) – Percent of predicted keypoints that were better than the initial ones.

  • mask (np.array) – Array of booleans representing which predicted keypoints achieved a higher TRE than the initial one.

iou_score(y_true, y_pred, k=0, disable_check=False, excluded_labels=None)[source]

Compute intersection over union of a class k equally weighted over all samples.

Notes

If the class is not present in either of the images (a ‘0/0’ type of situation), we simply skip this sample and do not consider it for the average.

Parameters
  • y_true (np.ndarray) – A np.ndarray of shape (N, h, w) such that the first dimension represents the sample. The ground truth annotation.

  • y_pred (np.ndarray) – A np.ndarray of shape (N, h, w) such that the first dimension represents the sample. The predicted annotation.

  • k (int or float or None) – A class label. If None, then averaging based on label distribution in each true image is performed.

  • disable_check (bool) – If True, then checks are disabled. Used when recursively calling the function.

  • excluded_labels (None or list) – If None then no effect. If a list of ints then they won't be used in the averaging over labels (in case k is None).

Returns

  • iou_average (float) – An average IOU over all samples.

  • iou_per_sample (np.ndarray, shape = (N,)) – IOU score per each sample. Note that if the label does not occur in either of the images then it is equal to np.nan.

mae_combined(y_true, y_pred)[source]

Compute combined version of mean absolute error.

Notes

A difference between this implementation and scikit-learn is that the inputs here y_true, y_pred are custom made for our registration problem.

Parameters
  • y_true (np.ndarray or list) – If np.ndarray then expected shape is (N, h, w, 2) and represents true samples of displacement fields. If list then elements are instances of atlalign.base.DisplacementField.

  • y_pred (np.ndarray or list) – If np.ndarray then expected shape is (N, h, w, 2) and represents predicted samples of displacement fields. If list then elements are instances of atlalign.base.DisplacementField.

Returns

  • mae_average (float) – A combined MAE.

  • mae_per_output (np.ndarray) – An np.ndarray of shape (h, w, 2) representing individual mae scores for each regression output.

mae_img(y_true, y_pred, mask=None)[source]

Compute the mean absolute error between two images.

Parameters
  • y_true (np.array) – Image 1. Either (h, w) or (N, h, w). If (N, h, w), the decorator multiple_images_decorator takes care of the sample dimension.

  • y_pred (np.array) – Image 2. Either (h, w) or (N, h, w). If (N, h, w), the decorator multiple_images_decorator takes care of the sample dimension.

  • mask (np.array, optional) – Optional, can be specified to have the computation carried out on a precise area.

Returns

mae – The mean absolute error (MAE) metric. Loss metric, the lower the more similar the images are.

Return type

float

mi_img(y_true, y_pred, mask=None, metric_type='MattesMutualInformation')[source]

Compute the mutual information (MI) between two images.

Parameters
  • y_true (np.array) – Image 1. Either (h, w) or (N, h, w). If (N, h, w), the decorator multiple_images_decorator takes care of the sample dimension.

  • y_pred (np.array) – Image 2. Either (h, w) or (N, h, w). If (N, h, w), the decorator multiple_images_decorator takes care of the sample dimension.

  • mask (np.array, optional) – Optional, can be specified to have the computation carried out on a precise area.

  • metric_type (str, {'MattesMutualInformation', 'JointHistogramMutualInformation'}) – Type of mutual information computation.

Returns

mi – The mutual information (MI) metric. Similarity metric, the higher the more similar the images are.

Return type

float

mse_combined(y_true, y_pred)[source]

Compute combined version of mean squared error.

Notes

A difference between this implementation and scikit-learn is that the inputs here y_true, y_pred are custom made for our registration problem.

Parameters
  • y_true (np.ndarray or list) – If np.ndarray then expected shape is (N, h, w, 2) and represents true samples of displacement fields. If list then elements are instances of atlalign.base.DisplacementField.

  • y_pred (np.ndarray or list) – If np.ndarray then expected shape is (N, h, w, 2) and represents predicted samples of displacement fields. If list then elements are instances of atlalign.base.DisplacementField.

Returns

  • mse_average (float) – A combined mse.

  • mse_per_output (np.ndarray) – An np.ndarray of shape (h, w, 2) representing individual mse scores for each regression output.

mse_img(y_true, y_pred, mask=None)[source]

Compute the mean-squared error between two images.

Parameters
  • y_true (np.array) – Image 1. Either (h, w) or (N, h, w). If (N, h, w), the decorator multiple_images_decorator takes care of the sample dimension.

  • y_pred (np.array) – Image 2. Either (h, w) or (N, h, w). If (N, h, w), the decorator multiple_images_decorator takes care of the sample dimension.

  • mask (np.array, optional) – Optional, can be specified to have the computation carried out on a precise area.

Returns

mse – The mean-squared error (MSE) metric. Loss metric, the lower the more similar the images are.

Return type

float

multiple_images_decorator(fun)[source]

Enhance a function with iteration over the samples.

Parameters

fun (callable) – Callable whose functionality will be enhanced.

Returns

wrapper_fun – Enhanced version of fun callable.

Return type

callable

perceptual_loss_img(y_true, y_pred, model='net-lin', net='vgg')[source]

Compute the perceptual loss (PL) between two images.

Parameters
  • y_true (np.array) – Image 1. Either (h, w) or (N, h, w). If (N, h, w), the decorator multiple_images_decorator takes care of the sample dimension.

  • y_pred (np.array) – Image 2. Either (h, w) or (N, h, w). If (N, h, w), the decorator multiple_images_decorator takes care of the sample dimension.

  • model (str, {'net', 'net-lin'}) – Type of model (cf lpips_tf package).

  • net (str, {'vgg', 'alex'}) – Type of network (cf lpips_tf package).

Returns

pl – The Perceptual Loss (PL) metric. Loss metric, the lower the more similar the images are.

Return type

float

Notes

We use the decorator just to make sure we do not run out of memory during a forward pass. Also, it is fully convolutional, but if the images are too small it might run into issues.

psnr_img(y_true, y_pred, mask=None, data_range=None)[source]

Compute the peak signal to noise ratio (PSNR) for an image.

Parameters
  • y_true (np.array) – Image 1. Either (h, w) or (N, h, w). If (N, h, w), the decorator multiple_images_decorator takes care of the sample dimension.

  • y_pred (np.array) – Image 2. Either (h, w) or (N, h, w). If (N, h, w), the decorator multiple_images_decorator takes care of the sample dimension.

  • mask (np.array, optional) – Optional, can be specified to have the computation carried out on a precise area.

  • data_range (int) – The data range of the input image (distance between minimum and maximum possible values). By default, this is estimated from the image data-type.

Returns

psnr – The PSNR metric. Similarity metric, the higher the more similar the images are.

Return type

float


r2_combined(y_true, y_pred)[source]

Compute combined version of r2.

Notes

A difference between this implementation and scikit-learn is that the inputs here y_true, y_pred are custom made for our registration problem.

Parameters
  • y_true (np.ndarray or list) – If np.ndarray then expected shape is (N, h, w, 2) and represents true samples of displacement fields. If list then elements are instances of atlalign.base.DisplacementField.

  • y_pred (np.ndarray or list) – If np.ndarray then expected shape is (N, h, w, 2) and represents predicted samples of displacement fields. If list then elements are instances of atlalign.base.DisplacementField.

Returns

  • r2_average (float) – A combined r2.

  • r2_per_output (np.ndarray) – An np.ndarray of shape (h, w, 2) representing individual r2 scores for each regression output.

rtre_kp(y_true, y_pred, h, w, weighted=False)[source]

Compute relative target registration error.

Parameters
  • y_true (np.array) – Array of shape (N, 2) where N is the number of samples.

  • y_pred (np.array) – Array of shape (N, 2) where N is the number of samples.

  • h (np.array) – Height of the image.

  • w (np.array) – Width of the image.

  • weighted (bool) – If True, then the final TRE is weighted by y_true.

Returns

  • mean_tre (float) – Mean target registration error over all keypoint pairs.

  • rtre (np.array) – Array of shape (N,) where elements represent the rTRE of a given keypoint pair.

  • weights (np.array) – Array of shape (N,). If weighted is False then simply 1 / N. Otherwise the weights are derived from y_true.


ssmi_img(y_true, y_pred)[source]

Compute the structural similarity between two images.

Parameters
  • y_true (np.array) – Image 1. Either (h, w) or (N, h, w). If (N, h, w), the decorator multiple_images_decorator takes care of the sample dimension.

  • y_pred (np.array) – Image 2. Either (h, w) or (N, h, w). If (N, h, w), the decorator multiple_images_decorator takes care of the sample dimension.

Returns

ssmi – The structural similarity (SSMI) metric. Similarity metric, the higher the more similar the images are.

Return type

float

tre_kp(y_true, y_pred, weighted=False)[source]

Compute (absolute) target registration error.

Parameters
  • y_true (np.array) – Array of shape (N, 2) where N is the number of samples.

  • y_pred (np.array) – Array of shape (N, 2) where N is the number of samples.

  • weighted (bool) – If True, then the final TRE is weighted by y_true.

Returns

  • mean_tre (float) – Mean target registration error over all keypoint pairs.

  • tre (np.array) – Array of shape (N,) where elements represent the TRE of a given keypoint pair.

  • weights (np.array) – Array of shape (N,). If weighted is False then simply 1 / N. Otherwise the weights are derived from y_true.


vector_distance_combined(y_true, y_pred)[source]

Compute combined version of vector distance.

Parameters
  • y_true (np.ndarray or list) – If np.ndarray then expected shape is (N, h, w, 2) and represents true samples of displacement fields. If list then elements are instances of atlalign.base.DisplacementField.

  • y_pred (np.ndarray or list) – If np.ndarray then expected shape is (N, h, w, 2) and represents predicted samples of displacement fields. If list then elements are instances of atlalign.base.DisplacementField.

Returns

  • vector_distance_average (float) – An average vector distance over all samples and pixels.

  • vector_distance_per_output (np.ndarray) – An np.ndarray of shape (h, w) representing an average over all samples of vector distance.

atlalign.nn module

Architecture generators.

supervised_global_model_factory(filters=(16, 32, 64), dense_layers=(10,), losses=('ncc_9', 'grad'), losses_weights=(1, 0.1), n_gpus=1, optimizer='rmsprop', mlflow_log=False, use_lambda=False)[source]

Generate a global alignment network.

Parameters
  • filters (tuple) – Tuple of filter sizes.

  • dense_layers (None or tuple) – If None then global average pooling is applied (the last conv layer needs to have 6 channels). If tuple then represents the number of nodes in each respective dense layer. Note that in the background a final layer of 6 nodes is created.

  • losses (tuple) –

    • loss to apply to registered images - needs to be a key in the ALL_IMAGE_LOSSES dictionary

    • loss to apply to predicted dvfs - needs to be a key in the ALL_DVF_LOSSES dictionary

  • losses_weights (tuple) – Two element tuple representing the weights for each separate loss function.

  • n_gpus (int) – Number of gpus to use.

  • optimizer (str or Keras.Optimizer) – Optimizer to be used.

  • use_lambda (bool) – If True, then the network includes Lambda layers. It is advisable not to include them, since serialization is then straightforward.

supervised_model_factory(start_filters=(16,), downsample_filters=(16, 32, 32, 32), middle_filters=(32,), upsample_filters=(32, 32, 32, 32), end_filters=(), losses=('ncc_9', 'grad'), losses_weights=(1, 0.1), n_gpus=1, optimizer='rmsprop', compute_inv=False, mlflow_log=False, use_lambda=False)[source]

Create a generic supervised registration unet.

The mini blocks going down are: pooling - convolution - activation. The mini blocks going up are: upsampling - convolution - activation - merge - convolution - activation. The standard block is: convolution - activation.

Notes

If compute_inv=False, then:
  • Inputs:
    • stacked reference and moving images - (None, 320, 456, 2)

  • Outputs
    • registered images - (None, 320, 456, 1)

    • predicted dvfs - (None, 320, 456, 2)

If compute_inv=True, then:
  • Inputs:
    • stacked reference and moving images (None, 320, 456, 2)

    • stacked moving and reference images (None, 320, 456, 2)

  • Outputs
    • registered images - (None, 320, 456, 1)

    • predicted dvfs - (None, 320, 456, 2)

    • predicted inverse dvfs - (None, 320, 456, 2)

Parameters
  • start_filters (tuple) – The size represents the number of starting convolutions (before downsizing) and the respective elements are the number of filters.

  • downsample_filters (tuple) – The size represents the number of downsizing convolutions and the respective elements are the number of filters.

  • middle_filters (tuple) – The size represents the number of standard convblocks in the middle of the net (with the most downsampled feature representation). The respective elements are the number of filters.

  • upsample_filters (tuple) – The size represents the number of upsample convblocks of the decoder part of the net. The respective elements are the number of filters.

  • end_filters (tuple) – The size represents the number of standard convblocks in the end of the net. The respective elements are the number of filters.

  • losses (tuple) –

    If compute_inv=False then
    • loss to apply to registered images - needs to be a key in the ALL_IMAGE_LOSSES dictionary

    • loss to apply to predicted dvfs - needs to be a key in the ALL_DVF_LOSSES dictionary

    If compute_inv=True then
    • loss to apply to registered images - needs to be a key in the ALL_IMAGE_LOSSES dictionary

    • loss to apply to predicted dvfs - needs to be a key in the ALL_DVF_LOSSES dictionary

    • loss to apply to predicted inverse dvfs - needs to be a key in the ALL_DVF_LOSSES dictionary

  • losses_weights (tuple) – Weights for each separate loss function (again will be 3 elements if compute_inv=True else 2 elements).

  • n_gpus (int) – Number of gpus to use.

  • optimizer (str or Keras.Optimizer) – Optimizer to be used.

  • compute_inv (bool) – If True then the inverse transformation is also predicted. This is achieved by switching the reference and moving images in the input and creating a new keras input out of it. Note that it affects the outputs of the model.

  • mlflow_log (bool) – If True, then assumes we are inside of an MLFlow run context manager and all input parameters are logged.

  • use_lambda (bool) – If True, then the network includes Lambda layers. It is advisable not to include them, since serialization is then straightforward.

Returns

Compiled model with the desired architecture.

Return type

keras.Model
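Examples

A minimal instantiation sketch using the documented defaults; the losses are keys into the ALL_IMAGE_LOSSES and ALL_DVF_LOSSES dictionaries.

>>> from atlalign.nn import supervised_model_factory
>>> model = supervised_model_factory(
...     losses=('ncc_9', 'grad'),
...     losses_weights=(1, 0.1),
...     compute_inv=False,
...     use_lambda=False,
... )
>>> model.summary()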


atlalign.utils module

Collection of helper classes and functions that do not deserve to be in base.py.

Notes

This module cannot import from anywhere else within this project to prevent circular dependencies.

find_labels_dic(segmentation_array, dic, chosen_depth)[source]

Collapse existing labels into parent labels corresponding to the tree provided in a dictionary.

Parameters
  • segmentation_array (np.array) – Annotation array before the collapsing of the labels.

  • dic (dict) – Dictionary of tree of labels.

  • chosen_depth (int) – Depth at which to collapse the labels.

Returns

new_segmentation_array – New annotation array with the labels collapsed at the desired depth. If a specific label does not exist in the tree it is assigned -1.

Return type

np.array

griddata_custom(points, values_f_1, values_f_2, xi)[source]

Run a griddata extension that performs only one triangulation.

Notes

The scipy implementation does not allow separating triangulation from interpolation. Since we need to evaluate 2 different functions on the same non-regular grid of points, the triangulation can be done just once and stored.

Parameters
  • points (np.ndarray) – An array of shape (N, 2) where each row represents one point in 2D for which we know the function value.

  • values_f_1 (np.ndarray) – An array of shape (N,) where each row represents a value of function f_1 on the corresponding point in points.

  • values_f_2 (np.ndarray) – An array of shape (N,) where each row represents a value of function f_2 on the corresponding point in points.

  • xi (tuple) – Tuple of 2 np.ndarray of shapes (h, w) representing the x and y coordinates of the points where we want to interpolate data. Note that this is simply the result of np.meshgrid if our points of interest lie on a regular grid.

Returns

  • f_1_interpolation_on_xi (np.ndarray) – An array of shape (h, w) representing the interpolation of f_1 on the xi points.

  • f_2_interpolation_on_xi (np.ndarray) – An array of shape (h, w) representing the interpolation of f_2 on the xi points.

References

https://stackoverflow.com/questions/20915502/speedup-scipy-griddata-for-multiple-interpolations-between-two-irregular-grids
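The same trick can be sketched with plain scipy (an illustration of the idea rather than the atlalign implementation): build the Delaunay triangulation once and reuse it for both interpolants.

>>> import numpy as np
>>> from scipy.interpolate import LinearNDInterpolator
>>> from scipy.spatial import Delaunay
>>> points = np.random.rand(100, 2)        # (N, 2) known locations
>>> values_f_1 = np.random.rand(100)       # f_1 evaluated on points
>>> values_f_2 = np.random.rand(100)       # f_2 evaluated on points
>>> x, y = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
>>> tri = Delaunay(points)                 # expensive step, done only once
>>> f_1_on_xi = LinearNDInterpolator(tri, values_f_1)(x, y)
>>> f_2_on_xi = LinearNDInterpolator(tri, values_f_2)(x, y)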

atlalign.visualization module

A collection of utils for all visualization scripts.

chain_predict(model, inp, n_iterations=1)[source]

Run alignment recursively.

Parameters
  • model (keras.models.Model) – A trained model whose inputs have shape (batch_size, h, w, 2) - the last dimension represents stacking of atlas and input image. The outputs are of the same shape where the last dimension represents stacking of delta_x and delta_y of the displacement field.

  • inp (np.ndarray) – An array of shape (h, w, 2) or (1, h, w, 2) representing the atlas and input image.

Returns

unwarped_img_list – List of np.ndarrays of shape (h, w) representing the unwarped image at each iteration.

Return type

list

create_animation(df, img, frames_per_second=30, n_seconds=3, repeat=False, blit=False, cmap='gray', img_ref=None, n_ref=3, duration_ref=1)[source]

Create a slow motion animation of a warping.

Parameters
  • df (DisplacementField or list) – If an instance of the DisplacementField class, then it represents a single transformation. If a list of DisplacementField instances, then it represents a pipeline of different transformations to be applied in the respective order.

  • img (np.ndarray) – Image to be warped. Needs to have the same shape as the df and dtype either uint8 or float32.

  • frames_per_second (int, default 30) – Number of frames per second.

  • n_seconds (int, default 3) – Number of seconds one df will last. Total number of seconds is len(df) * n_seconds.

  • repeat (bool) – If True, animation is automatically restarted.

  • blit (bool) – Controls whether blitting is used to optimize drawing.

  • cmap (str, default 'gray') – Only applicable if the image is grayscale.

  • img_ref (np.array or None) – If supplied, then at the end of the animation we switch between the moving and registered image n_ref times, where each switch lasts duration_ref seconds.

  • n_ref (int) – Number of times to switch between img_ref and the registered image. Only active when img_ref is not None.

  • duration_ref (int) – Number of seconds img_ref is visible per switch.

Returns

ani – Animation object that can be viewed in a jupyter notebook, for example.

Return type

matplotlib.animation.ArtistAnimation

Notes

To make it viewable in a jupyter notebook one needs to do the following

>>> from matplotlib import rc
>>> rc('animation', html='jshtml')

If you get errors with these settings, consider replacing html=’jshtml’ with html=’html5’ above.

Additionally, it is necessary to install the ffmpeg package. On Ubuntu this can be done with:

sudo apt install ffmpeg

create_grid(shape, grid_spacing=20, grid_thickness=3)[source]

Create a grid to see warpings clearly.

Parameters
  • shape (tuple) – Tuple of (height, width) which represent the shape of the output image.

  • grid_spacing (int) – Both horizontal and vertical spacing of consecutive lines.

  • grid_thickness (int) – Thickness of all lines.

Returns

img_grid – An image of the grid.

Return type

np.ndarray
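A natural use is to warp the grid itself to make a deformation visible; a short sketch (assuming the returned grid image has one of the dtypes accepted by DisplacementField.warp):

>>> from atlalign.base import DisplacementField
>>> from atlalign.visualization import create_grid
>>> shape = (320, 456)
>>> img_grid = create_grid(shape, grid_spacing=20, grid_thickness=3)
>>> df = DisplacementField.generate(shape, approach='affine_simple', shear=0.1)
>>> warped_grid = df.warp(img_grid)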

create_segmentation_image(segmentation_array, colors_dict=None)[source]

Turn segmentation array into a colorful image.

Parameters
  • segmentation_array (np.array) – An array of shape (h, w) and dtype int where each number represents a unique class.

  • colors_dict (None or dict) – If None, then all classes are assigned a random color (except for 0, which by default gets a black color). If dict, keys are integers representing classes and values are tuples of size 3 representing (R, G, B). If a class is not contained in the dict then a color is randomly generated.

Returns

  • segmentation_img (np.array) – An array of shape (h, w, 3) and dtype uint8 representing an RGB image.

  • colors_dict (dict) – Color (values) per class (keys) dictionary. If no colors_dict was passed, then it is a new instance; if passed, then an updated version.

generate_df_plots(df_true, df_pred, filepath=None, figsize=(15, 15))[source]

Generate displacement vector plots.

Parameters
  • df_true (DisplacementField) – Ground truth. Assumes that the shape is (320, 456).

  • df_pred (DisplacementField) – Prediction. Assumes that the shape is (320, 456).

  • filepath (None or pathlib.Path) – If specified, then the path where the figure is saved as a PNG image. If not specified, the figure is shown.

atlalign.volume module

Collection of tools for aggregating slices to 3D models.

class CoronalInterpolator(kind='linear', fill_value=0, bounds_error=False)[source]

Bases: object

Interpolator that works pixel by pixel in the coronal dimension.

interpolate(gv)[source]

Interpolate.

Note that some section images might have pixels equal to np.nan. In this case these pixels are skipped in the interpolation.

Parameters

gv (GappedVolume) – Instance of the GappedVolume to be interpolated.

Returns

final_volume – Array of shape (528, 320, 456) that holds the entire interpolated volume without gaps.

Return type

np.ndarray

class GappedVolume(sn, imgs)[source]

Bases: object

Volume containing gaps.

Parameters
  • sn (list) – List of section numbers. Note that they are not required to be ordered.

  • imgs (np.ndarray or list) – Internally converted to a list of grayscale images of the same shape representing different coronal sections. The order corresponds to the one in sn.
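A minimal sketch of filling in the gaps with the CoronalInterpolator described above (the section numbers and images are made up):

>>> import numpy as np
>>> from atlalign.volume import CoronalInterpolator, GappedVolume
>>> sn = [10, 250, 400]  # known sections, not necessarily ordered
>>> imgs = [np.random.rand(320, 456) for _ in sn]
>>> gv = GappedVolume(sn, imgs)
>>> final_volume = CoronalInterpolator(kind='linear').interpolate(gv)
>>> final_volume.shape
(528, 320, 456)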

class Volume(sn, mov_imgs, dvfs)[source]

Bases: object

Class representing multiple slices.

Parameters
  • sn (list) – List of section numbers.

  • mov_imgs (list) – List of np.ndarrays representing the moving images corresponding to the sn.

  • dvfs (list) – List of displacement fields corresponding to the sn.

property sorted_dvfs

Return displacement fields sorted by the coronal section.

property sorted_mov

Return moving images as sorted by the coronal section.

property sorted_ref

Return reference images as sorted by the coronal section.

property sorted_reg

Return registered images as sorted by the coronal section.

atlalign.zoo module

Contains different ways to generate displacement fields.

Notes

Ideally you want the only positional argument to be the shape, and all the others to be keyword arguments with reasonable defaults.

affine(shape, matrix=None)[source]

Affine transformation encoded in a 2 x 3 matrix.

Parameters
  • shape (tuple) – Of the form (height, width).

  • matrix (np.ndarray) – Transformation matrix of the shape 2 x 3.

Raises

ValueError – In case the transformation matrix has a wrong shape.

Returns

  • delta_x (np.ndarray) – Displacement vector field of the x coordinates.

  • delta_y (np.ndarray) – Displacement vector field of the y coordinates.

affine_simple(shape, scale_x=1, scale_y=1, rotation=0, translation_x=0, translation_y=0, shear=0, apply_centering=True)[source]

A human-friendly version of the affine mapping.

Notes

Instead of specifying the whole matrix one can just specify all the understandable quantities.

Parameters
  • shape (tuple) – Of the form (height, width).

  • scale_x (float) – Scale on the x axis. If scale_x < 1 then zoom out, if scale_x > 1 zoom in.

  • scale_y (float) – Scale on the y axis. If scale_y < 1 then zoom out, if scale_y > 1 zoom in.

  • rotation (float) – Rotation angle in counter-clockwise direction as radians.

  • translation_x (float) – Translation in the x direction. If translation_x > 0 then to the right, else to the left.

  • translation_y (float) – Translation in the y direction. If translation_y > 0 then down, else up.

  • shear (float) – Shear angle in counter-clockwise direction as radians.

  • apply_centering (bool) – If True then (h // 2 - 0.5, w // 2 - 0.5) is considered the center of the image. Before performing all the other operations, the image is first shifted so that the center corresponds to (0, 0); then the actual transformation is applied, and afterwards the image is shifted back to the original center.

Returns

  • delta_x (np.ndarray) – Displacement vector field of the x coordinates.

  • delta_y (np.ndarray) – Displacement vector field of the y coordinates.

control_points(shape, points=None, values_delta_x=None, values_delta_y=None, anchor_corners=True, interpolation_method='griddata', interpolator_kwargs=None)[source]

Simply interpolate given control points.

Notes

We assume there are N control points.

This function is used by other functions from the zoo. See the complete list below:
  • edge_stretching

  • single_frequency

Additionally, it is also used in the eliminate_bb method of the DisplacementField class.

Parameters
  • shape (tuple) – Of the form (height, width).

  • points (np.ndarray, optional) – An array of shape (N, 2) where each row represents a (row, column) of a given control point.

  • values_delta_x (np.ndarray, optional) – An array of shape (N, ) where each row represents a delta_x of the transformation at the corresponding control point.

  • values_delta_y (np.ndarray, optional) – An array of shape (N, ) where each row represents a delta_y of the transformation at the corresponding control point.

  • anchor_corners (bool, optional) – If True then each of the 4 image corners is automatically added to the control points and an identity transformation is assumed there.

  • interpolation_method ({'griddata', 'bspline', 'rbf'}, optional) – Interpolation method to use.

  • interpolator_kwargs (dict, optional) – Additional parameters passed to the interpolator.

Returns

  • delta_x (np.ndarray) – Displacement vector field of the x coordinates of shape = shape.

  • delta_y (np.ndarray) – Displacement vector field of the y coordinates of shape = shape.

Raises
  • TypeError: – When either of the points, values_delta_x or values_delta_y is not a np.ndarray.

  • ValueError: – Various inconsistencies in inputs (different len, zero len, wrong other dimensions or default without anchor).

  • IndexError: – Some of the points are outside of the image domain.
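Examples

A minimal sketch with three control points given as (row, column) pairs.

>>> import numpy as np
>>> from atlalign.zoo import control_points
>>> points = np.array([[100, 100], [150, 300], [250, 200]])
>>> values_delta_x = np.array([5.0, -3.0, 0.0])
>>> values_delta_y = np.array([0.0, 4.0, -2.0])
>>> delta_x, delta_y = control_points(
...     (320, 456),
...     points=points,
...     values_delta_x=values_delta_x,
...     values_delta_y=values_delta_y,
...     anchor_corners=True,
... )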

edge_stretching(shape, edge_mask=None, n_perturbation_points=3, radius_max=30, interpolation_method='griddata_custom', interpolator_kwargs=None)[source]

Pick points on the edges and use them to stretch the image.

Parameters
  • shape (tuple) – Of the form (height, width).

  • edge_mask (np.ndarray) – An array of dtype=bool of shape shape. The True elements represent an edge.

  • n_perturbation_points (int) – Number of points to pick among the edges on which the perturbation is defined.

  • radius_max (float, optional) – Maximum value of the radius; the actual value is a sample from uniform [0, radius_max].

  • interpolation_method ({'griddata', 'griddata_custom', 'bspline', 'rbf'}, optional) – Interpolation method to use.

  • interpolator_kwargs (dict, optional) – Additional parameters passed to the interpolator.

Returns

  • delta_x (np.ndarray) – Displacement vector field of the x coordinates.

  • delta_y (np.ndarray) – Displacement vector field of the y coordinates.

paper(shape, n_pixels=10, v_min=-20000, v_max=20000, kernel_sigma=25, p=None, random_state=None)[source]

Algorithm proposed in the reference paper.

Notes

This algorithm has 2 steps
  1. Pick n_pixels in the image and randomly sample x and y displacements from the interval [v_min, v_max].

  2. Apply a gaussian kernel on the displacement field with a given kernel_size and kernel_sigma

Parameters
  • shape (tuple) – Of the form (height, width).

  • n_pixels (int) – Number of pixels to choose in the first step.

  • v_min (float) – Minimum value for x and y displacement sampling.

  • v_max (float) – Maximum value for x and y displacement sampling.

  • kernel_sigma (float) – Standard deviation of the kernel in both the x and y direction.

  • p (np.array) – Pixelwise probability of selection, where p.shape=shape. Note that if None, then uniform.

  • random_state (int) – If None, then results are not reproducible.

Returns

  • delta_x (np.ndarray) – Displacement vector field of the x coordinates.

  • delta_y (np.ndarray) – Displacement vector field of the y coordinates.

References

[1] Sokooti H., de Vos B., Berendsen F., Lelieveldt B.P.F., Išgum I., Staring M. (2017) Nonrigid Image Registration Using Multi-scale 3D Convolutional Neural Networks.

paper_microsoft(shape, alpha=1000, sigma=10, random_state=None)[source]

Generate artificial displacement based on a paper from Microsoft.

Parameters
  • shape (tuple) – Of the form (height, width).

  • alpha (float) – Constant that the per-pixel displacements are multiplied by. The higher the value, the wilder the displacements. If set to 0 then zero transformation.

  • sigma (float) – Standard deviation of the Gaussian kernel. The closer to 0, the wilder the displacement. If close to infinity then zero transformation.

Returns

  • delta_x (np.ndarray) – Displacement vector field of the x coordinates.

  • delta_y (np.ndarray) – Displacement vector field of the y coordinates.

References

Simard, P. Y., Steinkraus, D., & Platt, J. C. (2003, August). Best practices for convolutional neural networks applied to visual document analysis. In Seventh International Conference on Document Analysis and Recognition (ICDAR) (p. 958). IEEE.

patch_shift(shape, ul=(10, 10), height=100, width=120, shift_size=30, shift_direction='D')[source]

Replace a fixed patch in an image with another patch of the same shape from elsewhere in the image.

Parameters
  • shape (tuple) – Of the form (height, width).

  • ul (tuple) – Of the form (row of the UPPER LEFT corner of the patch, column of the UPPER LEFT corner of the patch).

  • height (int) – Height of the patch.

  • width (int) – Width of the patch.

  • shift_size (int) – How many pixels to shift the patch.

  • shift_direction (str, {'U', 'D', 'L', 'R'}) – The direction of the shift. ‘U’ = Up, ‘D’ = Down, ‘L’ = Left, ‘R’ = Right.

Raises

IndexError: – If the starting or the ending patch are not in the image.

Returns

  • delta_x (np.ndarray) – Displacement vector field of the x coordinates.

  • delta_y (np.ndarray) – Displacement vector field of the y coordinates.

projective(shape, matrix=None)[source]

Projective transformation encoded in a 3 x 3 matrix.

Parameters
  • shape (tuple) – Of the form (height, width).

  • matrix (np.ndarray) – Transformation matrix of the shape 3 x 3.

Raises

ValueError – In case the transformation matrix has a wrong shape.

Returns

  • delta_x (np.ndarray) – Displacement vector field of the x coordinates.

  • delta_y (np.ndarray) – Displacement vector field of the y coordinates.

single_frequency(shape, p=None, grid_spacing=5, n_perturbation_points=20, radius_mean=None, interpolation_method='griddata', interpolator_kwargs=None)[source]

Single frequency artificial warping generator.

Notes

  1. The reason why this approach is called single frequency is that the grid_spacing is constant over the entire image region. One can therefore capture displacements of more or less the same size pixelwise.

  2. Calls control_points, so in this sense it is slightly unusual.

  3. The idea is to create a regular grid and then sample some of the points on it. For a fixed point we then randomly select an angle ~ Uniform[0, 2pi] and a diameter ~ Exp(radius_mean).

Parameters
  • shape (tuple) – Of the form (height, width).

  • p (np.array, optional) – Pixelwise probability of selection, where p.shape=shape. Note that if None, then uniform.

  • grid_spacing (int, optional) – Grid spacing size in both the columns and rows.

  • n_perturbation_points (int, optional) – Number of grid points to which random perturbation will be applied (without replacement).

  • radius_mean (float, optional) – If None then set to grid_spacing / 2.

  • interpolation_method ({'griddata', 'bspline', 'rbf'}, optional) – Interpolation method to use.

  • interpolator_kwargs (dict, optional) – Additional parameters passed to the interpolator.

Returns

  • delta_x (np.ndarray) – Displacement vector field of the x coordinates.

  • delta_y (np.ndarray) – Displacement vector field of the y coordinates.

References

[1] https://github.com/hsokooti/RegNet (README.md)

Module contents

Image registration package.

Release markers: X.Y; X.Y.Z for bug fixes.
