FFCV

ffcv.transforms module

class ffcv.transforms.ToTensor[source]

Convert from NumPy array to PyTorch Tensor.

generate_code() Callable[source]
declare_state_and_memory(previous_state: ffcv.pipeline.state.State) Tuple[ffcv.pipeline.state.State, Optional[ffcv.pipeline.allocation_query.AllocationQuery]][source]
class ffcv.transforms.ToDevice(device, non_blocking=True)[source]

Move tensor to device.

Parameters
  • device (torch.device) – Device to move to.

  • non_blocking (bool) – If True, the copy is asynchronous when moving from CPU to GPU.

generate_code() Callable[source]
declare_state_and_memory(previous_state: ffcv.pipeline.state.State) Tuple[ffcv.pipeline.state.State, Optional[ffcv.pipeline.allocation_query.AllocationQuery]][source]
class ffcv.transforms.ToTorchImage(channels_last=True, convert_back_int16=True)[source]

Change tensor to PyTorch format for images (B x C x H x W).

Parameters
  • channels_last (bool) – Use torch.channels_last.

  • convert_back_int16 (bool) – Reinterpret int16 data back as float16 (FFCV stores float16 images as int16 internally, since its compiled CPU code cannot operate on float16 directly).

generate_code() Callable[source]
declare_state_and_memory(previous_state: ffcv.pipeline.state.State) Tuple[ffcv.pipeline.state.State, Optional[ffcv.pipeline.allocation_query.AllocationQuery]][source]
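ToTensor, ToDevice, and ToTorchImage typically appear together at the end of an image pipeline: FFCV decodes images as a batch in B x H x W x C (NHWC) layout, and ToTorchImage reorders them to PyTorch's B x C x H x W. A plain-NumPy sketch of that permutation (the real transform operates on torch tensors and can also apply torch.channels_last):

```python
import numpy as np

# FFCV decodes image batches in NHWC layout; ToTorchImage reorders
# them to PyTorch's NCHW. Sketch of that axis permutation:
batch_nhwc = np.zeros((8, 32, 32, 3), dtype=np.uint8)
batch_nchw = batch_nhwc.transpose(0, 3, 1, 2)
```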
class ffcv.transforms.NormalizeImage(mean: numpy.ndarray, std: numpy.ndarray, type: numpy.dtype)[source]

Fast implementation of normalization and type conversion for uint8 images to any floating point dtype.

Works on both GPU and CPU tensors.

Parameters
  • mean (np.ndarray) – The mean vector.

  • std (np.ndarray) – The standard deviation vector.

  • type (np.dtype) – The desired output type for the result, as a NumPy dtype. If the transform is applied to a GPU tensor, the result is converted to the equivalent torch dtype.

generate_code() Callable[source]
generate_code_gpu() Callable[source]
generate_code_cpu() Callable[source]
declare_state_and_memory(previous_state: ffcv.pipeline.state.State) Tuple[ffcv.pipeline.state.State, Optional[ffcv.pipeline.allocation_query.AllocationQuery]][source]
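What NormalizeImage computes, sketched in plain NumPy (the actual transform uses a fused, compiled implementation for speed; this is just the arithmetic):

```python
import numpy as np

# Per-channel normalization of a uint8 HWC image, then a cast to the
# requested floating-point output dtype:
mean = np.array([127.5, 127.5, 127.5], dtype=np.float32)
std = np.array([64.0, 64.0, 64.0], dtype=np.float32)

img = np.full((4, 4, 3), 255, dtype=np.uint8)
out = ((img.astype(np.float32) - mean) / std).astype(np.float16)
```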
class ffcv.transforms.Convert(target_dtype)[source]

Convert to target data type.

Parameters

target_dtype (numpy.dtype or torch.dtype) – Target data type.

generate_code() Callable[source]
declare_state_and_memory(previous_state: ffcv.pipeline.state.State) Tuple[ffcv.pipeline.state.State, Optional[ffcv.pipeline.allocation_query.AllocationQuery]][source]
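Convert is a thin dtype cast; on NumPy arrays it behaves like astype (on torch tensors, like Tensor.to). For example:

```python
import numpy as np

# Casting integer labels to float, as Convert(np.float32) would:
labels = np.array([0, 1, 2], dtype=np.int64)
as_float = labels.astype(np.float32)
```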
class ffcv.transforms.Squeeze(*dims)[source]

Remove the given size-1 dimensions of the input. Operates on tensors.

Parameters

*dims (List[int]) – Dimensions to squeeze.

generate_code() Callable[source]
declare_state_and_memory(previous_state: ffcv.pipeline.state.State) Tuple[ffcv.pipeline.state.State, Optional[ffcv.pipeline.allocation_query.AllocationQuery]][source]
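A common use is collapsing a (B, 1) label tensor into a flat (B,) vector; Squeeze(1) does the equivalent of:

```python
import numpy as np

# Dropping the trailing size-1 dimension of a label batch:
labels = np.zeros((8, 1), dtype=np.int64)
squeezed = labels.squeeze(1)
```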
class ffcv.transforms.View(target_dtype)[source]

View the array as another dtype, using numpy.ndarray.view or torch.Tensor.view.

Parameters

target_dtype (numpy.dtype or torch.dtype) – Target data type.

generate_code() Callable[source]
declare_state_and_memory(previous_state: ffcv.pipeline.state.State) Tuple[ffcv.pipeline.state.State, Optional[ffcv.pipeline.allocation_query.AllocationQuery]][source]
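Unlike Convert, View reinterprets the underlying bytes without copying, so the element count changes when the dtypes differ in width:

```python
import numpy as np

# Reinterpreting a uint16 buffer as raw bytes, as View(np.uint8) would:
raw = np.arange(4, dtype=np.uint16)
as_bytes = raw.view(np.uint8)  # same buffer, twice as many elements
```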
class ffcv.transforms.RandomResizedCrop(scale: Tuple[float, float], ratio: Tuple[float, float], size: int)[source]

Crop a random portion of the image with a random aspect ratio and resize it to a given size.

Parameters
  • scale (Tuple[float, float]) – Lower and upper bounds for the fraction of the original image area covered by the random crop.

  • ratio (Tuple[float, float]) – Lower and upper bounds for random aspect ratio of the crop.

  • size (int) – Side length of the output.

generate_code() Callable[source]
declare_state_and_memory(previous_state: ffcv.pipeline.state.State) Tuple[ffcv.pipeline.state.State, Optional[ffcv.pipeline.allocation_query.AllocationQuery]][source]
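The sampling logic can be sketched in plain NumPy. The `random_resized_crop` helper below is hypothetical, not part of the FFCV API; the real transform is compiled and uses a proper interpolating resize, while this sketch falls back to a center crop and nearest-neighbour resampling:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_resized_crop(img, scale=(0.08, 1.0), ratio=(3 / 4, 4 / 3), size=32):
    """Sample a crop whose area and aspect ratio fall in the given bounds."""
    h, w, _ = img.shape
    area = h * w
    for _ in range(10):
        target_area = area * rng.uniform(*scale)
        aspect = rng.uniform(*ratio)
        ch = int(round(np.sqrt(target_area / aspect)))
        cw = int(round(np.sqrt(target_area * aspect)))
        if 0 < ch <= h and 0 < cw <= w:
            y = rng.integers(0, h - ch + 1)
            x = rng.integers(0, w - cw + 1)
            crop = img[y:y + ch, x:x + cw]
            break
    else:  # no valid sample found: fall back to a center crop
        s = min(h, w)
        crop = img[(h - s) // 2:(h + s) // 2, (w - s) // 2:(w + s) // 2]
    # nearest-neighbour resize to (size, size)
    ys = np.arange(size) * crop.shape[0] // size
    xs = np.arange(size) * crop.shape[1] // size
    return crop[ys][:, xs]

out = random_resized_crop(np.zeros((64, 48, 3), dtype=np.uint8), size=32)
```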
class ffcv.transforms.RandomHorizontalFlip(flip_prob: float = 0.5)[source]

Flip the image horizontally with probability flip_prob. Operates on raw arrays (not tensors).

Parameters

flip_prob (float) – The probability with which to flip each image in the batch horizontally.

generate_code() Callable[source]
declare_state_and_memory(previous_state: ffcv.pipeline.state.State) Tuple[ffcv.pipeline.state.State, Optional[ffcv.pipeline.allocation_query.AllocationQuery]][source]
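Semantically, each image in the batch gets an independent coin flip, and a flipped image is just a reversed width axis. A NumPy sketch (the real transform is compiled and writes into preallocated memory):

```python
import numpy as np

rng = np.random.default_rng(0)

# Per-image coin flip over a batch in BHWC layout:
batch = np.arange(2 * 2 * 3 * 1, dtype=np.uint8).reshape(2, 2, 3, 1)
flip = rng.random(len(batch)) < 0.5
flipped = np.where(flip[:, None, None, None], batch[:, :, ::-1], batch)
```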
class ffcv.transforms.RandomTranslate(padding: int, fill: Tuple[int, int, int] = (0, 0, 0))[source]

Translate each image randomly in the vertical and horizontal directions, up to a specified number of pixels.

Parameters
  • padding (int) – Max number of pixels to translate in any direction.

  • fill (tuple) – An RGB color ((0, 0, 0) by default) to fill the area outside the shifted image.

generate_code() Callable[source]
declare_state_and_memory(previous_state: ffcv.pipeline.state.State) Tuple[ffcv.pipeline.state.State, Optional[ffcv.pipeline.allocation_query.AllocationQuery]][source]
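One way to picture the operation: pad the image with the fill colour on all sides, then crop back to the original size at a random offset. The `random_translate` helper below is a hypothetical NumPy sketch, not FFCV's compiled implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_translate(img, padding=2, fill=(0, 0, 0)):
    """Pad with the fill colour, then crop at a random offset."""
    h, w, c = img.shape
    canvas = np.empty((h + 2 * padding, w + 2 * padding, c), dtype=img.dtype)
    canvas[:] = fill
    canvas[padding:padding + h, padding:padding + w] = img
    y = rng.integers(0, 2 * padding + 1)
    x = rng.integers(0, 2 * padding + 1)
    return canvas[y:y + h, x:x + w]

img = np.full((8, 8, 3), 255, dtype=np.uint8)
shifted = random_translate(img, padding=2)
```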
class ffcv.transforms.Cutout(crop_size: int, fill: Tuple[int, int, int] = (0, 0, 0))[source]

Cutout data augmentation (https://arxiv.org/abs/1708.04552).

Parameters
  • crop_size (int) – Size of the random square to cut out.

  • fill (Tuple[int, int, int], optional) – An RGB color ((0, 0, 0) by default) to fill the cutout square with. Useful for when a normalization layer follows cutout, in which case you can set the fill such that the square is zero post-normalization.

generate_code() Callable[source]
declare_state_and_memory(previous_state: ffcv.pipeline.state.State) Tuple[ffcv.pipeline.state.State, Optional[ffcv.pipeline.allocation_query.AllocationQuery]][source]
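The augmentation paints a randomly placed crop_size x crop_size square with the fill colour. A hypothetical NumPy sketch of that logic:

```python
import numpy as np

rng = np.random.default_rng(0)

def cutout(img, crop_size=4, fill=(0, 0, 0)):
    """Fill a randomly placed square of side crop_size with the fill colour."""
    h, w, _ = img.shape
    y = rng.integers(0, h - crop_size + 1)
    x = rng.integers(0, w - crop_size + 1)
    out = img.copy()
    out[y:y + crop_size, x:x + crop_size] = fill
    return out

img = np.full((8, 8, 3), 255, dtype=np.uint8)
cut = cutout(img)
```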
class ffcv.transforms.ImageMixup(alpha: float, same_lambda: bool)[source]

Mixup for images. Operates on raw arrays (not tensors).

Parameters
  • alpha (float) – Mixup parameter alpha; the mixing weight lambda is sampled from Beta(alpha, alpha).

  • same_lambda (bool) – Whether to use the same value of lambda across the whole batch, or an individually sampled lambda per image in the batch.

generate_code() Callable[source]
declare_state_and_memory(previous_state: ffcv.pipeline.state.State) Tuple[ffcv.pipeline.state.State, Optional[ffcv.pipeline.allocation_query.AllocationQuery]][source]
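Mixup blends each image with another image from the batch using a weight lambda drawn from Beta(alpha, alpha). The arithmetic, sketched in NumPy for a single pair:

```python
import numpy as np

rng = np.random.default_rng(0)

alpha = 0.2
lam = rng.beta(alpha, alpha)  # mixing weight in [0, 1]

img_a = np.full((4, 4, 3), 255, dtype=np.uint8)
img_b = np.zeros((4, 4, 3), dtype=np.uint8)
mixed = (lam * img_a + (1 - lam) * img_b).astype(np.uint8)
```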
class ffcv.transforms.LabelMixup(alpha: float, same_lambda: bool)[source]

Mixup for labels. Should be initialized in exactly the same way as ffcv.transforms.ImageMixup.

generate_code() Callable[source]
declare_state_and_memory(previous_state: ffcv.pipeline.state.State) Tuple[ffcv.pipeline.state.State, Optional[ffcv.pipeline.allocation_query.AllocationQuery]][source]
class ffcv.transforms.MixupToOneHot(num_classes: int)[source]

Convert the label pairs and mixing weights produced by LabelMixup into soft one-hot label vectors with num_classes entries.

generate_code() Callable[source]
declare_state_and_memory(previous_state: ffcv.pipeline.state.State) Tuple[ffcv.pipeline.state.State, Optional[ffcv.pipeline.allocation_query.AllocationQuery]][source]
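Conceptually, LabelMixup emits per example the original label, the partner's label, and the mixing weight; MixupToOneHot turns that triple into a soft one-hot vector. A NumPy sketch of the conversion (the specific values here are illustrative):

```python
import numpy as np

num_classes = 4
label_a, label_b, lam = 2, 0, 0.7  # labels of the two mixed images + weight

# Soft one-hot target: lam of the mass on label_a, the rest on label_b.
eye = np.eye(num_classes, dtype=np.float32)
soft = lam * eye[label_a] + (1 - lam) * eye[label_b]
```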
class ffcv.transforms.Poison(mask: numpy.ndarray, alpha: numpy.ndarray, indices, clamp=(0, 255))[source]

Poison specified images by adding a mask with given opacity. Operates on raw arrays (not tensors).

Parameters
  • mask (np.ndarray) – The mask to apply to each image.

  • alpha (np.ndarray) – The opacity of the mask (a per-pixel array, matching the signature above).

  • indices (Sequence[int]) – The indices of images that should have the mask applied.

  • clamp (Tuple[int, int]) – Clamps the final pixel values between these two values (default: (0, 255)).

generate_code() Callable[source]
declare_state_and_memory(previous_state: ffcv.pipeline.state.State) Tuple[ffcv.pipeline.state.State, Optional[ffcv.pipeline.allocation_query.AllocationQuery]][source]
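For a selected image, the transform alpha-blends the mask into the pixels and clamps the result. A NumPy sketch of that blend for one image (assuming a per-pixel alpha array, per the signature):

```python
import numpy as np

img = np.full((4, 4, 3), 100, dtype=np.uint8)
mask = np.full((4, 4, 3), 255, dtype=np.uint8)
alpha = np.full((4, 4), 0.5, dtype=np.float32)  # per-pixel opacity

# Blend mask into image, then clamp to the valid pixel range:
blended = (1 - alpha[..., None]) * img + alpha[..., None] * mask
poisoned = np.clip(blended, 0, 255).astype(np.uint8)
```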
class ffcv.transforms.ReplaceLabel(indices, new_label: int)[source]

Replace label of specified images.

Parameters
  • indices (Sequence[int]) – The indices of images to relabel.

  • new_label (int) – The new label to assign.

generate_code() Callable[source]
declare_state_and_memory(previous_state: ffcv.pipeline.state.State) Tuple[ffcv.pipeline.state.State, Optional[ffcv.pipeline.allocation_query.AllocationQuery]][source]
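The effect on a batch of labels is simple index assignment (in the real transform, indices refer to dataset sample indices, matched against each incoming batch):

```python
import numpy as np

# Relabel samples 1 and 3 to class 9:
labels = np.array([0, 1, 2, 3], dtype=np.int64)
indices = [1, 3]
labels[indices] = 9
```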
class ffcv.transforms.ModuleWrapper(module: torch.nn.Module)[source]

Transform the input using the given torch.nn.Module.

Parameters

module (torch.nn.Module) – The module to apply to the input.

generate_code() Callable[source]
declare_state_and_memory(previous_state: ffcv.pipeline.state.State) Tuple[ffcv.pipeline.state.State, Optional[ffcv.pipeline.allocation_query.AllocationQuery]][source]