nn

Layers and functional methods for PyTorch

class tinder.nn.AssertSize(*size)[source]

Assert that the input has the specified size.

Example:

net = nn.Sequential(
    tinder.nn.AssertSize(None, 3, 224, 224),
    nn.Conv2d(3, 64, kernel_size=1, stride=2),
    nn.Conv2d(64, 128, kernel_size=1, stride=2),
    tinder.nn.AssertSize(None, 128, 64, 64),
)
Parameters

size (iterable) – an iterable of dimensions. Each dimension is one of -1, None, or a positive integer; -1 and None match any size.

forward(x)[source]
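
For reference, the check can be sketched as follows (a minimal sketch, not tinder's source; the name AssertSizeSketch is hypothetical, and it assumes -1 and None both act as wildcards):

import torch.nn as nn

class AssertSizeSketch(nn.Module):
    def __init__(self, *size):
        super().__init__()
        self.size = size

    def forward(self, x):
        assert x.dim() == len(self.size), f'expected {len(self.size)} dims, got {x.dim()}'
        for actual, expected in zip(x.shape, self.size):
            # -1 and None act as wildcards
            if expected not in (-1, None):
                assert actual == expected, f'expected {self.size}, got {tuple(x.shape)}'
        return x
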
class tinder.nn.Flatten[source]

A layer that flattens the input while keeping the batch dimension.

Example:

net = nn.Sequential(
    nn.Conv2d(..),
    nn.BatchNorm2d(..),
    nn.ReLU(),

    nn.Conv2d(..),
    nn.BatchNorm2d(..),
    nn.ReLU(),

    tinder.nn.Flatten(),
    nn.Linear(3*3*512, 1024),
)
Parameters

x – input tensor

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
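
Functionally, flattening everything after the batch dimension is a one-liner; a minimal sketch (FlattenSketch is a hypothetical name, not tinder's source):

import torch.nn as nn

class FlattenSketch(nn.Module):
    def forward(self, x):
        # collapse all dimensions except the batch dimension
        return x.view(x.size(0), -1)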

class tinder.nn.Identity(*_args, **_kwargs)[source]

A layer that returns its input unchanged. Any constructor arguments are accepted and ignored.

forward(x)[source]

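A minimal sketch of this pass-through behavior (IdentitySketch is a hypothetical name):

import torch.nn as nn

class IdentitySketch(nn.Module):
    def __init__(self, *_args, **_kwargs):
        super().__init__()  # arguments are accepted and ignored

    def forward(self, x):
        return x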

class tinder.nn.MinibatchStddev[source]

A layer for the GAN discriminator used in PGGAN. It penalizes the generator when images in a minibatch look too similar. For example, if G falls into mode collapse and generates near-identical images, the stddev across the minibatch is small; D sees the small stddev and judges the batch as likely fake.

This layer calculates the stddev and provides it as an additional channel.

forward(x)[source]
Parameters

x (torch.Tensor) – a tensor of shape (N, C, H, W).

Returns

The input with one extra channel appended; the last channel is the stddev statistic.

Return type

(N, C+1, H, W)
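
A minimal sketch following the PGGAN paper's version of this statistic (one scalar stddev, averaged over channels and pixels, broadcast as an extra feature map; tinder's exact reduction may differ):

import torch
import torch.nn as nn

class MinibatchStddevSketch(nn.Module):
    def forward(self, x):
        n, c, h, w = x.shape
        # stddev over the batch for every (channel, pixel) location
        std = x.std(dim=0, unbiased=False)       # [C, H, W]
        # average into one scalar, broadcast as one extra channel
        extra = std.mean().expand(n, 1, h, w)    # [N, 1, H, W]
        return torch.cat([x, extra], dim=1)      # [N, C+1, H, W]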

class tinder.nn.PixelwiseNormalize[source]

Pixelwise normalization used in PGGAN. It normalizes the input [B, C, H, W] so that the L2 norm over the C dimension is 1. There are B*H*W such norms.

Example:

x = tinder.nn.PixelwiseNormalize()(x)
forward(x)[source]

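The operation reduces to one line; a minimal sketch (pixelwise_normalize_sketch is a hypothetical name; note the PGGAN paper divides by the root mean square over channels instead, which differs only by a constant factor of sqrt(C)):

import torch

def pixelwise_normalize_sketch(x, eps=1e-8):
    # make the L2 norm over the channel dimension 1 at every (b, h, w)
    return x / (x.norm(p=2, dim=1, keepdim=True) + eps)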

class tinder.nn.View(*size_without_batch_dim)[source]

nn.Module version of tensor.view().

Example:

layer = tinder.nn.View(3, -1, 256)
x = layer(x)

The batch dimension is implicit. The above code is the same as x.view(x.size(0), 3, -1, 256).

Parameters

size_without_batch_dim (iterable) – each dimension is one of -1, None, or a positive integer.

forward(x)[source]

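A minimal sketch (ViewSketch is a hypothetical name; it assumes None is treated like -1, since tensor.view() accepts only integers):

import torch.nn as nn

class ViewSketch(nn.Module):
    def __init__(self, *size_without_batch_dim):
        super().__init__()
        # assumption: map None to -1 so it can be passed to view()
        self.size = tuple(-1 if s is None else s for s in size_without_batch_dim)

    def forward(self, x):
        # keep the batch dimension, reshape the rest
        return x.view(x.size(0), *self.size)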

class tinder.nn.WeightScale(prev_layer, init_with_leakiness: float = None)[source]

A weight normalizing layer used in PGGAN.

How it works:

  1. Initialize your conv layer with weights from N(0, std=kaiming).

  2. WeightScale calculates the initial std of the weights (from step 1).

  3. The conv weights are divided by that std (from step 2); they are now ~N(0, 1).

  4. On forward, WeightScale multiplies its input by the std (from step 2).

  5. Bias or activation is applied after WeightScale. If the conv layer has a bias, WeightScale steals it.

Note that the scale factor calculated in step 2 is a permanent constant. A sketch of this scheme appears below, after forward().

Advantage:

The PGGAN authors claim that Adam trains weights distributed as N(0, 1) better than conv weights with differing stds.

Example:

conv = nn.Conv2d(3, 64, kernel_size=3, padding=1)
ws = tinder.nn.WeightScale(conv)
net = nn.Sequential(
    conv,
    ws
)
Parameters
  • prev_layer – a layer (e.g. nn.Conv2d) with weight and optionally bias.

  • init_with_leakiness – if given, prev_layer is initialized with kaiming_normal using this value as the leakiness (negative slope).

forward(x)[source]

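The sketch promised above (WeightScaleSketch is a hypothetical name, not tinder's source; it skips step 1, the optional kaiming initialization, and assumes a 4-D conv output):

import torch.nn as nn

class WeightScaleSketch(nn.Module):
    def __init__(self, prev_layer):
        super().__init__()
        # 2. measure the initial std of the previous layer's weights
        self.scale = prev_layer.weight.data.std().item()
        # 3. rescale the weights to ~N(0, 1)
        prev_layer.weight.data.div_(self.scale)
        # 5. steal the bias so it is applied after the rescaling
        self.bias = None
        if prev_layer.bias is not None:
            self.bias = prev_layer.bias
            prev_layer.bias = None

    def forward(self, x):
        # 4. multiply the output of the previous layer by the fixed scale
        x = x * self.scale
        if self.bias is not None:
            x = x + self.bias.view(1, -1, 1, 1)  # assumes [N, C, H, W]
        return x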

tinder.nn.loss_wgan_gp(D, real: torch.Tensor, fake: torch.Tensor) → torch.Tensor[source]

Gradient Penalty for Wasserstein GAN.

It draws random interpolations between the real and fake points and pushes the gradient norm at those points toward 1.

Note

  • The second-order derivative is unstable in PyTorch. Some layers in your G/D may not work.

  • It assumes CUDA.

Example:

gp = tinder.nn.loss_wgan_gp(D, real_img, fake_img)
loss = D(fake)-D(real)+10*gp
Parameters
  • D (callable) – A discriminator. Typically nn.Module or a lambda function that returns the score.

  • real (torch.Tensor) – real image batch.

  • fake (torch.Tensor) – fake image batch.

Returns

The gradient penalty. Optimizing this loss will update D.

Return type

(torch.Tensor)
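
A minimal sketch of the penalty for 4-D image batches (loss_wgan_gp_sketch is a hypothetical name; tinder's version additionally assumes CUDA):

import torch

def loss_wgan_gp_sketch(D, real, fake):
    # random interpolation points between real and fake samples
    alpha = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    # detach fake so the penalty only trains D, not G
    interp = (alpha * real + (1 - alpha) * fake.detach()).requires_grad_(True)
    grad, = torch.autograd.grad(D(interp).sum(), interp, create_graph=True)
    # penalize gradient norms that deviate from 1
    norm = grad.view(grad.size(0), -1).norm(2, dim=1)
    return ((norm - 1) ** 2).mean()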

tinder.nn.odin(network: torch.nn.modules.module.Module, x: torch.Tensor, threshold, T=1000, epsilon=0.0012) → Tuple[bool, torch.Tensor][source]

Decide whether to reject the prediction for the given examples x, using ODIN (out-of-distribution detection via temperature scaling and input perturbation).

Example:

is_reject, max_p = tinder.nn.odin(resnet, imgs, threshold=0.003)
# assert (is_reject == (max_p<0.003)).all()
Parameters
  • network (nn.Module) – a function returning logits.

  • x (torch.Tensor) – input of shape [B, *].

  • threshold – the confidence threshold; the prediction is rejected when max_p falls below it.

Keyword Arguments
  • T (int) – the temperature scaling parameter (default: 1000).

  • epsilon (float) – the noisiness (input perturbation) parameter (default: 0.0012).

Returns

Tuple[torch.Tensor, torch.Tensor] – (is_reject of shape [B], max_p of shape [B])
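
A minimal sketch of the ODIN procedure (odin_sketch is a hypothetical name): perturb the input to increase the network's confidence, then threshold the temperature-scaled softmax score:

import torch
import torch.nn.functional as F

def odin_sketch(network, x, threshold, T=1000, epsilon=0.0012):
    # 1. gradient of the temperature-scaled confidence w.r.t. the input
    x = x.detach().clone().requires_grad_(True)
    log_p = F.log_softmax(network(x) / T, dim=1)
    loss = -log_p.max(dim=1).values.sum()
    loss.backward()
    # 2. nudge the input toward higher confidence
    x_tilde = x - epsilon * x.grad.sign()
    # 3. temperature-scaled confidence on the perturbed input
    with torch.no_grad():
        max_p = F.softmax(network(x_tilde) / T, dim=1).max(dim=1).values
    # reject when the calibrated confidence falls below the threshold
    return max_p < threshold, max_p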

tinder.nn.one_dimensional_discrete_wasserstein_distance(px, py, p=2)[source]

1D Wasserstein distance between discrete class distributions px and py, with ground cost d(cls1, cls2) = (cls1 != cls2).

Parameters
  • px (torch.Tensor) – [N, D]; N discrete distributions, each of shape [D].

  • py (torch.Tensor) – [N, D].

Keyword Arguments

p (int) – the order of the distance (default: 2).
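
With a 0-1 ground cost the optimal plan keeps min(px, py) of the mass in place and moves the rest, so the distance reduces to the total variation distance. A sketch under that reading (discrete_wasserstein_sketch is a hypothetical name; whether the library returns W_p or W_p**p is not specified here, and the sketch returns W_p):

import torch

def discrete_wasserstein_sketch(px, py, p=2):
    # mass that has to move: the total variation distance 0.5 * ||px - py||_1
    tv = 0.5 * (px - py).abs().sum(dim=1)
    # every moved unit of mass costs 1, so W_p = tv ** (1/p)
    return tv ** (1.0 / p)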

tinder.nn.sliced_wasserstein_distance(x, y, sample_cnt, p=2, weight_x=None, weight_y=None)[source]

Calculates a stochastic sliced Wasserstein distance between x and y, with ground cost

c(x, y) = ||x - y||_p

Parameters
  • x (torch.Tensor) – a tensor of shape [N, *]; samples from the first distribution.

  • y (torch.Tensor) – a tensor of shape [N, *]; samples from the second distribution.

  • sample_cnt (int) – the number of random projections used to estimate the distance.

  • p (int) – the L_p norm used to calculate the sliced distance.

  • weight_x (torch.Tensor) – a tensor of shape [N], or None.

  • weight_y (torch.Tensor) – a tensor of shape [N], or None.

Returns

scalar – the sliced Wasserstein distance (differentiable).
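
A minimal sketch of the equal-weight case (sliced_wasserstein_sketch is a hypothetical name; the weight_x/weight_y arguments generalize this to non-uniform sample weights):

import torch

def sliced_wasserstein_sketch(x, y, sample_cnt, p=2):
    # flatten samples to vectors
    x = x.view(x.size(0), -1)
    y = y.view(y.size(0), -1)
    # random unit directions to project onto
    proj = torch.randn(x.size(1), sample_cnt, device=x.device)
    proj = proj / proj.norm(dim=0, keepdim=True)
    # project, then sort each 1-D projected sample set
    xp, _ = (x @ proj).sort(dim=0)  # [N, sample_cnt]
    yp, _ = (y @ proj).sort(dim=0)
    # 1-D W_p between sorted projections, averaged over directions
    return ((xp - yp).abs() ** p).mean() ** (1.0 / p)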