MaskedAutoregressiveFlow
Inherits From: Bijector
Defined in `tensorflow/contrib/distributions/python/ops/bijectors/masked_autoregressive.py`.
Affine MaskedAutoregressiveFlow bijector for vector-valued events.
The affine autoregressive flow [(Papamakarios et al., 2017)][3] provides a relatively simple framework for user-specified (deep) architectures to learn a distribution over vector-valued events. Regarding terminology,
"Autoregressive models decompose the joint density as a product of conditionals, and model each conditional in turn. Normalizing flows transform a base density (e.g. a standard Gaussian) into the target density by an invertible transformation with tractable Jacobian." [(Papamakarios et al., 2017)][3]
In other words, the "autoregressive property" is equivalent to the decomposition, `p(x) = prod{ p(x[i] | x[0:i]) : i=0, ..., d }`. The provided `shift_and_log_scale_fn`, `masked_autoregressive_default_template`, achieves this property by zeroing out weights in its `masked_dense` layers.
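To make the masking idea concrete, here is a minimal NumPy sketch (purely illustrative; not the library's implementation) of a single masked dense layer. Zeroing the kernel entries `[j, i]` with `j >= i` guarantees that output `i` depends only on inputs `0, ..., i-1`:

```python
import numpy as np

# Illustrative only: one dense layer whose kernel is masked so that
# output i depends only on inputs 0..i-1 (the autoregressive property).
d = 4
rng = np.random.RandomState(0)
kernel = rng.randn(d, d)

# mask[j, i] == 1 only when input j may feed output i, i.e. j < i.
mask = (np.arange(d)[:, np.newaxis] < np.arange(d)[np.newaxis, :]).astype(kernel.dtype)

def masked_layer(x):
  return x @ (kernel * mask)

x = rng.randn(d)
x_perturbed = x.copy()
x_perturbed[2] += 1.0  # change only x[2]

# Outputs 0, 1, 2 are unchanged; only outputs with index > 2 can differ.
print(masked_layer(x) - masked_layer(x_perturbed))
```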
In the `tf.distributions` framework, a "normalizing flow" is implemented as a `tf.distributions.bijectors.Bijector`. The `forward` "autoregression" is implemented using a `tf.while_loop` and a deep neural network (DNN) with masked weights such that the autoregressive property is automatically met in the `inverse`.
A `TransformedDistribution` using `MaskedAutoregressiveFlow(...)` uses the (expensive) forward-mode calculation to draw samples and the (cheap) reverse-mode calculation to compute log-probabilities. Conversely, a `TransformedDistribution` using `Invert(MaskedAutoregressiveFlow(...))` uses the (expensive) forward-mode calculation to compute log-probabilities and the (cheap) reverse-mode calculation to compute samples. See "Example Use" [below] for more details.
Given a `shift_and_log_scale_fn`, the forward and inverse transformations are (a sequence of) affine transformations. A "valid" `shift_and_log_scale_fn` must compute each `shift` (aka `loc` or "mu" in [Germain et al. (2015)][1]) and `log(scale)` (aka "alpha" in [Germain et al. (2015)][1]) such that each is broadcastable with the arguments to `forward` and `inverse`, i.e., such that the calculations in `forward` and `inverse` [below] are possible.
For convenience, `masked_autoregressive_default_template` is offered as a possible `shift_and_log_scale_fn` function. It implements the MADE architecture [(Germain et al., 2015)][1]. MADE is a feed-forward network that computes a `shift` and `log(scale)` using `masked_dense` layers in a deep neural network. Weights are masked to ensure the autoregressive property. It is possible that this architecture is suboptimal for your task. To build alternative networks, either change the arguments to `masked_autoregressive_default_template`, use the `masked_dense` function to roll out your own, or use some other architecture, e.g., using `tf.layers`.
Assuming `shift_and_log_scale_fn` has valid shape and autoregressive semantics, the forward transformation is

```python
def forward(x):
  y = zeros_like(x)
  event_size = x.shape[-1]
  for _ in range(event_size):
    shift, log_scale = shift_and_log_scale_fn(y)
    y = x * math_ops.exp(log_scale) + shift
  return y
```
and the inverse transformation is

```python
def inverse(y):
  shift, log_scale = shift_and_log_scale_fn(y)
  return (y - shift) / math_ops.exp(log_scale)
```
Notice that the `inverse` does not need a for-loop. This is because in the forward pass each calculation of `shift` and `log_scale` is based on the `y` calculated so far (not `x`). In the `inverse`, the `y` is fully known, so a single call to `shift_and_log_scale_fn` is equivalent to the scaling used in `forward` after `event_size` passes, i.e., to the "last" `y` used to compute `shift`, `log_scale`. (Roughly speaking, this also proves the transform is bijective.)
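As a sanity check of the two routines above, the following NumPy sketch uses a hypothetical, hand-written `shift_and_log_scale_fn` (strictly lower-triangular linear maps, chosen only for illustration) and verifies that `inverse(forward(x))` recovers `x`:

```python
import numpy as np

def toy_shift_and_log_scale_fn(y):
  # Hypothetical autoregressive fn: entry i of shift/log_scale depends
  # only on entries 0..i-1 of the input (strictly lower-triangular masks).
  d = y.shape[-1]
  mask = (np.arange(d)[:, None] < np.arange(d)[None, :]).astype(y.dtype)
  w_shift = 0.1 * np.ones((d, d)) * mask
  w_log_scale = 0.05 * np.ones((d, d)) * mask
  return y @ w_shift, y @ w_log_scale

def forward(x):
  y = np.zeros_like(x)
  event_size = x.shape[-1]
  for _ in range(event_size):
    shift, log_scale = toy_shift_and_log_scale_fn(y)
    y = x * np.exp(log_scale) + shift
  return y

def inverse(y):
  shift, log_scale = toy_shift_and_log_scale_fn(y)
  return (y - shift) / np.exp(log_scale)

x = np.random.RandomState(0).randn(5)
print(np.allclose(inverse(forward(x)), x))  # True: the round trip recovers x.
```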
Example Use

```python
tfd = tf.contrib.distributions
tfb = tfd.bijectors

dims = 5

# A common choice for a normalizing flow is to use a Gaussian for the base
# distribution. (However, any continuous distribution would work.) E.g.,
maf = tfd.TransformedDistribution(
    distribution=tfd.Normal(loc=0., scale=1.),
    bijector=tfb.MaskedAutoregressiveFlow(
        shift_and_log_scale_fn=tfb.masked_autoregressive_default_template(
            hidden_layers=[512, 512])),
    event_shape=[dims])

x = maf.sample()  # Expensive; uses `tf.while_loop`, no Bijector caching.
maf.log_prob(x)   # Almost free; uses Bijector caching.
maf.log_prob(0.)  # Cheap; no `tf.while_loop` despite no Bijector caching.

# [Papamakarios et al. (2017)][3] also describe an Inverse Autoregressive
# Flow [(Kingma et al., 2016)][2]:
iaf = tfd.TransformedDistribution(
    distribution=tfd.Normal(loc=0., scale=1.),
    bijector=tfb.Invert(tfb.MaskedAutoregressiveFlow(
        shift_and_log_scale_fn=tfb.masked_autoregressive_default_template(
            hidden_layers=[512, 512]))),
    event_shape=[dims])

x = iaf.sample()  # Cheap; no `tf.while_loop` despite no Bijector caching.
iaf.log_prob(x)   # Almost free; uses Bijector caching.
iaf.log_prob(0.)  # Expensive; uses `tf.while_loop`, no Bijector caching.

# In many (if not most) cases the default `shift_and_log_scale_fn` will be a
# poor choice. Here's an example of using a "shift only" version and with a
# different number/depth of hidden layers.
shift_only = True
maf_no_scale_hidden2 = tfd.TransformedDistribution(
    distribution=tfd.Normal(loc=0., scale=1.),
    bijector=tfb.MaskedAutoregressiveFlow(
        tfb.masked_autoregressive_default_template(
            hidden_layers=[32],
            shift_only=shift_only),
        is_constant_jacobian=shift_only),
    event_shape=[dims])
```
[1]: Mathieu Germain, Karol Gregor, Iain Murray, and Hugo Larochelle. MADE: Masked Autoencoder for Distribution Estimation. In International Conference on Machine Learning, 2015. https://arxiv.org/abs/1502.03509
[2]: Diederik P. Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, and Max Welling. Improving Variational Inference with Inverse Autoregressive Flow. In Neural Information Processing Systems, 2016. https://arxiv.org/abs/1606.04934
[3]: George Papamakarios, Theo Pavlakou, and Iain Murray. Masked Autoregressive Flow for Density Estimation. In Neural Information Processing Systems, 2017. https://arxiv.org/abs/1705.07057
Properties
dtype: dtype of `Tensor`s transformable by this bijector.
event_ndims: Returns the number of event dimensions this bijector operates on.
graph_parents: Returns this `Bijector`'s `graph_parents` as a Python list.
is_constant_jacobian: Python `bool`; true iff the Jacobian is not a function of x. Note: the Jacobian is either constant for both forward and inverse or neither.
name: Returns the string name of this `Bijector`.
validate_args: Returns True if `Tensor` arguments will be validated.
Methods
__init__
`__init__(shift_and_log_scale_fn, is_constant_jacobian=False, validate_args=False, unroll_loop=False, name=None)`
Creates the MaskedAutoregressiveFlow bijector.
Args:
shift_and_log_scale_fn: Python `callable` which computes `shift` and `log_scale` from both the forward domain (`x`) and the inverse domain (`y`). Calculation must respect the "autoregressive property" (see class docstring). Suggested default: `masked_autoregressive_default_template(hidden_layers=...)`. Typically the function contains `tf.Variables` and is wrapped using `tf.make_template`. Returning `None` for either (both) `shift`, `log_scale` is equivalent to (but more efficient than) returning zero.
is_constant_jacobian: Python `bool`. Default: `False`. When `True` the implementation assumes `log_scale` does not depend on the forward domain (`x`) or inverse domain (`y`) values. (No validation is made; `is_constant_jacobian=False` is always safe but possibly computationally inefficient.)
validate_args: Python `bool` indicating whether arguments should be checked for correctness.
unroll_loop: Python `bool` indicating whether the `tf.while_loop` in `_forward` should be replaced with a static for loop. Requires that the final dimension of `x` be known at graph construction time. Defaults to `False`.
name: Python `str`, name given to ops managed by this object.
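For illustration of these constructor arguments, a direct (TF1-style, graph-mode) usage might look like the sketch below; the hidden-layer sizes and `unroll_loop=True` are arbitrary choices for the example, not recommendations:

```python
import tensorflow as tf

tfd = tf.contrib.distributions
tfb = tfd.bijectors

dims = 5  # event size, known statically (required when unroll_loop=True)
maf_bijector = tfb.MaskedAutoregressiveFlow(
    shift_and_log_scale_fn=tfb.masked_autoregressive_default_template(
        hidden_layers=[32, 32]),
    unroll_loop=True)  # static Python for-loop instead of tf.while_loop

x = tf.random_normal([10, dims])       # a batch of 10 events
y = maf_bijector.forward(x)            # expensive: event_size sequential passes
x_recovered = maf_bijector.inverse(y)  # cheap: a single network pass
```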
forward
`forward(x, name='forward')`
Returns the forward `Bijector` evaluation, i.e., Y = g(X).
Args:
x: `Tensor`. The input to the "forward" evaluation.
name: The name to give this op.
Returns:
`Tensor`.
Raises:
TypeError: if `self.dtype` is specified and `x.dtype` is not `self.dtype`.
NotImplementedError: if `_forward` is not implemented.
forward_event_shape
`forward_event_shape(input_shape)`
Shape of a single sample from a single batch as a `TensorShape`.
Same meaning as `forward_event_shape_tensor`. May be only partially defined.
Args:
input_shape: `TensorShape` indicating event-portion shape passed into `forward` function.
Returns:
forward_event_shape: `TensorShape` indicating event-portion shape after applying `forward`. Possibly unknown.
forward_event_shape_tensor
`forward_event_shape_tensor(input_shape, name='forward_event_shape_tensor')`
Shape of a single sample from a single batch as an `int32` 1D `Tensor`.
Args:
input_shape: `Tensor`, `int32` vector indicating event-portion shape passed into `forward` function.
name: name to give to the op.
Returns:
forward_event_shape_tensor: `Tensor`, `int32` vector indicating event-portion shape after applying `forward`.
forward_log_det_jacobian
`forward_log_det_jacobian(x, name='forward_log_det_jacobian')`
Returns the forward_log_det_jacobian, i.e., `log(det(dY/dX))(X)`.
Args:
x: `Tensor`. The input to the "forward" Jacobian evaluation.
name: The name to give this op.
Returns:
`Tensor`, if this bijector is injective. If not injective this is not implemented.
Raises:
TypeError: if `self.dtype` is specified and `x.dtype` is not `self.dtype`.
NotImplementedError: if neither `_forward_log_det_jacobian` nor {`_inverse`, `_inverse_log_det_jacobian`} are implemented, or this is a non-injective bijector.
inverse
`inverse(y, name='inverse')`
Returns the inverse `Bijector` evaluation, i.e., X = g^{-1}(Y).
Args:
y: `Tensor`. The input to the "inverse" evaluation.
name: The name to give this op.
Returns:
`Tensor`, if this bijector is injective. If not injective, returns the k-tuple containing the unique `k` points `(x1, ..., xk)` such that `g(xi) = y`.
Raises:
TypeError: if `self.dtype` is specified and `y.dtype` is not `self.dtype`.
NotImplementedError: if `_inverse` is not implemented.
inverse_event_shape
`inverse_event_shape(output_shape)`
Shape of a single sample from a single batch as a `TensorShape`.
Same meaning as `inverse_event_shape_tensor`. May be only partially defined.
Args:
output_shape: `TensorShape` indicating event-portion shape passed into `inverse` function.
Returns:
inverse_event_shape: `TensorShape` indicating event-portion shape after applying `inverse`. Possibly unknown.
inverse_event_shape_tensor
`inverse_event_shape_tensor(output_shape, name='inverse_event_shape_tensor')`
Shape of a single sample from a single batch as an `int32` 1D `Tensor`.
Args:
output_shape: `Tensor`, `int32` vector indicating event-portion shape passed into `inverse` function.
name: name to give to the op.
Returns:
inverse_event_shape_tensor: `Tensor`, `int32` vector indicating event-portion shape after applying `inverse`.
inverse_log_det_jacobian
`inverse_log_det_jacobian(y, name='inverse_log_det_jacobian')`
Returns the (log o det o Jacobian o inverse)(y).
Mathematically, returns: `log(det(dX/dY))(Y)`. (Recall that: `X = g^{-1}(Y)`.)
Note that `forward_log_det_jacobian` is the negative of this function, evaluated at `g^{-1}(y)`.
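To illustrate that relationship with a hypothetical scalar affine map (the simplest invertible example, not this bijector), take `g(x) = exp(s) * x + b`; then the forward log det Jacobian is `s`, the inverse log det Jacobian is `-s`, and the former equals the negative of the latter evaluated at `g(x)`:

```python
import numpy as np

# Hypothetical scalar affine bijection, used only to illustrate the identity
# forward_log_det_jacobian(x) == -inverse_log_det_jacobian(g(x)).
s, b = 0.3, -1.2
g = lambda x: np.exp(s) * x + b          # forward: y = g(x)
g_inv = lambda y: (y - b) * np.exp(-s)   # inverse: x = g^{-1}(y)

forward_log_det_jacobian = lambda x: s   # log|dy/dx| = log(exp(s)) = s
inverse_log_det_jacobian = lambda y: -s  # log|dx/dy| = -s

x = 0.7
print(np.isclose(forward_log_det_jacobian(x),
                 -inverse_log_det_jacobian(g(x))))  # True
```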
Args:
y: `Tensor`. The input to the "inverse" Jacobian evaluation.
name: The name to give this op.
Returns:
`Tensor`, if this bijector is injective. If not injective, returns the tuple of local log det Jacobians, `log(det(Dg_i^{-1}(y)))`, where `g_i` is the restriction of `g` to the `ith` partition `Di`.
Raises:
TypeError: if `self.dtype` is specified and `y.dtype` is not `self.dtype`.
NotImplementedError: if `_inverse_log_det_jacobian` is not implemented.