Mixture
Inherits From: Distribution
Defined in tensorflow/contrib/distributions/python/ops/mixture.py.
See the guide: Statistical Distributions (contrib) > Mixture Models
Mixture distribution.
The Mixture object implements batched mixture distributions. The mixture model is defined by a Categorical distribution (the mixture) and a python list of Distribution objects.

Methods supported include log_prob, prob, mean, sample, and entropy_lower_bound.
```python
# Create a mixture of two Gaussians:
tfd = tf.contrib.distributions
mix = 0.3
bimix_gauss = tfd.Mixture(
  cat=tfd.Categorical(probs=[mix, 1.-mix]),
  components=[
    tfd.Normal(loc=-1., scale=0.1),
    tfd.Normal(loc=+1., scale=0.5),
])

# Plot the PDF.
import matplotlib.pyplot as plt
x = tf.linspace(-2., 3., int(1e4)).eval()
plt.plot(x, bimix_gauss.prob(x).eval());
```
allow_nan_stats
Python bool describing behavior when a stat is undefined.

Stats return +/- infinity when it makes sense. E.g., the variance of a Cauchy distribution is infinity. However, sometimes the statistic is undefined, e.g., if a distribution's pdf does not achieve a maximum within the support of the distribution, the mode is undefined. If the mean is undefined, then by definition the variance is undefined. E.g., the mean of Student's t for df = 1 is undefined (there is no clear way to say it is either + or - infinity), so the variance = E[(X - mean)**2] is also undefined.

Returns:

allow_nan_stats: Python bool.
batch_shape

Shape of a single sample from a single event index as a TensorShape.

May be partially defined or unknown.

The batch dimensions are indexes into independent, non-identical parameterizations of this distribution.

Returns:

batch_shape: TensorShape, possibly unknown.

cat
components
dtype

The DType of Tensors handled by this Distribution.
event_shape

Shape of a single sample from a single batch as a TensorShape.

May be partially defined or unknown.

Returns:

event_shape: TensorShape, possibly unknown.

name

Name prepended to all ops created by this Distribution.
num_components
parameters

Dictionary of parameters used to instantiate this Distribution.
reparameterization_type

Describes how samples from the distribution are reparameterized.

Currently this is one of the static instances distributions.FULLY_REPARAMETERIZED or distributions.NOT_REPARAMETERIZED.

Returns:

An instance of ReparameterizationType.
validate_args
Python bool indicating possibly expensive checks are enabled.
__init__
```python
__init__(
    cat,
    components,
    validate_args=False,
    allow_nan_stats=True,
    use_static_graph=False,
    name='Mixture'
)
```
Initialize a Mixture distribution.
A Mixture is defined by a Categorical (cat, representing the mixture probabilities) and a list of Distribution objects all having matching dtype, batch shape, event shape, and continuity properties (the components).

The num_classes of cat must be possible to infer at graph construction time and must match len(components).
Args:

cat: A Categorical distribution instance, representing the probabilities of distributions.
components: A list or tuple of Distribution instances. Each instance must have the same type, be defined on the same domain, and have matching event_shape and batch_shape.
validate_args: Python bool, default False. If True, raise a runtime error if batch or event ranks are inconsistent between cat and any of the distributions. This is only checked if the ranks cannot be determined statically at graph construction time.
allow_nan_stats: Boolean, default True. If False, raise an exception if a statistic (e.g. mean/mode/etc...) is undefined for any batch member. If True, batch members with valid parameters leading to undefined statistics will return NaN for this statistic.
use_static_graph: Calls to sample will not rely on dynamic tensor indexing, allowing for some static graph compilation optimizations, but at the expense of sampling all underlying distributions in the mixture. (Possibly useful when running on TPUs.) Default value: False (i.e., use dynamic indexing).
name: A name for this distribution (optional).

Raises:

TypeError: If cat is not a Categorical, or components is not a list or tuple, or the elements of components are not instances of Distribution, or do not have matching dtype.
ValueError: If components is an empty list or tuple, or its elements do not have a statically known event rank; if cat.num_classes cannot be inferred at graph creation time, or the constant value of cat.num_classes is not equal to len(components); or if all components and cat do not have matching static batch shapes, or all components do not have matching static event shapes.
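For illustration, a minimal sketch (hypothetical values, assuming the contrib API above) of a batched mixture, where cat and every component share the same batch shape:

```python
import tensorflow as tf

tfd = tf.contrib.distributions

# A batch of two scalar mixtures, each with two Normal components.
# cat has batch_shape [2] and num_classes 2 == len(components).
batched_mix = tfd.Mixture(
    cat=tfd.Categorical(probs=[[0.3, 0.7], [0.5, 0.5]]),
    components=[
        tfd.Normal(loc=[-1., -2.], scale=[0.1, 0.2]),
        tfd.Normal(loc=[+1., +2.], scale=[0.5, 0.4]),
    ])
print(batched_mix.batch_shape)  # (2,)
print(batched_mix.event_shape)  # ()
```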
batch_shape_tensor

batch_shape_tensor(name='batch_shape_tensor')

Shape of a single sample from a single event index as a 1-D Tensor.

The batch dimensions are indexes into independent, non-identical parameterizations of this distribution.

Args:

name: name to give to the op.

Returns:

batch_shape: Tensor.

cdf
cdf( value, name='cdf' )

Cumulative distribution function.

Given random variable X, the cumulative distribution function cdf is:

cdf(x) := P[X <= x]

Args:

value: float or double Tensor.
name: Python str prepended to names of ops created by this function.

Returns:

cdf: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
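Note that the Mixture docstring above only lists log_prob, prob, mean, sample, and entropy_lower_bound as supported, so cdf may not be implemented for it; as a hedged sketch of the generic Distribution.cdf contract, using a Normal:

```python
import tensorflow as tf

tfd = tf.contrib.distributions
dist = tfd.Normal(loc=0., scale=1.)
with tf.Session() as sess:
  print(sess.run(dist.cdf(0.)))             # 0.5
  print(sess.run(dist.cdf([-1., 0., 1.])))  # ~[0.159, 0.5, 0.841]
```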
copy

copy(**override_parameters_kwargs)

Creates a deep copy of the distribution.

Note: the copy distribution may continue to depend on the original initialization arguments.

Args:

**override_parameters_kwargs: String/value dictionary of initialization arguments to override with new values.

Returns:

distribution: A new instance of type(self) initialized from the union of self.parameters and override_parameters_kwargs, i.e., dict(self.parameters, **override_parameters_kwargs).
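A small sketch of the copy contract (a hypothetical Normal is used here; the override keys are whatever arguments the distribution was built with):

```python
import tensorflow as tf

tfd = tf.contrib.distributions
n = tfd.Normal(loc=0., scale=1.)
# Same class and parameters, except `loc` is overridden.
n2 = n.copy(loc=2.)
print(n2.parameters['loc'])  # 2.0
```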
covariance

covariance(name='covariance')

Covariance.

Covariance is (possibly) defined only for non-scalar-event distributions.

For example, for a length-k, vector-valued distribution, it is calculated as,

Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]

where Cov is a (batch of) k x k matrix, 0 <= (i, j) < k, and E denotes expectation.

Alternatively, for non-vector, multivariate distributions (e.g., matrix-valued, Wishart), Covariance shall return a (batch of) matrices under some vectorization of the events, i.e.,

Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]

where Cov is a (batch of) k' x k' matrices, 0 <= (i, j) < k' = reduce_prod(event_shape), and Vec is some function mapping indices of this distribution's event dimensions to indices of a length-k' vector.

Args:

name: Python str prepended to names of ops created by this function.

Returns:

covariance: Floating-point Tensor with shape [B1, ..., Bn, k', k'] where the first n dimensions are batch coordinates and k' = reduce_prod(self.event_shape).
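The Mixture example above has scalar events, so to illustrate the covariance contract here is a sketch with a vector-valued distribution (MultivariateNormalDiag, chosen for illustration only):

```python
import tensorflow as tf

tfd = tf.contrib.distributions
mvn = tfd.MultivariateNormalDiag(loc=[0., 0.], scale_diag=[1., 2.])
cov = mvn.covariance()  # shape [2, 2]
with tf.Session() as sess:
  print(sess.run(cov))  # [[1., 0.], [0., 4.]]
```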
cross_entropy

cross_entropy( other, name='cross_entropy' )

Computes the (Shannon) cross entropy.

Denote this distribution (self) by P and the other distribution by Q. Assuming P, Q are absolutely continuous with respect to one another and permit densities p(x) dr(x) and q(x) dr(x), (Shannon) cross entropy is defined as:

H[P, Q] = E_p[-log q(X)] = -int_F p(x) log q(x) dr(x)

where F denotes the support of the random variable X ~ P.

Args:

other: tf.distributions.Distribution instance.
name: Python str prepended to names of ops created by this function.

Returns:

cross_entropy: self.dtype Tensor with shape [B1, ..., Bn] representing n different calculations of (Shannon) cross entropy.
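A sketch of this definition with two Normals (chosen because an analytic Normal/Normal KL is registered, so H[P, Q] = H[P] + KL[P || Q] is computable):

```python
import tensorflow as tf

tfd = tf.contrib.distributions
p = tfd.Normal(loc=0., scale=1.)
q = tfd.Normal(loc=1., scale=2.)
with tf.Session() as sess:
  print(sess.run(p.cross_entropy(q)))
  print(sess.run(p.entropy() + p.kl_divergence(q)))  # same value
```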
entropy

entropy(name='entropy')
Shannon entropy in nats.
entropy_lower_bound
entropy_lower_bound(name='entropy_lower_bound')
A lower bound on the entropy of this mixture model.
The bound below is not always very tight, and its usefulness depends on the mixture probabilities and the components in use.
A lower bound is useful for ELBO when the Mixture
is the variational distribution:
\( \log p(x) >= ELBO = \int q(z) \log p(x, z) dz + H[q] \)
where \( p \) is the prior distribution, \( q \) is the variational, and \( H[q] \) is the entropy of \( q \). If there is a lower bound \( G[q] \) such that \( H[q] \geq G[q] \) then it can be used in place of \( H[q] \).
For a mixture of distributions \( q(Z) = \sum_i c_i q_i(Z) \) with \( \sum_i c_i = 1 \), by the concavity of \( f(x) = -x \log x \), a simple lower bound is:
\( \begin{align} H[q] & = - \int q(z) \log q(z) dz \\ & = - \int (\sum_i c_i q_i(z)) \log(\sum_i c_i q_i(z)) dz \\ & \geq - \sum_i c_i \int q_i(z) \log q_i(z) dz \\ & = \sum_i c_i H[q_i] \end{align} \)
This is the term we calculate below for \( G[q] \).
name
: A name for this operation (optional).A lower bound on the Mixture's entropy.
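Continuing the two-Gaussian example from the top of this page (assumes an active default session), a minimal sketch:

```python
# G[q] = sum_i c_i H[q_i]:
# 0.3 * H[Normal(-1., 0.1)] + 0.7 * H[Normal(1., 0.5)]
lower_bound = bimix_gauss.entropy_lower_bound()
print(lower_bound.eval())
```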
event_shape_tensor
event_shape_tensor(name='event_shape_tensor')
Shape of a single sample from a single batch as a 1-D int32 Tensor.

Args:

name: name to give to the op.

Returns:

event_shape: Tensor.

is_scalar_batch
is_scalar_batch(name='is_scalar_batch')
Indicates that batch_shape == [].

Args:

name: Python str prepended to names of ops created by this function.

Returns:

is_scalar_batch: bool scalar Tensor.

is_scalar_event
is_scalar_event(name='is_scalar_event')
Indicates that event_shape == [].

Args:

name: Python str prepended to names of ops created by this function.

Returns:

is_scalar_event: bool scalar Tensor.

kl_divergence
kl_divergence( other, name='kl_divergence' )
Computes the Kullback--Leibler divergence.
Denote this distribution (self) by p and the other distribution by q. Assuming p, q are absolutely continuous with respect to reference measure r, the KL divergence is defined as:

KL[p, q] = E_p[log(p(X)/q(X))]
         = -int_F p(x) log q(x) dr(x) + int_F p(x) log p(x) dr(x)
         = H[p, q] - H[p]

where F denotes the support of the random variable X ~ p, H[., .] denotes (Shannon) cross entropy, and H[.] denotes (Shannon) entropy.
Args:

other: tf.distributions.Distribution instance.
name: Python str prepended to names of ops created by this function.

Returns:

kl_divergence: self.dtype Tensor with shape [B1, ..., Bn] representing n different calculations of the Kullback-Leibler divergence.
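A sketch with two Normals (a registered analytic KL exists for this pair; a Mixture generally has no closed-form KL):

```python
import tensorflow as tf

tfd = tf.contrib.distributions
p = tfd.Normal(loc=0., scale=1.)
q = tfd.Normal(loc=1., scale=1.)
# For equal scales, KL[p || q] = (mu_p - mu_q)**2 / (2 * sigma**2).
with tf.Session() as sess:
  print(sess.run(p.kl_divergence(q)))  # 0.5
```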
log_cdf

log_cdf( value, name='log_cdf' )

Log cumulative distribution function.

Given random variable X, the cumulative distribution function cdf is:

log_cdf(x) := Log[ P[X <= x] ]

Often, a numerical approximation can be used for log_cdf(x) that yields a more accurate answer than simply taking the logarithm of the cdf when x << -1.

Args:

value: float or double Tensor.
name: Python str prepended to names of ops created by this function.

Returns:

logcdf: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
log_prob

log_prob( value, name='log_prob' )

Log probability density/mass function.

Args:

value: float or double Tensor.
name: Python str prepended to names of ops created by this function.

Returns:

log_prob: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
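For a Mixture, log_prob is the log of the mixture-weighted sum of component densities; a minimal sketch continuing the two-Gaussian example above (assumes an active default session):

```python
x = [-1., 0., 1.]
# log( 0.3 * N(x; -1., 0.1) + 0.7 * N(x; 1., 0.5) ), elementwise in x.
print(bimix_gauss.log_prob(x).eval())
```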
log_survival_function

log_survival_function( value, name='log_survival_function' )

Log survival function.

Given random variable X, the survival function is defined:

log_survival_function(x) = Log[ P[X > x] ]
                         = Log[ 1 - P[X <= x] ]
                         = Log[ 1 - cdf(x) ]

Typically, different numerical approximations can be used for the log survival function, which are more accurate than 1 - cdf(x) when x >> 1.

Args:

value: float or double Tensor.
name: Python str prepended to names of ops created by this function.

Returns:

Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
mean
mean(name='mean')
Mean.
mode
mode(name='mode')
Mode.
param_shapes
param_shapes( cls, sample_shape, name='DistributionParamShapes' )
Shapes of parameters given the desired shape of a call to sample().

This is a class method that describes what key/value arguments are required to instantiate the given Distribution so that a particular shape is returned for that instance's call to sample().

Subclasses should override class method _param_shapes.

Args:

sample_shape: Tensor or python list/tuple. Desired shape of a call to sample().
name: name to prepend ops with.

Returns:

dict of parameter name to Tensor shapes.
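Mixture itself is parameterized by cat and components rather than by tensors, so as a sketch of this class method's contract consider Normal:

```python
import tensorflow as tf

tfd = tf.contrib.distributions
shapes = tfd.Normal.param_shapes(sample_shape=[100])
# A dict mapping 'loc' and 'scale' to shape Tensors equal to [100].
print(shapes)
```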
param_static_shapes
param_static_shapes( cls, sample_shape )
param_shapes with static (i.e. TensorShape) shapes.

This is a class method that describes what key/value arguments are required to instantiate the given Distribution so that a particular shape is returned for that instance's call to sample(). Assumes that the sample's shape is known statically.

Subclasses should override class method _param_shapes to return constant-valued tensors when constant values are fed.

Args:

sample_shape: TensorShape or python list/tuple. Desired shape of a call to sample().

Returns:

dict of parameter name to TensorShape.

Raises:

ValueError: if sample_shape is a TensorShape and is not fully defined.

prob
prob( value, name='prob' )
Probability density/mass function.
Args:

value: float or double Tensor.
name: Python str prepended to names of ops created by this function.

Returns:

prob: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
quantile

quantile( value, name='quantile' )

Quantile function. Aka "inverse cdf" or "percent point function".

Given random variable X and p in [0, 1], the quantile is:

quantile(p) := x such that P[X <= x] == p

Args:

value: float or double Tensor.
name: Python str prepended to names of ops created by this function.

Returns:

quantile: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
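quantile may not be implemented for every distribution (the Mixture docstring above does not list it among the supported methods); a hedged sketch with a Normal:

```python
import tensorflow as tf

tfd = tf.contrib.distributions
dist = tfd.Normal(loc=0., scale=1.)
with tf.Session() as sess:
  print(sess.run(dist.quantile(0.5)))    # 0.0, the median
  print(sess.run(dist.quantile(0.975)))  # ~1.96
```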
sample

sample( sample_shape=(), seed=None, name='sample' )

Generate samples of the specified shape.

Note that a call to sample() without arguments will generate a single sample.

Args:

sample_shape: 0D or 1D int32 Tensor. Shape of the generated samples.
seed: Python integer seed for RNG.
name: name to give to the op.

Returns:

samples: a Tensor with prepended dimensions sample_shape.
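Sampling a Mixture draws component indices from cat and then samples the chosen components; a minimal sketch continuing the two-Gaussian example above (assumes an active default session):

```python
samples = bimix_gauss.sample(5, seed=42)  # shape [5]
print(samples.eval())
batch = bimix_gauss.sample([2, 3])        # shape [2, 3]
```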
stddev

stddev(name='stddev')

Standard deviation.

Standard deviation is defined as,

stddev = E[(X - E[X])**2]**0.5

where X is the random variable associated with this distribution, E denotes expectation, and stddev.shape = batch_shape + event_shape.

Args:

name: Python str prepended to names of ops created by this function.

Returns:

stddev: Floating-point Tensor with shape identical to batch_shape + event_shape, i.e., the same shape as self.mean().
survival_function

survival_function( value, name='survival_function' )

Survival function.

Given random variable X, the survival function is defined:

survival_function(x) = P[X > x]
                     = 1 - P[X <= x]
                     = 1 - cdf(x)

Args:

value: float or double Tensor.
name: Python str prepended to names of ops created by this function.

Returns:

Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
variance
variance(name='variance')
Variance.
Variance is defined as,

Var = E[(X - E[X])**2]

where X is the random variable associated with this distribution, E denotes expectation, and Var.shape = batch_shape + event_shape.

Args:

name: Python str prepended to names of ops created by this function.

Returns:

variance: Floating-point Tensor with shape identical to batch_shape + event_shape, i.e., the same shape as self.mean().
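A sketch of the variance/stddev contract with a batched Normal (hedged; not every distribution implements these statistics):

```python
import tensorflow as tf

tfd = tf.contrib.distributions
dist = tfd.Normal(loc=[0., 1.], scale=[1., 2.])  # batch_shape [2]
with tf.Session() as sess:
  print(sess.run(dist.variance()))  # [1., 4.]
  print(sess.run(dist.stddev()))    # [1., 2.]
```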
© 2018 The TensorFlow Authors. All rights reserved.
Licensed under the Creative Commons Attribution License 3.0.
Code samples licensed under the Apache 2.0 License.
https://www.tensorflow.org/api_docs/python/tf/contrib/distributions/Mixture