Chapter 18
Confronting the Partition Function
In section 16.2.2 we saw that many probabilistic models (commonly known as
undirected graphical models) are defined by an unnormalized probability
distribution p̃(x; θ). We must normalize p̃ by dividing by a partition function
Z(θ) in order to obtain a valid probability distribution:

    p(x; \theta) = \frac{1}{Z(\theta)} \tilde{p}(x; \theta).    (18.1)
The partition function is an integral (for continuous variables) or sum (for discrete
variables) over the unnormalized probability of all states:

    \int \tilde{p}(x) \, dx    (18.2)

or

    \sum_x \tilde{p}(x).    (18.3)
This operation is intractable for many interesting models.
As we will see in chapter 20, several deep learning models are designed to
have a tractable normalizing constant, or are designed to be used in ways that do
not involve computing p(x) at all. However, other models directly confront the
challenge of intractable partition functions. In this chapter, we describe techniques
used for training and evaluating models that have intractable partition functions.
18.1 The Log-Likelihood Gradient
What makes learning undirected models by maximum likelihood particularly
difficult is that the partition function depends on the parameters. The gradient of
the log-likelihood with respect to the parameters has a term corresponding to the
gradient of the partition function:
    \nabla_\theta \log p(x; \theta) = \nabla_\theta \log \tilde{p}(x; \theta) - \nabla_\theta \log Z(\theta).    (18.4)
This is a well-known decomposition into the positive phase and negative phase
of learning.
For most undirected models of interest, the negative phase is difficult. Models
with no latent variables or with few interactions between latent variables typically
have a tractable positive phase. The quintessential example of a model with a
straightforward positive phase and difficult negative phase is the RBM, which has
hidden units that are conditionally independent from each other given the visible
units. The case where the positive phase is difficult, with complicated interactions
between latent variables, is primarily covered in chapter 19. This chapter focuses
on the difficulties of the negative phase.
Let us look more closely at the gradient of log Z:
    \nabla_\theta \log Z    (18.5)
    = \frac{\nabla_\theta Z}{Z}    (18.6)
    = \frac{\nabla_\theta \sum_x \tilde{p}(x)}{Z}    (18.7)
    = \frac{\sum_x \nabla_\theta \tilde{p}(x)}{Z}.    (18.8)
For models that guarantee p(x) > 0 for all x, we can substitute exp(log p̃(x))
for p̃(x):
    \frac{\sum_x \nabla_\theta \exp\left(\log \tilde{p}(x)\right)}{Z}    (18.9)
    = \frac{\sum_x \exp\left(\log \tilde{p}(x)\right) \nabla_\theta \log \tilde{p}(x)}{Z}    (18.10)
    = \frac{\sum_x \tilde{p}(x) \nabla_\theta \log \tilde{p}(x)}{Z}    (18.11)
    = \sum_x p(x) \nabla_\theta \log \tilde{p}(x)    (18.12)
    = \mathbb{E}_{x \sim p(x)} \nabla_\theta \log \tilde{p}(x).    (18.13)
This derivation made use of summation over discrete x, but a similar result
applies using integration over continuous x. In the continuous version of the
derivation, we use Leibniz's rule for differentiation under the integral sign to obtain
the identity

    \nabla_\theta \int \tilde{p}(x) \, dx = \int \nabla_\theta \tilde{p}(x) \, dx.    (18.14)
This identity is applicable only under certain regularity conditions on p̃ and
∇_θ p̃(x). In measure theoretic terms, the conditions are: (i) the unnormalized
distribution p̃ must be a Lebesgue-integrable function of x for every value of θ;
(ii) the gradient ∇_θ p̃(x) must exist for all θ and almost all x; (iii) there must
exist an integrable function R(x) that bounds ∇_θ p̃(x), in the sense that
max_i |∇_{θ_i} p̃(x)| ≤ R(x) for all θ and almost all x. Fortunately, most machine
learning models of interest have these properties.
This identity

    \nabla_\theta \log Z = \mathbb{E}_{x \sim p(x)} \nabla_\theta \log \tilde{p}(x)    (18.15)
is the basis for a variety of Monte Carlo methods for approximately maximizing
the likelihood of models with intractable partition functions.
The Monte Carlo approach to learning undirected models provides an intuitive
framework in which we can think of both the positive phase and the negative
phase. In the positive phase, we increase log p̃(x) for x drawn from the data. In
the negative phase, we decrease the partition function by decreasing log p̃(x) for
x drawn from the model distribution.
In the deep learning literature, it is common to parametrize log p̃ in terms of
an energy function (equation 16.7). In this case, we can interpret the positive
phase as pushing down on the energy of training examples and the negative phase
as pushing up on the energy of samples drawn from the model, as illustrated in
figure 18.1.
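Equation 18.15 suggests a simple stochastic estimator of the full log-likelihood
gradient: average ∇_θ log p̃ over a minibatch of data (positive phase) and subtract
its average over samples drawn from the model (negative phase). The following is a
minimal sketch of this estimator, assuming a hypothetical fully visible Boltzmann
machine with log p̃(x) = xᵀWx + bᵀx; the function names and the assumption that
model samples are already available are illustrative, not part of the text.

    import numpy as np

    # Minimal sketch: Monte Carlo estimate of the log-likelihood gradient for a
    # hypothetical fully visible Boltzmann machine with log p~(x) = x' W x + b' x,
    # so that grad_W log p~(x) = x x' and grad_b log p~(x) = x.
    def log_likelihood_gradient(data_batch, model_samples):
        # Positive phase: expectation of grad log p~ under the data.
        pos_W = np.mean([np.outer(x, x) for x in data_batch], axis=0)
        pos_b = np.mean(data_batch, axis=0)
        # Negative phase: expectation of grad log p~ under the model (eq. 18.15).
        neg_W = np.mean([np.outer(x, x) for x in model_samples], axis=0)
        neg_b = np.mean(model_samples, axis=0)
        return pos_W - neg_W, pos_b - neg_b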
18.2 Stochastic Maximum Likelihood and Contrastive Divergence
The naive way of implementing equation 18.15 is to compute it by burning in
a set of Markov chains from a random initialization every time the gradient is
needed. When learning is performed using stochastic gradient descent, this means
the chains must be burned in once per gradient step. This approach leads to the
training procedure presented in algorithm 18.1. The high cost of burning in the
Markov chains in the inner loop makes this procedure computationally infeasible,
but this procedure is the starting point that other more practical algorithms aim
to approximate.
Algorithm 18.1 A naive MCMC algorithm for maximizing the log-likelihood with
an intractable partition function using gradient ascent.

  Set ε, the step size, to a small positive number.
  Set k, the number of Gibbs steps, high enough to allow burn in. Perhaps 100 to
  train an RBM on a small image patch.
  while not converged do
    Sample a minibatch of m examples {x^(1), ..., x^(m)} from the training set.
    g ← (1/m) Σ_{i=1}^m ∇_θ log p̃(x^(i); θ).
    Initialize a set of m samples {x̃^(1), ..., x̃^(m)} to random values (e.g., from
    a uniform or normal distribution, or possibly a distribution with marginals
    matched to the model's marginals).
    for i = 1 to k do
      for j = 1 to m do
        x̃^(j) ← gibbs_update(x̃^(j)).
      end for
    end for
    g ← g − (1/m) Σ_{i=1}^m ∇_θ log p̃(x̃^(i); θ).
    θ ← θ + εg.
  end while
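The gibbs_update subroutine above is model-specific. As a concrete, hypothetical
instance (a minimal sketch, not taken from the text), for a binary RBM with weight
matrix W and bias vectors b and c, one block Gibbs step resamples the hidden units
given the visible units and then the visible units given the hidden units:

    import numpy as np

    def sigmoid(a):
        return 1.0 / (1.0 + np.exp(-a))

    # Minimal sketch: one block Gibbs step for a hypothetical binary RBM with
    # energy E(v, h) = -b'v - c'h - v' W h. The parameter names are illustrative.
    def gibbs_update(v, W, b, c, rng=np.random):
        # Sample hidden units given visible units (conditionally independent).
        h = (rng.rand(W.shape[1]) < sigmoid(c + v @ W)).astype(float)
        # Sample visible units given hidden units.
        v_new = (rng.rand(W.shape[0]) < sigmoid(b + W @ h)).astype(float)
        return v_new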
We can view the MCMC approach to maximum likelihood as trying to achieve
balance between two forces, one pushing up on the model distribution where the
data occurs, and another pushing down on the model distribution where the model
samples occur. Figure 18.1 illustrates this process. The two forces correspond to
maximizing log p̃ and minimizing log Z. Several approximations to the negative
phase are possible. Each of these approximations can be understood as making
the negative phase computationally cheaper but also making it push down in the
wrong locations.
Because the negative phase involves drawing samples from the model’s distri-
bution, we can think of it as finding points that the model believes in strongly.
Because the negative phase acts to reduce the probability of those points, they
are generally considered to represent the model’s incorrect beliefs about the world.
They are frequently referred to in the literature as “hallucinations” or “fantasy
particles.” In fact, the negative phase has been proposed as a possible explanation
[Figure 18.1 image: two panels, "The positive phase" and "The negative phase,"
each plotting p_model(x) and p_data(x) against x.]
Figure 18.1: The view of algorithm 18.1 as having a “positive phase” and “negative phase.”
(Left)In the positive phase, we sample points from the data distribution, and push up on
their unnormalized probability. This means points that are likely in the data get pushed
up on more. (Right)In the negative phase, we sample points from the model distribution,
and push down on their unnormalized probability. This counteracts the positive phase’s
tendency to just add a large constant to the unnormalized probability everywhere. When
the data distribution and the model distribution are equal, the positive phase has the
same chance to push up at a point as the negative phase has to push down. When this
occurs, there is no longer any gradient (in expectation) and training must terminate.
for dreaming in humans and other animals (Crick and Mitchison, 1983), the idea
being that the brain maintains a probabilistic model of the world and follows
the gradient of log p̃ while experiencing real events while awake and follows the
negative gradient of log p̃ to minimize log Z while sleeping and experiencing events
sampled from the current model. This view explains much of the language used to
describe algorithms with a positive and negative phase, but it has not been proven
to be correct with neuroscientific experiments. In machine learning models, it is
usually necessary to use the positive and negative phase simultaneously, rather
than in separate time periods of wakefulness and REM sleep. As we will see in
section 19.5, other machine learning algorithms draw samples from the model
distribution for other purposes and such algorithms could also provide an account
for the function of dream sleep.
Given this understanding of the role of the positive and negative phase of
learning, we can attempt to design a less expensive alternative to algorithm 18.1.
The main cost of the naive MCMC algorithm is the cost of burning in the Markov
chains from a random initialization at each step. A natural solution is to initialize
the Markov chains from a distribution that is very close to the model distribution,
so that the burn in operation does not take as many steps.
The contrastive divergence (CD, or CD-k to indicate CD with k Gibbs steps)
algorithm initializes the Markov chain at each step with samples from the data
distribution (Hinton, 2000, 2010). This approach is presented as algorithm 18.2.
Obtaining samples from the data distribution is free, because they are already
available in the data set. Initially, the data distribution is not close to the model
distribution, so the negative phase is not very accurate. Fortunately, the positive
phase can still accurately increase the model’s probability of the data. After the
positive phase has had some time to act, the model distribution is closer to the
data distribution, and the negative phase starts to become accurate.
Algorithm 18.2 The contrastive divergence algorithm, using gradient ascent as
the optimization procedure.

  Set ε, the step size, to a small positive number.
  Set k, the number of Gibbs steps, high enough to allow a Markov chain sampling
  from p(x; θ) to mix when initialized from p_data. Perhaps 1-20 to train an RBM
  on a small image patch.
  while not converged do
    Sample a minibatch of m examples {x^(1), ..., x^(m)} from the training set.
    g ← (1/m) Σ_{i=1}^m ∇_θ log p̃(x^(i); θ).
    for i = 1 to m do
      x̃^(i) ← x^(i).
    end for
    for i = 1 to k do
      for j = 1 to m do
        x̃^(j) ← gibbs_update(x̃^(j)).
      end for
    end for
    g ← g − (1/m) Σ_{i=1}^m ∇_θ log p̃(x̃^(i); θ).
    θ ← θ + εg.
  end while
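Combining algorithm 18.2 with the hypothetical RBM sketch above (reusing that
sketch's sigmoid and gibbs_update), one CD-k parameter update might look roughly
as follows; this is an illustration under those assumptions, not reference code:

    import numpy as np

    # Minimal sketch of one CD-k update for the hypothetical binary RBM above
    # (depends on the sigmoid and gibbs_update functions from the earlier sketch).
    # For that RBM, grad_W log p~(v) = outer(v, P(h=1|v)), grad_b = v, and
    # grad_c = P(h=1|v).
    def cd_k_update(V, W, b, c, k=1, eps=0.01, rng=np.random):
        m = V.shape[0]
        # The defining CD choice: negative-phase chains start at the data.
        V_tilde = V.copy()
        for _ in range(k):
            V_tilde = np.array([gibbs_update(v, W, b, c, rng) for v in V_tilde])
        ph_data = sigmoid(c + V @ W)          # P(h=1 | v) at the data points
        ph_model = sigmoid(c + V_tilde @ W)   # P(h=1 | v) at the chain samples
        g_W = (V.T @ ph_data - V_tilde.T @ ph_model) / m
        g_b = (V - V_tilde).mean(axis=0)
        g_c = (ph_data - ph_model).mean(axis=0)
        return W + eps * g_W, b + eps * g_b, c + eps * g_c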
Of course, CD is still an approximation to the correct negative phase. The
main way that CD qualitatively fails to implement the correct negative phase
is that it fails to suppress regions of high probability that are far from actual
training examples. These regions that have high probability under the model but
low probability under the data generating distribution are called spurious modes.
Figure 18.2 illustrates why this happens. Essentially, it is because modes in the
model distribution that are far from the data distribution will not be visited by
[Figure 18.2 image: a plot of p_model(x) and p_data(x) against x.]
Figure 18.2: An illustration of how the negative phase of contrastive divergence (algo-
rithm 18.2) can fail to suppress spurious modes. A spurious mode is a mode that is
present in the model distribution but absent in the data distribution. Because contrastive
divergence initializes its Markov chains from data points and runs the Markov chain for
only a few steps, it is unlikely to visit modes in the model that are far from the data
points. This means that when sampling from the model, we will sometimes get samples
that do not resemble the data. It also means that due to wasting some of its probability
mass on these modes, the model will struggle to place high probability mass on the correct
modes. For the purpose of visualization, this figure uses a somewhat simplified concept
of distance—the spurious mode is far from the correct mode along the number line in
ℝ. This corresponds to a Markov chain based on making local moves with a single
x variable in ℝ
. For most deep probabilistic models, the Markov chains are based on Gibbs
sampling and can make non-local moves of individual variables but cannot move all of
the variables simultaneously. For these problems, it is usually better to consider the edit
distance between modes, rather than the Euclidean distance. However, edit distance in a
high dimensional space is difficult to depict in a 2-D plot.
Markov chains initialized at training points, unless k is very large.
Carreira-Perpiñan and Hinton (2005) showed experimentally that the CD
estimator is biased for RBMs and fully visible Boltzmann machines, in that it
converges to different points than the maximum likelihood estimator. They argue
that because the bias is small, CD could be used as an inexpensive way to initialize
a model that could later be fine-tuned via more expensive MCMC methods. Bengio
and Delalleau (2009) showed that CD can be interpreted as discarding the smallest
terms of the correct MCMC update gradient, which explains the bias.
CD is useful for training shallow models like RBMs. These can in turn be
stacked to initialize deeper models like DBNs or DBMs. However, CD does not
provide much help for training deeper models directly. This is because it is difficult
to obtain samples of the hidden units given samples of the visible units. Since the
hidden units are not included in the data, initializing from training points cannot
solve the problem. Even if we initialize the visible units from the data, we will still
need to burn in a Markov chain sampling from the distribution over the hidden
units conditioned on those visible samples.
The CD algorithm can be thought of as penalizing the model for having a
Markov chain that changes the input rapidly when the input comes from the data.
This means training with CD somewhat resembles autoencoder training. Even
though CD is more biased than some of the other training methods, it can be
useful for pretraining shallow models that will later be stacked. This is because
the earliest models in the stack are encouraged to copy more information up to
their latent variables, thereby making it available to the later models. This should
be thought of as an often-exploitable side effect of CD training rather than
a principled design advantage.
Sutskever and Tieleman (2010) showed that the CD update direction is not the
gradient of any function. This allows for situations where CD could cycle forever,
but in practice this is not a serious problem.
A different strategy that resolves many of the problems with CD is to initialize
the Markov chains at each gradient step with their states from the previous
gradient step. This approach was first discovered under the name stochastic
maximum likelihood (SML) in the applied mathematics and statistics community
(Younes, 1998) and later independently rediscovered under the name persistent
contrastive divergence (PCD, or PCD-k to indicate the use of k Gibbs steps per
update) in the deep learning community (Tieleman, 2008). See algorithm 18.3.
The basic idea of this approach is that, so long as the steps taken by the stochastic
gradient algorithm are small, then the model from the previous step will be similar
to the model from the current step. It follows that the samples from the previous
model’s distribution will be very close to being fair samples from the current
model’s distribution, so a Markov chain initialized with these samples will not
require much time to mix.
Because each Markov chain is continually updated throughout the learning
process, rather than restarted at each gradient step, the chains are free to wander
far enough to find all of the model’s modes. SML is thus considerably more
resistant to forming models with spurious modes than CD is. Moreover, because
it is possible to store the state of all of the sampled variables, whether visible or
latent, SML provides an initialization point for both the hidden and visible units.
CD is only able to provide an initialization for the visible units, and therefore
requires burn-in for deep models. SML is able to train deep models efficiently.
Marlin et al. (2010) compared SML to many of the other criteria presented in
this chapter. They found that SML results in the best test set log-likelihood for
an RBM, and that if the RBM’s hidden units are used as features for an SVM
classifier, SML results in the best classification accuracy.
SML is vulnerable to becoming inaccurate if the stochastic gradient algorithm
can move the model faster than the Markov chain can mix between steps. This
can happen if k is too small or ε is too large. The permissible range of values is
unfortunately highly problem-dependent. There is no known way to test formally
whether the chain is successfully mixing between steps. Subjectively, if the learning
rate is too high for the number of Gibbs steps, the human operator will be able
to observe that there is much more variance in the negative phase samples across
gradient steps rather than across different Markov chains. For example, a model
trained on MNIST might sample exclusively 7s on one step. The learning process
will then push down strongly on the mode corresponding to 7s, and the model
might sample exclusively 9s on the next step.
Algorithm 18.3 The stochastic maximum likelihood / persistent contrastive
divergence algorithm using gradient ascent as the optimization procedure.

  Set ε, the step size, to a small positive number.
  Set k, the number of Gibbs steps, high enough to allow a Markov chain sampling
  from p(x; θ + εg) to burn in, starting from samples from p(x; θ). Perhaps 1 for an
  RBM on a small image patch, or 5-50 for a more complicated model like a DBM.
  Initialize a set of m samples {x̃^(1), ..., x̃^(m)} to random values (e.g., from a
  uniform or normal distribution, or possibly a distribution with marginals matched
  to the model's marginals).
  while not converged do
    Sample a minibatch of m examples {x^(1), ..., x^(m)} from the training set.
    g ← (1/m) Σ_{i=1}^m ∇_θ log p̃(x^(i); θ).
    for i = 1 to k do
      for j = 1 to m do
        x̃^(j) ← gibbs_update(x̃^(j)).
      end for
    end for
    g ← g − (1/m) Σ_{i=1}^m ∇_θ log p̃(x̃^(i); θ).
    θ ← θ + εg.
  end while
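In code, the only change relative to the CD sketch above is that the negative-phase
chains persist across calls instead of being reset to the data. A minimal sketch,
again reusing the hypothetical RBM helpers (sigmoid and gibbs_update) from before:

    import numpy as np

    # Minimal PCD/SML sketch. V_persist holds the persistent negative-phase
    # chains: it is initialized randomly once (as in algorithm 18.3) and then
    # carried over from one call to the next rather than reset to the data.
    def pcd_update(V, V_persist, W, b, c, k=1, eps=0.01, rng=np.random):
        m = V.shape[0]
        for _ in range(k):
            V_persist = np.array([gibbs_update(v, W, b, c, rng) for v in V_persist])
        ph_data = sigmoid(c + V @ W)
        ph_model = sigmoid(c + V_persist @ W)
        g_W = (V.T @ ph_data - V_persist.T @ ph_model) / m
        g_b = (V - V_persist).mean(axis=0)
        g_c = (ph_data - ph_model).mean(axis=0)
        return W + eps * g_W, b + eps * g_b, c + eps * g_c, V_persist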
Care must be taken when evaluating the samples from a model trained with
SML. It is necessary to draw the samples starting from a fresh Markov chain
initialized from a random starting point after the model is done training. The
samples present in the persistent negative chains used for training have been
influenced by several recent versions of the model, and thus can make the model
appear to have greater capacity than it actually does.
Berglund and Raiko (2013) performed experiments to examine the bias and
variance in the estimate of the gradient provided by CD and SML. CD proves to
have lower variance than the estimator based on exact sampling. SML has higher
variance. The cause of CD’s low variance is its use of the same training points
in both the positive and negative phase. If the negative phase is initialized from
different training points, the variance rises above that of the estimator based on
exact sampling.
All of these methods based on using MCMC to draw samples from the model
can in principle be used with almost any variant of MCMC. This means that
techniques such as SML can be improved by using any of the enhanced MCMC
techniques described in chapter 17, such as parallel tempering (Desjardins et al.,
2010; Cho et al., 2010).
One approach to accelerating mixing during learning relies not on changing
the Monte Carlo sampling technology but rather on changing the parametrization
of the model and the cost function. Fast PCD or FPCD (Tieleman and Hinton,
2009) involves replacing the parameters θ of a traditional model with an expression

    \theta = \theta^{(\text{slow})} + \theta^{(\text{fast})}.    (18.16)
There are now twice as many parameters as before, and they are added together
element-wise to provide the parameters used by the original model definition. The
fast copy of the parameters is trained with a much larger learning rate, allowing
it to adapt rapidly in response to the negative phase of learning and push the
Markov chain to new territory. This forces the Markov chain to mix rapidly, though
this effect only occurs during learning while the fast weights are free to change.
Typically one also applies significant weight decay to the fast weights, encouraging
them to converge to small values, after only transiently taking on large values long
enough to encourage the Markov chain to change modes.
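A rough sketch of the bookkeeping this implies is shown below; the learning rates
and decay factor are illustrative placeholders rather than recommended values, and
the gradient is assumed to have been computed at θ = θ^(slow) + θ^(fast):

    # Minimal FPCD sketch: maintain slow and fast copies of a parameter array.
    # eps_slow, eps_fast and fast_decay are illustrative placeholder values.
    def fpcd_step(theta_slow, theta_fast, grad, eps_slow=0.01, eps_fast=0.05,
                  fast_decay=0.95):
        theta_slow = theta_slow + eps_slow * grad            # ordinary SML/PCD update
        theta_fast = fast_decay * (theta_fast + eps_fast * grad)  # large step, decayed toward 0
        theta = theta_slow + theta_fast                       # parameters actually used (eq. 18.16)
        return theta, theta_slow, theta_fast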
One key benefit to the MCMC-based methods described in this section is that
they provide an estimate of the gradient of log Z, and thus we can essentially
decompose the problem into the log p̃ contribution and the log Z contribution.
We can then use any other method to tackle log p̃(x), and just add our negative
phase gradient onto the other method's gradient. In particular, this means that
our positive phase can make use of methods that provide only a lower bound on
p̃. Most of the other methods of dealing with log Z presented in this chapter are
incompatible with bound-based positive phase methods.
18.3 Pseudolikelihood
Monte Carlo approximations to the partition function and its gradient directly
confront the partition function. Other approaches sidestep the issue, by training
the model without computing the partition function. Most of these approaches are
based on the observation that it is easy to compute ratios of probabilities in an
undirected probabilistic model. This is because the partition function appears in
both the numerator and the denominator of the ratio and cancels out:
    \frac{p(x)}{p(y)} = \frac{\frac{1}{Z}\tilde{p}(x)}{\frac{1}{Z}\tilde{p}(y)} = \frac{\tilde{p}(x)}{\tilde{p}(y)}.    (18.17)
The pseudolikelihood is based on the observation that conditional probabilities
take this ratio-based form, and thus can be computed without knowledge of the
partition function. Suppose that we partition x into a, b, and c, where a contains
the variables we want to find the conditional distribution over, b contains the
variables we want to condition on, and c contains the variables that are not part
of our query.
    p(a \mid b) = \frac{p(a, b)}{p(b)} = \frac{p(a, b)}{\sum_{a,c} p(a, b, c)} = \frac{\tilde{p}(a, b)}{\sum_{a,c} \tilde{p}(a, b, c)}.    (18.18)
This quantity requires marginalizing out a, which can be a very efficient operation
provided that a and c do not contain very many variables. In the extreme case, a
can be a single variable and c can be empty, making this operation require only as
many evaluations of p̃ as there are values of a single random variable.
Unfortunately, in order to compute the log-likelihood, we need to marginalize
out large sets of variables. If there are n variables total, we must marginalize a set
of size n − 1. By the chain rule of probability,

    \log p(x) = \log p(x_1) + \log p(x_2 \mid x_1) + \dots + \log p(x_n \mid x_{1:n-1}).    (18.19)
In this case, we have made a maximally small, but c can be as large as x_{2:n}. What
if we simply move c into b to reduce the computational cost? This yields the
pseudolikelihood (Besag, 1975) objective function, based on predicting the value
of feature x_i given all of the other features x_{−i}:

    \sum_{i=1}^{n} \log p(x_i \mid x_{-i}).    (18.20)
If each random variable has k different values, this requires only k × n evaluations
of p̃ to compute, as opposed to the k^n evaluations needed to compute the partition
function.
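As a concrete illustration (a minimal sketch, not from the text), for binary variables
the pseudolikelihood of a single example can be computed from an unnormalized
log-probability routine alone, because each conditional compares only the two
settings of x_i with all other variables held fixed:

    import numpy as np

    def pseudolikelihood(x, log_ptilde):
        # Log pseudolikelihood of one binary example x (eq. 18.20 for a single x).
        # log_ptilde(x) returns log p~(x); the partition function never appears,
        # because each conditional is a ratio of unnormalized probabilities.
        total = 0.0
        for i in range(len(x)):
            x_flip = x.copy()
            x_flip[i] = 1 - x[i]                   # the other setting of x_i
            log_num = log_ptilde(x)                # log p~(x_i, x_{-i})
            log_den = np.logaddexp(log_ptilde(x), log_ptilde(x_flip))  # sum over x_i
            total += log_num - log_den
        return total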
This may look like an unprincipled hack, but it can be proven that estimation
by maximizing the pseudolikelihood is asymptotically consistent (Mase, 1995).
Of course, in the case of datasets that do not approach the large sample limit,
pseudolikelihood may display different behavior from the maximum likelihood
estimator.
It is possible to trade computational complexity for deviation from maximum
likelihood behavior by using the generalized pseudolikelihood estimator (Huang
and Ogata, 2002). The generalized pseudolikelihood estimator uses m different sets
S^(i), i = 1, ..., m, of indices of variables that appear together on the left side of the
conditioning bar. In the extreme case of m = 1 and S^(1) = {1, ..., n}, the generalized
pseudolikelihood recovers the log-likelihood. In the extreme case of m = n and
S^(i) = {i}, the generalized pseudolikelihood recovers the pseudolikelihood. The
generalized pseudolikelihood objective function is given by

    \sum_{i=1}^{m} \log p(x_{S^{(i)}} \mid x_{-S^{(i)}}).    (18.21)
The performance of pseudolikelihood-based approaches depends largely on how
the model will be used. Pseudolikelihood tends to perform poorly on tasks that
require a good model of the full joint p(x), such as density estimation and sampling.
However, it can perform better than maximum likelihood for tasks that require only
the conditional distributions used during training, such as filling in small amounts
of missing values. Generalized pseudolikelihood techniques are especially powerful if
the data has regular structure that allows the S index sets to be designed to capture
the most important correlations while leaving out groups of variables that only
have negligible correlation. For example, in natural images, pixels that are widely
separated in space also have weak correlation, so the generalized pseudolikelihood
can be applied with each S set being a small, spatially localized window.
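For instance, on an image whose pixels are arranged in an H × W grid, the index
sets might be built as small windows; the following sketch is purely illustrative,
and its function name and parameters are not from the text:

    # Illustrative construction of spatially localized index sets S for the
    # generalized pseudolikelihood, for variables arranged on an H x W pixel grid.
    def local_window_sets(H, W, window=3, stride=3):
        sets = []
        for top in range(0, H - window + 1, stride):
            for left in range(0, W - window + 1, stride):
                S = [(top + r) * W + (left + c)   # flat index of pixel (row, col)
                     for r in range(window) for c in range(window)]
                sets.append(S)
        return sets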
One weakness of the pseudolikelihood estimator is that it cannot be used with
other approximations that provide only a lower bound on p̃(x), such as variational
inference, which will be covered in chapter 19. This is because p̃ appears in the
denominator. A lower bound on the denominator provides only an upper bound on
the expression as a whole, and there is no benefit to maximizing an upper bound.
This makes it difficult to apply pseudolikelihood approaches to deep models such
as deep Boltzmann machines, since variational methods are one of the dominant
approaches to approximately marginalizing out the many layers of hidden variables
that interact with each other. However, pseudolikelihood is still useful for deep
learning, because it can be used to train single layer models, or deep models using
approximate inference methods that are not based on lower bounds.
Pseudolikelihood has a much greater cost per gradient step than SML, due to
its explicit computation of all of the conditionals. However, generalized pseudo-
likelihood and similar criteria can still perform well if only one randomly selected
conditional is computed per example (Goodfellow et al., 2013b), thereby bringing
the computational cost down to match that of SML.
Though the pseudolikelihood estimator does not explicitly minimize log Z, it
can still be thought of as having something resembling a negative phase. The
denominators of each conditional distribution result in the learning algorithm
suppressing the probability of all states that have only one variable differing from
a training example.
See Marlin and de Freitas (2011) for a theoretical analysis of the asymptotic
efficiency of pseudolikelihood.
18.4 Score Matching and Ratio Matching
Score matching (Hyvärinen, 2005) provides another consistent means of training a
model without estimating Z or its derivatives. The name score matching comes
from terminology in which the derivatives of a log density with respect to its
argument, ∇_x log p(x), are called its score. The strategy used by score matching
is to minimize the expected squared difference between the derivatives of the
model’s log density with respect to the input and the derivatives of the data’s log
density with respect to the input:
    L(x, \theta) = \frac{1}{2} \left\| \nabla_x \log p_{\text{model}}(x; \theta) - \nabla_x \log p_{\text{data}}(x) \right\|_2^2    (18.22)

    J(\theta) = \frac{1}{2} \mathbb{E}_{p_{\text{data}}(x)} L(x, \theta)    (18.23)

    \theta^* = \min_\theta J(\theta)    (18.24)
This objective function avoids the difficulties associated with differentiating
the partition function Z because Z is not a function of x, and therefore ∇_x Z = 0.
Initially, score matching appears to have a new difficulty: computing the score
of the data distribution requires knowledge of the true distribution generating
the training data, p
data
. Fortunately, minimizing the expected value of L(x, θ) is
equivalent to minimizing the expected value of

    \tilde{L}(x, \theta) = \sum_{j=1}^{n} \left( \frac{\partial^2}{\partial x_j^2} \log p_{\text{model}}(x; \theta) + \frac{1}{2} \left( \frac{\partial}{\partial x_j} \log p_{\text{model}}(x; \theta) \right)^2 \right)    (18.25)
where n is the dimensionality of x.
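As a small worked instance (a sketch under an assumed model, not from the text),
consider a Gaussian model with log p̃_model(x) = −½(x − μ)ᵀΛ(x − μ) and symmetric
positive definite Λ. Its first and second derivatives with respect to x have closed
forms, so equation 18.25 can be evaluated directly:

    import numpy as np

    def score_matching_objective(X, mu, Lam):
        # Average of eq. 18.25 over the rows of X for the assumed Gaussian model
        # log p~(x) = -0.5 (x - mu)' Lam (x - mu). For this model,
        # d/dx_j log p~ = -[Lam (x - mu)]_j and d^2/dx_j^2 log p~ = -Lam_jj.
        scores = -(X - mu) @ Lam            # rows: grad_x log p~ at each example
        second = -np.diag(Lam)              # second derivatives, same for every x
        per_example = second.sum() + 0.5 * (scores ** 2).sum(axis=1)
        return per_example.mean()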
Because score matching requires taking derivatives with respect to x, it is not
applicable to models of discrete data. However, the latent variables in the model
may be discrete.
Like the pseudolikelihood, score matching only works when we are able to
evaluate log p̃(x) and its derivatives directly. It is not compatible with methods
that only provide a lower bound on log p̃(x), because score matching requires
the derivatives and second derivatives of log p̃(x) and a lower bound conveys no
information about its derivatives. This means that score matching cannot be
applied to estimating models with complicated interactions between the hidden
units, such as sparse coding models or deep Boltzmann machines. While score
matching can be used to pretrain the first hidden layer of a larger model, it has
not been applied as a pretraining strategy for the deeper layers of a larger model.
This is probably because the hidden layers of such models usually contain some
discrete variables.
While score matching does not explicitly have a negative phase, it can be
viewed as a version of contrastive divergence using a specific kind of Markov chain
(Hyvärinen, 2007a). The Markov chain in this case is not Gibbs sampling, but
rather a different approach that makes local moves guided by the gradient. Score
matching is equivalent to CD with this type of Markov chain when the size of the
local moves approaches zero.
Lyu (2009) generalized score matching to the discrete case (but made an error
in their derivation that was corrected by Marlin et al. (2010)). Marlin et al.
(2010) found that
generalized score matching
(GSM) does not work in high
dimensional discrete spaces where the observed probability of many events is 0.
A more successful approach to extending the basic ideas of score matching
to discrete data is ratio matching (Hyvärinen, 2007b). Ratio matching applies
specifically to binary data. Ratio matching consists of minimizing the average over
examples of the following objective function:
    L^{(\text{RM})}(x, \theta) = \sum_{j=1}^{n} \left( \frac{1}{1 + \frac{p_{\text{model}}(x; \theta)}{p_{\text{model}}(f(x, j); \theta)}} \right)^2,    (18.26)