Partial Differential Equations In Image Analysis


       Partial differential equations (PDEs) have been widely used in image restoration (denoising, inpainting or fill-in, super-resolution)
and in image analysis (segmentation and object tracking). The former operate on pixels, while the latter operate on contours.
These PDEs are computational tools and belong to the computational level in Marr's terminology, so to understand their behavior
we need to study them at the representational level. First, we should be clear about what a PDE is doing and what its underlying
representations and assumptions are. Second, the PDEs must be adapted to the statistical properties of the image ensemble.

      Thus we have been advocating a three-stage scheme:

  1). Start by studying image statistics.
  2). Learn probabilistic models that characterize these statistics and represent visual knowledge; the models
learned by minimax entropy have Gibbs form with energy functions.
  3). Minimize the energy functions by the Euler-Lagrange equations to obtain PDEs. Sometimes Green's theorem is used
to convert a 2D integral over a region into a 1D integral along its bounding contour.
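
      To make stage three concrete, here is a minimal sketch of the derivation in the continuous setting; the energy E and the
potential \phi below are generic placeholders, not the specific learned models of the papers cited below. If the learned model is a
Gibbs distribution p(I) = \frac{1}{Z} e^{-E(I)}, then descending the energy on the image gives the PDE

    \frac{\partial I}{\partial t} = -\frac{\delta E}{\delta I},

where \delta E / \delta I is the Euler-Lagrange (variational) derivative. For instance, for
E(I) = \int_\Omega \phi(|\nabla I|)\, dx\, dy this yields the familiar divergence form

    \frac{\partial I}{\partial t} = \mathrm{div}\!\left( \frac{\phi'(|\nabla I|)}{|\nabla I|}\, \nabla I \right).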

      Two families of PDEs are derived in this way.

  1). Region-competition equation (Zhu and Yuille, 1995-96)

         This is derived from an image segmentation formulation in the Bayesian framework, and it simulates the motion (evolution)
of a number of regions (contours). The contour evolution is driven by the fitness of the probability models in adjacent regions
through a log-likelihood ratio test, and the contour smoothness prior adds a curvature-flow term. The probability models for the
regions can take any form, parametric or non-parametric.
 
         This equation shows how intuitive and general PDEs become when they are connected to statistical models.
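
         Schematically, in the two-region case, the motion of a point v on the boundary \Gamma_{ij} between regions R_i and R_j
takes the form below; this is a sketch of the structure (see [1] for the full multi-region equation, whose notation may differ):

    \frac{dv}{dt} = \left[ -\mu\, \kappa(v) + \log \frac{P(I(v);\, \alpha_i)}{P(I(v);\, \alpha_j)} \right] \mathbf{n}(v),

where \kappa is the curvature, \mathbf{n} the outward normal of R_i, \mu the smoothness weight, and \alpha_i, \alpha_j the
parameters of the two regions' probability models. The curvature term enforces smoothness, while the log-likelihood ratio lets
the better-fitting model push the boundary into its neighbor, which is the competition.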
 2). GRADE: Gibbs Reaction and Diffusion Equations (Zhu and Mumford, 1997)

         We first learn a generic prior model for natural images; this yields a Gibbs distribution with non-parametric
potential functions. The PDEs derived from this prior are shown to be much more general than the popular non-linear
edge-preserving diffusion equations (Perona and Malik) in two ways, as sketched after the list below.

        a). They can embed any Gabor-type filters, not just the image gradients.
        b). They use non-parametric potentials that are best adapted to the image statistics.
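
        Schematically, with a filter bank \{F_\alpha\} (gradients, Laplacian-of-Gaussian, Gabor, ...) and learned potentials
\lambda_\alpha, the prior and the resulting reaction-diffusion flow have the structure below; this is a sketch, see [2] for the
exact equations:

    p(I) = \frac{1}{Z} \exp\Big\{ -\sum_\alpha \sum_{x,y} \lambda_\alpha\big( (F_\alpha * I)(x, y) \big) \Big\},
    \qquad
    \frac{\partial I}{\partial t} = -\sum_\alpha F_\alpha^{-} * \lambda'_\alpha(F_\alpha * I),

where F^{-}(x, y) = F(-x, -y) is the mirrored filter. The Perona-Malik equation is recovered as the special case in which the
only filters are the two gradient components and the potential is a fixed parametric function.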

       This work was mainly intended to demonstrate the three-stage scheme.

       We should be aware that PDEs in their current form have fundamental limits.

  1). They are greedy methods and can be trapped in various local minima, depending critically on the initial conditions.
  2). They do not handle multiple models and objects well, for example split/merge, death/birth, and model-switching moves.
         
   These limits are removed in the Data-Driven Markov Chain Monte Carlo (DDMCMC) scheme. A more comprehensive picture is
shown below. When the image models are of Gibbs type (descriptive models), we can derive PDEs; but when the models are
generative, they involve discrete variables, such as the types of models, the graph partition, and the number of objects/components.
The computation therefore engages jumps from one manifold (subspace) to others of changing dimensions.
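     To illustrate what such a jump involves computationally, below is a minimal Python sketch of the Metropolis-Hastings-Green
acceptance test that keeps dimension-changing moves reversible. It is a generic illustration, not the DDMCMC implementation;
all names (log_post_*, log_q_*, log_jacobian) are hypothetical placeholders.

    import math
    import random

    def mhg_accept(log_post_cur, log_post_prop,
                   log_q_forward, log_q_backward, log_jacobian=0.0):
        # Accept a proposed jump (e.g. split/merge, model switching) with
        # probability min(1, posterior ratio * proposal ratio * |Jacobian|
        # of the dimension-matching map); logs avoid numerical underflow.
        log_alpha = ((log_post_prop - log_post_cur) +    # posterior ratio
                     (log_q_backward - log_q_forward) +  # proposal ratio
                     log_jacobian)                       # dimension matching
        return random.random() < math.exp(min(0.0, log_alpha))

For a split move, log_q_forward would be the probability of proposing that split (with the new parameters drawn from
data-driven proposals in DDMCMC), and log_q_backward that of the reverse merge.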



     This is exactly where four beautiful mathematical tools converge: MRFs, MCMC, wavelets, and PDEs.

  References

  [1]. S. C. Zhu and A. L. Yuille, "Region Competition: Unifying Snakes, Region Growing, and Bayes/MDL for Multiband
         Image Segmentation", IEEE Trans. on PAMI, vol. 18, no. 9, pp. 884-900, Sept. 1996.
  [2]. S. C. Zhu and D. B. Mumford, "Prior Learning and Gibbs Reaction-Diffusion", IEEE Trans. on PAMI, vol. 19, no. 11,
         pp. 1236-1250, Nov. 1997.