Performance Bounds

          
In computer science, the complexity of a problem, say sorting N numbers, is measured by the number of basic operations, e.g. O(N log N). In computer vision, the complexity measure includes two aspects: the computational complexity and the problem complexity. The latter includes error rates and error bounds.

Unlike the sorting problem, a vision problem cannot be solved exactly or 100% correctly. For example, it is impossible to design an edge detector or a face detector that is 100% correct. Therefore, various bounds are needed to measure the fundamental limits. There are three traditional bounds:
                         
    a). The Cramer-Rao bound. When learning a model with parameters, given finite data the parameters can be
        estimated only to a certain precision. This bound is related to the Fisher information, which measures
        how flat the likelihood function is.
    b). Chernoff/Bhattacharyya information. For two classes whose probability densities overlap, the fundamental
        limit on distinguishing the two classes is measured by the Chernoff information, which is a distance
        measure between the two densities. When the two densities are Gaussian with equal variance, this reduces
        to the signal-to-noise ratio (SNR).
    c). Channel capacity and mutual information. From certain features and evidence x, the ability to discriminate
        other variables y is limited by the mutual information between x and y.
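The first two bounds can be made concrete with a small numerical sketch. The snippet below (an illustration, not code from the referenced papers) computes the Bhattacharyya distance between two 1-D Gaussians, verifies that for equal variances it reduces to a function of the SNR, and evaluates the Cramer-Rao bound for estimating a Gaussian mean:

```python
import math

def bhattacharyya_gaussian(mu0, mu1, s0, s1):
    """Bhattacharyya distance between two 1-D Gaussians N(mu, s^2)."""
    return ((mu0 - mu1) ** 2) / (4.0 * (s0**2 + s1**2)) \
        + 0.5 * math.log((s0**2 + s1**2) / (2.0 * s0 * s1))

# Equal-variance case: the distance reduces to (mu0 - mu1)^2 / (8 s^2),
# i.e. a monotone function of the signal-to-noise ratio |mu0 - mu1| / s.
mu0, mu1, s = 0.0, 2.0, 1.0
d = bhattacharyya_gaussian(mu0, mu1, s, s)
snr = abs(mu0 - mu1) / s
print(d, snr**2 / 8.0)      # both equal 0.5 here

# Cramer-Rao bound for estimating the mean of N(mu, s^2) from n samples:
# the Fisher information is n / s^2, so Var(mu_hat) >= s^2 / n.
n = 100
crb = s**2 / n
print(crb)                  # 0.01
```

Collecting more data (larger n) tightens the Cramer-Rao bound but does not change the Bhattacharyya distance per observation, which is a property of the two densities themselves.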
 
However, real applications are far more difficult to measure than these simple cases.
                 
For example, the figures below show three typical regimes: detecting a stop sign is easy, detecting an animal adapted to its environment is much more difficult, and detecting the Dalmatian dog in the snow scene is almost impossible.
[Figures: easy regime -- stop sign; difficult regime -- lizard; impossible regime -- Dalmatian dog]
 
The order parameter theory was developed and linked with minimax entropy learning. It provides two results:

    1). The error probability for discrimination decays exponentially as exp(-nK), where n is the number of
        observations and K is a rate (the order parameter). When K is large, the task is easy. When K is near
        zero, the task is very hard. When K is negative, the task is impossible. The figures below show three
        typical situations for detecting a target road (curve) in clutter.
[Figures: easy task (K = 0.8647); harder task (K = 0.2105); impossible task (K = -0.7272)]
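The exp(-nK) behavior can be checked in the simple two-Gaussian setting from above, where the rate K equals the Chernoff information. The sketch below (an illustration under that assumption, not the road-tracking model of the papers) evaluates the optimal error probability for n observations and shows its empirical rate -log(P_err)/n approaching K:

```python
import math

def q_function(x):
    """Gaussian tail probability P(Z > x) for Z ~ N(0, 1)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

# Two classes: n i.i.d. samples from N(0, s^2) or N(d, s^2), equal priors.
# The optimal test thresholds the sample mean at d/2; its error probability
# is Q(sqrt(n) * d / (2 s)), which decays roughly like exp(-n K) with
# K = d^2 / (8 s^2), the Chernoff information of the two densities.
d, s = 1.0, 1.0
K = d**2 / (8.0 * s**2)
for n in (10, 40, 160):
    p_err = q_function(math.sqrt(n) * d / (2.0 * s))
    print(n, p_err, -math.log(p_err) / n)   # the rate approaches K = 0.125
```

The empirical rate exceeds K for finite n because of a polynomial prefactor in the tail bound, but the exponential term dominates as n grows, matching the exp(-nK) picture.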
    2). The order parameter is then related to minimax entropy learning: for large K (an easy task), a simple
        learned model is sufficient. For small K, the models for the patterns have to be more sophisticated.

References:

 [1]  Y. N. Wu, S. C. Zhu, and X. W. Liu, "Equivalence of Julesz Ensembles and FRAME Models",
      International Journal of Computer Vision, 38(3), 247-265, July 2000.

 [2]  A. L. Yuille and J. M. Coughlan, "Fundamental Limits of Bayesian Inference: Order Parameters and Phase
      Transitions for Road Tracking", IEEE Trans. on Pattern Analysis and Machine Intelligence, 22(2), February 2000.

 [3]  A. L. Yuille, J. M. Coughlan, Y. N. Wu, and S. C. Zhu, "Order Parameters for Detecting Target Curves in
      Images: When Does High Level Knowledge Help?", International Journal of Computer Vision, 41(1/2), 9-33,
      January 2001.

 [4]  S. Konishi, J. M. Coughlan, A. L. Yuille, and S. C. Zhu, "Fundamental Bounds on Edge Detection: An
      Information Theoretic Evaluation of Different Edge Cues", IEEE Trans. on Pattern Analysis and Machine
      Intelligence, 25(1), January 2003.

 [5]  S. C. Zhu, A. D. Lanterman, and M. I. Miller, "Clutter Modeling and Performance Analysis in Automatic
      Target Recognition", Workshop on Detection and Classification of Difficult Targets, Redstone Arsenal,
      Alabama, 1998.