@@ -4,9 +4,9 @@
 
 .. _`chapter-modeling`:
 
-============
-Modeling API
-============
+========
+Modeling
+========
 
 Recall that Ceres solves robustified non-linear least squares problems
 of the form
@@ -29,18 +29,21 @@ that is used to reduce the influence of outliers on the solution of
 non-linear least squares problems.
 
 In this chapter we will describe the various classes that are part of
-Ceres Solver's modeling API, and how they can be used to construct
-optimization.
-
-Once a problem has been constructed, various methods for solving them
-will be discussed in :ref:`chapter-solving`. It is by design that the
-modeling and the solving APIs are orthogonal to each other. This
-enables easy switching/tweaking of various solver parameters without
-having to touch the problem once it has been successfuly modeling.
+Ceres Solver's modeling API, and how they can be used to construct an
+optimization problem. Once a problem has been constructed, various
+methods for solving them will be discussed in
+:ref:`chapter-solving`. It is by design that the modeling and the
+solving APIs are orthogonal to each other. This enables
+switching/tweaking of various solver parameters without having to
+touch the problem once it has been successfully modeled.
 
 :class:`CostFunction`
 ---------------------
 
+The single biggest task when modeling a problem is specifying the
+residuals and their derivatives. This is done using
+:class:`CostFunction` objects.
+
 .. class:: CostFunction
 
    .. code-block:: c++
@@ -66,7 +69,7 @@ having to touch the problem once it has been successfuly modeling.
 
    .. math:: J_{ij} = \frac{\partial}{\partial x_{i_j}}f_i\left(x_{i_1},...,x_{i_k}\right),\quad \forall j \in \{i_1,..., i_k\}
 
-   The signature of the class:`CostFunction` (number and sizes of
+   The signature of the :class:`CostFunction` (number and sizes of
    input parameter blocks and number of outputs) is stored in
    :member:`CostFunction::parameter_block_sizes_` and
    :member:`CostFunction::num_residuals_` respectively. User code
@@ -77,18 +80,16 @@ having to touch the problem once it has been successfuly modeling.
 
 .. function:: bool CostFunction::Evaluate(double const* const* parameters, double* residuals, double** jacobians)
 
-   This is the key methods. It implements the residual and Jacobian
-   computation.
+   Compute the residual vector and the Jacobian matrices.
 
    ``parameters`` is an array of pointers to arrays containing the
-   various parameter blocks. parameters has the same number of
-   elements as :member:`CostFunction::parameter_block_sizes_`.
-   Parameter blocks are in the same order as
+   various parameter blocks. ``parameters`` has the same number of
+   elements as :member:`CostFunction::parameter_block_sizes_` and the
+   parameter blocks are in the same order as
    :member:`CostFunction::parameter_block_sizes_`.
 
    ``residuals`` is an array of size ``num_residuals_``.
 
-
    ``jacobians`` is an array of size
    :member:`CostFunction::parameter_block_sizes_` containing pointers
    to storage for Jacobian matrices corresponding to each parameter
@@ -105,7 +106,7 @@ having to touch the problem once it has been successfuly modeling.
    this is the case when computing cost only. If ``jacobians[i]`` is
    ``NULL``, then the Jacobian matrix corresponding to the
    :math:`i^{\textrm{th}}` parameter block must not be returned, this
-   is the case when the a parameter block is marked constant.
+   is the case when a parameter block is marked constant.
 
    **NOTE** The return value indicates whether the computation of the
    residuals and/or jacobians was successful or not.
@@ -132,10 +133,10 @@ having to touch the problem once it has been successfuly modeling.
 .. class:: SizedCostFunction
 
    If the size of the parameter blocks and the size of the residual
-   vector is known at compile time (this is the common case), Ceres
-   provides :class:`SizedCostFunction`, where these values can be
-   specified as template parameters. In this case the user only needs
-   to implement the :func:`CostFunction::Evaluate`.
+   vector is known at compile time (this is the common case),
+   :class:`SizedCostFunction` can be used where these values can be
+   specified as template parameters and the user only needs to
+   implement :func:`CostFunction::Evaluate`.
 
    .. code-block:: c++
 
@@ -155,9 +156,28 @@ having to touch the problem once it has been successfuly modeling.
 
 .. class:: AutoDiffCostFunction
 
-   But even defining the :class:`SizedCostFunction` can be a tedious
-   affair if complicated derivative computations are involved. To this
-   end Ceres provides automatic differentiation.
+   Defining a :class:`CostFunction` or a :class:`SizedCostFunction`
+   can be tedious and error prone, especially when computing
+   derivatives. To this end Ceres provides `automatic differentiation
+   <http://en.wikipedia.org/wiki/Automatic_differentiation>`_.
+
+   .. code-block:: c++
+
+     template <typename CostFunctor,
+            int M,        // Number of residuals, or ceres::DYNAMIC.
+            int N0,       // Number of parameters in block 0.
+            int N1 = 0,   // Number of parameters in block 1.
+            int N2 = 0,   // Number of parameters in block 2.
+            int N3 = 0,   // Number of parameters in block 3.
+            int N4 = 0,   // Number of parameters in block 4.
+            int N5 = 0,   // Number of parameters in block 5.
+            int N6 = 0,   // Number of parameters in block 6.
+            int N7 = 0,   // Number of parameters in block 7.
+            int N8 = 0,   // Number of parameters in block 8.
+            int N9 = 0>   // Number of parameters in block 9.
+     class AutoDiffCostFunction : public
+     SizedCostFunction<M, N0, N1, N2, N3, N4, N5, N6, N7, N8, N9> {
+     };
 
    To get an auto differentiated cost function, you must define a
    class with a templated ``operator()`` (a functor) that computes the
@@ -234,14 +254,14 @@ having to touch the problem once it has been successfuly modeling.
    computing a 1-dimensional output from two arguments, both
    2-dimensional.
 
-   The framework can currently accommodate cost functions of up to 6
-   independent variables, and there is no limit on the dimensionality of
-   each of them.
+   The framework can currently accommodate cost functions of up to 10
+   independent variables, and there is no limit on the dimensionality
+   of each of them.
 
    **WARNING 1** Since the functor will get instantiated with
    different types for ``T``, you must convert from other numeric
    types to ``T`` before mixing computations with other variables
-   oftype ``T``. In the example above, this is seen where instead of
+   of type ``T``. In the example above, this is seen where instead of
    using ``k_`` directly, ``k_`` is wrapped with ``T(k_)``.
 
    **WARNING 2** A common beginner's error when first using
@@ -253,12 +273,74 @@ having to touch the problem once it has been successfuly modeling.
    as the last template argument.
 
 
+:class:`DynamicAutoDiffCostFunction`
+------------------------------------
+
+.. class:: DynamicAutoDiffCostFunction
+
+   :class:`AutoDiffCostFunction` requires that the number of parameter
+   blocks and their sizes be known at compile time. It also has an
+   upper limit of 10 parameter blocks. In a number of applications,
+   e.g., Bezier curve fitting or neural network training, this is not
+   enough.
+
+     .. code-block:: c++
+
+      template <typename CostFunctor, int Stride = 4>
+      class DynamicAutoDiffCostFunction : public CostFunction {
+      };
+
+   In such cases :class:`DynamicAutoDiffCostFunction` can be
+   used. Like :class:`AutoDiffCostFunction`, the user must define a
+   templated functor, but the signature of the functor differs
+   slightly. The expected interface for the cost functors is:
+
+     .. code-block:: c++
+
+       struct MyCostFunctor {
+         template<typename T>
+         bool operator()(T const* const* parameters, T* residuals) const {
+         }
+       };
+
+   Since the sizing of the parameters is done at runtime, you must
+   also specify the sizes after creating the dynamic autodiff cost
+   function. For example:
+
+     .. code-block:: c++
+
+       DynamicAutoDiffCostFunction<MyCostFunctor, 4> cost_function(
+           new MyCostFunctor());
+       cost_function.AddParameterBlock(5);
+       cost_function.AddParameterBlock(10);
+       cost_function.SetNumResiduals(21);
+
+   Under the hood, the implementation evaluates the cost function
+   multiple times, computing a small set of the derivatives (four by
+   default, controlled by the ``Stride`` template parameter) with each
+   pass. There is a performance tradeoff with the size of the passes;
+   smaller sizes are more cache efficient but result in a larger
+   number of passes, and larger stride lengths can destroy
+   cache-locality while reducing the number of passes over the cost
+   function. The optimal value depends on the number and sizes of the
+   various parameter blocks.
+
+   As a rule of thumb, try using :class:`AutoDiffCostFunction` before
+   you use :class:`DynamicAutoDiffCostFunction`.
+
 :class:`NumericDiffCostFunction`
 --------------------------------
 
 .. class:: NumericDiffCostFunction
 
-   .. code-block:: c++
+   In some cases, it's not possible to define a templated cost
+   functor, for example when the evaluation of the residual involves a
+   call to a library function that you do not have control over. In
+   such a situation, `numerical differentiation
+   <http://en.wikipedia.org/wiki/Numerical_differentiation>`_ can be
+   used.
+
+   .. code-block:: c++
 
       template <typename CostFunctionNoJacobian,
                 NumericDiffMethod method = CENTRAL, int M = 0,
@@ -268,12 +350,6 @@ having to touch the problem once it has been successfuly modeling.
         : public SizedCostFunction<M, N0, N1, N2, N3, N4, N5, N6, N7, N8, N9> {
       };
 
-
-   Create a :class:`CostFunction` as needed by the least squares
-   framework with jacobians computed via numeric (a.k.a. finite)
-   differentiation. For more details see
-   http://en.wikipedia.org/wiki/Numerical_differentiation.
-
    To get an numerically differentiated :class:`CostFunction`, you
    must define a class with a ``operator()`` (a functor) that computes
    the residuals. The functor must write the computed value in the
@@ -335,14 +411,15 @@ having to touch the problem once it has been successfuly modeling.
 
      CostFunction* cost_function
         = new NumericDiffCostFunction<MyScalarCostFunctor, CENTRAL, 1, 2, 2>(
-             new MyScalarCostFunctor(1.0));                          ^  ^  ^
-                                                                 |   |  |  |
-                                     Finite Differencing Scheme -+   |  |  |
-                                     Dimension of residual ----------+  |  |
-                                     Dimension of x --------------------+  |
-                                     Dimension of y -----------------------+
+             new MyScalarCostFunctor(1.0));                    ^     ^  ^  ^
+                                                               |     |  |  |
+                                   Finite Differencing Scheme -+     |  |  |
+                                   Dimension of residual ------------+  |  |
+                                   Dimension of x ----------------------+  |
+                                   Dimension of y -------------------------+
 
-   In this example, there is usually an instance for each measumerent of `k`.
+   In this example, there is usually an instance for each measurement
+   of `k`.
 
    In the instantiation above, the template parameters following
    ``MyScalarCostFunctor``, ``1, 2, 2``, describe the functor as
@@ -398,92 +475,105 @@ having to touch the problem once it has been successfuly modeling.
    sizes 4 and 8 respectively. Look at the tests for a more detailed
    example.
 
+:class:`NumericDiffFunctor`
+---------------------------
 
-:class:`NormalPrior`
---------------------
-
-.. class:: NormalPrior
-
-   .. code-block:: c++
-
-     class NormalPrior: public CostFunction {
-      public:
-       // Check that the number of rows in the vector b are the same as the
-       // number of columns in the matrix A, crash otherwise.
-       NormalPrior(const Matrix& A, const Vector& b);
-
-       virtual bool Evaluate(double const* const* parameters,
                             double* residuals,
-                             double** jacobians) const;
-      };
-
-   Implements a cost function of the form
+.. class:: NumericDiffFunctor
 
-   .. math::  cost(x) = ||A(x - b)||^2
+   Sometimes parts of a cost function can be differentiated
+   automatically or analytically but others require numeric
+   differentiation. :class:`NumericDiffFunctor` is a wrapper class
+   that takes a variadic functor evaluating a function, numerically
+   differentiates it and makes it available as a templated functor so
+   that it can be easily used as part of Ceres' automatic
+   differentiation framework.
 
-   where, the matrix A and the vector b are fixed and x is the
-   variable. In case the user is interested in implementing a cost
-   function of the form
+   For example, let us assume that
 
-  .. math::  cost(x) = (x - \mu)^T S^{-1} (x - \mu)
+   .. code-block:: c++
 
-  where, :math:`\mu` is a vector and :math:`S` is a covariance matrix,
-  then, :math:`A = S^{-1/2}`, i.e the matrix :math:`A` is the square
-  root of the inverse of the covariance, also known as the stiffness
-  matrix. There are however no restrictions on the shape of
-  :math:`A`. It is free to be rectangular, which would be the case if
-  the covariance matrix :math:`S` is rank deficient.
+     struct IntrinsicProjection {
+       IntrinsicProjection(const double* observations);
+       bool operator()(const double* calibration,
+                       const double* point,
+                       double* residuals);
+     };
 
+   is a functor that implements the projection of a point in its local
+   coordinate system onto its image plane and subtracts it from the
+   observed point projection.
 
-:class:`ConditionedCostFunction`
---------------------------------
+   Now we would like to compose the action of this functor with the
+   action of camera extrinsics, i.e., rotation and translation, which
+   is given by the following templated function
 
-.. class:: ConditionedCostFunction
+   .. code-block:: c++
 
-   This class allows you to apply different conditioning to the residual
-   values of a wrapped cost function. An example where this is useful is
-   where you have an existing cost function that produces N values, but you
-   want the total cost to be something other than just the sum of these
-   squared values - maybe you want to apply a different scaling to some
-   values, to change their contribution to the cost.
+     template<typename T>
+     void RotateAndTranslatePoint(const T* rotation,
+                                  const T* translation,
+                                  const T* point,
+                                  T* result);
 
-   Usage:
+   To compose the extrinsics and intrinsics, we can construct a
+   ``CameraProjection`` functor as follows.
 
    .. code-block:: c++
 
-       //  my_cost_function produces N residuals
-       CostFunction* my_cost_function = ...
-       CHECK_EQ(N, my_cost_function->num_residuals());
-       vector<CostFunction*> conditioners;
+    struct CameraProjection {
+       typedef NumericDiffFunctor<IntrinsicProjection, CENTRAL, 2, 5, 3>
+          IntrinsicProjectionFunctor;
 
-       //  Make N 1x1 cost functions (1 parameter, 1 residual)
-       CostFunction* f_1 = ...
-       conditioners.push_back(f_1);
+      CameraProjection(double* observation) {
+        intrinsic_projection_.reset(
+            new IntrinsicProjectionFunctor(observation));
+      }
 
-       CostFunction* f_N = ...
-       conditioners.push_back(f_N);
-       ConditionedCostFunction* ccf =
-         new ConditionedCostFunction(my_cost_function, conditioners);
+      template <typename T>
+      bool operator()(const T* rotation,
+                      const T* translation,
+                      const T* intrinsics,
+                      const T* point,
+                      T* residuals) const {
+        T transformed_point[3];
+        RotateAndTranslatePoint(rotation, translation, point, transformed_point);
+        return (*intrinsic_projection_)(intrinsics, transformed_point, residuals);
+      }
 
+     private:
+      scoped_ptr<IntrinsicProjectionFunctor> intrinsic_projection_;
+    };
 
-   Now ``ccf`` 's ``residual[i]`` (i=0..N-1) will be passed though the
-   :math:`i^{\text{th}}` conditioner.
+   Here, we made the choice of using ``CENTRAL`` differences to compute
+   the Jacobian of ``IntrinsicProjection``.
+
+   Now, we are ready to construct an automatically differentiated cost
+   function as
 
    .. code-block:: c++
 
-      ccf_residual[i] = f_i(my_cost_function_residual[i])
+    CostFunction* cost_function =
+        new AutoDiffCostFunction<CameraProjection, 2, 3, 3, 5, 3>(
+           new CameraProjection(observations));
+
+   ``cost_function`` now seamlessly integrates automatic
+   differentiation of ``RotateAndTranslatePoint`` with a numerically
+   differentiated version of ``IntrinsicProjection``.
 
-   and the Jacobian will be affected appropriately.
 
 :class:`CostFunctionToFunctor`
 ------------------------------
 
 .. class:: CostFunctionToFunctor
 
-   :class:`CostFunctionToFunctor` is an adapter class that allows users to use
-   :class:`CostFunction` objects in templated functors which are to be used for
-   automatic differentiation.  This allows the user to seamlessly mix
-   analytic, numeric and automatic differentiation.
+   Just like :class:`NumericDiffFunctor` allows numeric
+   differentiation to be mixed with automatic differentiation,
+   :class:`CostFunctionToFunctor` provides an even more general
+   mechanism.  :class:`CostFunctionToFunctor` is an adapter class that
+   allows users to use :class:`CostFunction` objects in templated
+   functors which are to be used for automatic differentiation.  This
+   allows the user to seamlessly mix analytic, numeric and automatic
+   differentiation.
 
    For example, let us assume that
 
@@ -534,8 +624,8 @@ having to touch the problem once it has been successfuly modeling.
         T transformed_point[3];
         RotateAndTranslatePoint(rotation, translation, point, transformed_point);
 
-        //   Note that we call intrinsic_projection_, just like it was
-        //   any other templated functor.
+        // Note that we call intrinsic_projection_, just like it was
+        // any other templated functor.
         return (*intrinsic_projection_)(intrinsics, transformed_point, residual);
       }
 
				|  | @@ -544,87 +634,83 @@ having to touch the problem once it has been successfuly modeling.
 | 
	
		
			
				|  |  |      };
 | 
	
		
			
				|  |  |  
 | 
	
		
			
				|  |  |  
 | 
	
		
			
				|  |  | -:class:`NumericDiffFunctor`
 | 
	
		
			
				|  |  | ----------------------------
 | 
	
		
			
				|  |  |  
 | 
	
		
			
				|  |  | -.. class:: NumericDiffFunctor
 | 
	
		
			
				|  |  | +:class:`ConditionedCostFunction`
 | 
	
		
			
				|  |  | +--------------------------------
 | 
	
		
			
				|  |  |  
 | 
	
		
			
				|  |  | -   A wrapper class that takes a variadic functor evaluating a
 | 
	
		
			
				|  |  | -   function, numerically differentiates it and makes it available as a
 | 
	
		
			
				|  |  | -   templated functor so that it can be easily used as part of Ceres'
 | 
	
		
			
				|  |  | -   automatic differentiation framework.
 | 
	
		
			
				|  |  | +.. class:: ConditionedCostFunction
 | 
	
		
			
				|  |  |  
 | 
	
		
			
				|  |  | -   For example, let us assume that
 | 
	
		
			
				|  |  | +   This class allows you to apply different conditioning to the residual
 | 
	
		
			
				|  |  | +   values of a wrapped cost function. An example where this is useful is
 | 
	
		
			
				|  |  | +   where you have an existing cost function that produces N values, but you
 | 
	
		
			
				|  |  | +   want the total cost to be something other than just the sum of these
 | 
	
		
			
				|  |  | +   squared values - maybe you want to apply a different scaling to some
 | 
	
		
			
				|  |  | +   values, to change their contribution to the cost.
 | 
	
		
			
				|  |  | +
 | 
	
		
			
				|  |  | +   Usage:
 | 
	
		
			
				|  |  |  
 | 
	
		
			
				|  |  |     .. code-block:: c++
 | 
	
		
			
				|  |  |  
 | 
	
		
			
				|  |  | -     struct IntrinsicProjection
 | 
	
		
			
				|  |  | -       IntrinsicProjection(const double* observations);
 | 
	
		
			
				|  |  | -       bool operator()(const double* calibration,
 | 
	
		
			
				|  |  | -                       const double* point,
 | 
	
		
			
				|  |  | -                       double* residuals);
 | 
	
		
			
				|  |  | -     };
 | 
	
		
			
				|  |  | +       //  my_cost_function produces N residuals
 | 
	
		
			
				|  |  | +       CostFunction* my_cost_function = ...
 | 
	
		
			
				|  |  | +       CHECK_EQ(N, my_cost_function->num_residuals());
 | 
	
		
			
				|  |  | +       vector<CostFunction*> conditioners;
 | 
	
		
			
				|  |  |  
 | 
	
		
			
				|  |  | -   is a functor that implements the projection of a point in its local
 | 
	
		
			
				|  |  | -   coordinate system onto its image plane and subtracts it from the
 | 
	
		
			
				|  |  | -   observed point projection.
 | 
	
		
			
				|  |  | +       //  Make N 1x1 cost functions (1 parameter, 1 residual)
 | 
	
		
			
				|  |  | +       CostFunction* f_1 = ...
 | 
	
		
			
				|  |  | +       conditioners.push_back(f_1);
 | 
	
		
			
				|  |  |  
 | 
	
		
			
				|  |  | -   Now we would like to compose the action of this functor with the
 | 
	
		
			
				|  |  | -   action of camera extrinsics, i.e., rotation and translation, which
 | 
	
		
			
				|  |  | -   is given by the following templated function
 | 
	
		
			
				|  |  | +       CostFunction* f_N = ...
 | 
	
		
			
				|  |  | +       conditioners.push_back(f_N);
 | 
	
		
			
				|  |  | +       ConditionedCostFunction* ccf =
 | 
	
		
			
				|  |  | +         new ConditionedCostFunction(my_cost_function, conditioners);
 | 
	
		
			
				|  |  | +
 | 
	
		
			
				|  |  | +
 | 
	
		
			
				|  |  | +   Now ``ccf`` 's ``residual[i]`` (i=0..N-1) will be passed through the
 | 
	
		
			
				|  |  | +   :math:`i^{\text{th}}` conditioner.
 | 
	
		
			
				|  |  |  
 | 
	
		
			
				|  |  |     .. code-block:: c++
 | 
	
		
			
				|  |  |  
 | 
	
		
			
				|  |  | -     template<typename T>
 | 
	
		
			
				|  |  | -     void RotateAndTranslatePoint(const T* rotation,
 | 
	
		
			
				|  |  | -                                  const T* translation,
 | 
	
		
			
				|  |  | -                                  const T* point,
 | 
	
		
			
				|  |  | -                                  T* result);
 | 
	
		
			
				|  |  | +      ccf_residual[i] = f_i(my_cost_function_residual[i])
 | 
	
		
			
				|  |  |  
 | 
	
		
			
				|  |  | -   To compose the extrinsics and intrinsics, we can construct a
 | 
	
		
			
				|  |  | -   ``CameraProjection`` functor as follows.
 | 
	
		
			
				|  |  | +   and the Jacobian will be affected appropriately.
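To make the conditioning semantics concrete, here is a minimal standalone sketch in plain C++ (not the Ceres API; the affine conditioner is hypothetical): each conditioner is applied to its residual, and the matching Jacobian row is scaled by the conditioner's derivative, which is the chain rule that ``ConditionedCostFunction`` applies.

```cpp
#include <cstddef>
#include <vector>

// A toy conditioner: an affine map r -> a * r + b, with derivative a.
// Purely illustrative; in Ceres the conditioners are 1x1 CostFunctions.
struct AffineConditioner {
  double a, b;
  double Value(double r) const { return a * r + b; }
  double Derivative(double /*r*/) const { return a; }
};

// Apply the i-th conditioner to the i-th residual and, by the chain
// rule, scale the i-th row of the Jacobian by the conditioner's
// derivative evaluated at the unconditioned residual.
void Condition(const std::vector<AffineConditioner>& conditioners,
               std::vector<double>* residuals,
               std::vector<std::vector<double>>* jacobian) {
  for (std::size_t i = 0; i < residuals->size(); ++i) {
    const double d = conditioners[i].Derivative((*residuals)[i]);
    for (double& j_ij : (*jacobian)[i]) j_ij *= d;
    (*residuals)[i] = conditioners[i].Value((*residuals)[i]);
  }
}
```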
 | 
	
		
			
				|  |  |  
 | 
	
		
			
				|  |  | -   .. code-block:: c++
 | 
	
		
			
				|  |  |  
 | 
	
		
			
				|  |  | -    struct CameraProjection {
 | 
	
		
			
				|  |  | -       typedef NumericDiffFunctor<IntrinsicProjection, CENTRAL, 2, 5, 3>
 | 
	
		
			
				|  |  | -          IntrinsicProjectionFunctor;
 | 
	
		
			
				|  |  | +:class:`NormalPrior`
 | 
	
		
			
				|  |  | +--------------------
 | 
	
		
			
				|  |  |  
 | 
	
		
			
				|  |  | -      CameraProjection(double* observation) {
 | 
	
		
			
				|  |  | -        intrinsic_projection_.reset(
 | 
	
		
			
				|  |  | -            new IntrinsicProjectionFunctor(observation)) {
 | 
	
		
			
				|  |  | -      }
 | 
	
		
			
				|  |  | +.. class:: NormalPrior
 | 
	
		
			
				|  |  |  
 | 
	
		
			
				|  |  | -      template <typename T>
 | 
	
		
			
				|  |  | -      bool operator()(const T* rotation,
 | 
	
		
			
				|  |  | -                      const T* translation,
 | 
	
		
			
				|  |  | -                      const T* intrinsics,
 | 
	
		
			
				|  |  | -                      const T* point,
 | 
	
		
			
				|  |  | -                      T* residuals) const {
 | 
	
		
			
				|  |  | -        T transformed_point[3];
 | 
	
		
			
				|  |  | -        RotateAndTranslatePoint(rotation, translation, point, transformed_point);
 | 
	
		
			
				|  |  | -        return (*intrinsic_projection_)(intrinsics, transformed_point, residual);
 | 
	
		
			
				|  |  | -      }
 | 
	
		
			
				|  |  | +   .. code-block:: c++
 | 
	
		
			
				|  |  |  
 | 
	
		
			
				|  |  | -     private:
 | 
	
		
			
				|  |  | -      scoped_ptr<IntrinsicProjectionFunctor> intrinsic_projection_;
 | 
	
		
			
				|  |  | -    };
 | 
	
		
			
				|  |  | +     class NormalPrior: public CostFunction {
 | 
	
		
			
				|  |  | +      public:
 | 
	
		
			
				|  |  | +       // Check that the number of rows in the vector b is the same as the
 | 
	
		
			
				|  |  | +       // number of columns in the matrix A, crash otherwise.
 | 
	
		
			
				|  |  | +       NormalPrior(const Matrix& A, const Vector& b);
 | 
	
		
			
				|  |  |  
 | 
	
		
			
				|  |  | -   Here, we made the choice of using ``CENTRAL`` differences to compute
 | 
	
		
			
				|  |  | -   the jacobian of ``IntrinsicProjection``.
 | 
	
		
			
				|  |  | +       virtual bool Evaluate(double const* const* parameters,
 | 
	
		
			
				|  |  | +                             double* residuals,
 | 
	
		
			
				|  |  | +                             double** jacobians) const;
 | 
	
		
			
				|  |  | +      };
 | 
	
		
			
				|  |  |  
 | 
	
		
			
				|  |  | -   Now, we are ready to construct an automatically differentiated cost
 | 
	
		
			
				|  |  | -   function as
 | 
	
		
			
				|  |  | +   Implements a cost function of the form
 | 
	
		
			
				|  |  |  
 | 
	
		
			
				|  |  | -   .. code-block:: c++
 | 
	
		
			
				|  |  | +   .. math::  cost(x) = ||A(x - b)||^2
 | 
	
		
			
				|  |  |  
 | 
	
		
			
				|  |  | -    CostFunction* cost_function =
 | 
	
		
			
				|  |  | -        new AutoDiffCostFunction<CameraProjection, 2, 3, 3, 5>(
 | 
	
		
			
				|  |  | -           new CameraProjection(observations));
 | 
	
		
			
				|  |  | +   where the matrix A and the vector b are fixed and x is the
 | 
	
		
			
				|  |  | +   variable. In case the user is interested in implementing a cost
 | 
	
		
			
				|  |  | +   function of the form
 | 
	
		
			
				|  |  | +
 | 
	
		
			
				|  |  | +  .. math::  cost(x) = (x - \mu)^T S^{-1} (x - \mu)
 | 
	
		
			
				|  |  | +
 | 
	
		
			
				|  |  | +  where :math:`\mu` is a vector and :math:`S` is a covariance matrix,
 | 
	
		
			
				|  |  | +  then :math:`A = S^{-1/2}`, i.e., the matrix :math:`A` is the square
 | 
	
		
			
				|  |  | +  root of the inverse of the covariance, also known as the stiffness
 | 
	
		
			
				|  |  | +  matrix. There are however no restrictions on the shape of
 | 
	
		
			
				|  |  | +  :math:`A`. It is free to be rectangular, which would be the case if
 | 
	
		
			
				|  |  | +  the covariance matrix :math:`S` is rank deficient.
 | 
	
		
			
				|  |  |  
 | 
	
		
			
				|  |  | -   ``cost_function`` now seamlessly integrates automatic
 | 
	
		
			
				|  |  | -   differentiation of ``RotateAndTranslatePoint`` with a numerically
 | 
	
		
			
				|  |  | -   differentiated version of ``IntrinsicProjection``.
 | 
	
		
			
				|  |  |  
 | 
	
		
			
				|  |  |  
 | 
	
		
			
				|  |  |  :class:`LossFunction`
 | 
	
	
		
			
				|  | @@ -689,7 +775,6 @@ having to touch the problem once it has been successfuly modeling.
 | 
	
		
			
				|  |  |  
 | 
	
		
			
				|  |  |     **Scaling**
 | 
	
		
			
				|  |  |  
 | 
	
		
			
				|  |  | -
 | 
	
		
			
				|  |  |     Given one robustifier :math:`\rho(s)` one can change the length
 | 
	
		
			
				|  |  |     scale at which robustification takes place, by adding a scale
 | 
	
		
			
				|  |  |     factor :math:`a > 0` which gives us :math:`\rho(s,a) = a^2 \rho(s /
 | 
	
	
		
			
				|  | @@ -705,9 +790,9 @@ having to touch the problem once it has been successfuly modeling.
 | 
	
		
			
				|  |  |  Instances
 | 
	
		
			
				|  |  |  ^^^^^^^^^
 | 
	
		
			
				|  |  |  
 | 
	
		
			
				|  |  | -Ceres includes a number of other loss functions. For simplicity we
 | 
	
		
			
				|  |  | -described their unscaled versions. The figure below illustrates their
 | 
	
		
			
				|  |  | -shape graphically. More details can be found in
 | 
	
		
			
				|  |  | +Ceres includes a number of predefined loss functions. For simplicity
 | 
	
		
			
				|  |  | +we described their unscaled versions. The figure below illustrates
 | 
	
		
			
				|  |  | +their shape graphically. More details can be found in
 | 
	
		
			
				|  |  |  ``include/ceres/loss_function.h``.
 | 
	
		
			
				|  |  |  
 | 
	
		
			
				|  |  |  .. figure:: loss.png
 | 
	
	
		
			
				|  | @@ -743,10 +828,68 @@ shape graphically. More details can be found in
 | 
	
		
			
				|  |  |  
 | 
	
		
			
				|  |  |  .. class:: ComposedLoss
 | 
	
		
			
				|  |  |  
 | 
	
		
			
				|  |  | +   Given two loss functions ``f`` and ``g``, implements the loss
 | 
	
		
			
				|  |  | +   function ``h(s) = f(g(s))``.
 | 
	
		
			
				|  |  | +
 | 
	
		
			
				|  |  | +   .. code-block:: c++
 | 
	
		
			
				|  |  | +
 | 
	
		
			
				|  |  | +      class ComposedLoss : public LossFunction {
 | 
	
		
			
				|  |  | +       public:
 | 
	
		
			
				|  |  | +        explicit ComposedLoss(const LossFunction* f,
 | 
	
		
			
				|  |  | +                              Ownership ownership_f,
 | 
	
		
			
				|  |  | +                              const LossFunction* g,
 | 
	
		
			
				|  |  | +                              Ownership ownership_g);
 | 
	
		
			
				|  |  | +      };
 | 
	
		
			
				|  |  | +
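The composition is just function composition on the squared norm: evaluate ``g`` first, then ``f``. A standalone sketch using plain functions rather than the ``LossFunction`` interface (the particular choices of ``f`` and ``g`` here are made up for illustration):

```cpp
#include <cmath>

// g: unscaled Huber loss with delta = 1 (quadratic near zero,
// linear in |residual| far from zero).
double HuberRho(double s) {
  return s <= 1.0 ? s : 2.0 * std::sqrt(s) - 1.0;
}

// f: a pure scaling, f(s) = 2 s.
double Scale2(double s) { return 2.0 * s; }

// ComposedLoss semantics: h(s) = f(g(s)).  Ceres also chains the
// first and second derivatives, which are omitted in this sketch.
double ComposedRho(double s) { return Scale2(HuberRho(s)); }
```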
 | 
	
		
			
				|  |  |  .. class:: ScaledLoss
 | 
	
		
			
				|  |  |  
 | 
	
		
			
				|  |  | +   Sometimes you want to simply scale the output value of the
 | 
	
		
			
				|  |  | +   robustifier. For example, you might want to weight different error
 | 
	
		
			
				|  |  | +   terms differently (e.g., weight pixel reprojection errors
 | 
	
		
			
				|  |  | +   differently from terrain errors).
 | 
	
		
			
				|  |  | +
 | 
	
		
			
				|  |  | +   Given a loss function :math:`\rho(s)` and a scalar :math:`a`, :class:`ScaledLoss`
 | 
	
		
			
				|  |  | +   implements the function :math:`a \rho(s)`.
 | 
	
		
			
				|  |  | +
 | 
	
		
			
				|  |  | +   Since we treat a ``NULL`` loss function as the identity loss
 | 
	
		
			
				|  |  | +   function, :math:`\rho` = ``NULL`` is a valid input and will result
 | 
	
		
			
				|  |  | +   in the input being scaled by :math:`a`. This provides a simple way
 | 
	
		
			
				|  |  | +   of implementing a scaled ResidualBlock.
 | 
	
		
			
				|  |  | +
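In code the semantics are a single multiplication. A one-line sketch in plain C++ (not the Ceres class), including the ``NULL``-as-identity case:

```cpp
// ScaledLoss semantics: evaluate a * rho(s).  A null rho is treated
// as the identity loss rho(s) = s, so the squared residual is simply
// scaled by a -- a cheap way to weight a residual block.
double ScaledRho(double (*rho)(double), double a, double s) {
  return a * (rho ? rho(s) : s);
}

// An example rho: the trivial (identity) loss, written out explicitly.
double TrivialRho(double s) { return s; }
```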
 | 
	
		
			
				|  |  |  .. class:: LossFunctionWrapper
 | 
	
		
			
				|  |  |  
 | 
	
		
			
				|  |  | +   Sometimes after the optimization problem has been constructed, we
 | 
	
		
			
				|  |  | +   wish to mutate the scale of the loss function. For example, when
 | 
	
		
			
				|  |  | +   performing estimation from data which has substantial outliers,
 | 
	
		
			
				|  |  | +   convergence can be improved by starting out with a large scale,
 | 
	
		
			
				|  |  | +   optimizing the problem and then reducing the scale. This can have
 | 
	
		
			
				|  |  | +   better convergence behavior than just using a loss function with a
 | 
	
		
			
				|  |  | +   small scale.
 | 
	
		
			
				|  |  | +
 | 
	
		
			
				|  |  | +   This templated class allows the user to implement a loss function
 | 
	
		
			
				|  |  | +   whose scale can be mutated after an optimization problem has been
 | 
	
		
			
				|  |  | +   constructed. For example:
 | 
	
		
			
				|  |  | +
 | 
	
		
			
				|  |  | +   .. code-block:: c++
 | 
	
		
			
				|  |  | +
 | 
	
		
			
				|  |  | +     Problem problem;
 | 
	
		
			
				|  |  | +
 | 
	
		
			
				|  |  | +     // Add parameter blocks
 | 
	
		
			
				|  |  | +
 | 
	
		
			
				|  |  | +     CostFunction* cost_function =
 | 
	
		
			
				|  |  | +         new AutoDiffCostFunction<UW_Camera_Mapper, 2, 9, 3>(
 | 
	
		
			
				|  |  | +             new UW_Camera_Mapper(feature_x, feature_y));
 | 
	
		
			
				|  |  | +
 | 
	
		
			
				|  |  | +     LossFunctionWrapper* loss_function =
				|  |  | +         new LossFunctionWrapper(new HuberLoss(1.0), TAKE_OWNERSHIP);
 | 
	
		
			
				|  |  | +     problem.AddResidualBlock(cost_function, loss_function, parameters);
 | 
	
		
			
				|  |  | +
 | 
	
		
			
				|  |  | +     Solver::Options options;
 | 
	
		
			
				|  |  | +     Solver::Summary summary;
 | 
	
		
			
				|  |  | +     Solve(options, &problem, &summary);
 | 
	
		
			
				|  |  | +
 | 
	
		
			
				|  |  | +     loss_function->Reset(new HuberLoss(1.0), TAKE_OWNERSHIP);
 | 
	
		
			
				|  |  | +     Solve(options, &problem, &summary);
 | 
	
		
			
				|  |  | +
 | 
	
		
			
				|  |  |  
 | 
	
		
			
				|  |  |  Theory
 | 
	
		
			
				|  |  |  ^^^^^^
 | 
	
	
		
			
				|  | @@ -763,8 +906,8 @@ Then, the robustified gradient and the Gauss-Newton Hessian are
 | 
	
		
			
				|  |  |  
 | 
	
		
			
				|  |  |  .. math::
 | 
	
		
			
				|  |  |  
 | 
	
		
			
				|  |  | -	g(x) &= \rho'J^\top(x)f(x)\\
 | 
	
		
			
				|  |  | -	H(x) &= J^\top(x)\left(\rho' + 2 \rho''f(x)f^\top(x)\right)J(x)
 | 
	
		
			
				|  |  | +        g(x) &= \rho'J^\top(x)f(x)\\
 | 
	
		
			
				|  |  | +        H(x) &= J^\top(x)\left(\rho' + 2 \rho''f(x)f^\top(x)\right)J(x)
 | 
	
		
			
				|  |  |  
 | 
	
		
			
				|  |  |  where the terms involving the second derivatives of :math:`f(x)` have
 | 
	
		
			
				|  |  |  been ignored. Note that :math:`H(x)` is indefinite if
 | 
	
	
		
			
				|  | @@ -783,9 +926,9 @@ Then, define the rescaled residual and Jacobian as
 | 
	
		
			
				|  |  |  
 | 
	
		
			
				|  |  |  .. math::
 | 
	
		
			
				|  |  |  
 | 
	
		
			
				|  |  | -	\tilde{f}(x) &= \frac{\sqrt{\rho'}}{1 - \alpha} f(x)\\
 | 
	
		
			
				|  |  | -	\tilde{J}(x) &= \sqrt{\rho'}\left(1 - \alpha
 | 
	
		
			
				|  |  | - 	                \frac{f(x)f^\top(x)}{\left\|f(x)\right\|^2} \right)J(x)
 | 
	
		
			
				|  |  | +        \tilde{f}(x) &= \frac{\sqrt{\rho'}}{1 - \alpha} f(x)\\
 | 
	
		
			
				|  |  | +        \tilde{J}(x) &= \sqrt{\rho'}\left(1 - \alpha
 | 
	
		
			
				|  |  | +                        \frac{f(x)f^\top(x)}{\left\|f(x)\right\|^2} \right)J(x)
 | 
	
		
			
				|  |  |  
 | 
	
		
			
				|  |  |  
 | 
	
		
			
				|  |  |  In the case :math:`2 \rho''\left\|f(x)\right\|^2 + \rho' \lesssim 0`,
 | 
	
	
		
			
				|  | @@ -793,7 +936,7 @@ we limit :math:`\alpha \le 1- \epsilon` for some small
 | 
	
		
			
				|  |  |  :math:`\epsilon`. For more details see [Triggs]_.
 | 
	
		
			
				|  |  |  
 | 
	
		
			
				|  |  |  With this simple rescaling, one can use any Jacobian based non-linear
 | 
	
		
			
				|  |  | -least squares algorithm to robustifed non-linear least squares
 | 
	
		
			
				|  |  | +least squares algorithm to robustified non-linear least squares
 | 
	
		
			
				|  |  |  problems.
 | 
	
		
			
				|  |  |  
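The identity :math:`\tilde{J}^\top \tilde{f} = \rho' J^\top f` behind this rescaling can be verified numerically: applying the projector to :math:`f` leaves :math:`(1 - \alpha) f`, which cancels the :math:`1/(1-\alpha)` in :math:`\tilde{f}`. A small sketch with two residuals and one parameter (the values of :math:`\rho'` and :math:`\rho''` are arbitrary test inputs):

```cpp
#include <cmath>
#include <utility>
#include <vector>

// Two residuals f, one parameter, so the Jacobian J is a 2-vector.
// Returns {J~^T f~, rho' J^T f}; the two gradients must agree.
std::pair<double, double> TriggsGradients(const std::vector<double>& f,
                                          const std::vector<double>& J,
                                          double rho1, double rho2) {
  const double sq_norm = f[0] * f[0] + f[1] * f[1];
  // alpha is the root of 1/2 alpha^2 - alpha = (rho''/rho') ||f||^2
  // with alpha <= 1.
  const double alpha = 1.0 - std::sqrt(1.0 + 2.0 * sq_norm * rho2 / rho1);
  const double s = std::sqrt(rho1);

  double corrected = 0.0;
  for (int i = 0; i < 2; ++i) {
    const double f_tilde_i = s / (1.0 - alpha) * f[i];
    // Row i of (I - alpha f f^T / ||f||^2) J.
    const double row =
        J[i] - alpha * f[i] * (f[0] * J[0] + f[1] * J[1]) / sq_norm;
    corrected += (s * row) * f_tilde_i;
  }
  const double plain = rho1 * (J[0] * f[0] + J[1] * f[1]);
  return {corrected, plain};
}
```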
 | 
	
		
			
				|  |  |  
 | 
	
	
		
			
				|  | @@ -917,6 +1060,80 @@ Instances
 | 
	
		
			
				|  |  |     of :eq:`quaternion`.
 | 
	
		
			
				|  |  |  
 | 
	
		
			
				|  |  |  
 | 
	
		
			
				|  |  | +
 | 
	
		
			
				|  |  | +:class:`AutoDiffLocalParameterization`
 | 
	
		
			
				|  |  | +--------------------------------------
 | 
	
		
			
				|  |  | +
 | 
	
		
			
				|  |  | +.. class:: AutoDiffLocalParameterization
 | 
	
		
			
				|  |  | +
 | 
	
		
			
				|  |  | +  :class:`AutoDiffLocalParameterization` does for
 | 
	
		
			
				|  |  | +  :class:`LocalParameterization` what :class:`AutoDiffCostFunction`
 | 
	
		
			
				|  |  | +  does for :class:`CostFunction`. It allows the user to define a
 | 
	
		
			
				|  |  | +  templated functor that implements the
 | 
	
		
			
				|  |  | +  :func:`LocalParameterization::Plus` operation and it uses automatic
 | 
	
		
			
				|  |  | +  differentiation to implement the computation of the Jacobian.
 | 
	
		
			
				|  |  | +
 | 
	
		
			
				|  |  | +  To get an auto differentiated local parameterization, you must
 | 
	
		
			
				|  |  | +  define a class with a templated operator() (a functor) that computes
 | 
	
		
			
				|  |  | +
 | 
	
		
			
				|  |  | +     .. math:: x' = \boxplus(x, \Delta x),
 | 
	
		
			
				|  |  | +
 | 
	
		
			
				|  |  | +  For example, Quaternions have a three dimensional local
 | 
	
		
			
				|  |  | +  parameterization. Its plus operation can be implemented as (taken
 | 
	
		
			
				|  |  | +  from `internal/ceres/auto_diff_local_parameterization_test.cc
 | 
	
		
			
				|  |  | +  <https://ceres-solver.googlesource.com/ceres-solver/+/master/include/ceres/local_parameterization.h>`_
 | 
	
		
			
				|  |  | +  )
 | 
	
		
			
				|  |  | +
 | 
	
		
			
				|  |  | +    .. code-block:: c++
 | 
	
		
			
				|  |  | +
 | 
	
		
			
				|  |  | +      struct QuaternionPlus {
 | 
	
		
			
				|  |  | +        template<typename T>
 | 
	
		
			
				|  |  | +        bool operator()(const T* x, const T* delta, T* x_plus_delta) const {
 | 
	
		
			
				|  |  | +          const T squared_norm_delta =
 | 
	
		
			
				|  |  | +              delta[0] * delta[0] + delta[1] * delta[1] + delta[2] * delta[2];
 | 
	
		
			
				|  |  | +
 | 
	
		
			
				|  |  | +          T q_delta[4];
 | 
	
		
			
				|  |  | +          if (squared_norm_delta > T(0.0)) {
 | 
	
		
			
				|  |  | +            T norm_delta = sqrt(squared_norm_delta);
 | 
	
		
			
				|  |  | +            const T sin_delta_by_delta = sin(norm_delta) / norm_delta;
 | 
	
		
			
				|  |  | +            q_delta[0] = cos(norm_delta);
 | 
	
		
			
				|  |  | +            q_delta[1] = sin_delta_by_delta * delta[0];
 | 
	
		
			
				|  |  | +            q_delta[2] = sin_delta_by_delta * delta[1];
 | 
	
		
			
				|  |  | +            q_delta[3] = sin_delta_by_delta * delta[2];
 | 
	
		
			
				|  |  | +          } else {
 | 
	
		
			
				|  |  | +            // We do not just use q_delta = [1,0,0,0] here because that is a
 | 
	
		
			
				|  |  | +            // constant and when used for automatic differentiation will
 | 
	
		
			
				|  |  | +            // lead to a zero derivative. Instead we take a first order
 | 
	
		
			
				|  |  | +            // approximation and evaluate it at zero.
 | 
	
		
			
				|  |  | +            q_delta[0] = T(1.0);
 | 
	
		
			
				|  |  | +            q_delta[1] = delta[0];
 | 
	
		
			
				|  |  | +            q_delta[2] = delta[1];
 | 
	
		
			
				|  |  | +            q_delta[3] = delta[2];
 | 
	
		
			
				|  |  | +          }
 | 
	
		
			
				|  |  | +
 | 
	
		
			
				|  |  | +          QuaternionProduct(q_delta, x, x_plus_delta);
 | 
	
		
			
				|  |  | +          return true;
 | 
	
		
			
				|  |  | +        }
 | 
	
		
			
				|  |  | +      };
 | 
	
		
			
				|  |  | +
 | 
	
		
			
				|  |  | +  Then given this struct, the auto differentiated local
 | 
	
		
			
				|  |  | +  parameterization can now be constructed as
 | 
	
		
			
				|  |  | +
 | 
	
		
			
				|  |  | +  .. code-block:: c++
 | 
	
		
			
				|  |  | +
 | 
	
		
			
				|  |  | +     LocalParameterization* local_parameterization =
 | 
	
		
			
				|  |  | +         new AutoDiffLocalParameterization<QuaternionPlus, 4, 3>;
 | 
	
		
			
				|  |  | +                                                           |  |
 | 
	
		
			
				|  |  | +                                Global Size ---------------+  |
 | 
	
		
			
				|  |  | +                                Local Size -------------------+
 | 
	
		
			
				|  |  | +
 | 
	
		
			
				|  |  | +  **WARNING:** Since the functor will get instantiated with different
 | 
	
		
			
				|  |  | +  types for ``T``, you must convert from other numeric types to
 | 
	
		
			
				|  |  | +  ``T`` before mixing computations with other variables of type
 | 
	
		
			
				|  |  | +  ``T``. In the example above, this is seen where the literals are
 | 
	
		
			
				|  |  | +  written as ``T(1.0)`` and ``T(0.0)`` rather than ``1.0`` and ``0.0``.
 | 
	
		
			
				|  |  | +
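The functor above can be exercised with plain doubles: since ``q_delta`` is a unit quaternion whenever ``delta`` is non-zero, and the product of unit quaternions is a unit quaternion, Plus keeps a unit quaternion on the unit sphere. A standalone sketch (the quaternion product is written out explicitly here; Ceres provides it as ``ceres::QuaternionProduct``):

```cpp
#include <cmath>

// Hamilton product zw = z * w for quaternions stored as [w, x, y, z].
void QuaternionProduct(const double z[4], const double w[4], double zw[4]) {
  zw[0] = z[0] * w[0] - z[1] * w[1] - z[2] * w[2] - z[3] * w[3];
  zw[1] = z[0] * w[1] + z[1] * w[0] + z[2] * w[3] - z[3] * w[2];
  zw[2] = z[0] * w[2] - z[1] * w[3] + z[2] * w[0] + z[3] * w[1];
  zw[3] = z[0] * w[3] + z[1] * w[2] - z[2] * w[1] + z[3] * w[0];
}

// Plus(x, delta): the exponential-map update of the QuaternionPlus
// functor, specialized to double.
void QuaternionPlus(const double x[4], const double delta[3],
                    double x_plus_delta[4]) {
  const double sq =
      delta[0] * delta[0] + delta[1] * delta[1] + delta[2] * delta[2];
  double q_delta[4];
  if (sq > 0.0) {
    const double n = std::sqrt(sq);
    const double sin_by_n = std::sin(n) / n;
    q_delta[0] = std::cos(n);
    q_delta[1] = sin_by_n * delta[0];
    q_delta[2] = sin_by_n * delta[1];
    q_delta[3] = sin_by_n * delta[2];
  } else {
    // First order approximation at delta = 0.
    q_delta[0] = 1.0;
    q_delta[1] = delta[0];
    q_delta[2] = delta[1];
    q_delta[3] = delta[2];
  }
  QuaternionProduct(q_delta, x, x_plus_delta);
}
```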
 | 
	
		
			
				|  |  | +
 | 
	
		
			
				|  |  |  :class:`Problem`
 | 
	
		
			
				|  |  |  ----------------
 | 
	
		
			
				|  |  |  
 | 
	
	
		
			
				|  | @@ -953,31 +1170,18 @@ Instances
 | 
	
		
			
				|  |  |     of the term is just the squared norm of the residuals.
 | 
	
		
			
				|  |  |  
 | 
	
		
			
				|  |  |     The user has the option of explicitly adding the parameter blocks
 | 
	
		
			
				|  |  | -   using :func:`Problem::AddParameterBlock`. This causes additional correctness
 | 
	
		
			
				|  |  | -   checking; however, :func:`Problem::AddResidualBlock` implicitly adds the
 | 
	
		
			
				|  |  | -   parameter blocks if they are not present, so calling
 | 
	
		
			
				|  |  | -   :func:`Problem::AddParameterBlock` explicitly is not required.
 | 
	
		
			
				|  |  | -
 | 
	
		
			
				|  |  | -
 | 
	
		
			
				|  |  | -   :class:`Problem` by default takes ownership of the ``cost_function`` and
 | 
	
		
			
				|  |  | -   ``loss_function`` pointers. These objects remain live for the life of
 | 
	
		
			
				|  |  | -   the :class:`Problem` object. If the user wishes to keep control over the
 | 
	
		
			
				|  |  | -   destruction of these objects, then they can do this by setting the
 | 
	
		
			
				|  |  | -   corresponding enums in the ``Problem::Options`` struct.
 | 
	
		
			
				|  |  | -
 | 
	
		
			
				|  |  | -
 | 
	
		
			
				|  |  | -   Note that even though the Problem takes ownership of ``cost_function``
 | 
	
		
			
				|  |  | -   and ``loss_function``, it does not preclude the user from re-using
 | 
	
		
			
				|  |  | -   them in another residual block. The destructor takes care to call
 | 
	
		
			
				|  |  | -   delete on each ``cost_function`` or ``loss_function`` pointer only
 | 
	
		
			
				|  |  | -   once, regardless of how many residual blocks refer to them.
 | 
	
		
			
				|  |  | +   using :func:`Problem::AddParameterBlock`. This causes additional
 | 
	
		
			
				|  |  | +   correctness checking; however, :func:`Problem::AddResidualBlock`
 | 
	
		
			
				|  |  | +   implicitly adds the parameter blocks if they are not present, so
 | 
	
		
			
				|  |  | +   calling :func:`Problem::AddParameterBlock` explicitly is not
 | 
	
		
			
				|  |  | +   required.
 | 
	
		
			
				|  |  |  
 | 
	
		
			
				|  |  |     :func:`Problem::AddParameterBlock` explicitly adds a parameter
 | 
	
		
			
				|  |  |     block to the :class:`Problem`. Optionally it allows the user to
 | 
	
		
			
				|  |  | -   associate a :class:`LocalParameterization` object with the parameter
 | 
	
		
			
				|  |  | -   block too. Repeated calls with the same arguments are
 | 
	
		
			
				|  |  | +   associate a :class:`LocalParameterization` object with the
 | 
	
		
			
				|  |  | +   parameter block too. Repeated calls with the same arguments are
 | 
	
		
			
				|  |  |     ignored. Repeated calls with the same double pointer but a
 | 
	
		
			
				|  |  | -   different size results in undefined behaviour.
 | 
	
		
			
				|  |  | +   different size results in undefined behavior.
 | 
	
		
			
				|  |  |  
 | 
	
		
			
				|  |  |     You can set any parameter block to be constant using
 | 
	
		
			
				|  |  |     :func:`Problem::SetParameterBlockConstant` and undo this using
 | 
	
	
		
			
				|  | @@ -1003,11 +1207,11 @@ Instances
 | 
	
		
			
				|  |  |     destruction of these objects, then they can do this by setting the
 | 
	
		
			
				|  |  |     corresponding enums in the :class:`Problem::Options` struct.
 | 
	
		
			
				|  |  |  
 | 
	
		
			
				|  |  | -   Even though :class:`Problem` takes ownership of these pointers, it
 | 
	
		
			
				|  |  | -   does not preclude the user from re-using them in another residual
 | 
	
		
			
				|  |  | -   or parameter block. The destructor takes care to call delete on
 | 
	
		
			
				|  |  | -   each pointer only once.
 | 
	
		
			
				|  |  | -
 | 
	
		
			
				|  |  | +   Note that even though the Problem takes ownership of ``cost_function``
 | 
	
		
			
				|  |  | +   and ``loss_function``, it does not preclude the user from re-using
 | 
	
		
			
				|  |  | +   them in another residual block. The destructor takes care to call
 | 
	
		
			
				|  |  | +   delete on each ``cost_function`` or ``loss_function`` pointer only
 | 
	
		
			
				|  |  | +   once, regardless of how many residual blocks refer to them.
 | 
	
		
			
				|  |  |  
 | 
	
		
			
				|  |  |  .. function:: ResidualBlockId Problem::AddResidualBlock(CostFunction* cost_function, LossFunction* loss_function, const vector<double*> parameter_blocks)
 | 
	
		
			
				|  |  |  
 | 
	
	
		
			
				|  | @@ -1056,14 +1260,14 @@ Instances
 | 
	
		
			
				|  |  |     Add a parameter block with appropriate size to the problem.
 | 
	
		
			
				|  |  |     Repeated calls with the same arguments are ignored. Repeated calls
 | 
	
		
			
				|  |  |     with the same double pointer but a different size results in
 | 
	
		
			
				|  |  | -   undefined behaviour.
 | 
	
		
			
				|  |  | +   undefined behavior.
 | 
	
		
			
				|  |  |  
 | 
	
		
			
				|  |  |  .. function:: void Problem::AddParameterBlock(double* values, int size)
 | 
	
		
			
				|  |  |  
 | 
	
		
			
				|  |  |     Add a parameter block with appropriate size and parameterization to
 | 
	
		
			
				|  |  |     the problem. Repeated calls with the same arguments are
 | 
	
		
			
				|  |  |     ignored. Repeated calls with the same double pointer but a
 | 
	
		
			
				|  |  | -   different size results in undefined behaviour.
 | 
	
		
			
				|  |  | +   different size results in undefined behavior.
 | 
	
		
			
				|  |  |  
 | 
	
		
			
				|  |  |  .. function:: void Problem::RemoveResidualBlock(ResidualBlockId residual_block)
 | 
	
		
			
				|  |  |  
 | 
	
	
		
			
				|  | @@ -1076,8 +1280,8 @@ Instances
 | 
	
		
			
				|  |  |     the removal is fast (almost constant time). Otherwise, removing a
 | 
	
		
			
				|  |  |     parameter block will incur a scan of the entire Problem object.
 | 
	
		
			
				|  |  |  
 | 
	
		
			
				|  |  | -   WARNING: Removing a residual or parameter block will destroy the
 | 
	
		
			
				|  |  | -   implicit ordering, rendering the jacobian or residuals returned
 | 
	
		
			
				|  |  | +   **WARNING:** Removing a residual or parameter block will destroy
 | 
	
		
			
				|  |  | +   the implicit ordering, rendering the jacobian or residuals returned
 | 
	
		
			
				|  |  |     from the solver uninterpretable. If you depend on the evaluated
 | 
	
		
			
				|  |  |     jacobian, do not use remove! This may change in a future release.
 | 
	
		
			
				|  |  |  
 | 
	
	
		
			
				|  | @@ -1088,10 +1292,10 @@ Instances
 | 
	
		
			
				|  |  |     residual block will not get deleted immediately; deletion happens when the
 | 
	
		
			
				|  |  |     problem itself is deleted.
 | 
	
		
			
				|  |  |  
 | 
	
		
			
				|  |  | -   WARNING: Removing a residual or parameter block will destroy the implicit
 | 
	
		
			
				|  |  | -   ordering, rendering the jacobian or residuals returned from the solver
 | 
	
		
			
				|  |  | -   uninterpretable. If you depend on the evaluated jacobian, do not use
 | 
	
		
			
				|  |  | -   remove! This may change in a future release.
 | 
	
		
			
				|  |  | +   **WARNING:** Removing a residual or parameter block will destroy
 | 
	
		
			
				|  |  | +   the implicit ordering, rendering the jacobian or residuals returned
 | 
	
		
			
				|  |  | +   from the solver uninterpretable. If you depend on the evaluated
 | 
	
		
			
				|  |  | +   jacobian, do not use remove! This may change in a future release.
 | 
	
		
			
				|  |  |     Hold the indicated parameter block constant during optimization.
 | 
	
		
			
				|  |  |  
 | 
	
		
			
				|  |  |  
 | 
	
	
		
			
				|  | @@ -1103,7 +1307,6 @@ Instances
 | 
	
		
			
				|  |  |  
 | 
	
		
			
				|  |  |     Allow the indicated parameter to vary during optimization.
 | 
	
		
			
				|  |  |  
 | 
	
		
			
				|  |  | -
 | 
	
		
			
				|  |  |  .. function:: void Problem::SetParameterization(double* values, LocalParameterization* local_parameterization)
 | 
	
		
			
				|  |  |  
 | 
	
		
			
				|  |  |     Set the local parameterization for one of the parameter blocks.
 | 
	
	
		
			
				|  | @@ -1133,6 +1336,23 @@ Instances
 | 
	
		
			
				|  |  |     The size of the residual vector obtained by summing over the sizes
 | 
	
		
			
				|  |  |     of all of the residual blocks.
 | 
	
		
			
				|  |  |  
 | 
	
		
			
				|  |  | +.. function:: int Problem::ParameterBlockSize(const double* values) const
 | 
	
		
			
				|  |  | +
 | 
	
		
			
				|  |  | +   The size of the parameter block.
 | 
	
		
			
				|  |  | +
 | 
	
		
			
				|  |  | +.. function:: int Problem::ParameterBlockLocalSize(const double* values) const
 | 
	
		
			
				|  |  | +
 | 
	
		
			
				|  |  | +  The size of local parameterization for the parameter block. If
 | 
	
		
			
				|  |  | +  there is no local parameterization associated with this parameter
 | 
	
		
			
				|  |  | +  block, then ``ParameterBlockLocalSize`` = ``ParameterBlockSize``.
 | 
	
		
			
				|  |  | +
 | 
	
		
			
				|  |  | +
 | 
	
		
			
				|  |  | +.. function:: void Problem::GetParameterBlocks(vector<double*>* parameter_blocks) const
 | 
	
		
			
				|  |  | +
 | 
	
		
			
				|  |  | +  Fills the passed ``parameter_blocks`` vector with pointers to the
 | 
	
		
			
				|  |  | +  parameter blocks currently in the problem. After this call,
 | 
	
		
			
				|  |  | +  ``parameter_blocks.size() == NumParameterBlocks()``.
 | 
	
		
			
				|  |  | +
 | 
	
		
			
				|  |  |  .. function:: bool Problem::Evaluate(const Problem::EvaluateOptions& options, double* cost, vector<double>* residuals, vector<double>* gradient, CRSMatrix* jacobian)
 | 
	
		
			
				|  |  |  
 | 
	
		
			
				|  |  |     Evaluate a :class:`Problem`. Any of the output pointers can be
 | 
	
	
		
			
				|  | @@ -1148,7 +1368,6 @@ Instances
 | 
	
		
			
				|  |  |       double cost = 0.0;
 | 
	
		
			
				|  |  |       problem.Evaluate(Problem::EvaluateOptions(), &cost, NULL, NULL, NULL);
 | 
	
		
			
				|  |  |  
 | 
	
		
			
				|  |  | -
 | 
	
		
			
				|  |  |     The cost is evaluated at `x = 1`. If you wish to evaluate the
 | 
	
		
			
				|  |  |     problem at `x = 2`, then
 | 
	
		
			
				|  |  |  
 |