.. default-domain:: cpp

.. cpp:namespace:: ceres

.. _chapter-numerical_derivatives:

===================
Numeric derivatives
===================

The other extreme from using analytic derivatives is to use numeric
derivatives. The key observation here is that the process of
differentiating a function :math:`f(x)` w.r.t :math:`x` can be written
as the limiting process:

.. math::
   Df(x) = \lim_{h \rightarrow 0} \frac{f(x + h) - f(x)}{h}


Forward Differences
===================

Now of course one cannot perform the limiting operation numerically on
a computer so we do the next best thing, which is to choose a small
value of :math:`h` and approximate the derivative as

.. math::
   Df(x) \approx \frac{f(x + h) - f(x)}{h}


The above formula is the simplest, most basic form of numeric
differentiation.
It is known as the *Forward Difference* formula.

So how would one go about constructing a numerically differentiated
version of ``Rat43Analytic`` (`Rat43
<http://www.itl.nist.gov/div898/strd/nls/data/ratkowsky3.shtml>`_) in
Ceres Solver? This is done in two steps:

  1. Define a *functor* that given the parameter values will evaluate the
     residual for a given :math:`(x,y)`.

  2. Construct a :class:`CostFunction` by using
     :class:`NumericDiffCostFunction` to wrap an instance of
     ``Rat43CostFunctor``.

.. code-block:: c++

  struct Rat43CostFunctor {
    Rat43CostFunctor(const double x, const double y) : x_(x), y_(y) {}

    bool operator()(const double* parameters, double* residuals) const {
      const double b1 = parameters[0];
      const double b2 = parameters[1];
      const double b3 = parameters[2];
      const double b4 = parameters[3];
      residuals[0] = b1 * pow(1.0 + exp(b2 - b3 * x_), -1.0 / b4) - y_;
      return true;
    }

    const double x_;
    const double y_;
  };

  CostFunction* cost_function =
    new NumericDiffCostFunction<Rat43CostFunctor, FORWARD, 1, 4>(
      new Rat43CostFunctor(x, y));

This is about the minimum amount of work one can expect to do to
define the cost function. The only thing the user needs to do is
ensure that the evaluation of the residual is implemented correctly
and efficiently.

Before going further, it is instructive to get an estimate of the
error in the forward difference formula. We do this by considering the
`Taylor expansion <https://en.wikipedia.org/wiki/Taylor_series>`_ of
:math:`f` near :math:`x`.
.. math::
   \begin{align}
   f(x+h) &= f(x) + h Df(x) + \frac{h^2}{2!} D^2f(x) +
   \frac{h^3}{3!}D^3f(x) + \cdots \\
   Df(x) &= \frac{f(x + h) - f(x)}{h} - \left [\frac{h}{2!}D^2f(x) +
   \frac{h^2}{3!}D^3f(x) + \cdots  \right]\\
   Df(x) &= \frac{f(x + h) - f(x)}{h} + O(h)
   \end{align}

i.e., the error in the forward difference formula is
:math:`O(h)` [#f4]_.

Implementation Details
----------------------

:class:`NumericDiffCostFunction` implements a generic algorithm to
numerically differentiate a given functor. While the actual
implementation of :class:`NumericDiffCostFunction` is complicated, the
net result is a :class:`CostFunction` that roughly looks something
like the following:

.. code-block:: c++

  class Rat43NumericDiffForward : public SizedCostFunction<1,4> {
   public:
     Rat43NumericDiffForward(const Rat43CostFunctor* functor)
         : functor_(functor) {}
     virtual ~Rat43NumericDiffForward() {}
     virtual bool Evaluate(double const* const* parameters,
                           double* residuals,
                           double** jacobians) const {
       (*functor_)(parameters[0], residuals);
       if (!jacobians) return true;
       double* jacobian = jacobians[0];
       if (!jacobian) return true;

       const double f = residuals[0];
       double parameters_plus_h[4];
       for (int i = 0; i < 4; ++i) {
         std::copy(parameters[0], parameters[0] + 4, parameters_plus_h);
         const double kRelativeStepSize = 1e-6;
         const double h = std::abs(parameters[0][i]) * kRelativeStepSize;
         parameters_plus_h[i] += h;
         double f_plus;
         (*functor_)(parameters_plus_h, &f_plus);
         jacobian[i] = (f_plus - f) / h;
       }
       return true;
     }

   private:
     std::unique_ptr<const Rat43CostFunctor> functor_;
  };

Note the choice of step size :math:`h` in the above code: instead of
an absolute step size which is the same for all parameters, we use a
relative step size of :math:`\text{kRelativeStepSize} = 10^{-6}`. This
gives better derivative estimates than an absolute step size [#f2]_
[#f3]_.
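To see why a relative step size helps, consider a standalone sketch;
the function ``f`` and the values used here are illustrative and not
part of Ceres:

.. code-block:: c++

  #include <cassert>
  #include <cmath>
  #include <cstdio>

  // Illustrative function, not part of Ceres: f(x) = x^2, so Df(x) = 2x.
  double f(double x) { return x * x; }

  // Forward difference estimate of Df at x with step h.
  double ForwardDifference(double x, double h) {
    return (f(x + h) - f(x)) / h;
  }

  int main() {
    const double x = 1e12;
    const double reference = 2.0 * x;  // Analytic derivative.

    // Absolute step: 1e-6 is smaller than half the gap between adjacent
    // doubles near 1e12, so x + h rounds back to x and the estimate
    // collapses to zero.
    const double absolute = ForwardDifference(x, 1e-6);

    // Relative step: h scales with |x|, keeping f(x + h) - f(x) well
    // above the roundoff level.
    const double h = 1e-6 * std::abs(x);
    const double relative = ForwardDifference(x, h);

    std::printf("absolute step estimate: %.6e\n", absolute);
    std::printf("relative step estimate: %.6e\n", relative);
    std::printf("relative step error:    %.2e\n",
                std::abs(relative - reference) / reference);
    return 0;
  }

With :math:`x = 10^{12}`, the fixed step :math:`10^{-6}` is below the
spacing of adjacent doubles at :math:`x`, so the difference in the
numerator is identically zero, while the relative step recovers the
derivative to about six digits.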
This choice of step size only works for parameter values that are not
close to zero. So the actual implementation of
:class:`NumericDiffCostFunction` uses a more complex step size
selection logic, where close to zero, it switches to a fixed step
size.


Central Differences
===================

The :math:`O(h)` error in the Forward Difference formula is okay but not
great. A better method is to use the *Central Difference* formula:

.. math::
   Df(x) \approx \frac{f(x + h) - f(x - h)}{2h}

Notice that if the value of :math:`f(x)` is known, the Forward
Difference formula only requires one extra evaluation, but the Central
Difference formula requires two evaluations, making it twice as
expensive. So is the extra evaluation worth it?

To answer this question, we again compute the error of approximation
in the central difference formula:

.. math::
   \begin{align}
   f(x + h) &= f(x) + h Df(x) + \frac{h^2}{2!}
   D^2f(x) + \frac{h^3}{3!} D^3f(x) + \frac{h^4}{4!} D^4f(x) + \cdots\\
   f(x - h) &= f(x) - h Df(x) + \frac{h^2}{2!}
   D^2f(x) - \frac{h^3}{3!} D^3f(x) + \frac{h^4}{4!} D^4f(x) -
   \cdots\\
   Df(x) & =  \frac{f(x + h) - f(x - h)}{2h} - \frac{h^2}{3!}
   D^3f(x) -  \frac{h^4}{5!}
   D^5f(x) - \cdots \\
   Df(x) & =  \frac{f(x + h) - f(x - h)}{2h} + O(h^2)
   \end{align}

The error of the Central Difference formula is :math:`O(h^2)`, i.e.,
the error goes down quadratically whereas the error in the Forward
Difference formula only goes down linearly.

Using central differences instead of forward differences in Ceres
Solver is a simple matter of changing a template argument to
:class:`NumericDiffCostFunction` as follows:

.. code-block:: c++

  CostFunction* cost_function =
    new NumericDiffCostFunction<Rat43CostFunctor, CENTRAL, 1, 4>(
      new Rat43CostFunctor(x, y));

But what do these differences in the error mean in practice? To see
this, consider the problem of evaluating the derivative of the
univariate function
.. math::
   f(x) = \frac{e^x}{\sin x - x^2},

at :math:`x = 1.0`.

It is easy to determine that :math:`Df(1.0) =
140.73773557129658`. Using this value as reference, we can now compute
the relative error in the forward and central difference formulae as a
function of the absolute step size and plot them.

.. figure:: forward_central_error.png
   :figwidth: 100%
   :align: center

Reading the graph from right to left, a number of things stand out:

 1. The graphs for both formulae have two distinct regions. At first,
    starting from a large value of :math:`h` the error goes down as
    the effect of truncating the Taylor series dominates, but as the
    value of :math:`h` continues to decrease, the error starts
    increasing again as roundoff error starts to dominate the
    computation. So we cannot just keep on reducing the value of
    :math:`h` to get better estimates of :math:`Df`. The fact that we
    are using finite precision arithmetic becomes a limiting factor.

 2. The Forward Difference formula is not a great method for evaluating
    derivatives. The Central Difference formula converges much more
    quickly to a more accurate estimate of the derivative with
    decreasing step size. So unless the evaluation of :math:`f(x)` is
    so expensive that you absolutely cannot afford the extra
    evaluation required by central differences, **do not use the
    Forward Difference formula**.

 3. Neither formula works well for a poorly chosen value of :math:`h`.


Ridders' Method
===============

So, can we get better estimates of :math:`Df` without requiring such
small values of :math:`h` that we start hitting floating point
roundoff errors?

One possible approach is to find a method whose error goes down faster
than :math:`O(h^2)`. This can be done by applying `Richardson
Extrapolation
<https://en.wikipedia.org/wiki/Richardson_extrapolation>`_ to the
problem of differentiation. This is also known as *Ridders' Method*
[Ridders]_.

Let us recall the error in the central differences formula:
.. math::
   \begin{align}
   Df(x) & =  \frac{f(x + h) - f(x - h)}{2h} - \frac{h^2}{3!}
   D^3f(x) -  \frac{h^4}{5!}
   D^5f(x) - \cdots\\
         & =  \frac{f(x + h) - f(x - h)}{2h} + K_2 h^2 + K_4 h^4 + \cdots
   \end{align}

The key thing to note here is that the terms :math:`K_2, K_4, ...`
are independent of :math:`h` and only depend on :math:`x`.

Let us now define:

.. math::

   A(1, m) = \frac{f(x + h/2^{m-1}) - f(x - h/2^{m-1})}{2h/2^{m-1}}.

Then observe that

.. math::

   Df(x) = A(1,1) + K_2 h^2 + K_4 h^4 + \cdots

and

.. math::

   Df(x) = A(1, 2) + K_2 (h/2)^2 + K_4 (h/2)^4 + \cdots

Here we have halved the step size to obtain a second central
differences estimate of :math:`Df(x)`. Combining these two estimates,
we get:

.. math::

   Df(x) = \frac{4 A(1, 2) - A(1,1)}{4 - 1} + O(h^4)

which is an approximation of :math:`Df(x)` with truncation error that
goes down as :math:`O(h^4)`. But we do not have to stop here. We can
iterate this process to obtain even more accurate estimates as
follows:

.. math::

   A(n, m) =  \begin{cases}
    \frac{\displaystyle f(x + h/2^{m-1}) - f(x -
    h/2^{m-1})}{\displaystyle 2h/2^{m-1}} & n = 1 \\
   \frac{\displaystyle 4^{n-1} A(n - 1, m + 1) - A(n - 1, m)}{\displaystyle 4^{n-1} - 1} & n > 1
   \end{cases}

It is straightforward to show that the approximation error in
:math:`A(n, 1)` is :math:`O(h^{2n})`. To see how the above formula can
be implemented in practice to compute :math:`A(n,1)` it is helpful to
structure the computation as the following tableau:

.. math::

   \begin{array}{ccccc}
   A(1,1) & A(1, 2) & A(1, 3) & A(1, 4) & \cdots\\
          & A(2, 1) & A(2, 2) & A(2, 3) & \cdots\\
          &         & A(3, 1) & A(3, 2) & \cdots\\
          &         &         & A(4, 1) & \cdots \\
          &         &         &         & \ddots
   \end{array}

So, to compute :math:`A(n, 1)` for increasing values of :math:`n` we
move from the left to the right, computing one column at a
time.
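As a sketch of how this column-by-column computation might look in
code (an illustration of the recurrence above, not the adaptive
implementation Ceres actually uses; the function ``Ridders`` is a
hypothetical helper):

.. code-block:: c++

  #include <cmath>
  #include <cstdio>
  #include <functional>
  #include <vector>

  // Computes A(n, 1) from the tableau above: the first row holds
  // central difference estimates A(1, m) with step sizes h / 2^{m-1};
  // each later row applies the Richardson extrapolation recurrence.
  double Ridders(const std::function<double(double)>& f,
                 double x, double h, int n) {
    std::vector<double> row(n);
    for (int m = 0; m < n; ++m) {
      const double step = h / (1 << m);
      row[m] = (f(x + step) - f(x - step)) / (2.0 * step);  // A(1, m + 1)
    }
    double factor = 4.0;  // 4^{n-1} for n = 2, 3, ...
    for (int i = 1; i < n; ++i) {
      for (int m = 0; m + i < n; ++m) {
        // A(i + 1, m + 1) from A(i, m + 1) and A(i, m + 2).
        row[m] = (factor * row[m + 1] - row[m]) / (factor - 1.0);
      }
      factor *= 4.0;
    }
    return row[0];  // A(n, 1)
  }

  int main() {
    const auto f = [](double x) {
      return std::exp(x) / (std::sin(x) - x * x);
    };
    // Reproduces A(5, 1) from the tableau below for f at x = 1, h = 0.01.
    std::printf("%.11f\n", Ridders(f, 1.0, 0.01, 5));
    return 0;
  }

Each new column costs only two additional evaluations of :math:`f`,
since the previous columns are reused by the recurrence.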
Assuming that the primary cost here is the evaluation of the
function :math:`f(x)`, the cost of computing a new column of the above
tableau is two function evaluations, since evaluating
:math:`A(1, n)` requires evaluating the central difference formula
for step size :math:`2^{1-n}h`.

Applying this method to :math:`f(x) = \frac{e^x}{\sin x - x^2}`
starting with a fairly large step size :math:`h = 0.01`, we get:

.. math::

   \begin{array}{rrrrr}
   141.678097131 &140.971663667 &140.796145400 &140.752333523 &140.741384778\\
   &140.736185846 &140.737639311 &140.737729564 &140.737735196\\
   & &140.737736209 &140.737735581 &140.737735571\\
   & & &140.737735571 &140.737735571\\
   & & & &140.737735571\\
   \end{array}

Compared to the *correct* value :math:`Df(1.0) = 140.73773557129658`,
:math:`A(5, 1)` has a relative error of :math:`10^{-13}`. For
comparison, the relative error for the central difference formula with
the same step size (:math:`0.01/2^4 = 0.000625`) is :math:`10^{-5}`.

The above tableau is the basis of Ridders' method for numeric
differentiation. The full implementation is an adaptive scheme that
tracks its own estimation error and stops automatically when the
desired precision is reached. Of course it is more expensive than the
forward and central difference formulae, but is also significantly
more robust and accurate.

Using Ridders' method instead of forward or central differences in
Ceres is again a simple matter of changing a template argument to
:class:`NumericDiffCostFunction` as follows:

.. code-block:: c++

  CostFunction* cost_function =
    new NumericDiffCostFunction<Rat43CostFunctor, RIDDERS, 1, 4>(
      new Rat43CostFunctor(x, y));

The following graph shows the relative error of the three methods as a
function of the absolute step size. For Ridders' method we assume
that the step size for evaluating :math:`A(n,1)` is :math:`2^{1-n}h`.
.. figure:: forward_central_ridders_error.png
   :figwidth: 100%
   :align: center

Using the 10 function evaluations that are needed to compute
:math:`A(5,1)` we are able to approximate :math:`Df(1.0)` about 1000
times better than the best central differences estimate. To put these
numbers in perspective, machine epsilon for double precision
arithmetic is :math:`\approx 2.22 \times 10^{-16}`.

Going back to ``Rat43``, let us also look at the runtime cost of the
various methods for computing numeric derivatives.

==========================   =========
CostFunction                 Time (ns)
==========================   =========
Rat43Analytic                      255
Rat43AnalyticOptimized              92
Rat43NumericDiffForward            262
Rat43NumericDiffCentral            517
Rat43NumericDiffRidders           3760
==========================   =========

As expected, Central Differences is about twice as expensive as
Forward Differences, and the remarkable accuracy improvements of
Ridders' method cost an order of magnitude more runtime.


Recommendations
===============

Numeric differentiation should be used when you cannot compute the
derivatives either analytically or using automatic differentiation. This
is usually the case when you are calling an external library or
function whose analytic form you do not know, or even if you do, you
are not in a position to re-write it in the manner required to use
:ref:`chapter-automatic_derivatives`.

When using numeric differentiation, use at least Central Differences,
and if execution time is not a concern or the objective function is
such that determining a good static relative step size is hard,
Ridders' method is recommended.

.. rubric:: Footnotes

.. [#f2] `Numerical Differentiation
         <https://en.wikipedia.org/wiki/Numerical_differentiation#Practical_considerations_using_floating_point_arithmetic>`_

.. [#f3] [Press]_ Numerical Recipes, Section 5.7.
.. [#f4] In asymptotic error analysis, an error of :math:`O(h^k)`
         means that the absolute value of the error is at most some
         constant times :math:`h^k` when :math:`h` is close enough to
         :math:`0`.