I have been using scipy.optimize.leastsq to fit some data. I would like to get confidence intervals on these estimates, so I looked into the cov_x output, but the documentation is very unclear about what this is and how to get the covariance matrix for my parameters from it.

First of all, it says that it is a Jacobian, but the notes also say that "cov_x is a Jacobian approximation to the Hessian", so that it is not actually a Jacobian but a Hessian built using some approximation from the Jacobian. Which of these statements is correct?

Secondly, this sentence confuses me:

This matrix must be multiplied by the residual variance to get the covariance of the parameter estimates – see curve_fit.

I indeed went and looked at the source code for curve_fit, where it does:

s_sq = (func(popt, *args)**2).sum()/(len(ydata)-len(p0))
pcov = pcov * s_sq

which corresponds to multiplying cov_x by s_sq, but I cannot find this equation in any reference. Can someone explain why this equation is correct? My intuition tells me that it should be the other way around, since cov_x is supposed to be a derivative (Jacobian or Hessian), so I was thinking: cov_x * covariance(parameters) = sum of squared errors (residuals), where covariance(parameters) is the thing I want.

How do I connect what curve_fit is doing with what I see at, e.g., Wikipedia: http://en.wikipedia.org/wiki/Propagation_of_uncertainty#Non-linear_combinations

OK, I think I found the answer. First, the solution: cov_x * s_sq is simply the covariance of the parameters, which is what you want. Taking the square root of the diagonal elements gives you the standard deviations (but be careful about the covariances!).

Residual variance = reduced chi square = s_sq = sum[(f(x) - y)^2] / (N - n), where N is the number of data points and n is the number of fitting parameters (hence "reduced" chi square).
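
This recipe can be sketched end to end; the straight-line model, the seed, and the noise level below are illustrative choices, not from the original post:

```python
import numpy as np
from scipy.optimize import leastsq

# Hypothetical model: straight line y = a*x + b
def residuals(p, x, y):
    a, b = p
    return a * x + b - y

# Synthetic data with known parameters (2.5, 1.0) and Gaussian noise
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.5 * x + 1.0 + rng.normal(scale=0.5, size=x.size)

p0 = [1.0, 0.0]
popt, cov_x, infodict, mesg, ier = leastsq(
    residuals, p0, args=(x, y), full_output=True
)

# Reduced chi square: sum of squared residuals over (N - n)
N, n = len(y), len(p0)
s_sq = (residuals(popt, x, y) ** 2).sum() / (N - n)

pcov = cov_x * s_sq            # covariance of the parameter estimates
perr = np.sqrt(np.diag(pcov))  # 1-sigma standard deviations
```

With 50 points and noise of 0.5, perr comes out small compared to the fitted values, as expected.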

The reason for my confusion is that cov_x as given by leastsq is not actually what is called cov(x) in other places; rather, it is the reduced cov(x), or fractional cov(x). The reason it does not show up in any of the other references is that it is a simple rescaling which is useful in numerical computations, but is not relevant for a textbook.

About Hessian versus Jacobian: the documentation is poorly worded. It is the Hessian that is calculated in both cases, as is obvious since the gradient is zero at a minimum. What they mean is that they are using an approximation to the Jacobian of the residuals to construct the Hessian.
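
To spell out why the Jacobian of the residuals can stand in for the Hessian (the standard Gauss–Newton argument, not from the original post): with cost S(p) = sum_i r_i(p)^2 and J the Jacobian of the residuals r,

```latex
\nabla S = 2 J^\top r, \qquad
\nabla^2 S = 2 J^\top J + 2 \sum_i r_i \, \nabla^2 r_i \;\approx\; 2 J^\top J ,
```

where the second term is dropped because the residuals r_i are small near a good fit. The cov_x returned by leastsq is the inverse of J^T J evaluated at the solution.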

A further note: it seems that the curve_fit result does not actually account for the absolute size of the errors, but only takes into account the relative size of the sigmas provided. This means that the pcov returned doesn't change even if the error bars change by a factor of a million. This is of course not right, but it seems to be standard practice, i.e. Matlab does the same thing in its Curve Fitting Toolbox. The correct procedure is described here: https://en.wikipedia.org/wiki/Linear_least_squares_(mathematics)#Parameter_errors_and_correlation

It seems fairly straightforward to do this once the optimum has been found, at least for linear least squares.
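
For linear least squares, the textbook recipe from the linked page can be written down directly; the design matrix and data below are an illustrative straight-line example:

```python
import numpy as np

# Synthetic straight-line data y = 2.5*x + 1.0 plus noise (illustrative)
rng = np.random.default_rng(1)
x = np.linspace(0, 10, 50)
y = 2.5 * x + 1.0 + rng.normal(scale=0.5, size=x.size)

# Design matrix for y = a*x + b: columns are [x, 1]
X = np.column_stack([x, np.ones_like(x)])

# Ordinary least squares solution beta = (X^T X)^{-1} X^T y
beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)

# Residual variance and parameter covariance: cov(beta) = s^2 (X^T X)^{-1}
resid = y - X @ beta
s_sq = (resid ** 2).sum() / (len(y) - X.shape[1])
cov_beta = s_sq * np.linalg.inv(X.T @ X)
perr = np.sqrt(np.diag(cov_beta))
```

Here s_sq plays exactly the role of the reduced chi square above, and cov_beta is the parameter covariance one would compare against pcov.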

    • "It seems that the curve_fit result does not actually account for the absolute size of the errors, but only takes into account the relative size of the sigmas provided." There is a flag for this: absolute_sigma. If it's off (the default), curve_fit will estimate var(y) based on your data; otherwise, it will use your provided sigma values.
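
The effect of the absolute_sigma flag can be demonstrated on synthetic data (the line model and numbers are illustrative): with the default relative treatment, uniformly scaling sigma leaves pcov unchanged, while absolute_sigma=True takes the sigmas at face value.

```python
import numpy as np
from scipy.optimize import curve_fit

def line(x, a, b):
    return a * x + b

rng = np.random.default_rng(2)
x = np.linspace(0, 10, 50)
y = 2.5 * x + 1.0 + rng.normal(scale=0.5, size=x.size)
sigma = np.full_like(y, 0.5)

# Default (absolute_sigma=False): pcov is rescaled by the residual
# variance, so only the relative sizes of the sigmas matter.
_, pcov_rel = curve_fit(line, x, y, sigma=sigma)
_, pcov_rel_big = curve_fit(line, x, y, sigma=1e6 * sigma)  # same pcov

# absolute_sigma=True: sigma is taken as the true measurement error,
# so the absolute scale of the error bars now affects pcov.
_, pcov_abs = curve_fit(line, x, y, sigma=sigma, absolute_sigma=True)
```

pcov_rel and pcov_rel_big agree despite the factor of a million in sigma, which is exactly the behavior the comment above describes.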

I found this solution while searching for a similar question, and I have only a small improvement on HansHarhoff's answer. The full output from leastsq (full_output=True) provides a return value infodict, which contains infodict['fvec'] = f(x) - y. Thus, to compute the reduced chi square (in the above notation):

s_sq = (infodict['fvec']**2).sum() / (N - n)

BTW, thanks HansHarhoff for doing most of the heavy lifting to solve this.
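
Put together with a nonlinear model, the infodict route looks like this (the exponential-decay model and numbers are a hypothetical example); infodict['fvec'] already holds the residuals at the optimum, so no extra function evaluation is needed:

```python
import numpy as np
from scipy.optimize import leastsq

# Hypothetical model: exponential decay y = A * exp(-k * x)
def residuals(p, x, y):
    A, k = p
    return A * np.exp(-k * x) - y

rng = np.random.default_rng(3)
x = np.linspace(0, 5, 40)
y = 3.0 * np.exp(-0.7 * x) + rng.normal(scale=0.05, size=x.size)

popt, cov_x, infodict, mesg, ier = leastsq(
    residuals, [1.0, 1.0], args=(x, y), full_output=True
)

# Reduced chi square straight from infodict['fvec'] = f(x) - y
s_sq = (infodict['fvec'] ** 2).sum() / (len(y) - len(popt))
pcov = cov_x * s_sq
perr = np.sqrt(np.diag(pcov))
```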

    • Very nice! Exactly what I was looking for. Specifically: result = scipy.optimize.leastsq(..., full_output=True); s_sq = (result[2]['fvec']**2).sum() / (len(result[2]['fvec']) - len(result[0]))
    • I don't know what the minimum version would be to support the full_output option, but I use scipy 0.13.3 and 0.14.1.
