
Minimization Stability

4D - parameter means (Mar. 11.)

There are new plots: https://users.hepforge.org/~eike/minimization-20080311/all.html

They are based on the same results as the ones below (Mar. 10.). I just made the list of results unique, i.e. only one result per choice of runs. Additionally I calculated the mean and deviation for the 4 parameters. The numbers are shown in the small green boxes, and the means/deviations are displayed as green bars.

For the mean calculation I used 3 different subsets of the results: I only took into account those results whose chi2 value lies within a 1.0, 2.5 or 3.0 sigma interval around the mean chi2 of all 30 results. These intervals are shown as blue horizontal bars.
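For illustration, a minimal Python sketch of this selection, assuming the 30 chi2 values and the 4 fitted parameter values per result are available as plain arrays (the names and array layout here are assumptions, not the actual Professor code):

    import numpy as np

    def means_in_chi2_window(chi2, params, nsigma):
        """Mean and deviation of the fitted parameters, using only the results
        whose chi2 lies within nsigma * std(chi2) of the mean chi2."""
        chi2 = np.asarray(chi2)      # shape (30,): one chi2 per choice of runs
        params = np.asarray(params)  # shape (30, 4): one row of parameters per result
        window = np.abs(chi2 - chi2.mean()) <= nsigma * chi2.std()
        return params[window].mean(axis=0), params[window].std(axis=0)

    # The three windows mentioned above:
    # for nsigma in (1.0, 2.5, 3.0):
    #     mean, dev = means_in_chi2_window(chi2_values, param_values, nsigma)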

4D (Mar. 10.)

I generated plots of chi2 vs. fit results for each parameter, using both Minuit and Scipy (fmin_powell). They can be found here: https://users.hepforge.org/~eike/minimization-20080310/all.html

The right column shows the Scipy results, the left column the ones from Minuit. The first 4 rows use a linear chi2 axis, the last 4 a log scale.

Fitting

For the fitting I used 29 out of the 30 MC runs. Thus there are 30 different interpolations, and we would get 30 points in the chi2-parameter planes if the minimization did not depend on the starting point used. I selected observables which showed sensitivity in the latest sensitivity plots. The observables I used were:

  • /DELPHI_1996_S3430090/d02-x01-y01
  • /DELPHI_1996_S3430090/d04-x01-y01
  • /DELPHI_1996_S3430090/d06-x01-y02
  • /DELPHI_1996_S3430090/d09-x01-y01
  • /DELPHI_1996_S3430090/d21-x01-y01

This resulted in 87 bins for the chi2 calculation.
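As a small illustration of the leave-one-out run selection described above (the run indices and list handling are my own placeholders, not actual Professor code): dropping exactly one of the 30 MC runs at a time gives 30 different sets of 29 runs, and hence 30 interpolations.

    all_runs = list(range(30))   # indices of the 30 MC runs

    # One set of 29 runs per left-out run -> 30 different interpolations,
    # i.e. 30 points per chi2-parameter plane (if the minimizer is start-point independent).
    run_choices = [[r for r in all_runs if r != left_out] for left_out in all_runs]
    assert len(run_choices) == 30 and all(len(c) == 29 for c in run_choices)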

Plots

The plots differentiate between 3 methods for selecting the start point for the minimization (a small code sketch of these choices follows the list):

  • random: use a random point (used this 5 times),
  • minmc: select the anchor point which yields the smallest chi2,
  • center: use the center of the scaled parameter space.
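A rough sketch of what these three choices could look like in code; the anchor-point structure and the chi2 callable are hypothetical placeholders, not the actual implementation:

    import random

    def start_point(method, anchors, chi2_at, ndim):
        """Return a starting point in the scaled (unit-cube) parameter space."""
        if method == "random":
            return [random.random() for _ in range(ndim)]   # used 5 times per plot
        if method == "minmc":
            return min(anchors, key=chi2_at)                # anchor point with smallest chi2
        if method == "center":
            return [0.5] * ndim                             # center of the scaled space
        raise ValueError("unknown method: %s" % method)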

Interpretation

For Minuit the results do not depend on the starting point used: the resulting chi2 and parameter values are the same for all start-point choices, so we get 30 distinct points in the plots.

Using Scipy's fmin_powell the results do depend on the starting point: you can see points with the same chi2 value but differing parameter values, which I interpret as belonging to the same choice of runs. This shows that the chi2 function is quite flat around the minimum and that the fmin_powell algorithm does not handle this properly. However, the chi2 values themselves do not differ strongly between Minuit and Scipy.

Therefore, except perhaps for computing-time reasons, it does not seem necessary to prefer one of the methods. And since the minimization is not very time consuming (of the order of 2 minutes), running both anyway is worthwhile simply as a safety check.

Another observation is that there are outliers, e.g. there is one point with a chi2 above 100000 whose parameter values also differ strongly for all parameters. My idea for handling this automatically is to calculate the mean chi2 value and rms from all 30 selections of runs and then throw away outliers whose chi2 value lies outside a foo*rms interval around the mean chi2.
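A minimal sketch of that rejection rule, keeping "foo" as the still-to-be-chosen width factor from the text (the results data structure is an assumption):

    def reject_outliers(results, foo):
        """results: list of (chi2, params) tuples, one per choice of runs.
        Keep only results whose chi2 lies within foo*rms around the mean chi2."""
        chi2s = [c for c, _ in results]
        mean = sum(chi2s) / len(chi2s)
        rms = (sum((c - mean) ** 2 for c in chi2s) / len(chi2s)) ** 0.5
        return [(c, p) for c, p in results if abs(c - mean) <= foo * rms]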

The parameter values themselves are comparable to the results Hendrik mailed on Feb. 29, except for PARJ(41), which only yields negative values.

3D

I compared the fit results in the 3-parameter test case when using all 30 MC runs and when using only 29 out of the 30 runs. The ROOT fitter was chosen as the minimizer because it returns fit uncertainties.

On the page http://users.hepforge.org/~hoeth/plots/3dim_testcase_fit_stability_29_sets/ you can find the pull plots for the fit results of the three parameters. Of course the results using 29 runs are correlated, since each set differs by only one run. Nevertheless, I think the plots show that we do not depend too strongly on the individual runs and can use either the fit from all runs or the mean of the fits with n-1 runs as the result, especially since we are going to see larger differences than these when changing the choice of observables ...
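For reference, a pull as shown in those plots would typically be computed like this; what exactly serves as the reference value (e.g. the all-30-runs fit or the true parameter value of the test case) is an assumption on my part:

    def pull(fitted_value, fitted_error, reference_value):
        """Pull of one n-1-run fit result with respect to a reference value,
        using the fit uncertainty returned by the ROOT fitter."""
        return (fitted_value - reference_value) / fitted_error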
