Statistics

Given the initial measured values x_0, the final observed or measured values x_m, and the final calculated values x_c, there are several goodness-of-fit statistics which can be calculated. The definitions for some of the more common ones are provided below.

Root-Mean-Squared Error

The Root-Mean-Squared Error (RMSE), also referred to as the Root-Mean-Squared Deviation (RMSD), is defined as

  RMSE = \sqrt{ \bigg\langle \big( x_m - x_c \big)^2 \bigg\rangle }    (3)

The RMSE has the same units as the measured and calculated data. Smaller values indicate better agreement between measured and calculated values.

Example Matlab Code:

 RMSE = sqrt(mean((xc(:)-xm(:)).^2));

Mean-Absolute Error

The Mean-Absolute Error (MAE) is given by

  MAE = \bigg\langle \big| x_m - x_c \big| \bigg\rangle    (4)

As with the RMSE, smaller MAE values indicate better agreement between measured and calculated values.

Example Matlab code:

 MAE = mean(abs(xc(:)-xm(:)));

Bias

The bias (B) is a measure of over- or under-estimation and is defined as

  B = \big\langle x_c - x_m \big\rangle    (6)

Positive values indicate overprediction and negative values indicate underprediction.

Example Matlab code:

 B = mean(xc(:)-xm(:));

Normalization

The dimensional statistics above, namely the RMSE, MAE, and B, can be normalized to produce nondimensional statistics. When a statistic is normalized it is commonly prefixed by the letter N for normalized or R for relative (e.g. NRMSE, RMAE, and NB). Normalization also facilitates comparisons between datasets or models which have different scales. For example, comparisons against laboratory data will produce relatively smaller dimensional goodness-of-fit statistics than comparisons against field data. One drawback of normalization is that there is no consistent means of normalization; different types of data are normalized differently in the literature. For example, water levels are commonly normalized by the tidal range, while wave heights may be normalized by the offshore wave height. In some cases, the range of the measured data, defined as the maximum value minus the minimum value, is a good choice; for the RMSE this gives

  NRMSE = \frac{RMSE}{\max(x_m) - \min(x_m)}    (8)


Another common approach to normalization is to use the mean value of the measurements

  NRMSE = \frac{RMSE}{\big\langle x_m \big\rangle}    (9)

When the RMSE is normalized by the mean measured value, it is sometimes referred to as the scatter index (SI) (Zambresky 1989). When the RMSE is normalized by a specific measured value used to drive a model, it is sometimes referred to as the Operational Performance Index (OPI) (Ris et al. 1999). The OPI can be used, for example, to estimate the performance of a nearshore wave-height transformation model based on the offshore measured wave height.
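
As an illustration of Equations (8) and (9), a minimal Matlab sketch of the range-normalized RMSE and the scatter index (variable names xm and xc follow the other examples on this page):

 RMSE  = sqrt(mean((xc(:)-xm(:)).^2));    % dimensional RMSE, Equation (3)
 NRMSE = RMSE/(max(xm(:)) - min(xm(:)));  % normalized by the measured range, Equation (8)
 SI    = RMSE/mean(xm(:));                % normalized by the measured mean, Equation (9)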

More important than the choice of normalization variable is to properly describe how the statistics have been normalized.

Performance Scores

There are several goodness-of-fit statistics in the literature of the form

  PS = 1 - \frac{\bigg\langle \big( x_c - x_m \big)^2 \bigg\rangle}{\bigg\langle \big( x_m - x_R \big)^2 \bigg\rangle}    (2)

where x_R is a reference value. When the reference value is equal to the base or initial measurements x_0, the Performance Score is referred to as the Brier Skill Score (BSS) or Brier Skill Index (BSI). When the reference value is equal to the mean measured value, the Performance Score is referred to as the Nash-Sutcliffe Coefficient (E) or Nash-Sutcliffe Score (ES). When the reference value is a specific measured value, such as a model forcing value, it is referred to as the Model Performance Index (MPI) or Model Performance Score (MPS).

The various performance scores range between negative infinity and one. A performance score of 1 indicates perfect agreement between measured and calculated values. Scores equal to or less than 0 indicate that the reference value (e.g. the initial measured value in the case of the BSS) is as accurate as, or more accurate than, the calculated values. Recommended qualifications for different performance score ranges are provided in Table 1.

Table 1. Performance Score Qualifications

Range             Qualification
0.8 < PS < 1.0    Excellent
0.6 < PS < 0.8    Good
0.3 < PS < 0.6    Reasonable
0.0 < PS < 0.3    Poor
PS < 0            Bad

Example Matlab Code:

 BSS = 1 - mean((xc(:)-xm(:)).^2)/mean((xm(:)-x0(:)).^2);        % reference = initial measured values x0
 ES  = 1 - mean((xc(:)-xm(:)).^2)/mean((xm(:)-mean(xm(:))).^2);  % reference = mean measured value
 MPS = 1 - mean((xc(:)-xm(:)).^2)/mean((xm(:)-xR).^2);           % reference = specified value xR (e.g. model forcing)
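
As a further illustration, a minimal sketch (not part of the original page) that maps a computed score to the Table 1 qualifications; the function name classifyPS and the treatment of the interval boundaries are choices made here for convenience:

 function q = classifyPS(PS)
 % classifyPS  Assign the Table 1 qualification to a performance score PS.
 if PS > 0.8
     q = 'Excellent';
 elseif PS > 0.6
     q = 'Good';
 elseif PS > 0.3
     q = 'Reasonable';
 elseif PS > 0
     q = 'Poor';
 else
     q = 'Bad';
 end
 end

Saved as classifyPS.m, it can be called as, for example, qual = classifyPS(BSS) on any of the scores computed above.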

Correlation Coefficient

The correlation coefficient is a measure of the strength and direction of the linear relationship between two variables and is defined as

  R = \frac{\bigg\langle \big( x_m - \langle x_m \rangle \big)\big( x_c - \langle x_c \rangle \big) \bigg\rangle}{\sqrt{\bigg\langle \big( x_m - \langle x_m \rangle \big)^2 \bigg\rangle \bigg\langle \big( x_c - \langle x_c \rangle \big)^2 \bigg\rangle}}    (5)

A correlation of 1 indicates a perfect positive linear relationship and -1 indicates a perfect negative linear relationship. The square of the correlation coefficient gives the fraction of the variance between the two variables that is explained by a linear fit.

Example Matlab code:

 R = corrcoef(xc(:),xm(:));   % returns a 2x2 correlation matrix
 R = R(1,2);                  % off-diagonal element is the correlation coefficient
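
Alternatively, a minimal sketch that evaluates Equation (5) directly; it should return the same value as the corrcoef call above:

 num = mean((xm(:)-mean(xm(:))).*(xc(:)-mean(xc(:))));                   % covariance of measured and calculated values
 den = sqrt(mean((xm(:)-mean(xm(:))).^2)*mean((xc(:)-mean(xc(:))).^2));  % product of the standard deviations
 R   = num/den;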

References

  • Ris, R.C., Holthuijsen, L.H., and Booij, N. 1999. A third-generation wave model for coastal regions, 2, Verification. Journal of Geophysical Research, 104(C4), 7667-7681.
  • Zambresky, L. 1989. A verification study of the global WAM model, December 1987 – November 1988. GKSS Forschungszentrum Geesthacht GmbH, Report GKSS 89/E/37.

Symbols

A description of all the symbols in the equations above is provided in Table 3.

Table 3. Description of symbols

Symbol    Description
x_m       Measured values
x_c       Calculated values
x_0       Initial measured values
x_R       Normalization or reference value
⟨ ⟩       Expectation (averaging) operator
