Given the initial measured values <math>x_0</math>, final observed or measured values <math>x_m</math>, and final calculated values <math>x_c</math>, there are several goodness-of-fit statistics which can be calculated. The definitions for some of the more common ones are provided below.


= Dimensional Statistics =

== Mean Error ==
The mean error (ME), also referred to as the bias (B), is given by
{{Equation|<math>
  ME =  \langle x_c - x_m  \rangle  =  \langle x_c \rangle - \langle x_m \rangle
</math>|1}}


Smaller absolute ME values indicate better agreement between measured and calculated values. Positive values indicate positively biased computed values (overprediction), while negative values indicate negatively biased computed values (underprediction).


Example Matlab code:
  ME = mean(xc(:)-xm(:));


== Mean-Absolute Error ==
The mean absolute error (MAE) is given by
{{Equation|<math>
  MAE =  \bigg\langle \big| x_c - x_m \big| \bigg\rangle
</math>|2}}


As with the RMSE defined below, smaller MAE values indicate better agreement between measured and calculated values.


Example Matlab code:
  MAE = mean(abs(xc(:)-xm(:)));


== Root-Mean-Squared Error ==
The Root-Mean-Squared Error (RMSE), also referred to as the Root-Mean-Squared Deviation (RMSD), is defined as
{{Equation|<math>
  RMSE = \sqrt{ \bigg\langle \big( x_c - x_m  \big)^2  \bigg\rangle  }
</math>|3}}
The RMSE has the same units as the measured and calculated data. Smaller values indicate better agreement between measured and calculated values.
Example Matlab Code:
  RMSE = sqrt(mean((xc(:)-xm(:)).^2));
== Standard Deviation of Residuals ==
The standard deviation of residuals (SDR) is calculated as
{{Equation|<math>
  SDR = \sqrt{ \bigg\langle \Big[ \big( x_c - x_m \big) - \big( \langle x_c \rangle - \langle x_m \rangle \big) \Big]^2 \bigg\rangle }
</math>|4}}
The SDR is a measure of the dynamical correspondence between the measured and calculated values. Smaller values indicate better agreement. The RMSE, ME, and SDR are related by the following formula
{{Equation|<math>
  RMSE^2 = ME^2 + SDR^2
</math>|5}}
Example Matlab Code:
  SDR= sqrt(mean((xc(:)-xm(:)-mean(xc(:))+mean(xm(:))).^2));
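Since Equation 5 relates the three dimensional statistics, it can be used as a consistency check on the computed values. A minimal Matlab sketch (assuming xc and xm are equal-length arrays and that ME, RMSE, and SDR have been computed as in the examples above):
  check = RMSE^2 - (ME^2 + SDR^2);  % should be zero to within round-off error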
== Normalization ==
The dimensional statistics above, namely the RMSE, MAE, and ME (bias), can be normalized to produce nondimensional statistics. When a statistic is normalized, it is commonly prefixed by the letter N for normalized or R for relative (e.g. NRMSE, NMAE, and NB). Normalization also facilitates comparisons between datasets or models with different scales. For example, comparisons against laboratory data will generally produce smaller dimensional goodness-of-fit statistics than comparisons against field data. One drawback of normalization is that there is no consistent means of normalization; different types of data are normalized differently in the literature. For example, water levels are commonly normalized by the tidal range, while wave heights may be normalized by the offshore wave height. In some cases, the range of the measured data is a good choice. The range is defined as the maximum value minus the minimum value.
{{Equation|<math>x_N = \mathrm{range}(x_m) = \max(x_m) - \min(x_m)</math>|6}}


Another common approach to normalization is to use the mean value of the measurements
{{Equation|<math>x_N = \langle x_m \rangle </math>|7}}


When the RMSE is normalized by the mean measured value, it is sometimes referred to as the scatter index (SI) (Zambresky 1989). When the RMSE is normalized by a specific measured value used to drive a model, it is sometimes referred to as the Operational Performance Index (OPI) (Ris et al. 1999). The OPI can be used, for example, to estimate the performance of a nearshore wave height transformation model based on the offshore measured wave height.


More important than the choice of normalization variable is to properly describe how the statistics have been normalized.
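As an illustration, a minimal Matlab sketch of both normalizations applied to the RMSE (the variable names and the choice of normalization variable are assumptions for illustration, not prescribed here):
  RMSE  = sqrt(mean((xc(:)-xm(:)).^2));
  xN    = max(xm(:)) - min(xm(:));   % normalization by the measured range (Equation 6)
  NRMSE = RMSE/xN;                   % normalized RMSE, often reported as a percentage
  SI    = RMSE/mean(xm(:));          % scatter index: RMSE normalized by the mean measurement (Equation 7)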
= Nondimensional Statistics =

== Performance Scores ==
There are several goodness-of-fit statistics in the literature of the form
{{Equation|<math>
  PS = 1 - \frac{\bigg\langle \big(x_c-x_m\big)^2  \bigg\rangle}{\bigg\langle  \big(x_m - x_R \big)^2 \bigg\rangle }
</math>|8}}


where <math> x_R </math> is a reference value. When the reference value is equal to the base or initial measurements, <math> x_R = x_0</math>, the Performance Score is referred to as the Brier Skill Score (BSS) or Brier Skill Index (BSI). When the reference value is equal to the mean measured value, <math> x_R = \langle x_m \rangle</math>, the Performance Score is referred to as the Nash-Sutcliffe Coefficient (E) or Nash-Sutcliffe Score (ES) (Nash and Sutcliffe 1970). When the reference value is a specific measured value, such as a model forcing value, it is referred to as the Model Performance Index (MPI) or Model Performance Score (MPS).


The various performance scores range between negative infinity and one. A performance score of 1 indicates perfect agreement between measured and calculated values. Scores equal to or less than 0 indicate that the reference value is as accurate as or more accurate than the calculated values. Recommended qualifications for different performance score ranges are provided in Table 1.
'''Table 1. Performance Score Qualifications'''
{|border="1"
|'''Range''' ||'''Qualification'''
|-
|0.8<PS<1.0 || Excellent
|-
|0.6<PS<0.8 || Good
|-
|0.3<PS<0.6  || Reasonable
|-
|0<PS<0.3 || Poor
|-
|PS<0 || Bad
|}


Example Matlab Code:
  BSS = 1 - mean((xc(:)-xm(:)).^2)/mean((xm(:)-x0(:)).^2);
  ES = 1 - mean((xc(:)-xm(:)).^2)/mean((xm(:)-mean(xm(:))).^2);
  MPS = 1 - mean((xc(:)-xm(:)).^2)/mean((xm(:)-xR).^2);


== Index of Agreement ==
The index of agreement (IA or d) is given by (Willmott et al. 1985)
{{Equation|<math>
  IA = 1 - \frac{\bigg\langle \big(x_c-x_m\big)^2 \bigg\rangle}{\bigg\langle  \big(| x_c - \langle x_m \rangle | + | x_m - \langle x_m \rangle |\big)^2 \bigg\rangle }
</math>|9}}


The denominator in the above equation is referred to as the potential error. The IA is nondimensional and bounded, with values closer to 1 indicating better agreement.


Example Matlab code:
  IA = 1 - mean((xc(:)-xm(:)).^2)/max(mean((abs(xc(:)-mean(xm(:)))+abs(xm(:)-mean(xm(:)))).^2),eps);


== Correlation Coefficient ==
The correlation is a measure of the strength and direction of a linear relationship between two variables. The correlation coefficient <math> R </math> is defined as
{{Equation|<math>
   R = \frac { \langle x_m x_c \rangle - \langle x_m \rangle \langle x_c \rangle  }{ \sqrt{ \langle x_m^2 \rangle - \langle x_m \rangle ^2} \sqrt{ \langle x_c^2 \rangle - \langle x_c \rangle ^2} }
</math>|10}}
 
A correlation of 1 indicates a perfect positive linear relationship, and -1 indicates a perfect negative linear relationship. The square of the correlation coefficient describes how much of the variance between the two variables is explained by a linear fit.


Example Matlab code:
  R = corrcoef(xc(:),xm(:));  % corrcoef returns a 2x2 correlation matrix
  R = R(1,2);                 % correlation between calculated and measured values


= References =
* Nash, J.E., and Sutcliffe, J.V. 1970. River flow forecasting through conceptual models part I — A discussion of principles, Journal of Hydrology, 10(3), 282–290.
* Ris, R.C., Holthuijsen, L.H., and Booij, N. 1999. A third-generation wave model for coastal regions 2, verification. Journal of Geophysical Research, 104(C4), 7667–7681.
* Willmott, C.J., Ackleson, S.G., Davis, R.E., Feddema, J.J., Klink, K.M., Legates, D.R., O’Donnell, J., and Rowe, C.M. 1985. Statistics for the evaluation and comparison of models, Journal of Geophysical Research, 90(C5), 8995–9005.
* Zambresky, L. 1989. A verification study of the global WAM model, December 1987 – November 1988. GKSS Forschungszentrum Geesthacht GmbH Report GKSS 89/E/37.
= Symbols =
A description of all of the symbols in the equations above is provided in Table 2.
 
'''Table 2. Description of symbols'''
{|border="1"
|'''Symbol''' ||'''Description'''
|-
| <math>x_m</math> || Measured values
|-
| <math>x_c</math> || Calculated values
|-
| <math>x_0</math>  || Initial measured values  
|-
| <math>x_N</math>  || Normalization value
|-
| <math>\langle \rangle</math> || Expectation (averaging) operator
|}


----

[[CMS#Documentation_Portal | Documentation Portal]]
