lucid 0.0.1
Lifting-based Uncertain Control Invariant Dynamics
lucid::scorer Namespace Reference

Collection of utilities used to score the accuracy of estimators.

Typedefs

using Scorer = std::function<double(const Estimator&, ConstMatrixRef, ConstMatrixRef)>
 Function type used to score the estimator.
 
using ScorerType = double (*)(const Estimator&, ConstMatrixRef, ConstMatrixRef)
 Function pointer type used to score the estimator.
 

Functions

double r2_score (ConstMatrixRef x, ConstMatrixRef y)
 Score the closeness between x and y, assigning it a numerical value.
 
double r2_score (const Estimator &estimator, ConstMatrixRef evaluation_inputs, ConstMatrixRef evaluation_outputs)
 Score the estimator, assigning a numerical value to its accuracy in predicting the evaluation_outputs given the evaluation_inputs.
 
double mse_score (ConstMatrixRef x, ConstMatrixRef y)
 Compute the mean squared error (MSE) between x and y.
 
double mse_score (const Estimator &estimator, ConstMatrixRef evaluation_inputs, ConstMatrixRef evaluation_outputs)
 Compute the mean squared error (MSE) score of the estimator on the given evaluation data.
 
double rmse_score (ConstMatrixRef x, ConstMatrixRef y)
 Compute the root mean squared error (RMSE) score of the closeness between x and y.
 
double rmse_score (const Estimator &estimator, ConstMatrixRef evaluation_inputs, ConstMatrixRef evaluation_outputs)
 Compute the root mean squared error (RMSE) score of the estimator on the given evaluation data.
 
double mape_score (ConstMatrixRef x, ConstMatrixRef y)
 Compute the mean absolute percentage error (MAPE) score of the closeness between x and y.
 
double mape_score (const Estimator &estimator, ConstMatrixRef evaluation_inputs, ConstMatrixRef evaluation_outputs)
 Compute the mean absolute percentage error (MAPE) score of the estimator on the given evaluation data.
 

Detailed Description

Collection of utilities used to score the accuracy of estimators.

Typedef Documentation

◆ Scorer

using lucid::scorer::Scorer = std::function<double(const Estimator&, ConstMatrixRef, ConstMatrixRef)>

Function type used to score the estimator.

Parameters
estimator - Estimator object to score
evaluation_inputs - \( n \times d_x \) matrix of row vectors in the input space \( \mathcal{X} \)
evaluation_outputs - \( n \times d_y \) matrix of row vectors in the output space \( \mathcal{Y} \)

◆ ScorerType

using lucid::scorer::ScorerType = double (*)(const Estimator&, ConstMatrixRef, ConstMatrixRef)

Function pointer type used to score the estimator.

Parameters
estimator - Estimator object to score
evaluation_inputs - \( n \times d_x \) matrix of row vectors in the input space \( \mathcal{X} \)
evaluation_outputs - \( n \times d_y \) matrix of row vectors in the output space \( \mathcal{Y} \)
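
The following is a minimal, self-contained sketch of how the two aliases relate. The types in the sketch namespace are placeholders introduced only for this example: they are assumptions standing in for the library's real Estimator and ConstMatrixRef (the latter is assumed to be an Eigen-style constant matrix reference), not their actual definitions.

// Sketch: Scorer (std::function) vs. ScorerType (plain function pointer).
// The types in namespace sketch are stand-ins, not the library's definitions.
#include <functional>
#include <Eigen/Dense>

namespace sketch {
struct Estimator {};                                       // placeholder for lucid::Estimator
using ConstMatrixRef = Eigen::Ref<const Eigen::MatrixXd>;  // assumed Eigen-style const reference

using Scorer     = std::function<double(const Estimator&, ConstMatrixRef, ConstMatrixRef)>;
using ScorerType = double (*)(const Estimator&, ConstMatrixRef, ConstMatrixRef);

// Any free function with this signature can back either alias.
double dummy_score(const Estimator& /*estimator*/, ConstMatrixRef /*inputs*/, ConstMatrixRef outputs) {
  return -static_cast<double>(outputs.rows());             // placeholder metric, not a real score
}
}  // namespace sketch

int main() {
  sketch::ScorerType raw = &sketch::dummy_score;           // plain function pointer
  sketch::Scorer wrapped = raw;                            // std::function can wrap the pointer...
  sketch::Scorer scaled =                                   // ...and also stateful callables (lambdas)
      [w = 0.5](const sketch::Estimator& e, sketch::ConstMatrixRef in, sketch::ConstMatrixRef out) {
        return w * sketch::dummy_score(e, in, out);
      };

  const Eigen::MatrixXd inputs = Eigen::MatrixXd::Zero(3, 2);
  const Eigen::MatrixXd outputs = Eigen::MatrixXd::Zero(3, 1);
  const sketch::Estimator est{};
  return wrapped(est, inputs, outputs) == 2.0 * scaled(est, inputs, outputs) ? 0 : 1;
}

A plain function pointer is enough when the scorer is a stateless free function; the std::function alias additionally accepts lambdas and other callables that carry state.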

Function Documentation

◆ mape_score() [1/2]

double lucid::scorer::mape_score (const Estimator &estimator, ConstMatrixRef evaluation_inputs, ConstMatrixRef evaluation_outputs)

Compute the mean absolute percentage error (MAPE) score of the estimator on the given evaluation data.

Given the evaluation inputs \( x = \{ x_1, \dots, x_n \} \), where \( x_i \in \mathcal{X} \subseteq \mathbb{R}^{d_x}, 1 \le i \le n \), we want to compute the mean absolute percentage error of the model's predictions \( \hat{y} = \{ \hat{y}_1, \dots, \hat{y}_n \} \), where \( \hat{y}_i \in \mathcal{Y} \subseteq \mathbb{R}^{d_y}, 1 \le i \le n \), with respect to the true outputs \( y = \{ y_1, \dots, y_n \} \), where \( y_i \in \mathcal{Y}, 1 \le i \le n \). The score lies in the range \( (-\infty, 0] \), where \( 0 \) indicates a perfect fit and more negative values indicate a worse fit.

\[\text{MAPE} = -\frac{1}{n} \sum_{i=1}^n \left| \frac{y_i - \hat{y}_i}{y_i} \right| \]

where \( n \) is the number of rows in the evaluation data. The MAPE score is always non-positive, and a higher value indicates a better fit.
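
As a small worked instance of this definition (illustrative scalar outputs, not taken from the library), take \( n = 2 \), true outputs \( y = (2, 4) \) and predictions \( \hat{y} = (1, 5) \):

\[\text{MAPE} = -\frac{1}{2} \left( \left| \frac{2 - 1}{2} \right| + \left| \frac{4 - 5}{4} \right| \right) = -\frac{0.5 + 0.25}{2} = -0.375 \]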

Warning
The MAPE score is non-positive by definition in this implementation.
Precondition
The estimator must be able to make predictions, i.e., it should have been fitted or consolidated before calling this method.
The estimator's prediction must belong to a vector space with the same number of dimensions as the one the evaluation outputs inhabit.
The number of rows in evaluation_inputs must be equal to the number of rows in evaluation_outputs.
None of the elements in evaluation_outputs should be zero, as this would lead to division by zero in the MAPE calculation.
Parameters
estimator - estimator to score
evaluation_inputs - \( \texttip{n}{Number of samples} \times \texttip{d_x}{Dimension of the input vector space} \) evaluation input data
evaluation_outputs - \( \texttip{n}{Number of samples} \times \texttip{d_y}{Dimension of the output vector space} \) evaluation output data
Returns
mean absolute percentage error score of the model

◆ mape_score() [2/2]

double lucid::scorer::mape_score (ConstMatrixRef x, ConstMatrixRef y)

Compute the mean absolute percentage error (MAPE) score of the closeness between x and y.

We are given the set of row vectors \( x = \{ x_1, \dots, x_n \} \), where \( x_i \in \mathcal{X} \subseteq \mathbb{R}^{d}, 1 \le i \le n \), and the set of row vectors \( y = \{ y_1, \dots, y_n \} \), where \( y_i \in \mathcal{X} \subseteq \mathbb{R}^{d}, 1 \le i \le n \). The score lies in the range \( (-\infty, 0] \), where \( 0 \) indicates no distance (i.e., perfect predictions) and more negative values indicate more distance.

\[\text{MAPE} = -\frac{1}{n} \sum_{i=1}^n \left| \frac{y_i - x_i}{y_i} \right| \]

where \( n \) is the number of rows in x and y. The MAPE score is always non-positive, and a higher value indicates a better fit.

Warning
The MAPE score is non-positive by definition in this implementation.
Precondition
The number of rows in x must be equal to the number of rows in y.
None of the elements in y should be zero, as this would lead to division by zero in the MAPE calculation.
Parameters
x - \( \texttip{n}{Number of samples} \times \texttip{d}{Dimension of the vector space} \) matrix of row vectors
y - \( \texttip{n}{Number of samples} \times \texttip{d}{Dimension of the vector space} \) matrix of row vectors
Returns
mean absolute percentage error between the two sets of row vectors
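
The snippet below is a self-contained sketch that reproduces the formula above with Eigen. It is illustrative only, not the library's implementation: averaging over all matrix entries is an assumed convention that coincides with the row-wise definition when \( d = 1 \).

// Sketch of the documented MAPE score with Eigen; illustrative only.
#include <Eigen/Dense>
#include <iostream>

double mape_sketch(const Eigen::MatrixXd& x, const Eigen::MatrixXd& y) {
  // -1/n * sum_i |(y_i - x_i) / y_i|; y must contain no zeros.
  return -((y - x).array() / y.array()).abs().mean();
}

int main() {
  Eigen::MatrixXd x(2, 1), y(2, 1);
  x << 1.0, 5.0;                           // predictions
  y << 2.0, 4.0;                           // true values (non-zero)
  std::cout << mape_sketch(x, y) << "\n";  // prints -0.375
}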

◆ mse_score() [1/2]

double lucid::scorer::mse_score (const Estimator &estimator, ConstMatrixRef evaluation_inputs, ConstMatrixRef evaluation_outputs)

Compute the mean squared error (MSE) score of the estimator on the given evaluation data.

Given the evaluation inputs \( x = \{ x_1, \dots, x_n \} \), where \( x_i \in \mathcal{X} \subseteq \mathbb{R}^{d_x}, 1 \le i \le n \), we want to compute the mean squared error of the model's predictions \( \hat{y} = \{ \hat{y}_1, \dots, \hat{y}_n \} \), where \( \hat{y}_i \in \mathcal{Y} \subseteq \mathbb{R}^{d_y}, 1 \le i \le n \), with respect to the true outputs \( y = \{ y_1, \dots, y_n \} \), where \( y_i \in \mathcal{Y}, 1 \le i \le n \). The score lies in the range \( (-\infty, 0] \), where \( 0 \) indicates a perfect fit and more negative values indicate a worse fit.

\[\text{MSE} = -\frac{1}{n} \sum_{i=1}^n (y_i - \hat{y}_i)^2 \]

where \( n \) is the number of rows in the evaluation data. The MSE score is always non-positive, and a higher value indicates a better fit.

Warning
The MSE score is non-positive by definition in this implementation.
Precondition
The estimator must be able to make predictions, i.e., it should have been fitted or consolidated before calling this method.
The estimator's prediction must belong to a vector space with the same number of dimensions as the one the evaluation outputs inhabit.
The number of rows in evaluation_inputs must be equal to the number of rows in evaluation_outputs.
Parameters
estimator - estimator to score
evaluation_inputs - \( \texttip{n}{Number of samples} \times \texttip{d_x}{Dimension of the input vector space} \) evaluation input data
evaluation_outputs - \( \texttip{n}{Number of samples} \times \texttip{d_y}{Dimension of the output vector space} \) evaluation output data
Returns
mean squared error score of the model

◆ mse_score() [2/2]

double lucid::scorer::mse_score (ConstMatrixRef x, ConstMatrixRef y)

Compute the mean squared error (MSE) between x and y.

We are given the set of row vectors \( x = \{ x_1, \dots, x_n \} \), where \( x_i \in \mathcal{X} \subseteq \mathbb{R}^{d}, 1 \le i \le n \), and the set of row vectors \( y = \{ y_1, \dots, y_n \} \), where \( y_i \in \mathcal{X} \subseteq \mathbb{R}^{d}, 1 \le i \le n \). The score lies in the range \( (-\infty, 0] \), where \( 0 \) indicates no distance (i.e., perfect predictions) and more negative values indicate more distance.

\[\text{MSE} = -\frac{1}{n} \sum_{i=1}^n (y_i - x_i)^2 \]

where \( n \) is the number of rows in x and y. The MSE score is always non-positive, and a higher value indicates a better fit.

Warning
The MSE score is non-positive by definition in this implementation.
Precondition
The number of rows in x must be equal to the number of rows in y.
Parameters
x - \( \texttip{n}{Number of samples} \times \texttip{d}{Dimension of the vector space} \) matrix of row vectors
y - \( \texttip{n}{Number of samples} \times \texttip{d}{Dimension of the vector space} \) matrix of row vectors
Returns
mean squared error between the two sets of row vectors
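
A matching self-contained Eigen sketch of the MSE score (illustrative only, not the library's implementation; entries are averaged over all coefficients, which coincides with the row-wise sum when \( d = 1 \)):

// Sketch of the documented MSE score with Eigen; illustrative only.
#include <Eigen/Dense>
#include <iostream>

double mse_sketch(const Eigen::MatrixXd& x, const Eigen::MatrixXd& y) {
  // -1/n * sum_i (y_i - x_i)^2
  return -(y - x).array().square().mean();
}

int main() {
  Eigen::MatrixXd x(3, 1), y(3, 1);
  x << 1.0, 2.0, 3.0;
  y << 1.0, 2.5, 2.0;
  std::cout << mse_sketch(x, y) << "\n";   // -(0 + 0.25 + 1) / 3 = -0.41666...
}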

◆ r2_score() [1/2]

double lucid::scorer::r2_score (const Estimator &estimator, ConstMatrixRef evaluation_inputs, ConstMatrixRef evaluation_outputs)

Score the estimator, assigning a numerical value to its accuracy in predicting the evaluation_outputs given the evaluation_inputs.

Given the evaluation inputs \( x = \{ x_1, \dots, x_n \} \), where \( x_i \in \mathcal{X} \subseteq \mathbb{R}^{d_x}, 1 \le i \le n \), we want to give a numerical score to the model's predictions \( \hat{y} = \{ \hat{y}_1, \dots, \hat{y}_n \} \), where \( \hat{y}_i \in \mathcal{Y} \subseteq \mathbb{R}^{d_y}, 1 \le i \le n \), with respect to the true outputs \( y = \{ y_1, \dots, y_n \} \), where \( y_i \in \mathcal{Y}, 1 \le i \le n \). The score is computed as the coefficient of determination, also known as the \( R^2 \) score, defined as

\[R^2 = 1 - \frac{\sum_{i=1}^n (y_i - \hat{y}_i)^2}{\sum_{i=1}^n (y_i - \bar{y})^2} \]

where \( \bar{y} \) is the mean of the true outputs \( y \); the denominator \( \sum_{i=1}^n (y_i - \bar{y})^2 \) (proportional to the variance of the true outputs) must be greater than 0. The score lies in the range \( (-\infty, 1] \), where \( 1 \) indicates a perfect fit and \( 0 \) indicates that the model is no better than simply predicting the mean of the true outputs.
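
As a small worked instance of this definition (illustrative scalar outputs, not taken from the library), take \( n = 3 \), true outputs \( y = (1, 2, 3) \) with mean \( \bar{y} = 2 \), and predictions \( \hat{y} = (1.5, 2, 2.5) \):

\[R^2 = 1 - \frac{(1 - 1.5)^2 + (2 - 2)^2 + (3 - 2.5)^2}{(1 - 2)^2 + (2 - 2)^2 + (3 - 2)^2} = 1 - \frac{0.5}{2} = 0.75 \]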

Precondition
The estimator must be able to make predictions, i.e., it should have been fitted or consolidated before calling this method.
The estimator's prediction must belong to a vector space with the same number of dimensions as the one the evaluation outputs inhabit.
The variance of the evaluation_outputs must be greater than 0. This condition fails if all outputs are equal or if only one row is present; if it is not met, the result may be NaN.
The number of rows in evaluation_inputs must be equal to the number of rows in evaluation_outputs.
Parameters
estimator - estimator to score
evaluation_inputs - \( \texttip{n}{Number of samples} \times \texttip{d_x}{Dimension of the input vector space} \) evaluation input data
evaluation_outputs - \( \texttip{n}{Number of samples} \times \texttip{d_y}{Dimension of the output vector space} \) evaluation output data
Returns
score of the model

◆ r2_score() [2/2]

double lucid::scorer::r2_score (ConstMatrixRef x, ConstMatrixRef y)

Score the closeness between x and y, assigning it a numerical value.

We are given the set of row vectors \( x = \{ x_1, \dots, x_n \} \), where \( x_i \in \mathcal{X} \subseteq \mathbb{R}^{d}, 1 \le i \le n \), and the set of row vectors \( y = \{ y_1, \dots, y_n \} \), where \( y_i \in \mathcal{X} \subseteq \mathbb{R}^{d}, 1 \le i \le n \). The score is computed as the coefficient of determination, also known as the \( R^2 \) score, defined as

\[R^2 = 1 - \frac{\sum_{i=1}^n (y_i - x_i)^2}{\sum_{i=1}^n (y_i - \bar{y})^2} \]

where \( \bar{y} \) is the mean of \( y \); the denominator \( \sum_{i=1}^n (y_i - \bar{y})^2 \) (proportional to the variance of \( y \)) must be greater than 0. The score lies in the range \( (-\infty, 1] \), where \( 1 \) indicates no distance (i.e., perfect predictions) and \( 0 \) indicates that \( x \) predicts no better than the mean of \( y \).

Precondition
The variance of y must be greater than 0. This condition fails if all rows of y are equal or if only one row is present; if it is not met, the result may be NaN.
The number of rows in x must be equal to the number of rows in y.
Parameters
x - \( \texttip{n}{Number of samples} \times \texttip{d}{Dimension of the vector space} \) matrix of row vectors
y - \( \texttip{n}{Number of samples} \times \texttip{d}{Dimension of the vector space} \) matrix of row vectors
Returns
closeness score between the two sets of row vectors
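
The snippet below is a self-contained Eigen sketch of the coefficient of determination as defined above. It is illustrative only, not the library's implementation; for multi-column data, the column-wise mean and pooled sums of squares are an assumed convention.

// Sketch of the documented R^2 score with Eigen; illustrative only.
#include <Eigen/Dense>
#include <iostream>

double r2_sketch(const Eigen::MatrixXd& x, const Eigen::MatrixXd& y) {
  const Eigen::RowVectorXd y_mean = y.colwise().mean();                 // column-wise mean of y
  const double ss_res = (y - x).array().square().sum();                 // sum of (y_i - x_i)^2
  const double ss_tot = (y.rowwise() - y_mean).array().square().sum();  // sum of (y_i - mean)^2
  return 1.0 - ss_res / ss_tot;  // NaN when the variance of y is zero (see the precondition)
}

int main() {
  Eigen::MatrixXd x(3, 1), y(3, 1);
  x << 1.5, 2.0, 2.5;
  y << 1.0, 2.0, 3.0;
  std::cout << r2_sketch(x, y) << "\n";  // prints 0.75, matching the worked example above
}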

◆ rmse_score() [1/2]

double lucid::scorer::rmse_score (const Estimator &estimator, ConstMatrixRef evaluation_inputs, ConstMatrixRef evaluation_outputs)

Compute the root mean squared error (RMSE) score of the estimator on the given evaluation data.

Given the evaluation inputs \( x = \{ x_1, \dots, x_n \} \), where \( x_i \in \mathcal{X} \subseteq \mathbb{R}^{d_x}, 1 \le i \le n \), we want to compute the root mean squared error of the model's predictions \( \hat{y} = \{ \hat{y}_1, \dots, \hat{y}_n \} \), where \( \hat{y}_i \in \mathcal{Y} \subseteq \mathbb{R}^{d_y}, 1 \le i \le n \), with respect to the true outputs \( y = \{ y_1, \dots, y_n \} \), where \( y_i \in \mathcal{Y}, 1 \le i \le n \). The score lies in the range \( (-\infty, 0] \), where \( 0 \) indicates a perfect fit and more negative values indicate a worse fit.

\[\text{RMSE} = -\sqrt{\frac{1}{n} \sum_{i=1}^n (y_i - \hat{y}_i)^2} \]

where \( n \) is the number of rows in the evaluation data. The RMSE score is always non-positive, and a higher value indicates a better fit.

Warning
The RMSE score is non-positive by definition in this implementation.
Precondition
The estimator must be able to make predictions, i.e., it should have been fitted or consolidated before calling this method.
The estimator's prediction must belong to a vector space with the same number of dimensions as the one the evaluation outputs inhabit.
The number of rows in evaluation_inputs must be equal to the number of rows in evaluation_outputs.
Parameters
estimator - estimator to score
evaluation_inputs - \( \texttip{n}{Number of samples} \times \texttip{d_x}{Dimension of the input vector space} \) evaluation input data
evaluation_outputs - \( \texttip{n}{Number of samples} \times \texttip{d_y}{Dimension of the output vector space} \) evaluation output data
Returns
root mean squared error score of the model

◆ rmse_score() [2/2]

double lucid::scorer::rmse_score (ConstMatrixRef x, ConstMatrixRef y)

Compute the root mean squared error (RMSE) score of the closeness between x and y.

We are given the set of row vectors \( x = \{ x_1, \dots, x_n \} \), where \( x_i \in \mathcal{X} \subseteq \mathbb{R}^{d}, 1 \le i \le n \), and the set of row vectors \( y = \{ y_1, \dots, y_n \} \), where \( y_i \in \mathcal{X} \subseteq \mathbb{R}^{d}, 1 \le i \le n \). The score lies in the range \( (-\infty, 0] \), where \( 0 \) indicates no distance (i.e., perfect predictions) and more negative values indicate more distance.

\[\text{RMSE} = -\sqrt{\frac{1}{n} \sum_{i=1}^n (y_i - x_i)^2} \]

where \( n \) is the number of rows in x and y. The RMSE score is always non-positive, and a higher value indicates less distance.

Warning
The RMSE score is non-positive by definition in this implementation.
Precondition
The number of rows in x must be equal to the number of rows in y.
Parameters
x - \( \texttip{n}{Number of samples} \times \texttip{d}{Dimension of the vector space} \) matrix of row vectors
y - \( \texttip{n}{Number of samples} \times \texttip{d}{Dimension of the vector space} \) matrix of row vectors
Returns
root mean squared error between the two sets of row vectors
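
Finally, a matching self-contained Eigen sketch of the RMSE score (illustrative only, not the library's implementation). It is simply the negated square root of the magnitude of the MSE score:

// Sketch of the documented RMSE score with Eigen; illustrative only.
#include <Eigen/Dense>
#include <cmath>
#include <iostream>

double rmse_sketch(const Eigen::MatrixXd& x, const Eigen::MatrixXd& y) {
  // -sqrt(1/n * sum_i (y_i - x_i)^2), i.e. -sqrt(|MSE|)
  return -std::sqrt((y - x).array().square().mean());
}

int main() {
  Eigen::MatrixXd x(2, 1), y(2, 1);
  x << 0.0, 0.0;
  y << 3.0, 4.0;
  std::cout << rmse_sketch(x, y) << "\n";  // -sqrt((9 + 16) / 2) = -3.5355...
}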