lucid 0.0.1
Lifting-based Uncertain Control Invariant Dynamics
Collection of utilities used to score the accuracy of estimators.
Typedefs
| using | Scorer = std::function<double(const Estimator&, ConstMatrixRef, ConstMatrixRef)> |
| Function type used to score the estimator. | |
| using | ScorerType = double (*)(const Estimator&, ConstMatrixRef, ConstMatrixRef) |
| Function pointer type used to score the estimator. | |
Functions
| double | r2_score (ConstMatrixRef x, ConstMatrixRef y) |
Score the closeness between x and y, assigning it a numerical value. | |
| double | r2_score (const Estimator &estimator, ConstMatrixRef evaluation_inputs, ConstMatrixRef evaluation_outputs) |
Score the estimator, assigning a numerical value to its accuracy in predicting the evaluation_outputs given the evaluation_inputs. | |
| double | mse_score (ConstMatrixRef x, ConstMatrixRef y) |
Compute the mean squared error (MSE) between x and y. | |
| double | mse_score (const Estimator &estimator, ConstMatrixRef evaluation_inputs, ConstMatrixRef evaluation_outputs) |
Compute the mean squared error (MSE) score of the estimator on the given evaluation data. | |
| double | rmse_score (ConstMatrixRef x, ConstMatrixRef y) |
Compute the root mean squared error (RMSE) score of the closeness between x and y. | |
| double | rmse_score (const Estimator &estimator, ConstMatrixRef evaluation_inputs, ConstMatrixRef evaluation_outputs) |
Compute the root mean squared error (RMSE) score of the estimator on the given evaluation data. | |
| double | mape_score (ConstMatrixRef x, ConstMatrixRef y) |
Compute the mean absolute percentage error (MAPE) score of the closeness between x and y. | |
| double | mape_score (const Estimator &estimator, ConstMatrixRef evaluation_inputs, ConstMatrixRef evaluation_outputs) |
Compute the mean absolute percentage error (MAPE) score of the estimator on the given evaluation data. | |
Collection of utilities used to score the accuracy of estimators.
| using lucid::scorer::Scorer = std::function<double(const Estimator&, ConstMatrixRef, ConstMatrixRef)> |
Function type used to score the estimator.
| estimator | Estimator object to score |
| evaluation_inputs | \( n \times d_x \) matrix of row vectors in the input space \( \mathcal{X} \) |
| evaluation_outputs | \( n \times d_y \) matrix of row vectors in the output space \( \mathcal{Y} \) |
| using lucid::scorer::ScorerType = double (*)(const Estimator&, ConstMatrixRef, ConstMatrixRef) |
Function pointer type used to score the estimator.
| estimator | Estimator object to score |
| evaluation_inputs | \( n \times d_x \) matrix of row vectors in the input space \( \mathcal{X} \) |
| evaluation_outputs | \( n \times d_y \) matrix of row vectors in the output space \( \mathcal{Y} \) |
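Neither alias is tied to the built-in metrics: any callable with the documented signature can be stored. Below is a minimal sketch of how the two aliases might be used together. The header path and the make_scorer helper are assumptions; only the signatures documented above are taken from this page.

```cpp
#include <lucid/scorer.hpp>  // assumed header path; adjust to the real one

// Pick a metric at runtime and hand it around as a value.
lucid::scorer::Scorer make_scorer(bool use_r2) {
  // The plain function-pointer alias lets the compiler resolve the overload
  // taking (const Estimator&, ConstMatrixRef, ConstMatrixRef) from the
  // target type of the initialization/assignment.
  lucid::scorer::ScorerType fn = &lucid::scorer::r2_score;
  if (!use_r2) fn = &lucid::scorer::mse_score;
  // The std::function alias wraps the resolved pointer; a lambda with the
  // same signature would work just as well.
  return lucid::scorer::Scorer{fn};
}
```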
| double lucid::scorer::mape_score (const Estimator &estimator, ConstMatrixRef evaluation_inputs, ConstMatrixRef evaluation_outputs) |
Compute the mean absolute percentage error (MAPE) score of the estimator on the given evaluation data.
Given the evaluation inputs \( x = \{ x_1, \dots, x_n \} \), where \( x_i \in \mathcal{X} \subseteq \mathbb{R}^{d_x}, 1 \le i \le n \), we want to compute the mean absolute percentage error of the model's predictions \( \hat{y} = \{ \hat{y}_1, \dots, \hat{y}_n \} \), where \( \hat{y}_i \in \mathcal{Y} \subseteq \mathbb{R}^{d_y}, 1 \le i \le n \), with respect to the true outputs \( y = \{ y_1, \dots, y_n \} \), where \( y_i \in \mathcal{Y}, 1 \le i \le n \). The score lies in the range \( (-\infty, 0] \), where \( 0 \) indicates a perfect fit and more negative values indicate a worse fit.
\[\text{MAPE} = -\frac{1}{n} \sum_{i=1}^n \left| \frac{y_i - \hat{y}_i}{y_i} \right| \]
where \( n \) is the number of rows in the evaluation data. The MAPE score is always non-positive, and a higher value indicates a better fit.
The number of rows in evaluation_inputs must be equal to the number of rows in evaluation_outputs. No entry of evaluation_outputs should be zero, as this would lead to division by zero in the MAPE calculation.
| estimator | estimator to score |
| evaluation_inputs | \( \texttip{n}{Number of samples} \times \texttip{d_x}{Dimension of the input vector space} \) evaluation input data |
| evaluation_outputs | \( \texttip{n}{Number of samples} \times \texttip{d_y}{Dimension of the output vector space} \) evaluation output data |
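As a usage sketch, the estimator overloads of all four metrics follow the same calling pattern: pass a fitted estimator together with an \( n \times d_x \) input matrix and the matching \( n \times d_y \) output matrix. The helper below is hypothetical (templated only so it does not assume how an Estimator is constructed or fitted), the header path is a guess, and it assumes ConstMatrixRef accepts a dense Eigen matrix; the scorer calls themselves use the signatures documented on this page.

```cpp
#include <Eigen/Dense>
#include <iostream>
#include <lucid/scorer.hpp>  // assumed header path; adjust to the real one

// Hypothetical helper: report every metric documented here for a fitted
// estimator on held-out evaluation data.
template <class EstimatorT>
void report_scores(const EstimatorT& estimator,
                   const Eigen::MatrixXd& evaluation_inputs,    // n x d_x
                   const Eigen::MatrixXd& evaluation_outputs) { // n x d_y
  using namespace lucid::scorer;
  std::cout << "R^2  : " << r2_score(estimator, evaluation_inputs, evaluation_outputs) << '\n'
            << "MSE  : " << mse_score(estimator, evaluation_inputs, evaluation_outputs) << '\n'
            << "RMSE : " << rmse_score(estimator, evaluation_inputs, evaluation_outputs) << '\n'
            << "MAPE : " << mape_score(estimator, evaluation_inputs, evaluation_outputs) << '\n';
}
```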
| double lucid::scorer::mape_score (ConstMatrixRef x, ConstMatrixRef y) |
Compute the mean absolute percentage error (MAPE) score of the closeness between x and y.
We are given the set of row vectors \( x = \{ x_1, \dots, x_n \} \), where \( x_i \in \mathcal{X} \subseteq \mathbb{R}^{d}, 1 \le i \le n \), and the set of row vectors \( y = \{ y_1, \dots, y_n \} \), where \( y_i \in \mathcal{X} \subseteq \mathbb{R}^{d}, 1 \le i \le n \). The score lies in the range \( (-\infty, 0] \), where \( 0 \) indicates no distance (i.e., perfect predictions) and more negative values indicate more distance.
\[\text{MAPE} = -\frac{1}{n} \sum_{i=1}^n \left| \frac{y_i - x_i}{y_i} \right| \]
where \( n \) is the number of rows in the evaluation data. The MAPE score is always non-positive, and a higher value indicates a better fit.
The number of rows in x must be equal to the number of rows in y. No entry of y should be zero, as this would lead to division by zero in the MAPE calculation.
| x | \( \texttip{n}{Number of samples} \times \texttip{d}{Dimension of the vector space} \) matrix of row vectors |
| y | \( \texttip{n}{Number of samples} \times \texttip{d}{Dimension of the vector space} \) matrix of row vectors |
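The formula itself is easy to reproduce; the sketch below evaluates it on a small example with plain Eigen. It is not lucid's implementation: the entry-wise treatment of multi-column data is an assumption, and it relies on y containing no zeros.

```cpp
#include <Eigen/Dense>
#include <iostream>

// Stand-alone sketch of the negated MAPE formula above (not lucid's code).
int main() {
  Eigen::MatrixXd x(3, 1), y(3, 1);
  x << 95.0, 210.0, 48.0;   // predictions
  y << 100.0, 200.0, 50.0;  // reference values, all non-zero
  // Mean of |(y - x) / y| over every entry, negated so that 0 is best.
  const double mape = -((y - x).array() / y.array()).abs().mean();
  std::cout << "MAPE score: " << mape << '\n';  // approximately -0.0467
}
```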
| double lucid::scorer::mse_score (const Estimator &estimator, ConstMatrixRef evaluation_inputs, ConstMatrixRef evaluation_outputs) |
Compute the mean squared error (MSE) score of the estimator on the given evaluation data.
Given the evaluation inputs \( x = \{ x_1, \dots, x_n \} \), where \( x_i \in \mathcal{X} \subseteq \mathbb{R}^{d_x}, 1 \le i \le n \), we want to compute the mean squared error of the model's predictions \( \hat{y} = \{ \hat{y}_1, \dots, \hat{y}_n \} \), where \( \hat{y}_i \in \mathcal{Y} \subseteq \mathbb{R}^{d_y}, 1 \le i \le n \), with respect to the true outputs \( y = \{ y_1, \dots, y_n \} \), where \( y_i \in \mathcal{Y}, 1 \le i \le n \). The score lies in the range \( (-\infty, 0] \), where \( 0 \) indicates a perfect fit and more negative values indicate a worse fit.
\[\text{MSE} = -\frac{1}{n} \sum_{i=1}^n (y_i - \hat{y}_i)^2 \]
where \( n \) is the number of rows in the evaluation data. The MSE score is always non-positive, and a higher value indicates a better fit.
The number of rows in evaluation_inputs must be equal to the number of rows in evaluation_outputs.
| estimator | estimator to score |
| evaluation_inputs | \( \texttip{n}{Number of samples} \times \texttip{d_x}{Dimension of the input vector space} \) evaluation input data |
| evaluation_outputs | \( \texttip{n}{Number of samples} \times \texttip{d_y}{Dimension of the output vector space} \) evaluation output data |
| double lucid::scorer::mse_score (ConstMatrixRef x, ConstMatrixRef y) |
Compute the mean squared error (MSE) between x and y.
We are given the set of row vectors \( x = \{ x_1, \dots, x_n \} \), where \( x_i \in \mathcal{X} \subseteq \mathbb{R}^{d}, 1 \le i \le n \), and the set of row vectors \( y = \{ y_1, \dots, y_n \} \), where \( y_i \in \mathcal{X} \subseteq \mathbb{R}^{d}, 1 \le i \le n \). The score lies in the range \( (-\infty, 0] \), where \( 0 \) indicates no distance (i.e., perfect predictions) and more negative values indicate more distance.
\[\text{MSE} = -\frac{1}{n} \sum_{i=1}^n (y_i - x_i)^2 \]
where \( n \) is the number of rows in the evaluation data. The MSE score is always non-positive, and a higher value indicates a better fit.
The number of rows in x must be equal to the number of rows in y.
| x | \( \texttip{n}{Number of samples} \times \texttip{d}{Dimension of the vector space} \) matrix of row vectors |
| y | \( \texttip{n}{Number of samples} \times \texttip{d}{Dimension of the vector space} \) matrix of row vectors |
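A stand-alone numeric sketch of the formula with plain Eigen follows. It is not lucid's implementation; averaging over all \( n \cdot d \) entries of the difference matrix is an assumption about the multi-column convention.

```cpp
#include <Eigen/Dense>
#include <iostream>

// Stand-alone sketch of the negated MSE formula above (not lucid's code).
int main() {
  Eigen::MatrixXd x(3, 2), y(3, 2);
  x << 1.0, 2.0,
       3.0, 4.0,
       5.0, 6.0;   // predictions
  y << 1.1, 1.9,
       2.8, 4.2,
       5.0, 6.1;   // reference values
  // Mean of the squared entry-wise differences, negated so that 0 is best.
  const double mse = -(y - x).array().square().mean();
  std::cout << "MSE score: " << mse << '\n';
}
```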
| double lucid::scorer::r2_score (const Estimator &estimator, ConstMatrixRef evaluation_inputs, ConstMatrixRef evaluation_outputs) |
Score the estimator, assigning a numerical value to its accuracy in predicting the evaluation_outputs given the evaluation_inputs.
Given the evaluation inputs \( x = \{ x_1, \dots, x_n \} \), where \( x_i \in \mathcal{X} \subseteq \mathbb{R}^{d_x}, 1 \le i \le n \), we want to give a numerical score to the model's predictions \( \hat{y} = \{ \hat{y}_1, \dots, \hat{y}_n \} \), where \( \hat{y}_i \in \mathcal{Y} \subseteq \mathbb{R}^{d_y}, 1 \le i \le n \), with respect to the true outputs \( y = \{ y_1, \dots, y_n \} \), where \( y_i \in \mathcal{Y}, 1 \le i \le n \). The score is computed as the coefficient of determination, also known as the \( R^2 \) score, defined as
\[R^2 = 1 - \frac{\sum_{i=1}^n (y_i - \hat{y}_i)^2}{\sum_{i=1}^n (y_i - \bar{y})^2} \]
where \( \bar{y} \) is the mean of the true outputs \( y \) and the variance of the true outputs, \( \sum_{i=1}^n (y_i - \bar{y})^2 \), is assumed to be greater than 0. The score lies in the range \( (-\infty, 1] \), where \( 1 \) indicates a perfect fit and \( 0 \) indicates that the model is no better than simply predicting the expected value of the true outputs.
The variance of evaluation_outputs must be greater than 0. This is trivially false if all outputs are equal or only one row is present; if this precondition is not met, the result may be NaN. The number of rows in evaluation_inputs must be equal to the number of rows in evaluation_outputs.
| estimator | estimator to score |
| evaluation_inputs | \( \texttip{n}{Number of samples} \times \texttip{d_x}{Dimension of the input vector space} \) evaluation input data |
| evaluation_outputs | \( \texttip{n}{Number of samples} \times \texttip{d_y}{Dimension of the output vector space} \) evaluation output data |
| double lucid::scorer::r2_score (ConstMatrixRef x, ConstMatrixRef y) |
Score the closeness between x and y, assigning it a numerical value.
We are given the set of row vectors \( x = \{ x_1, \dots, x_n \} \), where \( x_i \in \mathcal{X} \subseteq \mathbb{R}^{d}, 1 \le i \le n \), and the set of row vectors \( y = \{ y_1, \dots, y_n \} \), where \( y_i \in \mathcal{X} \subseteq \mathbb{R}^{d}, 1 \le i \le n \). The score is computed as the coefficient of determination, also known as the \( R^2 \) score, defined as
\[R^2 = 1 - \frac{\sum_{i=1}^n (y_i - x_i)^2}{\sum_{i=1}^n (y_i - \bar{y})^2} \]
where \( \bar{y} \) is the mean of \( y \) and the variance of \( y \), \( \sum_{i=1}^n (y_i - \bar{y})^2 \), is assumed to be greater than 0. The score lies in the range \( (-\infty, 1] \), where \( 1 \) indicates no distance (i.e., perfect predictions) and \( 0 \) indicates that \( x \) matches the expected value of \( y \).
The variance of y must be greater than 0. This is trivially false if all rows of y are equal or only one row is present; if this precondition is not met, the result may be NaN. The number of rows in x must be equal to the number of rows in y.
| x | \( \texttip{n}{Number of samples} \times \texttip{d}{Dimension of the vector space} \) matrix of row vectors |
| y | \( \texttip{n}{Number of samples} \times \texttip{d}{Dimension of the vector space} \) matrix of row vectors |
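A stand-alone sketch of the \( R^2 \) computation with plain Eigen follows. It is not lucid's implementation; the column-wise mean \( \bar{y} \) and the pooling of both sums over all entries are assumptions about the multi-column convention.

```cpp
#include <Eigen/Dense>
#include <iostream>

// Stand-alone sketch of the R^2 formula above (not lucid's code).
int main() {
  Eigen::MatrixXd x(4, 1), y(4, 1);
  x << 2.5, 0.0, 2.0, 8.0;   // predictions
  y << 3.0, -0.5, 2.0, 7.0;  // reference values (must not all be equal)
  const double ss_res = (y - x).array().square().sum();       // residual sum of squares
  const Eigen::RowVectorXd y_mean = y.colwise().mean();       // column-wise mean of y
  const double ss_tot = (y.rowwise() - y_mean).array().square().sum();  // total sum of squares
  const double r2 = 1.0 - ss_res / ss_tot;
  std::cout << "R^2 score: " << r2 << '\n';  // approximately 0.9486
}
```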
| double lucid::scorer::rmse_score (const Estimator &estimator, ConstMatrixRef evaluation_inputs, ConstMatrixRef evaluation_outputs) |
Compute the root mean squared error (RMSE) score of the estimator on the given evaluation data.
Given the evaluation inputs \( x = \{ x_1, \dots, x_n \} \), where \( x_i \in \mathcal{X} \subseteq \mathbb{R}^{d_x}, 1 \le i \le n \), we want to compute the root mean squared error of the model's predictions \( \hat{y} = \{ \hat{y}_1, \dots, \hat{y}_n \} \), where \( \hat{y}_i \in \mathcal{Y} \subseteq \mathbb{R}^{d_y}, 1 \le i \le n \), with respect to the true outputs \( y = \{ y_1, \dots, y_n \} \), where \( y_i \in \mathcal{Y}, 1 \le i \le n \). The score lies in the range \( (-\infty, 0] \), where \( 0 \) indicates a perfect fit and more negative values indicate a worse fit.
\[\text{RMSE} = -\sqrt{\frac{1}{n} \sum_{i=1}^n (y_i - \hat{y}_i)^2} \]
where \( n \) is the number of rows in the evaluation data. The RMSE score is always non-positive, and a higher value indicates a better fit.
The number of rows in evaluation_inputs must be equal to the number of rows in evaluation_outputs.
| estimator | estimator to score |
| evaluation_inputs | \( \texttip{n}{Number of samples} \times \texttip{d_x}{Dimension of the input vector space} \) evaluation input data |
| evaluation_outputs | \( \texttip{n}{Number of samples} \times \texttip{d_y}{Dimension of the output vector space} \) evaluation output data |
| double lucid::scorer::rmse_score (ConstMatrixRef x, ConstMatrixRef y) |
Compute the root mean squared error (RMSE) score of the closeness between x and y.
We are given the set of row vectors \( x = \{ x_1, \dots, x_n \} \), where \( x_i \in \mathcal{X} \subseteq \mathbb{R}^{d}, 1 \le i \le n \), and the set of row vectors \( y = \{ y_1, \dots, y_n \} \), where \( y_i \in \mathcal{X} \subseteq \mathbb{R}^{d}, 1 \le i \le n \). The score lies in the range \( (-\infty, 0] \), where \( 0 \) indicates no distance (i.e., perfect predictions) and more negative values indicate more distance.
\[\text{RMSE} = -\sqrt{\frac{1}{n} \sum_{i=1}^n (y_i - x_i)^2} \]
where \( n \) is the number of rows in the evaluation data. The RMSE score is always non-positive, and a higher value indicates less distance.
The number of rows in x must be equal to the number of rows in y.
| x | \( \texttip{n}{Number of samples} \times \texttip{d}{Dimension of the vector space} \) matrix of row vectors |
| y | \( \texttip{n}{Number of samples} \times \texttip{d}{Dimension of the vector space} \) matrix of row vectors |
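A stand-alone sketch of the formula with plain Eigen follows. It is not lucid's implementation; taking the mean over all entries before the square root is an assumption about the multi-column convention.

```cpp
#include <Eigen/Dense>
#include <cmath>
#include <iostream>

// Stand-alone sketch of the negated RMSE formula above (not lucid's code).
int main() {
  Eigen::MatrixXd x(3, 2), y(3, 2);
  x << 1.0, 2.0,
       3.0, 4.0,
       5.0, 6.0;   // predictions
  y << 1.1, 1.9,
       2.8, 4.2,
       5.0, 6.1;   // reference values
  // Square root of the mean squared entry-wise difference, negated.
  const double rmse = -std::sqrt((y - x).array().square().mean());
  std::cout << "RMSE score: " << rmse << '\n';
}
```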