Common Metrics for Regression and Classification
Mean Absolute Error (MAE)
Definition: The average absolute difference between the predicted values and the actual (target) values.
Mean Square Error (MSE)
Definition: The average squared difference between the predicted values and the actual (target) values.
Root Mean Square Error (RMSE)
Definition: The square root of the MSE, providing a more interpretable scale (same unit as the target variable).
R-Squared (R2)
Definition: The proportion of the variance in the dependent variable that is predictable from the independent variable(s).
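For illustration, the minimal Python sketch below computes all four regression metrics with NumPy. The function name regression_metrics and the sample arrays are illustrative placeholders, not part of the product.

import numpy as np

def regression_metrics(y_true, y_pred):
    # Compute MAE, MSE, RMSE, and R-Squared for two equal-length arrays.
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    errors = y_true - y_pred
    mae = np.mean(np.abs(errors))                   # average absolute difference
    mse = np.mean(errors ** 2)                      # average squared difference
    rmse = np.sqrt(mse)                             # same unit as the target variable
    ss_res = np.sum(errors ** 2)                    # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)  # total sum of squares
    r2 = 1.0 - ss_res / ss_tot                      # proportion of variance explained
    return {"MAE": mae, "MSE": mse, "RMSE": rmse, "R2": r2}

print(regression_metrics([3.0, 5.0, 2.5, 7.0], [2.5, 5.0, 4.0, 8.0]))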
Additional Metrics for Regression (only in Auto mode)
%Prediction Absolute Error (PAE)
Definition: Mean Absolute Percentage Error (MAPE) is a commonly used metric for evaluating the accuracy of forecasts or predictions; it measures the average absolute percentage difference between the predicted and actual values. Building on MAPE, we have defined %Prediction Absolute Error (PAE) as follows:

PAE expresses prediction errors as a percentage of the actual values, making it easier to interpret and compare across different datasets or forecasts. A lower PAE indicates better accuracy; a PAE of zero means a perfect prediction, where the predicted values exactly match the actual values.
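Since PAE builds on MAPE, the minimal Python sketch below shows the standard MAPE computation as a reference point. It illustrates the underlying percentage-error idea only; it is not the exact PAE formula.

import numpy as np

def mape(y_true, y_pred):
    # Standard Mean Absolute Percentage Error (the metric PAE builds on).
    # NOTE: illustrative only; not the product's exact PAE definition.
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    nonzero = y_true != 0  # percentage error is undefined where the actual value is zero
    return 100.0 * np.mean(np.abs((y_true[nonzero] - y_pred[nonzero]) / y_true[nonzero]))

print(mape([100.0, 200.0, 50.0], [110.0, 190.0, 55.0]))  # -> 8.33 (percent)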
%Uncertainty (UNC)
Definition: The %Uncertainty (UNC) is the relative uncertainty of the mean Absolute Relative Error (ARE). It provides insight into how much the forecast (prediction) errors vary relative to the overall average error, and is defined as follows:

A higher UNC suggests that the forecasted (or prediction) errors are more dispersed around their mean, indicating greater variability or uncertainty in the forecasts (or predictions). Conversely, a lower UNC implies that the errors are closer to the mean, indicating less variability or uncertainty.
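As a rough illustration of this behavior, the Python sketch below models the relative uncertainty of the mean error as the ratio of the standard deviation of the absolute relative errors to their mean (a coefficient of variation). That reading is an assumption made here for illustration; the exact UNC formula may differ.

import numpy as np

def relative_uncertainty(y_true, y_pred):
    # ASSUMPTION: UNC modeled as std(ARE) / mean(ARE), expressed as a percentage.
    # The exact UNC definition may differ from this coefficient of variation.
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    are = np.abs((y_true - y_pred) / y_true)   # absolute relative error per point
    return 100.0 * np.std(are) / np.mean(are)  # dispersion relative to the mean error

print(relative_uncertainty([100.0, 200.0, 50.0], [110.0, 190.0, 55.0]))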
Additional Metrics for Classification
TP = True Positives
FP = False Positives
TN = True Negatives
FN = False Negatives
False Classified Examples (FCE)
Definition: The total number of incorrectly classified examples.
FCE = FP + FN
False Positive Ratio (FPR)
Definition: The ratio of false positives to the total negatives.
FPR = FP / (FP + TN)
False Negative Ratio (FNR)
Definition: The ratio of false negatives to the total positives.
FNR = FN / (FN + TP)
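To make these counts and ratios concrete, the Python sketch below derives TP, FP, TN, and FN from binary label arrays and computes FCE, FPR, and FNR; all names and sample data are illustrative. The ratio metrics defined next are computed in a second sketch at the end of this section.

import numpy as np

def confusion_counts(y_true, y_pred):
    # Derive TP, FP, TN, FN from binary (0/1) label arrays.
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = int(np.sum((y_pred == 1) & (y_true == 1)))
    fp = int(np.sum((y_pred == 1) & (y_true == 0)))
    tn = int(np.sum((y_pred == 0) & (y_true == 0)))
    fn = int(np.sum((y_pred == 0) & (y_true == 1)))
    return tp, fp, tn, fn

tp, fp, tn, fn = confusion_counts([1, 0, 1, 1, 0, 0], [1, 0, 0, 1, 1, 0])
fce = fp + fn          # False Classified Examples
fpr = fp / (fp + tn)   # false positives over total negatives
fnr = fn / (fn + tp)   # false negatives over total positives
print(fce, fpr, fnr)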
True Positive Ratio (TPR) / Sensitivity / Recall
Definition: The ratio of true positives to the total positives.
TPR = TP / (TP + FN)

True Negative Ratio (TNR) / Specificity
Definition: The ratio of true negatives to the total negatives.
TNR = TN / (TN + FP)

Accuracy (ACC)
Definition: The ratio of correctly classified instances to the total instances.
ACC = (TP + TN) / (TP + TN + FP + FN)

Balanced Accuracy (BACC)
Definition: Balanced accuracy accounts for imbalanced datasets by averaging TPR and TNR.
BACC = (TPR + TNR) / 2

Positive Predictive Value (PPV) / Precision
Definition: The ratio of true positives to the total predicted positives.
PPV = TP / (TP + FP)

F1 Score (F1)
Definition: The harmonic mean of precision and recall; useful for imbalanced datasets.
F1 = 2 × (PPV × TPR) / (PPV + TPR)

Fowlkes-Mallows index (FM)
Definition: The geometric mean of precision and recall; it considers both false positives and false negatives.
FM = √(PPV × TPR)

Jaccard index (JI) / Critical Success index (CSI)
Definition: The ratio of the intersection to the union of predicted and actual positives.
JI = TP / (TP + FP + FN)
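All of the ratio metrics above follow directly from the four confusion counts. The Python sketch below computes them from illustrative counts; in practice the counts come from your model's predictions.

import math

tp, fp, tn, fn = 40, 10, 45, 5  # illustrative counts only

tpr = tp / (tp + fn)                   # sensitivity / recall
tnr = tn / (tn + fp)                   # specificity
acc = (tp + tn) / (tp + tn + fp + fn)  # accuracy
bacc = (tpr + tnr) / 2                 # balanced accuracy
ppv = tp / (tp + fp)                   # precision
f1 = 2 * ppv * tpr / (ppv + tpr)       # harmonic mean of precision and recall
fm = math.sqrt(ppv * tpr)              # geometric mean of precision and recall
ji = tp / (tp + fp + fn)               # Jaccard / Critical Success index

print(f"TPR={tpr:.3f} TNR={tnr:.3f} ACC={acc:.3f} BACC={bacc:.3f}")
print(f"PPV={ppv:.3f} F1={f1:.3f} FM={fm:.3f} JI={ji:.3f}")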
