Accuracy, precision, recall, and F1-score can be calculated using the values from a confusion matrix. Here's how to calculate these metrics:
- Accuracy: the overall proportion of correct predictions.
  Accuracy = (TP + TN) / (TP + TN + FP + FN)
- Precision: the proportion of true positives among all positive predictions; it reflects the model's ability to avoid false positives.
  Precision = TP / (TP + FP)
- Recall (also called sensitivity or true positive rate): the proportion of true positives among all actual positive instances; it reflects the model's ability to find every positive instance.
  Recall = TP / (TP + FN)
- F1 Score: the harmonic mean of precision and recall, giving a single measure that balances both.
  F1 Score = 2 * (Precision * Recall) / (Precision + Recall)
In these formulas, TP, TN, FP, and FN are the counts of true positives, true negatives, false positives, and false negatives, respectively, read from the confusion matrix.
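The four formulas above can be sketched as a small helper function; the example counts below are hypothetical, and the guards against division by zero are a common convention (returning 0.0 when a denominator is empty) rather than part of the definitions themselves:

```python
def classification_metrics(tp, tn, fp, fn):
    """Compute accuracy, precision, recall, and F1 from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    # Guard against empty denominators (e.g. no positive predictions at all).
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return accuracy, precision, recall, f1

# Hypothetical confusion-matrix counts: 40 TP, 45 TN, 10 FP, 5 FN.
acc, prec, rec, f1 = classification_metrics(tp=40, tn=45, fp=10, fn=5)
print(f"accuracy={acc:.2f} precision={prec:.2f} recall={rec:.2f} f1={f1:.2f}")
# → accuracy=0.85 precision=0.80 recall=0.89 f1=0.84
```

Note how precision and recall diverge here: the model makes twice as many false positives as false negatives, so precision (0.80) falls below recall (0.89), and F1 lands between them.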
Together, these metrics give a fuller picture of a classification model's performance than any single one alone.