August 18, 2020

Accuracy, Precision, and Recall

  1. Accuracy is calculated by finding the total number of correctly classified points and dividing by the total number of points.

    1. (True Positives + True Negatives) / (True Positives + True Negatives + False Positives + False Negatives)

    2. True Positive is when the algorithm predicted you would get above a B (the positive outcome here), and you did.

    3. True Negative is when the algorithm predicted you would get below a B, and you did.

    4. False Positive is when the algorithm predicted you would get above a B, and you didn’t.

    5. False Negative is when the algorithm predicted you would get below a B, and you didn’t.

  2. Precision is the number of true positives divided by the number of times the model predicted positive.

    1. True Positives / (True Positives + False Positives)

  3. Recall measures the percentage of actual positive cases that your classifier correctly found.

    1. True Positives / (True Positives + False Negatives)

  4. F1 Score is the harmonic mean of precision and recall.

    1. 2 * (Precision * Recall) / (Precision + Recall)

  5. Precision and recall trade off against each other: tuning a model so that one goes up typically makes the other go down.

  6. Python’s scikit-learn library has functions that compute accuracy, recall, precision, and F1 score.

    1. accuracy_score

    2. recall_score

    3. precision_score

    4. f1_score

    5. They each take 2 parameters: the true labels first, then the predictions (see the sketch below).
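
As a quick check of the formulas above, here is a minimal sketch in Python. The labels are made up for illustration (1 = above a B, 0 = below a B); the hand-computed values should agree with what scikit-learn's functions return.

    from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

    # Hypothetical labels for eight students: 1 = above a B, 0 = below a B.
    y_true = [1, 1, 1, 1, 0, 0, 0, 0]  # what actually happened
    y_pred = [1, 1, 0, 0, 1, 0, 0, 0]  # what the algorithm predicted

    # Count the four outcomes by hand.
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # true positives
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))  # true negatives
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # false positives
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # false negatives

    # Apply the formulas from the notes above.
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * (precision * recall) / (precision + recall)

    # scikit-learn takes the true labels first, then the predictions.
    print(accuracy, accuracy_score(y_true, y_pred))    # 0.625
    print(precision, precision_score(y_true, y_pred))  # about 0.667
    print(recall, recall_score(y_true, y_pred))        # 0.5
    print(f1, f1_score(y_true, y_pred))                # about 0.571

Note how the example also shows the trade-off from point 5: this classifier predicts "above a B" sparingly, so its precision is higher than its recall.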
