Precision and recall are two ways of measuring model accuracy. At Sift, we usually look at precision and recall as they pertain to a specific score threshold.
Precision answers the question, "Above a given score threshold, what percent of cases are actually fraud?". For example, if 95 out of 100 orders scored above 80 are confirmed to be fraud, your precision at 80 is 95%.
Recall answers the question, "Out of all confirmed fraud cases, what percent are captured above a given threshold?". For example, if 700 out of 1,000 total fraud cases are scored above 91, then your recall at 91 is 70%.
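The two definitions above can be sketched as a small helper. This is an illustrative computation, not Sift's actual implementation; the function name and inputs (a list of scores and matching confirmed-fraud labels) are assumptions for the example.

```python
def precision_recall_at_threshold(scores, labels, threshold):
    """Precision and recall for cases scored above `threshold`.

    scores: model scores (e.g. 0-100).
    labels: True where the case was confirmed fraud.
    """
    # Fraud labels for the cases the threshold flags.
    flagged = [is_fraud for score, is_fraud in zip(scores, labels)
               if score > threshold]
    total_fraud = sum(labels)

    # Precision: of the flagged cases, how many were actually fraud?
    precision = sum(flagged) / len(flagged) if flagged else 0.0
    # Recall: of all fraud cases, how many did the threshold capture?
    recall = sum(flagged) / total_fraud if total_fraud else 0.0
    return precision, recall
```

Feeding in the first example above (100 cases above the threshold, 95 of them fraud) returns a precision of 0.95 at that threshold.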
Machine learning allows us to achieve both high precision and high recall, but there is a tradeoff between the two at any given threshold: raising the threshold generally improves precision at the cost of recall, and lowering it does the reverse.
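The tradeoff can be seen by sweeping the threshold over a small set of scored cases. The data here is a toy example invented for illustration:

```python
# Toy scored cases: (score, confirmed_fraud). Not real Sift data.
cases = [(95, True), (90, True), (85, False), (80, True),
         (70, True), (60, False), (50, False), (40, True)]

total_fraud = sum(1 for _, is_fraud in cases if is_fraud)

for threshold in (50, 70, 85):
    above = [is_fraud for score, is_fraud in cases if score > threshold]
    precision = sum(above) / len(above)
    recall = sum(above) / total_fraud
    print(f"threshold {threshold}: precision={precision:.2f} recall={recall:.2f}")
```

On this data, moving the threshold from 50 to 85 raises precision from 0.67 to 1.00 while recall drops from 0.80 to 0.40, which is the tradeoff described above.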