When you decision an event or entity, a few things happen:
- We immediately update your fraud prediction model with the new information. When you decision an entity "Looks Bad," your model immediately becomes better able to find similar fraudulent behaviors and traits. After you send a decision, all subsequent events we receive will be scored using these new insights.
- We immediately re-score any user identifiers sharing signals, e.g. the same device, to help you catch connected fraudsters as quickly as possible.
- We immediately re-score the user identifier the decision was sent for. This re-score is based on your fraud prediction model and not on the decision itself. In other words, a decision does not add points to a score. When the user identifier is re-scored, their score can go up, down, or stay the same. How the score changes depends on the events and decisions we've received for all of your other user identifiers in the meantime.
As a result of a re-score, sometimes the score change will not be intuitive: you'll decision an entity as "Looks Bad" and the score will go down, or apply a "Looks OK" decision and the score will go up. Your predictions improve with each new event and decision you send, and over time the score thresholds for taking actions like "block," "review/hold/add verification," and "accept" can shift.
As an example, if you decision an entity with a score of 95 as "Looks Bad," the score might change to 93. There are two important things to note here: first, when you automate with our APIs, you can key your logic off the "Looks Bad" decision itself, and in those cases the score wouldn't be a factor. Second, this change can be an indication that your average score has shifted and your accuracy has improved. If you're currently blocking at scores of 94 and above, this is a good time to take a look at entities with scores of 92 and 93. It's quite possible that Sift is now doing a better job of separating good and fraudulent users, so blocking at scores of 92 or 93 is ideal.
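As a minimal sketch of that automation pattern (the function name and threshold here are illustrative, not part of Sift's API), an existing decision takes precedence and the score is only consulted as a fallback:

```python
# Hypothetical automation sketch: a decision overrides the score.
# The threshold is illustrative; tune it for your own integration.
BLOCK_THRESHOLD = 94

def next_action(decision, score):
    """Pick an action for an entity: decision first, score second."""
    if decision == "Looks Bad":
        return "block"   # explicit decision wins, regardless of score
    if decision == "Looks OK":
        return "accept"
    if score >= BLOCK_THRESHOLD:
        return "block"   # no decision yet, so fall back to the score
    return "review"

# An entity decisioned "Looks Bad" is blocked even at a score of 93,
# while an undecisioned entity at 93 would go to review.
print(next_action("Looks Bad", 93))
print(next_action(None, 93))
```

This is one way to keep score drift from silently un-blocking entities your team has already decisioned.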
The same goes for "Looks OK": if you decision an entity as "Looks OK" and the score goes up, this isn't cause for concern. We look at thousands of fraud signals, and all entities have some signals that push their score up and some that pull it down. The higher the score, the more signals pushing the score up. Giving the "Looks OK" decision teaches Sift that those signals don't always indicate fraud. However, those signals still exist, so the score will not immediately drop. If the score continues to go up, it may be time to re-evaluate, as new learnings may indicate fraud.
If a score consistently seems far too high or too low, a useful question to ask is: is there anything I know about this entity that tells me they're good or bad that isn't in the Sift Console? If so, this might be a useful new event or field to send to improve your integration and bolster Sift's scoring.
Finally, it's best to put minor score shifts in perspective. If a user with a score of 50 moves to 46 or 54, they still likely fall in the same range as they did before (e.g. the "allow" range or the "review" range).
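The point about ranges can be put in code; the boundaries below are purely illustrative, not Sift recommendations:

```python
# Illustrative score ranges; pick boundaries that match your own workflow.
def score_range(score):
    if score >= 90:
        return "block"
    if score >= 60:
        return "review"
    return "allow"

# A minor shift rarely crosses a boundary: 46, 50, and 54 all land
# in the same range, so no workflow change is triggered.
print(score_range(46), score_range(50), score_range(54))
```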
When reviewing entities in the Sift Console, you can filter lists by decision. This way, you can choose to include or exclude entities that already have a "Looks Bad" or "Looks OK" decision when reviewing.