Unlike model visualization, which shows already observed data, the Performance tracking tab lets dashboard users track what happens to users after their predictive scores have been computed. To measure the real performance of a predictive score, we track the performance of all users scored on a given date, similar to cohort analysis in analytical tools. We group users scored on a given date into score deciles (0-10, 10-20, 20-30, ...) and compare each decile with its actual conversion rate. A good model shows conversion rates that increase with the deciles. Sample data is shown below.
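The decile grouping described above can be sketched as follows. This is a minimal illustration, not the dashboard's actual implementation; the `score`/`converted` fields and the `decile_conversion_rates` helper are hypothetical names chosen for the example.

```python
# Sketch of decile-based performance tracking for users scored on one date.
# Assumption: each user has a predictive score in [0, 100] and a boolean
# conversion outcome observed after scoring.

def decile_conversion_rates(users):
    """Group (score, converted) pairs into score deciles (0-10, 10-20,
    ..., 90-100) and return the actual conversion rate per decile."""
    buckets = {d: [0, 0] for d in range(10)}  # decile -> [conversions, total]
    for score, converted in users:
        d = min(int(score // 10), 9)  # a score of exactly 100 stays in 90-100
        buckets[d][1] += 1
        if converted:
            buckets[d][0] += 1
    return {
        f"{d * 10}-{d * 10 + 10}": (conv / total if total else None)
        for d, (conv, total) in buckets.items()
    }

# Example: a good model shows rates rising with the decile.
users = [(5, False), (15, False), (15, True), (95, True), (95, True)]
rates = decile_conversion_rates(users)
```

Here `rates["0-10"]` is 0.0, `rates["10-20"]` is 0.5, and `rates["90-100"]` is 1.0, i.e., conversion increases with the decile as expected.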
Campaigns or external experiments targeting scored users can influence the observed performance, so this report is not meant to measure performance lift or support firm conclusions. The historical context and conditions under which the AI model was trained may also differ from the present; for example, the historical behaviors may have come from a past holiday sale or a period when a site-wide promotional campaign was live. As a result, actual performance may deviate from the expected performance (i.e., a score of X converting at X%). Overall, we should still expect a directional increase in conversion rate in ascending order of deciles. To measure true lift, run an A/B test as described in the following “Testing Predictive Scores” section.