When you train a model on Intersect Labs, the model report page shows a metric for the model's accuracy. What is the best way to interpret it?

For starters, depending on which column you are trying to predict, you will see either "Accuracy" or "Error Margin".

Accuracy

When all the values in your target column fit into one of a small number of buckets (e.g., "Won/Lost", "Churned/Not churned", etc.), it's a classification problem, and Intersect Labs surfaces "Accuracy" as the metric. If the accuracy is, say, 84%, you can interpret that as "this model correctly classifies new data into one of those buckets 84% of the time", or in other words, "it will be wrong in 16% of cases".

For such projects, you want the accuracy to be as close to 100% as possible, but when the target involves human behavior (e.g., whether a customer will churn, or whether a lead will convert to a customer), we think any accuracy above about 75% is amazing. Think about it: human behavior is complex and depends on so many factors. Getting it right 3 out of 4 times is an outstanding result! On the other hand, if you are modeling the behavior of a physical system (e.g., a chemical process), your goal should be higher than 90%. And since you don't want your model to be worse than a coin toss, for a two-bucket problem your accuracy should be at least 50%.
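Under the hood, accuracy is just the fraction of predictions that match the actual labels. A minimal sketch of that calculation (the function and sample data here are illustrative, not part of Intersect Labs):

```python
def accuracy(predictions, actuals):
    """Fraction of predictions that exactly match the actual labels."""
    correct = sum(p == a for p, a in zip(predictions, actuals))
    return correct / len(actuals)

# Example: a churn model evaluated on 5 customers
predicted = ["Churned", "Not churned", "Churned", "Not churned", "Churned"]
actual    = ["Churned", "Not churned", "Not churned", "Not churned", "Churned"]
print(accuracy(predicted, actual))  # 0.8 -- right 4 times out of 5, i.e. 80%
```

The same formula works for any number of buckets, not just two.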

Error Margin

If, on the other hand, your target column is a set of numbers (e.g., the number of expected orders next month, the lifetime value of a new customer, etc.), it's a regression problem. In such cases, you will see a metric called "Error Margin". If the error margin is, say, 8%, you can interpret that as "on average, this model's predictions are off from the actual values by 8%".

For a regression problem, you would like your predictions to exactly equal the actual values every time, which would translate into an ideal error margin of 0%. Of course, no model is that good, so you want to get as close to 0% as possible. There is no real upper bound on error margin; if your data is completely unpredictive, you could see error margins as high as 10,000%.
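The description above matches what is commonly called the mean absolute percentage error (MAPE): average the per-row deviation, expressed as a percentage of the actual value. Intersect Labs doesn't publish its exact formula, so treat this as an illustrative sketch rather than the product's implementation:

```python
def error_margin(predictions, actuals):
    """Mean absolute percentage error: average of |predicted - actual| / |actual|,
    expressed as a percentage. Assumes no actual value is zero."""
    errors = [abs(p - a) / abs(a) for p, a in zip(predictions, actuals)]
    return 100 * sum(errors) / len(errors)

# Example: predicted vs. actual monthly order counts
predicted = [110, 95, 205]
actual = [100, 100, 200]
print(round(error_margin(predicted, actual), 2))  # 5.83 -- off by ~5.83% on average
```

Percentages also explain how the metric can exceed 100%: predicting 500 when the actual value is 4 is a 12,400% error on that row, which is how a thoroughly unpredictive model can reach error margins in the thousands.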
