Full disclosure, I have almost certainly written about error types previously. But given that I didn’t write anything for 18 months, and I think this is a critically important subject, I’m going to risk repeating myself.
There are two main ways to be wrong about a hypothesis: you can think it is correct when it’s wrong, and you can think it is incorrect when it’s right. Those are Type I and Type II errors, respectively.
A helpful way to remember this is the story of The Boy Who Cried Wolf. This is a story of Type I error (thinking there’s a wolf when there isn’t) followed by Type II error (thinking there isn’t a wolf when there is). It took me weeks of reading statistics textbooks to fully internalize the difference between these two, and then someone just related it to a fable and now it’s locked in my brain forever.
At the end of the day you’re wrong in both scenarios, but the cost of being wrong in one direction tends to be very different from the cost of being wrong in the other. In The Boy Who Cried Wolf, the Type II error was much more costly. In infectious disease testing, Type II error (thinking someone is negative when they are positive) is much more costly. In credit score modeling, Type I error (thinking someone has acceptable credit when they do not) is much more costly.
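To make that asymmetry concrete, here’s a toy calculation (the dollar figures are made up for illustration) showing how the same number of errors can carry wildly different costs depending on which direction they run:

```python
# Hypothetical costs: the numbers are invented, only the asymmetry matters.
cost_per_false_positive = 50      # e.g. one unnecessary follow-up test
cost_per_false_negative = 5000    # e.g. one missed infection

# Two classifiers with the same total error count, split differently.
mostly_type_i = {"false_positives": 18, "false_negatives": 2}
mostly_type_ii = {"false_positives": 2, "false_negatives": 18}

def total_cost(errors):
    return (errors["false_positives"] * cost_per_false_positive
            + errors["false_negatives"] * cost_per_false_negative)

print(total_cost(mostly_type_i))   # 10900
print(total_cost(mostly_type_ii))  # 90100
```

Twenty errors either way, but nearly a 10x difference in cost, which is exactly why organizations fixate on one error type.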
Because of this, organizations often focus on eliminating the more costly error. The problem is that at a certain point this also undermines the goal you’re trying to pursue. Maybe a doctor ends up misdiagnosing a specific condition too often, leading people to stop trusting their advice on unrelated medical issues. Maybe a credit card company gets too conservative in its issuance and an entire class of people is excluded from accessing credit.
As Cathy O’Neil demonstrates in Weapons of Math Destruction, one of my all-time favorite books, these decisions can have massive downstream impacts.
There are ways to model both types of error and keep testing against them. That said, a lot of companies, even surprisingly large ones, don’t do this for any number of reasons. If you have algorithmic decision making driving a part of your business, it’s worth asking the team in charge of that algorithm how they’re making sure to minimize false negatives just as much as false positives.
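If you want to ask that question precisely, the place to start is measuring both rates separately rather than a single accuracy number. A minimal sketch, using invented labels and predictions:

```python
# Hypothetical data: 1 = positive (e.g. "has the condition"), 0 = negative.
actual    = [1, 1, 1, 0, 0, 0, 0, 1]
predicted = [1, 0, 1, 0, 1, 0, 0, 0]

# Type I errors: predicted positive when actually negative (false positives).
false_positives = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
# Type II errors: predicted negative when actually positive (false negatives).
false_negatives = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)

# Rates are computed against the relevant class, not the whole dataset.
false_positive_rate = false_positives / sum(1 for a in actual if a == 0)
false_negative_rate = false_negatives / sum(1 for a in actual if a == 1)

print(false_positive_rate)  # 0.25 — 1 of 4 negatives wrongly flagged
print(false_negative_rate)  # 0.5  — 2 of 4 positives missed
```

Note that overall accuracy here is a respectable-sounding 62.5%, while the false negative rate is 50% — exactly the kind of gap that stays invisible unless someone asks for both numbers.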