Precise Thinking
Think of the binary addition operator, "+".
If you write "foo + bar", and foo and bar are both strings, then "+" does string concatenation. If foo and bar are numbers, it does addition. And if you try to add a string and a number, you get a TypeError.
Now imagine someone doesn't understand that the string "1" does not mean the same thing as the integer 1.
They may write "1" + 1, and be baffled by the error.
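Here's roughly how that plays out at the Python prompt (assuming Python, where mixing the two raises a TypeError; the exact wording of the error message varies a bit between versions):

```python
>>> "foo" + "bar"   # two strings: concatenation
'foobar'
>>> 1 + 2           # two numbers: addition
3
>>> "1" + 1         # a string plus a number: neither one
Traceback (most recent call last):
  ...
TypeError: can only concatenate str (not "int") to str
```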
You don't have this problem, because you can place the string "1" and the number 1 in different categories in your mind. You understand they are related in a sense, but there are logical rules to follow in how they can and cannot interact.
And you realize that the int 1 and the float 1.0 are also different...
But they have more permissive rules in how they can interact.
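For instance, again sketching this in Python, the int and the float mix freely, while the string stays walled off:

```python
>>> 1 == 1.0   # an int and a float can compare equal
True
>>> 1 + 1.0    # and they mix in arithmetic; the int is converted to a float
2.0
>>> "1" == 1   # but a string never compares equal to a number
False
```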
Now take this simple example, and extend the idea to class hierarchies: comparing instances of different subclasses, say.
Similar concept, yes? So you can reason about how the components of your program can be combined and interact, and how they cannot.
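Here's a minimal sketch of that idea, using a made-up Animal/Dog/Cat hierarchy purely for illustration:

```python
class Animal:
    def __init__(self, name):
        self.name = name

class Dog(Animal):
    pass

class Cat(Animal):
    pass

rex = Dog("Rex")
felix = Cat("Felix")

# Both belong to the broader Animal category, so code written
# against Animal can accept either one...
print(isinstance(rex, Animal))    # True
print(isinstance(felix, Animal))  # True

# ...but a Dog is not a Cat, and by default instances of different
# subclasses don't compare equal. You decide what rules govern how
# they interact, just as with "1" and 1.
print(isinstance(rex, Cat))  # False
print(rex == felix)          # False
```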
This is an example of how, as programmers, we learn to be precise in our thinking.
It's a skill we develop, and get better at over time.
And it comes from making correct distinctions. If we don't distinguish, in our minds, between "1" and 1...
Or if we make distinctions that divide our attention without any benefit to our reasoning...
Then we are limited in how effectively we can write software.
This precision in thinking is massively valuable in the world. It's a great benefit to us in doing what we do.
Whenever you find yourself running into the same kind of bug again and again, ask yourself:
- Can I be more precise in thinking about this problem?
- Can I start making some useful distinction I haven't been making before now?
- Am I making a false or unhelpful distinction that's just confusing me?