I apologize for the title; I think it's funny to give an article about humility a pretentious title. I don't want to get mired in epistemology (I refer you to Hume for a philosophical treatment of the subject), but simply to report on a recent encounter I had with the limits of my own knowledge.
I was working with a Python script that had a parameter with a default value of the empty dictionary. I'll spare you the original, but challenge you to determine, by inspection, the output of this program:
def f(x, y, d={}):
    d[x] = y
    return len(d)

f(1, 2)
print(f(2, 3))
Got it figured out? The answer is 2, of course. The function f returns the number of elements in the dictionary d. Each call to f adds a single element to the dictionary. We print out the return value of the second call, when the dictionary looks like {1: 2, 2: 3}.
Obvious? Not quite. It is entirely non-obvious that the default value of d would persist across function calls. That was the mistake I made; I believed that each time f was called and used the default value for d, d would be re-initialized to the empty dictionary {}.
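One way to see this directly is to look at where Python keeps the default: the value is created once, at definition time, and stored on the function object itself. Here's a small sketch (not from the original script; in the Python 2 of 2007 the attribute is spelled f.func_defaults, f.__defaults__ in Python 3):

def f(x, y, d={}):
    d[x] = y
    return len(d)

# The default dictionary is built once, when the 'def' statement runs,
# and stored on the function object. Every call that omits d reuses it.
print(f.__defaults__)    # ({},)
f(1, 2)
print(f.__defaults__)    # ({1: 2},)  -- the same dict, now mutated
f(2, 3)
print(f.__defaults__)    # ({1: 2, 2: 3},)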
A moment's thought reveals why that can't happen: the right-hand side of the default assignment is evaluated only once, when the function is defined. For example:
def g(x=(1+1)):
    return x
This is valid and g() returns 2. Did I believe that the function captures the expression for the default value as code and evaluates it each time the default was used? Apparently so. But of course the default value of d is simply a reference, just like everything else. The 'problem' occurs because I'm mutating the object being referenced. In general, using mutable objects without considering side effects will get us into trouble.
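The standard way to get the behavior I expected (a common idiom, not something the original script used) is to make the default None and build the dictionary inside the body, so the expression really is evaluated on every call:

def f(x, y, d=None):
    # The dict literal now runs inside the function body, so each call
    # that omits d starts from its own fresh, empty dictionary.
    if d is None:
        d = {}
    d[x] = y
    return len(d)

f(1, 2)          # returns 1
print(f(2, 3))   # prints 1 this time, not 2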
Getting All Philosophical
So what happened here? A feature of a programming language surprised me, and it took some work to figure out what was going on. That doesn't seem like a big deal. But...
I would have sworn up and down that I'd get a fresh, empty dictionary each time I called it. Or rather, I wouldn't have, because it wouldn't even enter the horizon of my consciousness. My mental model was wrong, and I didn't even know it.
It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so. - Josh Billings
That kind of ignorance is wicked: you don't know what you don't know. How many of the things that I 'know' are wrong? I don't know, but I can estimate from past experience that it's non-zero. This is a somewhat terrifying prospect... so I'm glad to know that there's a simple solution:
If you want to find the limits of your knowledge, test it against reality.
There's one itsy-bitsy problem with that: by relying on experimentation rather than proof or logic, we've abandoned all hope of sure knowledge, and have to deal in probabilities instead. But proof isn't sure either; even mathematicians make mistakes. We severely limit ourselves if we restrict our thought to the realm of the provable, and it doesn't even give us sure knowledge, because we can never be sure of ourselves.
Beware of bugs in the above code; I have only proved it correct, not tried it. - Donald Knuth
Testing doesn't just find mistakes; it finds the mistakes that we're blind to. This is true of code, and it's true of knowledge. So I guess all I really had to say was, we should test our own knowledge as carefully as we would test our code.
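To make that concrete: a single small test encoding my mental model would have exposed it on the spot. Something like this sketch with the standard unittest module (hypothetical; no such test existed in the original script):

import unittest

def f(x, y, d={}):
    d[x] = y
    return len(d)

class TestFreshDefault(unittest.TestCase):
    def test_default_is_fresh_each_call(self):
        # Encodes my (wrong) belief that every call using the default
        # gets a brand-new empty dictionary.
        self.assertEqual(f(1, 2), 1)
        self.assertEqual(f(2, 3), 1)  # fails: the shared dict makes this 2

if __name__ == '__main__':
    unittest.main()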
- Oran Looney, May 16th 2007