It seems memory management really plays a role everywhere here, be it a pointer, a reference, or a vector of objects.
It's a big source of errors, but not the only one. Index errors and off-by-one errors are also a rich source :p
My defense against index errors is to use one form of arrays and stick with it. I have used 0-based arrays since my youth, and can program them faultlessly in my sleep now :p
I find off-by-one errors are best detected by writing the code, then taking a simple case (usually the first and/or the last entry), and checking that the index expression is correct by computing the index value in your head. It also keeps your brain trained in doing calculations with numbers, which is always a useful skill :)
Finally, pointer manipulations are always fun. Coding a routine to insert or remove a node in a doubly linked list or the like works best for me by drawing the starting situation on a piece of paper, then coding a statement, updating the situation on the paper, coding another statement, etc. I often also try a few other edge cases on paper that way, to make sure it always works.
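As a sketch of the kind of routine I mean (the `Node` struct and function name are invented for illustration), removal from a doubly linked list comes down to fixing the two neighbour links and the head pointer, exactly the steps you'd draw on paper:

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical minimal node type, names made up for this example.
struct Node {
	int value;
	Node *prev;
	Node *next;
};

/* Remove 'node' from the list whose first element is pointed to by *head.
 * Worked out on paper: fix the neighbour links, then the head pointer. */
void RemoveNode(Node **head, Node *node)
{
	assert(head != NULL && node != NULL);
	if (node->prev != NULL) node->prev->next = node->next;
	if (node->next != NULL) node->next->prev = node->prev;
	if (*head == node) *head = node->next; // Edge case: removing the first node.
	node->prev = NULL;
	node->next = NULL; // Detach fully, so stale links cannot be followed.
}
```

The edge cases you'd try on paper (first node, last node, only node) all map onto one of the `if` tests above.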
Quite a challenge, it seems - an efficient programmer should really be aware at all times of what s/he's doing, line by line, when it comes to creating objects of a class, and then... (what to do with them? delete them? nope, delete can only be applied to pointers initialized with "new"). Hmm... it really takes some time to absorb the whole concept and get it into your mind permanently, and thus become immune to the most painful errors - the unpredictable run-time ones.
There are other tricks to reduce the amount of work. One thing I do is be very paranoid: I don't trust myself too much with the zillions of details I need to take care of. My code is littered with assert checks, so the computer verifies at all times whether my ideas hold.
As soon as I find that some condition should always hold at some point in the code, I throw in an assert to check it. This often happens with input values of a function or output results, but also, for example, when computing a size, allocating memory, and then filling that memory. At the end, the size of the filled memory should match the computed size.
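A small sketch of that compute-allocate-fill pattern (the function and its names are invented for illustration): the size is computed up front, the filling is counted, and the assert at the end catches any mismatch between the two.

```cpp
#include <cassert>
#include <cstdlib>
#include <cstring>

/* Hypothetical example: build a buffer holding 'count' copies of a string.
 * First compute the size, then fill, and assert both agree at the end. */
char *FillBuffer(const char *text, int count)
{
	assert(text != NULL && count > 0); // Check the input values.
	size_t length = strlen(text);
	size_t size = count * length + 1; // Computed size, including terminator.
	char *buffer = static_cast<char *>(malloc(size));
	assert(buffer != NULL); // Allocation is expected to succeed here.

	size_t written = 0;
	for (int i = 0; i < count; i++) {
		memcpy(buffer + written, text, length);
		written += length;
	}
	buffer[written++] = '\0';
	assert(written == size); // Filled size must match the computed size.
	return buffer;
}
```

If the fill loop ever disagrees with the size computation (a classic off-by-one spot), the final assert fires immediately instead of corrupting memory silently.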
Similarly, when you expect a pointer to be assigned only once, check that it is NULL before the assignment (assuming you can trust its value). If you are afraid of memory leaks, check that a pointer is indeed NULL just before it goes out of scope, for example in a destructor.
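Both pointer checks fit naturally in a small owner class; this is a sketch with invented names, not code from any real project:

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical class owning a single buffer, to illustrate the two checks.
class Holder {
public:
	Holder() : data(NULL) { }

	~Holder()
	{
		assert(this->data == NULL); // Leak check: owner must have freed it.
	}

	void SetData(int *d)
	{
		assert(this->data == NULL); // Expect the pointer to be set only once.
		this->data = d;
	}

	void ClearData()
	{
		delete this->data;
		this->data = NULL; // Reset, so the destructor check holds.
	}

	int *data;
};
```

Forget to call ClearData, or call SetData twice, and an assert fires at the exact line where the assumption broke.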
Similarly, add "default: assert(false);" in a switch when you don't expect unhandled values.
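For example (the enum and function are made up for illustration), the default case documents and enforces that every value is handled above it:

```cpp
#include <cassert>
#include <cstddef>
#include <cstring>

// Hypothetical enum; real code would have its own set of values.
enum Direction { DIR_NORTH, DIR_EAST, DIR_SOUTH, DIR_WEST };

const char *DirectionName(Direction d)
{
	switch (d) {
		case DIR_NORTH: return "north";
		case DIR_EAST:  return "east";
		case DIR_SOUTH: return "south";
		case DIR_WEST:  return "west";
		default: assert(false); // All values should be handled above.
	}
	return NULL; // Not reached; keeps the compiler happy.
}
```

If someone later adds a fifth direction and forgets to extend the switch, the program stops right there instead of quietly returning garbage.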
It may seem weird to add these checks and make the program crash more often. However, it's intentional. The idea behind it is that the program is checking things that should never happen. Thus if one does happen, something is very seriously wrong. Continuing is basically pointless, since basic assumptions about the program are broken.
If you do this right, the number of crashes due to failed checks is much larger than the number of crashes due to other causes.
While a crash is bad, the good news is that you caught the problem before a real crash, i.e. you're closer to the real cause. In my experience, it's usually also pretty reproducible, which is good for bug hunting.
Also, you found an "impossible" situation, which is often a good clue as to what could be wrong. (And quite a few times, the computer was right and my ideas were wrong :) )
A final trick I use is to have the system generate a core dump on a crash, and to always run a binary with debugging data attached. That way I can load a core dump into the debugger, look at the stack, examine values, etc. Very useful for getting an idea of what the program was attempting to do, just by looking at the names of the nested function calls.
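On a typical Linux setup with gcc and gdb, the workflow looks roughly like this (file names are examples; where core dumps land is system-dependent, so you may need to check your system's core_pattern settings):

```shell
ulimit -c unlimited           # Allow the shell to write core dumps.
g++ -g -o myprog myprog.cpp   # Build with debugging data (-g) included.
./myprog                      # On a crash this now leaves a core file.
gdb ./myprog core             # Load the dump; 'bt' then shows the nested calls.
```

Inside gdb, `bt` prints the call stack, and `frame` plus `print` let you examine the values the program was working with at the moment of the crash.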
My project: Messing about in FreeRCT, dev blog, and IRC #freerct at oftc.net