• Redkey · 1 year ago

    I taught myself programming as a kid in the 80s and 90s, and just got used to diagnostic print statements because it was the first thing that occurred to me and I had no (advanced) books, mentors, teachers, or Internet to tell me any different.

    Then in university one of my lecturers insisted that diagnostic prints are completely unreliable and that we must always use a debugger. He may have overstated the case, but I saw that he had a point when I started working on the university’s time-sharing mainframe systems and found my work constantly being preempted and moved around in memory in the middle of critical sections. Diagnostic prints would disappear, or worse, appear where, in theory, they shouldn’t be able to, and they would come and go like a restless summer breeze. But for all that the lecturer banged on about debuggers, he hardly taught us anything about how to actually use them, and they confused the hell out of me, so I made it through the rest of my degree without using debuggers except for one part of one subject (the “learn about debuggers” part).
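
    (If you haven’t seen the effect yourself, here’s a minimal sketch of it in modern C with pthreads, nothing like the mainframe environment I was actually on, just an illustration. Two threads writing diagnostics through the same buffered stdout interleave their output differently on every run, and buffered output can be lost outright if the process dies before a flush, which is one way a print that definitely executed never shows up.)

        /* Minimal sketch: two threads sharing one buffered stdout.
         * Each fputc() call is thread-safe on its own, but nothing orders the
         * calls between threads, so characters from the two tags interleave
         * differently on every run. If stdout is block-buffered (e.g. redirected
         * to a file) and the process crashes before a flush, buffered
         * diagnostics are simply lost. */
        #include <pthread.h>
        #include <stdio.h>

        static void *worker(void *arg)
        {
            const char *tag = arg;
            for (int i = 0; i < 1000; i++) {
                /* Print the tag character by character to make the
                 * interleaving between threads obvious. */
                for (const char *p = tag; *p; p++)
                    fputc(*p, stdout);
                fprintf(stdout, " reached checkpoint %d\n", i);
            }
            return NULL;
        }

        int main(void)
        {
            pthread_t a, b;
            pthread_create(&a, NULL, worker, (void *)"thread-A");
            pthread_create(&b, NULL, worker, (void *)"thread-B");
            pthread_join(a, NULL);
            pthread_join(b, NULL);
            return 0;
        }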

    Over 20 years later, after a little professional work and a lot of personal projects and making things for other non-coding jobs I’ve had, I still haven’t really used debuggers much. But lately I’ve been forcing myself to use them sometimes, partly to help me pick apart quirks in external libraries that I’m linking, and partly because I’d like to start using superscalar instructions and threading in my programs, and I remember how that sort of thing screwed up my diagnostic prints in university.
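
    (The kind of session I mean, sketched with gdb since that’s what I’ve been practising on; the file and symbol names here are just placeholders. Instead of sprinkling prints through a critical section, you stop the whole process and look at every thread at once.)

        $ cc -g -O0 -pthread race_demo.c -o race_demo   # keep symbols, no optimisation
        $ gdb ./race_demo
        (gdb) break worker            # stop whenever any thread enters worker()
        (gdb) run
        (gdb) info threads            # list every thread and where it stopped
        (gdb) thread apply all bt     # backtraces of all threads at this instant
        (gdb) print i                 # inspect a local without adding a print statement
        (gdb) continue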