I don’t entirely subscribe to the first paragraph – I’ve never worked at a place so dear to me that it spurred me to spend time thinking about its architecture (beyond the usual rants). Other than that, spot on.

  • @Mikina · 26 points · 2 months ago

    I’m starting to think that “good code” is simply a myth. They’ve drilled a lot of “best practices” into me during my masters, yet no matter how much you try, you will eventually end up with something overengineered, or with a new feature or a bug that’s really difficult to squeeze into whatever you’ve chosen.

    But, ok, that doesn’t prove anything, maybe I’m just a bad programmer.

    What made me sceptical, however, isn’t that I never managed to do it right in any of my projects, but the last two years of experience working on porting games, some of them well-known and larger games, to consoles.

    I’ve already seen several codebases, each one with a different take on how to structure the core game architecture, and each one inevitably had some horrible issues that turned up during bugfixing. Making changes was hard: either it was overengineered and almost impenetrable, or we had to resort to ugly hacks, since there simply wasn’t a way to do it properly without rewriting a huge chunk.

    Right now, my whole programming knowledge about game architecture is a list of “this doesn’t work in the long run”, and if I were to start a new project, I’d be really at a loss about what the fuck I should choose. It’s a hopeless battle; every approach I’ve seen or tried still ran into problems.

    And I think this may be the author’s problem - it’s really easy to see that something doesn’t work. “I’d have done it differently” or “There has to be a better way” is something that you notice very quickly. But I’m certain that whatever he proposed, it’d just lead to a different set of problems. And I suspect that’s what may be happening with his leads not letting him stick his nose into stuff. They have probably seen that before, and it rarely helps.

    • andyburke · 20 points · 2 months ago

      We have an almost total lack of real discipline and responsibility in software engineering.

      “Good enough” is the current gold standard, so you get what we have.

      If we were more serious there wouldn’t be 100 different languages to choose from, just a handful based on the requirements, and those would become truly time-worn, tested, and reliable.

      Instead, we have no apprenticeships, no unions, very little institutional knowledge older than a few years. We are pretending at being an actual discipline.

      • kbin_space_program · 13 points · 2 months ago

        Best project I’ve worked on: we implemented a strict code standard, based on the standard used by the firm that contracted my team to do the work.

        Worked perfectly. Beautiful, maintainable code. Still used today without major reworks; it doesn’t need them. The front end got several major updates, but the back end uses what is now called a microservice architecture, and we implemented it long before the phrase was common.

        Got the opportunity to go back to it this year. Devs at the second firm not only ignored all of the documentation we put out, they ignored their own coding standards document.

          • @Kissaki · 1 point · 2 months ago

            When I miss them I create them. Then they’re there.

    • magic_lobster_party · 14 points · 2 months ago

      There’s bad code and then there’s worse code. “Best practices” might help you to avoid writing worse code.

      Good code might appear occasionally. In the rare event when it’s also useful, people will start to have opinions about what it should do. Suddenly requirements change and now it’s bad code.

      • @nous · 5 points · 2 months ago

        “Best practices” might help you to avoid writing worse code.

        TBH I am not sure about this. I have seen many “best practices” make code worse, not better. Not because the rules themselves are bad, but because people take them as religious gospel and apply them to every situation in hopes of making their code better, without actually looking at whether it is making their code better.

        For instance I see this a lot in DRY code. While the rule itself is useful to know and apply, it is too easily over-applied, removing any benefit it originally gave and resulting in overly abstract code. I have lost count of the number of times I have added duplication back into code to remove a layer of abstraction that was not working, only to maybe reapply it in a different way, often keeping some duplication.
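
        A contrived sketch of what I mean (made-up names, not from any real codebase): the “shared” helper keeps sprouting options for unrelated callers, while splitting it back apart leaves a little duplication but much clearer code.

        ```typescript
        // Over-applied DRY: one "shared" helper serves two unrelated callers,
        // so every new requirement grows another flag.
        function formatName(first: string, last: string,
                            opts: { upper?: boolean; forInvoice?: boolean }): string {
          const name = opts.forInvoice ? `${last}, ${first}` : `${first} ${last}`;
          return opts.upper ? name.toUpperCase() : name;
        }

        // After adding the duplication back: two small functions that can evolve
        // independently, even though today they look almost identical.
        function displayName(first: string, last: string): string {
          return `${first} ${last}`;
        }

        function invoiceName(first: string, last: string): string {
          return `${last.toUpperCase()}, ${first}`;
        }
        ```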

        Suddenly requirements change and now it’s bad code.

        This only leads to bad code when people get too afraid to refactor things in light of the new requirements. Which sadly happens far too often. People seem to like to keep what was there already and follow existing patterns even well after they are no longer suitable. I have made quite a lot of bad code better by just ripping out the old patterns and putting back something that better fits the current requirements - quite often in code I have written before and others have added to over time.

        • Dark Arc · 3 points · edited · 2 months ago

          This only leads to bad code when people get too afraid to refactor things in light of the new requirements. Which sadly happens far too often. People seem to like to keep what was there already and follow existing patterns even well after they are no longer suitable. I have made quite a lot of bad code better by just ripping out the old patterns and putting back something that better fits the current requirements - quite often in code I have written before and others have added to over time.

          Yup, this is part of what’s led me to advocate for SRP (the single responsibility principle). If you have everything broken down into pieces where the description of the function/class is something like “given X, this function does Y” (and unrelated things thus aren’t unnecessarily coupled), it makes reorganizing the higher-level logic to fit the current requirements a lot easier.
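
          A rough (made-up) sketch of the “given X, this function does Y” shape:

          ```typescript
          // Each piece does one describable thing; the high-level flow can then be
          // reorganized when requirements change without touching the pieces.
          interface Order { id: string; items: { price: number; qty: number }[] }

          function orderTotal(order: Order): number {
            return order.items.reduce((sum, item) => sum + item.price * item.qty, 0);
          }

          function applyDiscount(total: number, percent: number): number {
            return total * (1 - percent / 100);
          }

          function formatReceipt(order: Order, total: number): string {
            return `Order ${order.id}: ${total.toFixed(2)}`;
          }

          // The "policy" lives in one short function, so reordering or swapping steps
          // (new discount rules, a different receipt format) stays a local change.
          function checkout(order: Order): string {
            return formatReceipt(order, applyDiscount(orderTotal(order), 10));
          }
          ```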

          For instance I see this a lot in DRY code. While the rule itself is useful to know and apply, it is too easily over-applied, removing any benefit it originally gave and resulting in overly abstract code. I have lost count of the number of times I have added duplication back into code to remove a layer of abstraction that was not working, only to maybe reapply it in a different way, often keeping some duplication.

          Preach. DRY is IMO the most abused/misunderstood best practice, particularly by newer programmers. DRY is not about compressing your code/minimizing line count. It’s about … avoiding things like writing the exact same general algorithm (e.g., a sort) inline in a dozen places. People are really good at finding patterns and “overfitting”, making up abstractions that make no sense.
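
          i.e., the kind of genuine repetition DRY is actually for (contrived example):

          ```typescript
          // The same general operation written inline in a dozen places...
          //   health = health < 0 ? 0 : health > maxHealth ? maxHealth : health;
          //   volume = volume < 0 ? 0 : volume > 1 ? 1 : volume;
          // ...is what DRY is about: extract the one general function and reuse it.
          function clamp(value: number, min: number, max: number): number {
            return Math.min(Math.max(value, min), max);
          }

          const health = clamp(-5, 0, 100); // 0
          const volume = clamp(1.3, 0, 1);  // 1
          ```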

          • @nous · 4 points · 2 months ago

            Yup, this is part of what’s led me to advocate for SRP (the single responsibility principle).

            Even that gets overused and abused. My big problem with it is: what is a single responsibility? It is poorly defined and leads people to think that the smallest possible thing is one responsibility. But when people think like that, they create thousands of one-to-three-line functions, which just ends up obscuring what the program is trying to do. Following logic through deeply nested function calls is IMO just as bad, if not worse, than having everything in a single function.
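
            A contrived example of the kind of fragmentation I mean:

            ```typescript
            // Taken to the extreme, every line becomes its own "responsibility"...
            function isPositive(n: number): boolean { return n > 0; }
            function double(n: number): number { return n * 2; }
            function addOne(n: number): number { return n + 1; }
            function score(n: number): number { return isPositive(n) ? addOne(double(n)) : 0; }

            // ...when a single readable function would have told the whole story:
            function scoreValue(n: number): number {
              return n > 0 ? n * 2 + 1 : 0;
            }
            ```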

            There is a nice middle ground where SRP makes sense, but, like all patterns, nobody ever talks about where that line is. Overuse of any pattern, methodology or principle is a bad thing, and it is very easy to do if you don’t think about what it is trying to achieve and notice when applying it no longer fits that goal.

            Basically, everything in moderation and never lean on a single thing.

            • Dark Arc · 1 point · edited · 2 months ago

              Hmmm… That’s true, my rough litmus test is “can you explain what this thing does in fairly precise language without having to add a bunch of qualifiers for different cases?”

              If you meet that bar, the function is probably fine/doesn’t need to be broken up further.

              That said, I don’t particularly care how many functions I have to jump through or what their line count is because I can verify “did the function do the thing it says it’s supposed to do?” after it’s called in a debugger. If it did, then I know my bug isn’t there. If it didn’t, I know where to look.

              Just like with commits, I’d rather have more small commits to feed through git bisect than a few larger commits, because it makes identifying where/when a contract/test case/invariant was violated much more straightforward.

        • magic_lobster_party · 2 points · 2 months ago

          For instance I see this a lot in DRY code.

          DRY is one of the most misunderstood practices. If you read The Pragmatic Programmer (where DRY was coined), they make it clear that DRY doesn’t mean “avoid all repetition at all costs”. Just because two pieces of code look identical doesn’t necessarily mean they are the same. If they can grow independently of each other, then they’re not repetitions according to DRY and should be left alone.
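
          A contrived example: these two look identical today, but they answer different questions and will change for different reasons, so merging them would couple unrelated rules.

          ```typescript
          // Identical today, but driven by different business rules: the shipping
          // threshold and the loyalty threshold can change independently.
          function qualifiesForFreeShipping(orderTotal: number): boolean {
            return orderTotal >= 50;
          }

          function qualifiesForLoyaltyBonus(orderTotal: number): boolean {
            return orderTotal >= 50;
          }
          ```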

          • @nous · 3 points · 2 months ago

            Yup, and that is because people only ever learn DRY coding by its name. They never really learn what it originally meant, when to use it, and, more importantly, when not to use it. So loads of people apply it religiously and overuse it. This is true of all the popular catchily named methodologies/principles, etc.

    • @[email protected] · 12 points · 2 months ago

      Good code is code that’s easy to delete.

      I’m not a game dev, but it’s got a reputation for being more of a software engineering shit show than other software industries, which your story only reinforces.

      • magic_lobster_party · 6 points · 2 months ago

        Good code is code that’s easy to delete.

        This is why there’s nothing more permanent than a temporary fix.

    • Dark Arc · 5 points · 2 months ago

      I’ll contend that there is such a thing as good code. I don’t think experienced devs always do the best job of passing on what works and what doesn’t, though. Universities certainly could do more software engineering/architecture.

      My personal take is that SRP (the single responsibility principle) is the #1 thing to keep in mind. In my experience DRY (do not repeat yourself) often takes precedence over SRP – IMO because DRY is easy to (mis-)understand – and that ends up making some major messes when good/reasonable code is rewritten into some ultra-compact (typically) inheritance or template-based mess that’s “fewer lines of code, so better.”

      I’ve never regretted using composition (and thus having a few extra lines and a little bit more boilerplate) over inheritance. I’ve similarly never regretted breaking down a function into smaller functions (even if it introduces more lines of code). I’ve also never regretted generalizing code that’s actually general (e.g., a sum N elements function is always a sum N elements function).
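
      A quick (made-up) sketch of the composition point, with a game-ish flavor:

      ```typescript
      // Instead of Player extends PhysicsBody extends Renderable ..., the Player
      // *has* the parts it needs and delegates to them.
      class Physics {
        velocity = 0;
        step(dt: number): number { return this.velocity * dt; }
      }

      class Sprite {
        constructor(public image: string) {}
        draw(): string { return `drawing ${this.image}`; }
      }

      class Player {
        private physics = new Physics();
        private sprite = new Sprite("player.png");

        update(dt: number): number { return this.physics.step(dt); } // a bit of boilerplate...
        render(): string { return this.sprite.draw(); }              // ...but no fragile hierarchy
      }
      ```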

      The most important thing with all of these best practices though is “apply it if it makes sense.” If you’re writing some code and you’ve got a good reason to have a function that does multiple things … just write the function, don’t bend over backwards doing something really weird to “technically” abide by the best practice.

      • magic_lobster_party · 4 points · edited · 2 months ago

        I’ve never regretted using composition (and thus having a few extra lines and a little bit more boilerplate) over inheritance.

        I second this. It doesn’t necessarily eliminate bad code, but it certainly makes it more manageable.

        Every time inheritance is used, it almost certainly causes pain points that are hard to revert. It leads to all these overly abstracted class hierarchies that give OOP a bad rep. And it can be easily avoided in almost all cases.

        Just use interfaces if you really need polymorphism. Often you don’t even need polymorphism at all.
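
        Something like this (contrived): the caller depends on a small interface, and anything that satisfies it works, no base class required.

        ```typescript
        // Polymorphism via a small interface instead of an inheritance hierarchy.
        interface Damageable {
          applyDamage(amount: number): void;
        }

        class Crate implements Damageable {
          hp = 10;
          applyDamage(amount: number): void { this.hp -= amount; }
        }

        class Enemy implements Damageable {
          health = 100;
          armor = 5;
          applyDamage(amount: number): void { this.health -= Math.max(0, amount - this.armor); }
        }

        // The caller only cares about the capability, not the class hierarchy.
        function explode(targets: Damageable[], damage: number): void {
          for (const t of targets) t.applyDamage(damage);
        }

        explode([new Crate(), new Enemy()], 25);
        ```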