• 0 Posts
  • 121 Comments
Joined 2 years ago
Cake day: June 18th, 2023

  • atheken to Programming · Dead Man Switch
    9 months ago

    Many of the “frivolous” lawsuits you’ve actually heard about were publicized that way as part of smear campaigns against the plaintiffs.

    The hot-coffee lawsuit against McDonald’s is a great example.

    Yes, people do dumb stuff, or fantasize that a situation could be their golden ticket, but that is the cost of having a civil judicial system. You either have to allow some crazy in, or prejudge cases and risk filtering out legitimate ones. I just don’t have the energy, or care enough, to be annoyed by inefficiency in a system that I rarely interact with.

    The criminal system is a different story, because the way we currently prosecute different kinds of crimes offends my basic sense of fairness.


  • atheken to Programming · Dead Man Switch
    9 months ago

    Is it the word “lawyer” that bothers you, or spending a small amount of money?

    Lawyers are bound by law and an ethical code to conduct business in a particular way. They also tend to have support infrastructure and continuity plans that private individuals do not.

    If making sure something actually happens is important to you, this is the best option.





  • Not that I would recommend it, but you can also try something like extension methods on Func.

    public static T RunAndLog<T>(this Func<T> f) {
        // LogStart/LogEnd stand in for whatever logging you want around the invocation.
        LogStart(f);
        var x = f();
        LogEnd(f);
        return x;
    }
    

    I think you may have enough diagnostic info with reflection to write something meaningful, and you can add overloads with generic parameters for the func params, and/or Action.
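
    For illustration, here is a sketch of what those overloads might look like (an assumption about the shape, not code from an existing library; Console.WriteLine stands in for real logging, and the method name comes from reflection on the delegate):

    // Overloads live in the same static class as RunAndLog above.
    public static TResult RunAndLog<T, TResult>(this Func<T, TResult> f, T arg) {
        Console.WriteLine($"start: {f.Method.Name}");
        var result = f(arg);
        Console.WriteLine($"end: {f.Method.Name}");
        return result;
    }

    public static void RunAndLog(this Action f) {
        Console.WriteLine($"start: {f.Method.Name}");
        f();
        Console.WriteLine($"end: {f.Method.Name}");
    }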

    You may also create an IDisposable that logs a start and end statement around a function call:

    using (var _ = new CallLogger("func name")) {
        f();
    }
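
    A minimal sketch of what that CallLogger type might look like (hypothetical; the Console.WriteLine calls are placeholders for a real logger):

    public sealed class CallLogger : IDisposable {
        private readonly string _name;

        public CallLogger(string name) {
            _name = name;
            Console.WriteLine($"start: {_name}");
        }

        // Writes the closing log line when the using scope ends.
        public void Dispose() => Console.WriteLine($"end: {_name}");
    }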
    

    Personally, I wouldn’t do either of these. This level of logging produces an enormous amount of noise, which makes finding the signal very difficult. I would focus more energy on writing unique and helpful messages where they are important, and on using git grep to find the related source code when they show up in production logs (assuming you don’t log a stack trace). This level of call logging can be temporarily useful while debugging or reasoning through an app, but rarely belongs in production.


  • Writing fast unit tests will require some refactoring that could end up being pretty extensive.

    For example, you mentioned “cloud storage” - if this is not already behind an interface, one ticket could be to define an interface for accessing “cloud storage” so that it can be mocked for most tests, while the concrete implementation is tested directly to confirm the integration works. Try to pare that interface down to as few methods as possible, and only expose the parameters you’re actually using. You can add more later if it’s absolutely necessary.
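
    As a rough sketch (the interface name and methods are invented for illustration, and it assumes the usual System.IO/System.Threading.Tasks/System.Collections.Generic usings):

    // Narrow seam around the storage dependency; unit tests use a fake like the one below.
    public interface ICloudStorage {
        Task UploadAsync(string key, Stream content);
        Task<Stream> DownloadAsync(string key);
    }

    // In-memory fake for fast unit tests. The real implementation wraps the cloud SDK
    // and is covered by a small number of integration tests instead.
    public sealed class InMemoryCloudStorage : ICloudStorage {
        private readonly Dictionary<string, byte[]> _items = new();

        public async Task UploadAsync(string key, Stream content) {
            using var buffer = new MemoryStream();
            await content.CopyToAsync(buffer);
            _items[key] = buffer.ToArray();
        }

        public Task<Stream> DownloadAsync(string key) =>
            Task.FromResult<Stream>(new MemoryStream(_items[key]));
    }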

    Do this for anything that does I/O and/or is CPU intensive.

    So, for tickets, I’d basically say: one per refactoring.

    Going forward, writing “unit tests” should not be a separate ticket; it should be factored into the estimates for the original stories, and nothing should go out without appropriate tests. The operational burden will decrease over time.

    QA should have their own plan for how they want to test the application. Usually this is a suite per section of the app. If your app has an API, that probably gives a nice logical breakdown of the different areas, each of which could have its own ticket for adding QA-level test suites. The tests that developers write should only be additive and reduce QA’s workload. What you want to be sure of is that change sets are getting reviewed and through the entire pipeline without getting logjammed in any stage. Ideally, individual PRs are started and deployed in less than a week.

    If you’re interested in more techniques, check out the book “Working Effectively with Legacy Code.” It has a lot of patterns for adding tests to existing codebases.


  • You are really looking for architecture diagrams. These are extremely rare in most projects, open source or otherwise.

    The reason you don’t see a lot of documentation on algorithms used or architecture is that most of the time the code is not actually novel. It’s like asking a plumber to describe the physical properties of the pipe they used on a job. They’d say “schedule 40” or “copper” and a dimension. They would not describe the manufacturing process or chemical composition of the pipe. The materials are pretty standard and only require special descriptions for when and why they deviate from those standards.



  • Considering the implications of relying on an external company as the registry, I don’t think custom domains are really “vanity” as much as reserving agency to move the code if it becomes necessary. I’m perfectly happy with GitHub, but would rather my modules didn’t break if they implement a policy change at some future date. I also don’t like the implication that “GitHub owns” my repo due to the import path stuff.

    That being said, I wonder if the same thing could be achieved with a simple reverse proxy/CDN and a few rewrite rules. Ideally, the only cost to a typical maintainer would be the domain name, and the rest could be hosted on free infrastructure (Cloudflare would seem like a reasonable choice).


  • If these systems could only reorganize and regurgitate 1000 creative works, we would not be having this conversation. It’s literally because of the scale that this is even relevant. The scope of consumption by these systems, and the relative ease of access to them, are what make the infringement/ownership question matter.

    We literally went through this exercise with fair use as it pertains to CD/DVD piracy in the ’90s and Napster in the early 2000s. Individuals making copies was already robbing creative artists of royalties before those technologies existed, but the scale, ubiquity, and fidelity of those systems enabled large-scale infringement in a way that individual copying and reproduction previously could not.

    I’m not saying these are identical examples, but the magnitude is a massive factor in why this issue needs to be regulated/litigated.




  • As an end result, maybe. But it also means that you get specific feedback on how to author it correctly, and can fix it before pushing it live.

    IDK, I lived through that whole era, and I’d attribute it more to the fact that HTML is easy enough for complete novices to author in any text editor. XHTML demands a hell of a lot more knowledge of how XML works and what is valid (and more keystrokes). The barrier to entry for XHTML is much, much higher.





  • I generally find that writing code that requires a lot of “accounting” is very prone to mistakes that are easier to avoid with recursion. What I mean by this is stuff where you’re tracking multiple counters and sets on each iteration. It’s very easy to produce off-by-one errors in these types of algos.

    Recursion, once you get the hang of it, can make certain kinds of problems “trivial,” and with tail-call optimization implemented in many languages, the related memory costs have also been somewhat mitigated.

    Loops are simpler for beginners to understand, but I don’t think recursion is all that hard to learn with a bit of practice, and can really clean up some otherwise very complicated code.
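
    As a contrived illustration (the Node type and the counting task are invented for the example, and it assumes the usual System.Collections.Generic/System.Linq usings), compare the hand-managed bookkeeping in an explicit-stack traversal with the recursive version:

    public sealed class Node {
        public List<Node> Children { get; } = new();
    }

    // Iterative: the traversal state (stack, running count) is managed by hand.
    public static int CountIterative(Node root) {
        var count = 0;
        var stack = new Stack<Node>();
        stack.Push(root);
        while (stack.Count > 0) {
            var node = stack.Pop();
            count++;
            foreach (var child in node.Children)
                stack.Push(child);
        }
        return count;
    }

    // Recursive: the call stack carries the state, so there are no counters to keep in sync.
    public static int CountRecursive(Node node) =>
        1 + node.Children.Sum(CountRecursive);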

    My general opinion is that we are all beginners for a short part of our journey, but our aim shouldn’t be to make everything simple enough that beginners never need to advance their skills. We spend most of our careers as journeymen, and that’s the level of understanding we should be aiming for/expecting for most code. Recursion in that context is absolutely ok from a “readability/complexity” perspective.



  • atheken to Programming · Software Engineer vs Software Developer
    1 year ago

    Just to be clear, that is not exclusive to “engineering,” as other professionals have similar legal requirements (doctors, lawyers, fiduciaries).

    More generally, on a personal level, people are expected to act with integrity, and we have laws that provide them legal protections for whistleblowing.

    The actual practice of engineering is about problem-solving within a set of constraints. Of course the solution should not harm the public, and there are plenty of circumstances where software is developed to that standard.

    When a PE stamps a plan, they are asserting that they personally have reviewed the plan and the process that created it, and that it meets a standard for acceptable risk (not zero risk!). That establishes the boundary of legal liability. In software, we generally do not have a comparable process that fits into a legal framework, but that doesn’t mean professional software engineers aren’t making those assessments for life-critical systems.

    For other kinds of systems, understand that this is a new field and that it doesn’t have the bloody history that got “real engineering” to where it is today. A lot of the work product of most software engineers just doesn’t have stringent safety requirements, or we don’t understand the risks of certain product categories yet (and before you try to rebut that, remember that “building codes are written in blood” because people were applying technology before it was well designed or understood).

    Anyway, “engineering” is defined by a lot more than whether you or your boss has a stamp (and in point of fact, there are plenty of engineers in the US who work as engineers without being a PE, or without any intention of ever getting the stamp. Are they real engineers?)