    • xmunk@sh.itjust.works · 36 points · 1 month ago

      Honestly? Pretty fucking awesome if you get it configured correctly. I don’t think it’s super useful for production (I prefer chef/vagrant) but for dev boxes it’s incredible at producing consistent environments even on different OSes and architectures.

      Anything that makes it less painful for a dev to destroy and rebuild an environment that’s corrupt or even just a bit spooky pays for itself almost immediately.
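
      As a minimal sketch, that dev-box setup can be a Compose file with pinned images, so every dev gets the same environment on any OS or architecture (service names and images here are hypothetical):

      ```yaml
      # docker-compose.yml: hypothetical dev stack
      services:
        app:
          build: .
          volumes:
            - ./src:/app/src      # live-edit code from the host
          ports:
            - "8000:8000"
        db:
          image: postgres:16.3    # pinned tag, not "latest"
          environment:
            POSTGRES_PASSWORD: devonly
      ```

      When the environment gets corrupt or "a bit spooky", `docker compose down -v && docker compose up --build` throws it away and rebuilds from scratch.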

      • MajorHavoc · 8 points · 1 month ago

        I don’t think it’s super useful for production (I prefer chef/vagrant)

        Yeah!

        Docker and OCI get abused a lot to thoughtlessly ship a copy of the developer’s laptop into production.

        Life is so much simpler after taking the time to build thoughtful correct recipes in an orchestration tool.
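
        For contrast, a minimal sketch of the kind of recipe being described, here in Chef's Ruby DSL (the package and template names are hypothetical):

        ```ruby
        # recipes/default.rb: declare what the host should look like
        # instead of shipping a snapshot of a laptop
        package 'nginx'

        template '/etc/nginx/nginx.conf' do
          source 'nginx.conf.erb'              # hypothetical template in the cookbook
          notifies :reload, 'service[nginx]'
        end

        service 'nginx' do
          action [:enable, :start]
        end
        ```

        Because the recipe describes an end state, the same cookbook converges a fresh VM and a long-lived production host to the same configuration.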

        Anything that makes it less painful for a dev to destroy and rebuild an environment that’s corrupt or even just a bit spooky pays for itself almost immediately.

        Exactly. The learning curve is mean, but it’s worth it quickly as soon as the first mystery bug dies in a rebuild fire.

    • Platypus@sh.itjust.works · 13 points · 1 month ago

      In my experience, very, but it’s also not magic. Being able to package an application with its environment and ship it to any machine that can run Docker is great, but it doesn’t change the fact that modern deployment architectures can get extremely complicated, and Docker adds yet another component that needs configuration and debugging to an already complicated stack.

      • skuzz@discuss.tchncs.de · 3 points · 1 month ago

        And a new set of dependency problems depending on the base image. And then fighting layers, both to optimize size and, with some image hubs, “why won’t it upload that one file change? It’s a different file now! The hashes can’t possibly be the same!” And having to find hacky ways to slap it so the correct files end up in the correct places.
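
        The layer fight usually comes down to ordering: put what changes least in the earliest layers so routine edits don't invalidate the cache below them. A rough sketch, assuming a Node app (paths and base image are assumptions):

        ```dockerfile
        FROM node:20-slim
        WORKDIR /app

        # Dependencies change rarely: copy only the manifests first so this
        # layer stays cached until they actually change
        COPY package.json package-lock.json ./
        RUN npm ci

        # Source changes constantly: keep it in a late layer so edits only
        # rebuild from here down
        COPY src/ ./src/
        RUN npm run build
        ```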

        Then manipulating multi-arch manifests to work reliably for other devs in a cross-processor environment, so they don’t have to know how the sausage is made…
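
        The multi-arch part typically goes through buildx, which builds per-platform images and pushes them under a single manifest-list tag (the registry and image name are hypothetical, and these commands need a running Docker daemon):

        ```shell
        # One tag, two architectures: amd64 and arm64 machines pull the
        # same name and get the right variant automatically
        docker buildx create --use
        docker buildx build \
          --platform linux/amd64,linux/arm64 \
          -t registry.example.com/team/app:latest \
          --push .
        ```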

    • barsquid@lemmy.world · 5 points · 1 month ago

      Containers are great. Including Docker. I don’t like that it modifies firewall rules on its own and I don’t like how many projects want you to pass the Docker socket into a container.
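
      The socket pattern in question looks like this in a Compose file; any container with that mount can control the Docker daemon, which is effectively root on the host (the service name and image are hypothetical):

      ```yaml
      services:
        updater:
          image: example/updater   # hypothetical image that "needs" the socket
          volumes:
            # Hands the container full control of the Docker daemon
            - /var/run/docker.sock:/var/run/docker.sock
      ```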

    • marcos@lemmy.world · 5 points · 1 month ago

      It’s a way to provide standard configuration for your programs without one configuration interfering with another.

      Honestly, almost all the alternatives work better. But Docker is the one you can run on any system without large changes.
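
      One concrete reading of "without one configuration interfering with another": two apps that need incompatible runtimes can run side by side, each with its own filesystem (images, paths, and names here are hypothetical):

      ```yaml
      services:
        legacy-api:
          image: python:3.8-slim    # stuck on an old interpreter
          volumes:
            - ./legacy:/srv/legacy
          command: python /srv/legacy/app.py
        new-api:
          image: python:3.12-slim   # newer interpreter, no conflict
          volumes:
            - ./new:/srv/new
          command: python /srv/new/app.py
      ```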

    • Solemarc@lemmy.world · 5 points · 1 month ago

      Docker came about as the answer to the “it works on my machine” problem. What it does is bundle everything your application should need into a box so that you can run it basically everywhere.

      For devs, this also means that if it runs for you, it should run for everyone.
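
      The "box" is usually a Dockerfile that pins every version the app depends on, so "runs for you" and "runs for everyone" are literally the same image (a generic sketch, not any particular project's file):

      ```dockerfile
      FROM python:3.12-slim      # interpreter pinned, not whatever is on the laptop
      WORKDIR /app

      # Exact dependency versions come from the committed requirements file
      COPY requirements.txt .
      RUN pip install --no-cache-dir -r requirements.txt

      COPY . .
      CMD ["python", "app.py"]
      ```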

      Since Docker came out we’ve had a lot of advancement in compilers and interpreters; it’s generally pretty easy to compile for another platform today, and it’s pretty rare for an interpreter to have a meltdown because of your OS. I imagine we’ll see less Docker going forward, but enterprise is slow-moving.
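
      The compiler point in practice: with a toolchain like Go's, targeting another OS or architecture is just a pair of environment variables, no container involved (output names are hypothetical, and the commands assume a Go module in the current directory):

      ```shell
      # Build a Linux/arm64 binary from any host with the Go toolchain
      GOOS=linux GOARCH=arm64 go build -o app-linux-arm64 .

      # Same source, Windows/amd64 target
      GOOS=windows GOARCH=amd64 go build -o app.exe .
      ```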

    • Gamma@beehaw.org · 4 points · 1 month ago

      I think they’re really useful. There are alternatives that I think have feature parity at this point, but the concepts of containerization are the same.