• ruffsl
    1 year ago

    Image Transcription: Meme


    A photo of an opened semi-trailer unloading a cargo van, with the cargo van’s rear door open revealing an even smaller blue Smart car inside, each vehicle captioned “macOS”, “Linux VM”, and “Docker” respectively in decreasing font size. Onlookers in the foreground of the photo gawk as a worker opens each vehicle door, revealing a scene like that of Russian nesting dolls.


    I’m a human volunteer content transcriber and you could be too!

    • ruffsl
      1 year ago

      Just need to put a JIT compiled language logo inside the blue car and caption it as “Containerise once, ship anywhere”.

  • Ucinorn@aussie.zone
    1 year ago

    Not just macOS: anyone using WSL on Windows is an offender too.

    But as a WSL user, Dockerised dev environments are pretty incredible to have running on a Windows machine.

    Does it require 64 gigs of RAM to run all my projects? Yes. Was it worth it? Also yes.

    • qwop
      1 year ago

      My experience using Docker on Windows has been pretty awful: it would randomly become completely unresponsive, sometimes taking 100% CPU in the process. I couldn’t stop it without restarting my computer. I tried reinstalling and various other things, still no help. All I found was a GitHub issue with hundreds of comments but no working workarounds/solutions.

      When it does work it still manages to feel… fragile, although maybe that’s just because of my experience with it breaking.

      • desmaraisp@lemmy.world
        1 year ago

        You can cap the amount of CPU/memory Docker is allowed to use. That helps a lot with those issues in my experience, although it still takes a somewhat beefy machine to run Docker in WSL.
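        Concretely (values here are purely illustrative, not a recommendation): with Docker Desktop’s WSL 2 backend, those caps apply to the whole WSL 2 VM and live in a `.wslconfig` file in your Windows user profile:

        ```ini
        # %UserProfile%\.wslconfig (limits apply to the entire WSL 2 VM)
        [wsl2]
        memory=8GB      # maximum RAM the VM may claim
        processors=4    # maximum number of virtual CPUs
        swap=2GB
        ```

        Run `wsl --shutdown` afterwards so the VM restarts with the new limits.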

        • qwop
          1 year ago

          When it happens, Docker and WSL become completely unresponsive anyway. Stopping containers fails; after closing Docker Desktop, wsl.exe --shutdown still doesn’t work, and the only way I’ve managed to stop the CPU usage is killing a bunch of things through Task Manager. (IIRC I tried setting a cap while trying the Hyper-V backend, to see if it was a WSL-specific problem, but it didn’t help. Can’t fully remember, though.)

          This is the issue that I think was closest to what I was seeing: https://github.com/docker/for-win/issues/12968

          My workaround has been to start using GitHub codespaces for most dev stuff, it’s worked quite nicely for the things I’m working on at the moment.

      • Ucinorn@aussie.zone
        1 year ago

        I found the same thing until I started strictly controlling the resources each container could consume, and also changed to a much beefier machine. Running a single project with a few images was fine, but any more than that and the WSL connection would randomly crash or become unresponsive.

        Databases in particular you need to watch: left unchecked, they will absolutely hog RAM.
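        As a sketch of that kind of cap (image name and numbers are just examples), a Compose file can pin a database’s resources so it can’t eat the whole VM:

        ```yaml
        # docker-compose.yml: hard caps for a single service
        services:
          db:
            image: postgres:15
            mem_limit: 512m   # the container is killed rather than allowed past this
            cpus: 1.0         # at most one CPU's worth of time
        ```

        The same caps can be set ad hoc with docker run --memory 512m --cpus 1.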

    • MXX53
      1 year ago

      I work in a Windows environment at work, and my VMs regularly trip the infrastructure firewalls. So WSL is the easiest way for me to at least partially work in my environment of choice.

    • Phoenix
      1 year ago

      I’ve used WSL to run DeepSpeed before because, inexplicably, Microsoft didn’t develop it for their own platform…

    • Kuiche
      1 year ago

      Yes, under Windows and macOS at least.

      • jk47@lemmy.world
        1 year ago

        Is that still true? I use Linux, but my coworker said Docker runs natively now on the M1s. Maybe he was making it up, though.

        • Ryan
          1 year ago

          I suspect they meant it runs natively in the sense that it’s an aarch64 binary. It’s still running a VM under the hood, because Docker is really just a nice frontend to a bunch of Linux kernel features.

          • Dohnakun@lemmy.fmhy.mlB
            1 year ago

            docker is really just a nice frontend to a bunch of Linux kernel features.

            What does it do anyway? I know there’s LXC, and that Docker doesn’t use it and does its own thing, but not much else.

            • Ryan
              1 year ago

              I can’t remember exactly what all the pieces are. However, I believe it’s a combination of:

              • namespaces (PID, mount, user, etc.): the actual process isolation, which is why you can see Docker processes in ps/top/etc. but can’t for VMs. Mount namespaces are also what get you the ability to run cross-distro images, since the isolated filesystem ensures the correct shared objects are loaded
              • cgroups: per-container limits on CPU, memory and other resources
              • network namespaces: how they generate the isolated network stack per container
              • some additional mount magic (overlay filesystems, I believe) whose details I don’t remember.

              My understanding is that all of the neat properties of Docker are actually part of the kernel; Docker (and Podman and other container runtimes) mostly just package them together to achieve the desired properties of “containers”.
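              A small Linux-only, stdlib-only sketch of the namespace part: every process’s namespace memberships are visible under /proc, and a containerised process simply lists different namespace IDs than ordinary host processes do.

              ```python
              import os

              def ns_ids(pid="self"):
                  """Return the namespace IDs a process belongs to (pid, mnt, net, ...)."""
                  ns_dir = f"/proc/{pid}/ns"
                  return {name: os.readlink(os.path.join(ns_dir, name))
                          for name in sorted(os.listdir(ns_dir))}

              # All ordinary host processes report the same IDs here; a process inside a
              # container reports different pid/mnt/net entries. That difference *is*
              # the isolation.
              print(ns_ids())
              ```

              Comparing the output for two PIDs shows directly whether they share namespaces, which is what “the container’s processes are visible from the host” boils down to.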

            • dadbod@lemmy.world
              1 year ago

              It makes it very easy to define the environment and conditions in which a process runs, and to completely isolate it from the rest of the system. The environment includes all the other software installed in that isolated environment. Since you have complete isolation, you can install all the software that comes with what we think of as a Linux “distribution”, which means you can do something like run a Docker container that is “ubuntu” or “debian” on CentOS or whatever distribution.

              When you start a Dockerfile with the statement FROM ubuntu:version_tag, you are more or less saying “I want to run a process in an environment that includes all of the software that would ship with this specific version of Ubuntu”.

              A Linux distro == kernel + “user land” (maybe not the correct terminology). A Docker container is the “user land” or “distro” plus whatever you’re wanting to run, but the kernel is the host system’s.
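              A hypothetical minimal Dockerfile makes that split visible: the image supplies the Ubuntu userland, while the kernel at runtime is the host’s.

              ```dockerfile
              # Userland comes from the image; the kernel does not.
              FROM ubuntu:22.04
              RUN apt-get update && apt-get install -y --no-install-recommends curl
              # Inside the container, `uname -r` prints the *host* kernel version.
              CMD ["uname", "-r"]
              ```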

              I found this pretty helpful in explaining it: https://earthly.dev/blog/chroot/

              I’ll also say that folks pretty nonchalantly deride Docker and other tools as if it’s just “easy” to set these things up with “just Linux” and Docker is something akin to syntactic sugar. I suspect many of these folks don’t make software for a living, or at least don’t work at significant scale. It might be easy to create an isolated process, but it’s absurd to say that Docker (or Podman, etc.) doesn’t add value. The reproducibility, layering, builders, orchestration, repos, etc. are all built on top of the features that allow isolation. None of that stuff existed before Docker and the other container build/deploy tools.

              Note: I’m not a Linux SME, but I am a software dev who uses Docker every day. I’m likely oversimplifying some things here, but this is a better and more accurate oversimplification than “Docker is like a VM”, which is a helpful heuristic when you first learn it but ultimately wrong.

        • LaggyKar
          1 year ago

          Maybe they just meant that it runs ARM binaries instead of running on Rosetta 2.

        • Shareni
          1 year ago

          Docker requires the Linux kernel to work.

          M1 is just worse ARM. Since most images were built for x86_64 rather than ARM, Docker had to emulate that architecture and therefore had performance issues. Now there are ARM-specific images that don’t need that emulation layer, so they work a lot better.

          That doesn’t solve the Linux kernel requirement, though, so it’s still running a VM to provide it.

        • aport
          1 year ago

          Not making it up, but possibly confused. OCI containers are built on Linux-only technologies.

      • haruki
        1 year ago

        Try limiting it to 2GB (there’s an option in the Docker Desktop app). Before I discovered this option, the VM was routinely eating 3-4GB of my memory.

      • hemmes@vlemmy.net
        1 year ago

        Bloody hell

        Edit: Reminds me of the Pimp My Ride meme. “We made you an OS so you can VM your VM inside a VM!”

    • Ucinorn@aussie.zone
      1 year ago

      This was one of the reasons we switched to Docker in the first place. Our devs with M-series processors spent weeks untangling issues with libraries that weren’t compatible.

      Once we started using Docker, all of those issues went away.

  • YellowTraveller@lemm.ee
    1 year ago

    When I was in school I once used an iOS emulator running inside a Docker container of macOS running on a Linux machine. It worked surprisingly smoothly.

    • george@midwest.social
      1 year ago

      The difference between Docker and a VM is that Docker shares a kernel with the host but provides isolated processes and filesystems. macOS has a very different kernel from Linux (which is why Docker on macOS uses a Linux VM), so I would be shocked if it could run on a Linux Docker host. Maybe you were running macOS in a VM?

    • astraeus
      1 year ago

      We’re reaching levels of containerization that shouldn’t even be possible!

  • 𝐘Ⓞz҉@lemmy.world
    1 year ago

    Can someone please explain to me like I’m 5 what Docker and containers are? How do they work? Can I run anything on them? Is it like VirtualBox?

    • SantaClaus@aussie.zone
      1 year ago

      Think of a container like a self-contained box that can be configured to hold everything a program needs to run.

      You can give the box to someone else, and they can use it on their computer without any issues.

      So I could build a container with my program that hosts cat pictures and give it to you. As long as you have Docker installed, you can run a command like “docker run X” and it’ll run.

    • Spzi@lemm.ee
      1 year ago

      Is it like VirtualBox?

      VirtualBox: A virtual machine created with VirtualBox contains simulated hardware, an installed OS, and installed applications. If you want multiple VMs, you need to simulate all of that for each.

      Docker containers virtualize the application, but use their host’s hardware and kernel without simulating it. This makes containers smaller and lighter.

      VMs are good if you care about the hardware and the OS, for example to create different testing environments. Containers are good if you want to run many in parallel, for example to provide services on a server. Because they are lightweight, it’s also easy to share containers. You can choose from a wide range of preconfigured containers, and directly use them or customize them to your liking.

    • Lmaydev
      9 months ago

      A container image is a binary blob that contains everything your application needs to run: all files, dependencies, other applications, etc.

      Unlike a VM, which abstracts the whole OS, a container abstracts only your app.

      It uses filesystem (path) manipulation and namespaces to isolate your application so it can’t access anything outside of itself.

      So essentially you have one copy of an OS kernel rather than running multiple OSes.

      It uses way fewer resources than a VM.

      As everything is contained in the image, if it works on your machine it should work the same on any other. Obviously networking and things like that can still break it.

    • aport
      1 year ago

      macOS is not unix-like, it is literally Unix.

  • tomh@feddit.uk
    1 year ago

    Now run a KinD cluster inside that, with containers running inside the worker containers.