I’ve read that standard containers are optimized for developer productivity and not security, which makes sense.

But then what would be ideal to use for security? Suppose I want to isolate environments from each other for security purposes, to run questionable programs or reduce attack surface. What are some secure solutions?

Something without the performance hit of VMs

  • dragnucs@lemmy.ml · +14 · 1 year ago

    It is the Docker application that is not secure; containers themselves are. Docker runs a daemon as root, which you communicate with from a client. Running with root privileges is what makes it less secure. It also has a few shortcomings around privileged containers. This can be easily solved by using Podman and SELinux. If you can manage to run Docker rootless, then you are magnitudes higher in security.
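
    A quick way to see the difference in practice, as a sketch: the podman call below is illustrative and is skipped if podman isn't installed; the surrounding shell is plain POSIX.

    ```shell
    # Ask the runtime whether it is running rootless.
    if command -v podman >/dev/null 2>&1; then
      rootless="$(podman info --format '{{.Host.Security.Rootless}}')"
    else
      rootless="podman-not-installed"
    fi
    echo "$rootless"
    ```

    Rootless mode means a breakout lands the attacker in an unprivileged user account rather than in root's.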

    • piezoelectron@sopuli.xyz · +6 · 1 year ago

      Do you think Podman is ready to take over Docker? My understanding is that Podman is Docker without the root requirement.

      • dragnucs@lemmy.ml · +6 · 1 year ago

        Yes, it is. I’ve been using it for more than a year now. It works reliably and has pod support as well.

        • piezoelectron@sopuli.xyz · +3 · 1 year ago

          Great. I don’t know enough to use either yet, but I think I’m going to lean on Podman from the get-go. In any case, I know that Podman’s commands mirror Docker’s, such that you can replace, say, docker compose with podman compose and move on with ease.
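
          For what it’s worth, that command compatibility means a shell alias is often the whole migration. A sketch (the alias name is just a convention):

          ```shell
          # Because the CLIs match, pointing "docker" at podman is often enough.
          alias docker=podman
          type docker        # reports the alias
          # Typical invocations then run through podman unchanged, e.g.:
          #   docker ps
          #   docker build -t myimage .
          ```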

          • Guilvareux@feddit.uk · +2 · 1 year ago

            With the specific exception of podman compose, I completely agree. I haven’t tested it for a while, but podman compose has had issues with compose-file syntax in my experience, especially with network configs.

            However, I have been using docker-compose with Podman’s Docker-compatible socket implementation when necessary, with great success.
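
            That socket setup looks roughly like this, as a sketch: the systemctl and compose lines assume systemd and podman and are shown commented; the DOCKER_HOST construction itself is plain shell.

            ```shell
            # Enable podman's Docker-compatible API socket, then point any Docker
            # client (docker-compose, the docker CLI, ...) at it via DOCKER_HOST.
            # systemctl --user enable --now podman.socket
            XDG_RUNTIME_DIR="${XDG_RUNTIME_DIR:-/run/user/$(id -u)}"
            export DOCKER_HOST="unix://$XDG_RUNTIME_DIR/podman/podman.sock"
            echo "$DOCKER_HOST"
            # docker-compose up -d   # now talks to podman, not dockerd
            ```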

      • Cyclohexane@lemmy.ml (OP, mod) · +1 · 1 year ago

        I’ve been using podman for almost a year now. It works very well and supports most Docker features.

      • mosthated@lemmy.ml · +1 · edited · 1 year ago

        Related to this: can Podman completely replace Docker? I.e., can it pull and build images in addition to running containers?

        • boo@beehaw.org · +1 · 1 year ago

          It can pull and build containers fine, but last time I tried there were some differences. Mounts were not usable because user uid/gid behave quite differently. Tools like Portainer don’t work on Podman containers. I haven’t tried any networking or advanced stuff yet.

          But I found that the considerations for writing Dockerfiles are quite different for Podman.

          • dragnucs@lemmy.ml · +3 · 1 year ago

            Differences you find could be related to containers being run rootless, or to the host system having SELinux enforced. Both problems could be intended behavior and can be solved simply by adding the correct labels, like :z or :Z, to mount points. This SELinux feature also affects Docker when it is set up.
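
            For illustration, a mount with those labels (path and image are hypothetical; the run line is commented since it needs podman and SELinux):

            ```shell
            # :Z = private, per-container SELinux label;
            # :z = label shared between containers mounting the same directory.
            vol="./data:/data:Z"
            # podman run --rm -v "$vol" docker.io/library/alpine:latest ls /data
            echo "$vol"
            ```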

            Portainer tries to connect to a Docker socket path that is not the same with Podman. While Podman is rootless and does not need a daemon, sockets and the like, it supports them nevertheless. So you can simply adjust Portainer’s config to work with Podman. I haven’t tried it yet, but I managed to do similar things for other software.
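
            A sketch of that adjustment (untested, as said above; the port and image tag are examples): hand Portainer podman’s socket under the path it expects Docker’s to be at.

            ```shell
            # Portainer only knows the Docker socket path, so mount podman's
            # socket under that name. The run line needs podman, so it is commented.
            sock="${XDG_RUNTIME_DIR:-/run/user/$(id -u)}/podman/podman.sock"
            # podman run -d -p 9000:9000 \
            #   -v "$sock":/var/run/docker.sock \
            #   docker.io/portainer/portainer-ce:latest
            echo "$sock"
            ```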

          • mosthated@lemmy.ml · +1 · 1 year ago

            Gotcha. I use Docker containers on computing clusters at the university, but because of security I have to convert them to Singularity containers. That is okay, but I was hoping that by running Podman I could avoid this extra step.

            • Tiuku@sopuli.xyz · +1 · 1 year ago

              Unlike Docker, Podman doesn’t try to do everything on its own. There’s a separate tool, buildah, which builds containers from Dockerfiles just fine.

              PS: more generally, they’re called Containerfiles.
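
              The two equivalent invocations, as a sketch (the tag is hypothetical, and the build lines are commented since they need a Containerfile in the current directory):

              ```shell
              tag="localhost/myimage:latest"    # example tag
              # buildah bud -t "$tag" .    # builds from ./Containerfile (or Dockerfile)
              # podman build -t "$tag" .   # same result; podman wraps buildah code
              echo "$tag"
              ```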

    • boo@beehaw.org · +4 · 1 year ago

      There can also be old images in use with, e.g., outdated OpenSSL versions. It’s not a concern if they are updated frequently, but that updating is still manual.

      • dragnucs@lemmy.ml · +3 · 1 year ago

        This is a problem of the containerized program and the image itself. It affects containers, VMs, and bare metal as well.

        • boo@beehaw.org · +2 · 1 year ago

          I agree. But IMO these use cases are better known and more mature in traditional setups: we could apt update, restart a systemd service, and be done.

          It’s not so obvious, and there are no equivalent mechanisms for containers/images.

          (I am not into devops/sysadmin, so this might also be my lack of exposure.)

          • dragnucs@lemmy.ml · +2 · 1 year ago

            Most often, images are updated automatically and are managed by the developers themselves, so images are usually up to date. If you don’t know how to build images, it may be difficult for you to update the containerized software before the vendor does, but this situation is infrequent.
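
            Keeping images fresh can also be mechanized, e.g. with podman’s auto-update, roughly the container analogue of the apt-and-restart flow. A sketch (assumes podman with systemd-managed containers; image name is an example, tool-dependent lines commented):

            ```shell
            # Containers started from systemd units and carrying this label get
            # re-pulled and restarted by "podman auto-update".
            label="io.containers.autoupdate=registry"
            # podman run -d --label "$label" docker.io/library/nginx:latest
            # podman auto-update    # pulls newer images, restarts those units
            echo "$label"
            ```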

            • AggressivelyPassive@feddit.de · +1 · 1 year ago

              Many projects just pull in a bunch of images from wherever and never update them. Especially if it’s that one obscure image that happens to package the one obscure app you absolutely need.

  • Helix 🧬@feddit.de · +10 · 1 year ago

    Where did you read that and which arguments did the authors make?

    Many times, the configuration of Docker is the issue, e.g. mounting files from /etc/ or the Docker socket into the container, using insecure file permissions, or running the application as the root user.

    If you use rootless Docker or Podman, you have already eliminated one of those security risks. The same goes for the other things mentioned.

    What exactly do you mean by “questionable programs”? If you want to run malware, you shouldn’t do so in an environment where it can break out of anything. There’s the possibility of hardware virtualisation, which prevents many possible breakouts, but even then, exploits have been found.

    You’re really only secure if you run questionable software on an air-gapped computer with no speakers and never run anything else on it.

    What would be your use case?

  • steph@lemmy.clueware.org · +9/−1 · edited · 1 year ago

    All recent CPUs have native virtualization support, so there’s close to no performance hit on VMs.

    That being said, even a VM is subject to exploits and malicious code could break out of the VM down to its hypervisor.

    The only secure way of running suspicious programs starts with an air-gapped machine and a cheap HDD/SSD that will go straight under the hammer as soon as testing is complete. And even after that, I’d wonder whether the BIOS might have been compromised.

    On a lower level of paranoia and/or threat, a VM on an up-to-date hypervisor with a snapshot taken before doing anything questionable should be enough. You’d then only have to fear a zero day exploit of said hypervisor.
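
    With libvirt, that snapshot workflow is just a couple of commands. A sketch: “sandbox” is a hypothetical domain name, and the virsh lines are commented since they need a running hypervisor.

    ```shell
    snap="clean-state"    # example snapshot name
    # virsh snapshot-create-as sandbox "$snap"
    # ... run the questionable program inside the VM ...
    # virsh snapshot-revert sandbox "$snap"
    echo "$snap"
    ```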

    • AggressivelyPassive@feddit.de · +3 · 1 year ago

      Each VM needs a complete OS, though. Even at 100% efficiency, that’s still a whole kernel+userspace just idling around and a bunch of caches, loaded libraries, etc. Docker is much more efficient in that regard.

      • Saik0@lemmy.saik0.com · +1 · 1 year ago

        And LXC is even more efficient in that regard.

        Docker does load a bunch of stuff that most people don’t need for their project.

        I don’t know why LXC is always the red-headed stepchild. It works wonderfully.
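
        For reference, a minimal LXD/LXC workflow as a sketch (image and container names are examples; the lxc lines are commented since they need LXD installed):

        ```shell
        name="sandbox"    # example container name
        # lxc launch ubuntu:22.04 "$name"   # full-distro system container
        # lxc exec "$name" -- bash          # shell inside it
        echo "$name"
        ```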

  • bishopolis@lemmy.ca · +2 · 1 year ago

    Docker has an additional issue, though not one unique to Docker. Like flatpak, pip, composer, npm, or even back to CPAN and probably further, as a third-party source of installed software it breaks the single source of truth when we want to examine the installed state of applications on a given host.

    I’ve seen iso27002/12.2.1f, I’ve seen supply-chain management in action to massive benefit for uptime, changes, validation and rollback, and it’s simplified the work immensely.

        .1.3.6.1.2.1.25.6.3
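
    That OID appears to be hrSWInstalledTable from HOST-RESOURCES-MIB, the table an SNMP agent uses to report what the host’s package manager installed. A hedged sketch of querying it (host and community string are placeholders; the snmpwalk line is commented since it needs an SNMP agent):

    ```shell
    oid="1.3.6.1.2.1.25.6.3"    # hrSWInstalledTable
    # snmpwalk -v2c -c public localhost "$oid"
    echo "$oid"
    ```

    Docker-installed software never shows up in that table, which is the single-source-of-truth break described above.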
    

    If anyone remembers dependency hell - which is always self-inflicted - then this should be Old Hat.

    HAVING SAID THAT, I’ve seen docker images loaded as the entire, sole running image, apparently over a razor-thin BMC-sized layer, on very small gear, to wondrous effect. But - and this is how VMware did it - a composed bare micro-image with Just Enough OS to load a single container on top may not violate 27002 in that circumstance.