I don’t understand what problem they are meant to solve. If you have a FOSS piece of software, you can install it via the package manager. Or the store, which is just a frontend for the package manager. I see that they are distribution-independent, but the distro maintainers likely already know what’s compatible and what your system needs to install the software. You enjoy that benefit only through the package manager.

If your distro ships broken software because of dependency problems, you don’t need a tool like Flatpak, you need a new distro.

  • koorogi
    12 • 11 months ago

    I disagree with so much of this.

    You might not care about the extra disk space, network bandwidth, and install time required by having each application package up duplicate copies of all the libraries they depend on, but I sure do. Memory use is also higher, because having separate copies of common libraries means that each copy needs to be loaded into memory separately, and that memory can’t be shared across multiple processes. I also trust my distribution to be on top of security updates much more than I trust every random application developer shipping Flatpaks.

    But tbh, even if you do want each application to bundle its own libraries, there was already a solution for that which has been around forever: static linking. I never understood why we’re now trying to create systems that look like static linking, but using dynamic linking to do it.
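
    The sharing that dynamic linking buys can be seen directly on any Linux box. A rough sketch (binary paths are typical Linux ones and may differ on your system):

    ```shell
    # List the shared libraries a dynamically linked binary needs at run time.
    # Every process using the same .so maps the same file, so its code pages
    # are shared in RAM; a binary built with `cc -static` instead carries a
    # private copy of that code, and nothing is shared between processes.
    ldd /bin/sh

    # Several unrelated binaries typically pull in the same C library --
    # one on-disk and one in-memory copy serves all of them.
    for bin in /bin/cat /bin/cp /bin/ls; do
        ldd "$bin" | grep 'libc'
    done
    ```

    The same check run against a statically linked binary prints "not a dynamic executable", which is exactly the duplication being traded away.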

    I think it’s convenient for developers to be able to know or control what gets shipped to users, but I think the freedom of users to decide what they will run on their own system is much more important.

    I think the idea that it’s not practical for different software to share the same libraries is overblown. Most common libraries are generally very good about maintaining backwards compatibility within a major version, and different major versions can be installed side-by-side. I run gentoo on my machines, and with the configurability the package manager exposes, I’d wager that no two gentoo installations are alike, either in the versions of packages installed or in the options those packages are built with. And for a lot of software that tries to vendor its own copies of libraries, gentoo packages often give the option of forcing them to use the system copy of the library instead. And you know what? It actually works almost all the time. If gentoo can make it work across the massive variability of their installs, a distribution which offers less configurability should have virtually no problem.

    You are right that some applications are a pain to package, and that the traditional distribution model does have some duplication of effort. But I don’t think it’s as bad as it’s made out to be. Distributions push a lot of patches upstream, where other distributions will get that work for free. And even for things that aren’t ready to go upstream, there’s still a lot of sharing across distributions. My system runs musl for its C library, instead of the more common glibc. There aren’t that many musl-based distributions out there, and there’s some software that needs to be patched to work – though much less than used to be the case, thanks to the work of the distributions. But it’s pretty common for other musl-based distributions to look at what Alpine or Void have done when packaging software and use it as a starting point.

    In fact, I’d say that the most important role distributions play is when they find and fix bugs and get those fixes upstreamed. Different distributions will be on different versions of libraries at different times, and so will run into different bugs. You could make the argument that by using the software author’s “blessed” version of each library, everybody can have a consistent experience with the software. I would argue that this means that bugs will be found and fixed more slowly. For example, a rolling release distro that’s packaging libraries on the bleeding edge might find and fix bugs that would eventually get hit in the Flatpak version, but might do so far sooner.

    The one thing I’ve heard about Flatpak/Snap/etc that sounds remotely interesting to me is the sandboxing.

    • Tobias Hunger
      1 • 11 months ago

      You might not care about the extra disk space, network bandwidth, and install time required by having each application package up duplicate copies of all the libraries they depend on, but I sure do.

      Updates tend to be smaller with flatpak than with packages: flatpak only downloads the files that changed, while packages always download all files. But yes, the initial install of the first flatpak is going to take more bandwidth. The second flatpak then already reuses a lot of files installed by the first.

      Memory use is also higher, because having separate copies of common libraries means that each copy needs to be loaded into memory separately, and that memory can’t be shared across multiple processes.

      All the base libraries are shared between all flatpaks. Only very specialized libs end up getting installed several times (or libs with a different build configuration), but those are very likely going to be used by one application anyway, independent of how you install the application.

      I also trust my distribution to be on top of security updates much more than I trust every random application developer shipping Flatpaks.

      I trust developers to have a way better idea what the software actually needs than some random packager who also packages 200 other things.

      But tbh, even if you do want each application to bundle its own libraries, there was already a solution for that which has been around forever: static linking. I never understood why we’re now trying to create systems that look like static linking, but using dynamic linking to do it.

      Yep, static linking can get similar results, but at the price of more bandwidth usage (you need to download the same library code over and over again) and resource usage: you do not get deduplication of libraries, so that code needs to be loaded into RAM repeatedly.

      I think the idea that it’s not practical for different software to share the same libraries is overblown.

      I tried that: It is a huge pain. You basically cannot use anything newer than 3 years or so, as distributions will not have the new code yet. It also eats a lot of time when you get bug reports and then find out some packager has shipped your code linked to a library known to be broken. Happens all the time :-( You need to be able to point people to working binaries of your product, so you will just package the stuff yourself. Doing that once properly for flatpak is very nice – versus doing it a dozen times, each time testing against the package versions and the patches the distribution applies on top of those versions.

      But you left out the one thing that makes me prefer flatpak: I can restrict what an application can do. E.g. no browser can access anything outside the downloads folder, etc. I care about my stuff in the home directory. Traditional Unix security has nothing to help me stop one application from messing with the data of another, stealing my ssh keys, or turning on my camera or microphone. Flatpak can limit all that per application – and when the application supports it, even more than that.
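
      As a sketch of what that looks like in practice (the app ID below is just an example; the flags are standard flatpak overrides), the sandbox of an individual app can be tightened from the command line:

      ```shell
      # Per-app sandbox overrides (org.mozilla.firefox is an example app ID).
      # Drop access to the whole home directory, then allow only Downloads:
      flatpak override --user --nofilesystem=home org.mozilla.firefox
      flatpak override --user --filesystem=xdg-download org.mozilla.firefox

      # Review what was overridden and what the app may now touch:
      flatpak override --user --show org.mozilla.firefox
      flatpak info --show-permissions org.mozilla.firefox
      ```

      Overrides are stored per user and survive app updates, so the restriction holds no matter what permissions the developer ships with the next version.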