• FizzyOrange · 1 day ago

    Hmm I think just using SQLite or DuckDB with normalised data would probably get you 99% of the way there…
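
    For example, a minimal sketch of that with DuckDB's Python API (the file name and column names are assumptions, since the thread doesn't spell out the schema):

    ```python
    # Hypothetical newline-delimited JSON with "station", "timestamp"
    # and "temperature" fields; adjust to the real schema.
    import duckdb

    con = duckdb.connect("readings.duckdb")

    # read_json_auto infers proper column types, so values land in a
    # typed table instead of staying as one big text blob on disk.
    con.sql("CREATE TABLE readings AS SELECT * FROM read_json_auto('readings.json')")

    # Aggregations then only touch the columns they need.
    print(con.sql("""
        SELECT station, avg(temperature) AS avg_temp
        FROM readings
        GROUP BY station
    """).fetchall())
    ```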

    • chaospatterns@lemmy.world (OP) · 1 day ago

      It all depends on how it’s represented on disk and how the query is executed, though. SQLite only has a handful of storage types (integers, floats, text, and blobs), and if you keep everything in a VARCHAR, reading those rows means materializing a string in memory inside the SQLite library. DuckDB has more types, but if you’re using varchars everywhere, something still has to read those strings into memory unless you can push the logic down into a query that never touches the actual values, such as one that can be answered from an index.

      The better fix is to change the representation on disk, for example by converting low-cardinality columns like the station name into a numeric id, as sketched below. A standard four-byte int is a lot more efficient than an n-byte string plus a header, and it can be compared by value.
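
      A rough sketch of that idea using Python's sqlite3 (table and column names are invented for illustration):

      ```python
      import sqlite3

      con = sqlite3.connect("readings.db")
      con.executescript("""
          CREATE TABLE IF NOT EXISTS stations (
              id   INTEGER PRIMARY KEY,
              name TEXT UNIQUE NOT NULL
          );
          CREATE TABLE IF NOT EXISTS readings (
              station_id  INTEGER NOT NULL REFERENCES stations(id),
              taken_at    INTEGER NOT NULL,   -- unix timestamp instead of an ISO string
              temperature REAL NOT NULL
          );
          CREATE INDEX IF NOT EXISTS readings_by_station ON readings(station_id);
      """)

      def station_id(name: str) -> int:
          """Map a station name to its integer id, inserting it on first sight."""
          con.execute("INSERT OR IGNORE INTO stations(name) VALUES (?)", (name,))
          return con.execute("SELECT id FROM stations WHERE name = ?", (name,)).fetchone()[0]

      # Rows now carry a small integer that is compared by value instead of
      # a string that has to be materialized on every read.
      con.execute("INSERT INTO readings VALUES (?, ?, ?)",
                  (station_id("Hamburg"), 1700000000, 12.3))
      con.commit()
      ```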

      This is where file formats like Parquet shine: they’re oriented towards efficient parsing by machines, whereas JSON is geared towards human readability.
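
      As an illustration, converting such a JSON file to Parquet is a one-liner with DuckDB (file names are hypothetical); Parquet stores columns contiguously and dictionary-encodes repetitive values like station names:

      ```python
      import duckdb

      # Read the JSON once, write a compressed columnar Parquet file.
      duckdb.sql("""
          COPY (SELECT * FROM read_json_auto('readings.json'))
          TO 'readings.parquet' (FORMAT PARQUET, COMPRESSION ZSTD)
      """)

      # The result stays queryable, and only the referenced columns
      # are actually read from disk.
      print(duckdb.sql("SELECT count(*) FROM 'readings.parquet'").fetchall())
      ```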

  • explodes@lemmy.world · 3 days ago

    If you’re storing your data as JSON on disk, you probably never cared about efficient storage anyway.

  • wewbull@feddit.uk · 3 days ago

    Rust was the important factor in this result. That’s why it’s in the headline. It wasn’t the hugely inefficient way of storing the data initially. Not at all.

    FFS, you could just have run gzip on it probably.

    • NostraDavid · 1 day ago

      FFS, you could just have run gzip on it probably.

      Gzip doesn’t reduce your data’s size by 2000x. Of course this could be done by other languages as well, but running gzip on your data doesn’t keep it accessible.

      Even turning the data into a Parquet file would’ve been a massive improvement, while keeping it accessible, but it likely would not have been 2000x smaller. 10x, maybe.

      edit: zip: about 10x; 7zip: about 166x (from ~10 GB down to ~60 MB). Still not 2000x.
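
      A quick way to check a ratio like that yourself, sketched with Python's gzip module (the path is hypothetical and the ratio depends entirely on the data):

      ```python
      import gzip
      import os

      path = "readings.json"      # hypothetical input file
      raw = os.path.getsize(path)

      # Fine for a quick check; stream in chunks for files this large.
      with open(path, "rb") as f:
          packed = len(gzip.compress(f.read(), compresslevel=9))

      print(f"{raw / packed:.0f}x smaller ({raw} -> {packed} bytes)")
      ```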

      • wewbull@feddit.uk · 1 day ago

        It all depends on the entropy of the data. Formats like JSON compress very well anyway, and if the data is also very repetitive then 2000x is very possible.

        • FizzyOrange · 1 day ago

          In my experience, taking an inefficient format and copping out by saying “we can just compress it” is always rubbish. Compression tends to be slow, it rules out sparse reads, it’s awkward to deal with remotely, and you generally end up with the inefficient decompressed data in the end anyway, whether in temporarily decompressed files or in memory.

          I worked at a company where they went against my recommendation not to use JSON for a memory profiler’s output. We ended up with 10 GB JSON files, and even compressed they were super annoying to deal with.

          We switched to SQLite in the end, which was far superior.

          • wewbull@feddit.uk · 1 day ago

            Of course compression isn’t a good solution for this stuff. The point of the comment was just how unremarkable the original claim was.