• JackbyDev · 1 month ago

    Oh boy, I sure am excited about websites hosting PDFs! I love when the tool that everyone uses for hosting and viewing HTML gets to be blessed with the perfect format that is PDF!

    I LOVE PDFS! I love two column PDFs! I love reading like this!

    1 3
    2 4
    5 7
    6 8

    Instead of like this

    1
    2
    3
    4
    5
    6
    7
    8

    It’s amazing and such a good user experience!

    I love that PDFs are so difficult to transform into HTML, too. I would never want to besmirch the publisher’s perfect, approved layout by resizing the window!

    • keepthepace@slrpnk.net · 1 month ago

      I love that PDFs are so difficult to transform into HTML, too

      FYI, if that’s relevant to your field, every new article published on arxiv.org now has an HTML render as well.

      And for many older publications, transforming “arxiv.org” into “ar5iv.org” leads to an HTML rendering, a best-effort experiment they ran for a while.
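      To be clear, the trick is literally a one-substring swap on the URL; a minimal sketch (the arXiv ID below is just an example):

```python
url = "https://arxiv.org/abs/1706.03762"  # example arXiv URL

# Swap only the first occurrence of the hostname to get the HTML render.
html_url = url.replace("arxiv.org", "ar5iv.org", 1)
print(html_url)  # https://ar5iv.org/abs/1706.03762
```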

      • JackbyDev · 1 month ago

        That’s really cool! What I’d really like is a tool that converts PDFs into semantic HTML. I took a peek there, and it seems easier for them because they have the original LaTeX source.

        I think for arbitrary PDF files the information just isn’t there. I’ve looked into it a bit, and the results are sort of all over the place. A tool called pdf2htmlEX is pretty good, but it makes the HTML look exactly like the PDF.

        • keepthepace@slrpnk.net · 1 month ago

          Yes, PDFs are much more permissive and may not have any semantic information at all. Hell, some old publications are just scanned images!

          PDF -> semantic seems to be a hard problem that basically requires OCR, like these people are doing.

          • JackbyDev · 1 month ago

            Oh nice, thanks for sharing that project. I haven’t heard of it before!

          • thevoidzero@lemmy.world · 1 month ago

            Not just semantics. PDFs don’t even have segmentation like spaces, lines, or paragraphs. It’s just text drawn at whatever locations the word processor (or other software) placed it. Many PDF editors simply detect how close characters are to each other and group them together.
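            To make that concrete, here’s a toy sketch of what a PDF content stream looks like and how a viewer has to guess at grouping from proximity (simplified: real Td moves are relative and real streams are usually compressed):

```python
import re

# Simplified sketch of a PDF content stream. Each pair of operators
# positions the cursor (Td) and paints a string (Tj). There are no
# words, lines, or paragraphs -- only glyphs at coordinates.
content_stream = """
BT
/F1 12 Tf
72 700 Td (Hel) Tj
92 700 Td (lo) Tj
72 684 Td (world) Tj
ET
"""

# To recover "lines", group text drawn at the same baseline (same y)
# and read it left to right (sorted by x) -- pure proximity guessing.
draws = re.findall(r"([\d.]+) ([\d.]+) Td \((.*?)\) Tj", content_stream)
baselines = {}
for x, y, text in draws:
    baselines.setdefault(float(y), []).append((float(x), text))

recovered = ["".join(t for _, t in sorted(baselines[y]))
             for y in sorted(baselines, reverse=True)]
print(recovered)  # ['Hello', 'world']
```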

            And going one step further, you can convert text to paths, which leaves no glyph (character) or font info at all; every character is just a geometric shape. In that case you can’t even copy the text, and OCR is your only choice.

            PDF is for finalizing something and printing/sharing without the ability to edit.

    • brianary@startrek.website · 1 month ago

      I’ve always called Word documents and PDFs “dead-end formats” (DEF). Once you export your data to them, there’s no reliable way to retrieve it for further transformation, the way you can with YAML, JSON, XML, HTML, Markdown, &c.
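      For contrast, a quick sketch of why those structured formats aren’t dead ends: serialize, parse, and the data comes back intact, ready for the next transformation.

```python
import json

record = {"title": "Quarterly report", "totals": [1200, 980, 1440]}

# Round-trip: dump to text, parse it back, and the structure survives.
# No PDF or .docx export gives you this for free.
round_tripped = json.loads(json.dumps(record))
print(round_tripped == record)  # True
```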