This rootless Python script rips Windows Recall's screenshots and its SQLite database of OCR'd text, and lets you search them.

  • @[email protected]
71 · 4 months ago

Hilarious to me that it OCRs the text. The text is generated by the computer. It's almost like when Lt. Cmdr. Data wants to get information from the computer database, so he tells the computer to display it and just keeps increasing the speed — there are way more efficient means of getting information from A to B than displaying it, imaging it, and running it through image processing!

    I totally get that this is what makes sense, and it’s independent of the method/library used for generating text, but still…the computer “knows” what it’s displaying (except for images of text), and yet it has to screenshot and read it back.

    • @[email protected]
28 · 4 months ago

The same thing happens on Android, for some reason.

Like 5-8 years ago, the Google Assistant app was able to select and copy text from any app when invoked; I think it was called "Now on Tap". Then, because they're Google and contractually obligated to remove features after some time, they removed it from the Google app and integrated it into the Pixel app switcher (and who cares that 99% of Android users aren't using a Pixel, they say). The new implementation sucks, as it does OCR instead of just accessing the raw text…

It only works well with US English, not with other languages. But maybe that's to be expected, since Google's development style seems US-centric.

      • @[email protected]
13 · 4 months ago

Now on Tap also used OCR. Both Google Lens and Now on Tap get the same bullshit results in any language that doesn't use the Latin script. Literally, Ж gets read as >|< by both, exactly the same.

    • @[email protected]
25 · 4 months ago (edited)

      Hey, yeah… why aren’t they just tapping the font rendering DLL?

Are they tapping the font rendering DLL??

      • @[email protected]
2 · 4 months ago

My guess is that they looked at their screen reader API, saw that it didn't capture 100% of the text on screen, and said "fuck it, we're using OCR!"

    • @[email protected]
24 · 4 months ago

      Having worked on a product that actually did this, it’s not as easy as it seems. There are many ways of drawing text on the screen.

GDI, part of the Windows API, is the most common, but some applications do their own rendering (including browsers).

Another difficulty: even if you could tap into every draw call, you would also need a way to determine what is visible on screen and what is covered by something else.
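The occlusion point is easy to underestimate. Deciding whether a drawn rectangle of text is actually visible means subtracting every window above it in the z-order and checking whether anything survives. A minimal sketch (window geometry invented for illustration; a real implementation would walk the OS z-order):

```python
def intersect(a, b):
    """Intersection of two (left, top, right, bottom) rects, or None."""
    left, top = max(a[0], b[0]), max(a[1], b[1])
    right, bottom = min(a[2], b[2]), min(a[3], b[3])
    return (left, top, right, bottom) if left < right and top < bottom else None

def fully_covered(target, covers):
    """True if `target` is completely hidden by the union of `covers`.

    Repeatedly splits the still-visible fragments of `target` against
    each covering rect; if no fragment survives, the text is occluded.
    """
    remaining = [target]
    for c in covers:
        next_remaining = []
        for r in remaining:
            hit = intersect(r, c)
            if hit is None:
                next_remaining.append(r)
                continue
            l, t, rt, b = r
            # Up to four fragments around the covered part survive.
            if t < hit[1]:
                next_remaining.append((l, t, rt, hit[1]))      # strip above
            if hit[3] < b:
                next_remaining.append((l, hit[3], rt, b))      # strip below
            if l < hit[0]:
                next_remaining.append((l, hit[1], hit[0], hit[3]))  # left
            if hit[2] < rt:
                next_remaining.append((hit[2], hit[1], rt, hit[3]))  # right
        remaining = next_remaining
    return not remaining
```

Multiply this by every draw call from every process and the "just grab the text" approach starts to look less simple than a periodic screenshot plus OCR.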

    • Eager Eagle
      link
      fedilink
      English
      94 months ago

Text from OCR is only one kind of match. Recall also runs visual comparisons against the stored image tokens.

    • @[email protected]
3 · 4 months ago

To be fair, Data was designed to be like a human and was made in the image of his creator. A number of his design decisions come down to his creator wanting to build something human-like, including the one you describe.

      Data was never intended to work like a PC, it’s very normal that he can’t just wirelessly interface with stuff.