• DannyBoy@sh.itjust.works
    21 days ago

    That’s not the worst idea ever. Say a screenshot is 10 MB: 10 MB × 60 per hour × 8 hours = 4800 MB per work day, so 30 days is roughly 150 GB in the worst case. I suppose you could check the previous screenshot and, if it’s the same, not write a new file. Combine that with OCR and a utility to scroll forward and backward through time, and it might be a useful tool.
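
    Roughly, a minimal sketch of that skip-if-unchanged loop, assuming Python with Pillow (the output folder and the one-minute interval are just placeholders):

        # Sketch only: assumes Pillow's ImageGrab works on this platform
        # (Windows/macOS, or X11 on Linux).
        import hashlib
        import time
        from datetime import datetime
        from pathlib import Path

        from PIL import ImageGrab  # pip install pillow

        out_dir = Path("screen_history")
        out_dir.mkdir(exist_ok=True)

        last_hash = None
        while True:
            shot = ImageGrab.grab()                      # full-screen capture
            digest = hashlib.sha256(shot.tobytes()).hexdigest()
            if digest != last_hash:                      # only write when pixels changed
                name = datetime.now().strftime("%Y%m%d-%H%M%S") + ".png"
                shot.save(out_dir / name, optimize=True)
                last_hash = digest
            time.sleep(60)                               # at most one shot per minute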

    • RandomLegend@lemmy.dbzer0.com
      21 days ago

      Are you on 16k resolution or something?

      When I take a screenshot of my 3440x1440 display, it’s about 1 MB. That doesn’t change the core issue, but it shrinks it dramatically.

        • RandomLegend@lemmy.dbzer0.com
          21 days ago

          Also, that 1 MB is at full resolution. You could downscale the images dramatically after you OCR them. Say we shoot at full res, OCR, and then downscale to 50%: still readable for a human, and combined with searchable OCR you’re down to about 7.5 GB for a whole month.

          Absolutely feasible. Call it 8 GB once you include the OCR text and additional metadata, and reserve 10 GB on your system to be doubly sure.

          Now you have 10 GB to track your whole 3440x1440 display.
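
          In practice that could look something like this sketch (assuming pytesseract with a local Tesseract install; the file handling is just illustrative):

              # Sketch: OCR at full resolution first, then shrink the stored image.
              import json
              from pathlib import Path

              import pytesseract      # pip install pytesseract (needs Tesseract installed)
              from PIL import Image   # pip install pillow

              def archive(shot_path: Path) -> None:
                  img = Image.open(shot_path)

                  # Run OCR while every glyph is still at native resolution.
                  text = pytesseract.image_to_string(img)
                  shot_path.with_suffix(".json").write_text(json.dumps({"text": text}))

                  # Halve both dimensions afterwards: still human readable, far smaller on disk.
                  small = img.resize((img.width // 2, img.height // 2))
                  small.save(shot_path, optimize=True)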

      • DannyBoy@sh.itjust.works
        21 days ago

        Once a minute, and only if the screen contents change. I imagine there’s something lightweight enough.

        • MacN'Cheezus@lemmy.today
          21 days ago

          In order to be certified for running Recall, machines currently must have an NPU (Neural Processing Unit, basically an AI coprocessor). I assume that’s what makes it practical, by offloading the required computation from the CPU.

          Apparently it IS possible to circumvent that requirement using a hack, which is what some of the researchers reporting on it have done, but I haven’t read any reports on how that affects CPU usage in practice.

          • wick@lemm.ee
            17 days ago

            Recall analyses each screenshot and uses AI or whatever to add tags to it. I’d assume that’s what the NPU is used for.

      • RandomLegend@lemmy.dbzer0.com
        21 days ago

        You could optimize it though.

        As said in a comment above, check whether the screen looks the same as before and skip the screenshot if nothing changed. Add some rules to filter out video content, so an open YouTube video doesn’t trigger a screenshot every second just because the video is playing.

        Or you could actually integrate this with your window manager: only take a screenshot when you move / resize / open / close a window. Add a small browser extension that tells it to take a screenshot when you scroll, open, or close a page. Then you don’t have to take a screenshot and compare it with the previous one at all.

        This wouldn’t be as thorough as just forcing screenshots all the time, and you’d probably miss things like typing text in LibreOffice, since that doesn’t change anything about the window itself. But it could be a resource-efficient way to do it.

        And if, for example, no screenshot was taken for a minute because nothing called for one, you could just take one regardless. That way you get at least one screenshot per minute, or as often as the window manager / browser asks for it.
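
        The logic could look roughly like this (just a sketch; window_events and its poll() method are hypothetical stand-ins for whatever hook the window manager or browser extension provides):

            # Sketch: event-driven capture with a once-a-minute fallback.
            # `window_events` and its poll() method are hypothetical placeholders
            # for a real window-manager / browser-extension hook.
            import time

            MIN_INTERVAL = 1.0    # ignore event bursts (e.g. a playing video)
            MAX_INTERVAL = 60.0   # take a shot anyway if nothing happened for a minute

            def capture():
                ...  # grab and store a screenshot (see the loop sketched further up)

            def run(window_events):
                last_shot = 0.0
                while True:
                    event = window_events.poll(timeout=MAX_INTERVAL)  # hypothetical API
                    now = time.monotonic()
                    if event is not None and now - last_shot >= MIN_INTERVAL:
                        capture()           # a window moved / resized / opened / closed
                        last_shot = now
                    elif now - last_shot >= MAX_INTERVAL:
                        capture()           # fallback: nothing triggered for a minute
                        last_shot = now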

    • Evotech@lemmy.world
      21 days ago

      That’s what Recall is… it’s literally screenshotting and OCR / AI parsing, combined with an SQLite database.
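
      The storage side of that combination is simple enough to sketch (the schema here is my own guess, not Recall’s actual layout, and it assumes SQLite built with FTS5, which standard Python builds include):

          # Sketch: screenshots indexed by their OCR text in SQLite (FTS5 for search).
          import sqlite3

          db = sqlite3.connect("screen_history.db")
          db.executescript("""
          CREATE TABLE IF NOT EXISTS shots (
              id         INTEGER PRIMARY KEY,
              taken_at   TEXT NOT NULL,      -- ISO timestamp
              image_path TEXT NOT NULL
          );
          -- Full-text index over the OCR output so the history is searchable.
          CREATE VIRTUAL TABLE IF NOT EXISTS shots_fts USING fts5(ocr_text);
          """)

          def index_shot(taken_at, image_path, ocr_text):
              cur = db.execute("INSERT INTO shots (taken_at, image_path) VALUES (?, ?)",
                               (taken_at, image_path))
              db.execute("INSERT INTO shots_fts (rowid, ocr_text) VALUES (?, ?)",
                         (cur.lastrowid, ocr_text))
              db.commit()

          def search(query):
              return db.execute("""
                  SELECT shots.taken_at, shots.image_path
                  FROM shots_fts JOIN shots ON shots.id = shots_fts.rowid
                  WHERE shots_fts MATCH ?
                  ORDER BY shots.taken_at DESC
              """, (query,)).fetchall()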

      • barsquid@lemmy.world
        21 days ago

        I think it would be hugely useful.

        But obviously I don’t want a malware company like Microsoft doing that “for me” (the actual purpose is hyperspecific ads, if not a long-term plan to exfiltrate the data).

        Not sure if I even trust myself with the security that data would require.

      • Cargon@lemmy.ml
        17 days ago

        If only MS used DuckDB then they wouldn’t have such a huge PR disaster on their hands.

    • takeheart@lemmy.world
      21 days ago

      I mean taking the screenshot is the easy part, getting reliable OCR on the other hand …

      In my experience (with Tesseract), current OCR works well for continuous text blocks, but it has a hard time with tables, illustrations, graphs, GUI widgets, etc.
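
      For anyone trying this with Tesseract, the page segmentation mode matters a lot for screenshot-style input; a quick illustration, assuming pytesseract and treating the PSM choice as something to experiment with:

          # Sketch: default layout analysis vs. sparse-text mode. PSM 11 looks for
          # text anywhere on the page instead of assuming one continuous block,
          # which suits scattered GUI text better.
          import pytesseract           # pip install pytesseract (needs Tesseract installed)
          from PIL import Image

          img = Image.open("screenshot.png")          # placeholder file name

          plain = pytesseract.image_to_string(img)                      # default layout analysis
          sparse = pytesseract.image_to_string(img, config="--psm 11")  # sparse text mode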

    • renzev@lemmy.world
      20 days ago

      “I suppose you could check the previous screenshot and if it’s the same”

      Hmmm… this gives me an idea… maybe we could even write a special algorithm that checks whether only certain parts of the picture have changed, and store only those parts, re-using the parts that haven’t changed. It would be a specialized compression algorithm for Moving Pictures. But that sounds difficult; it would probably need a whole Group of Experts to implement. Maybe we could call it something like the Moving Picture Experts Group, or MPEG for short : )