cross-posted from: https://lemmy.ca/post/37011397

!opensource@programming.dev

The popular open-source VLC video player was demonstrated on the floor of CES 2025 with automatic AI subtitling and translation, generated locally and offline in real time. Parent organization VideoLAN shared a video on Tuesday in which president Jean-Baptiste Kempf shows off the new feature, which uses open-source AI models to generate subtitles for videos in several languages.

  • TheRealKuni@lemmy.world · 2 days ago

    And yet they turned down having thumbnails for seeking because it would be too resource intensive. 😐

    • DreamlandLividity@lemmy.world · 2 days ago

      I mean, it would. For example, Jellyfin implements it, but it does so by extracting the pictures ahead of time and saving them. It takes days to do this for my library.

      • fishpen0@lemmy.world · 1 day ago
        Yeah, I do this for Plex as well, and for Stash. I think if the files already exist in the directory, VLC should use them; it's up to you to generate them. That's exactly how cover art for albums worked in VLC for a decade before they added the feature to pull cover art on the fly.

        • DreamlandLividity@lemmy.world · 15 hours ago

          I get what you are saying, but I don't think there is any standardized format for these trickplay images. The same images from Plex would likely not be usable in Jellyfin without converting the metadata (e.g. which time in the video each image belongs to). So VLC probably has no good way to understand trickplay images it didn't make itself.
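The mapping the comment is talking about can be sketched like this; since no shared trickplay format exists, the function and its parameters are hypothetical, assuming only that images were sampled at a fixed interval:

```python
# Hypothetical sketch: locating the pre-extracted thumbnail for a seek
# position. Plex and Jellyfin store this metadata differently, so the
# names here are made up for illustration.

def thumbnail_for_time(seconds: float, interval: float, count: int) -> int:
    """Return the index of the thumbnail covering `seconds`, assuming
    one image was extracted every `interval` seconds."""
    index = int(seconds // interval)
    return min(index, count - 1)  # clamp to the last available image
```

A player that can't recover `interval` (or whatever the source app actually stored) from another app's files has no reliable way to do this lookup, which is the interoperability problem described above.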

    • cley_faye@lemmy.world · 2 days ago

      Video decoding is resource intensive. We're used to it, and we have hardware acceleration for some of it, but pushing out around 52 million pixels every second from a highly compressed data source is not cheap. I'm not sure how the two compare, but small LLM models are not that costly to run if you don't factor in their creation.
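The 52-million figure works out to roughly one 1080p frame 25 times a second:

```python
# Back-of-the-envelope pixel throughput for decoded 1080p video at 25 fps.
width, height, fps = 1920, 1080, 25
pixels_per_second = width * height * fps
print(pixels_per_second)  # prints 51840000, i.e. the ~52 million quoted
```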

      • TheRealKuni@lemmy.world · 1 day ago

        All they'd need to do is generate thumbnails at a fixed interval when the video loads, and make that interval adjustable. It might take a few extra seconds to load a video, so make it off by default if they're worried about the performance hit.

        There are other desktop video players that make this work.
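A minimal sketch of the "thumbnail every N seconds" idea, assuming the player shells out to ffmpeg; the helper names are made up, and the command line is illustrative, not VLC's actual code:

```python
# Sketch: pick seek points at a fixed (user-adjustable) interval, then
# build an ffmpeg command that grabs one scaled-down frame per point.

def seek_points(duration: int, interval: int) -> list[int]:
    """Timestamps (seconds) at which to grab a preview frame."""
    return list(range(0, duration, interval))

def ffmpeg_command(video: str, timestamp: int, out: str) -> list[str]:
    # -ss before -i does a fast keyframe seek; -frames:v 1 grabs a single
    # frame; the scale filter shrinks it to thumbnail size.
    return ["ffmpeg", "-ss", str(timestamp), "-i", video,
            "-frames:v", "1", "-vf", "scale=320:-1", out]
```

Because `-ss` placed before `-i` seeks by keyframe rather than decoding the whole file, extracting a few dozen small frames on load is far cheaper than the full-decode cost discussed above.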