I think a lot of people have heard of OpenAI’s local-friendly Whisper model, but I don’t see enough self-hosters talking about WhisperX, so I’ll hop on the soapbox:

Whisper is extremely good when you have lots of audio with one person talking, but fails hard in a conversational setting with people talking over each other. It’s also hard to sync up transcripts with the original audio.

Enter WhisperX: an improved Whisper pipeline that automatically tags who is talking (speaker diarization) and stamps each line of speech with an accurate timestamp.
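
If you want to see what that buys you, here's roughly the transcribe → align → diarize flow from the WhisperX README (file name, batch size, and HF token are placeholders, and the exact entry points can shift a bit between releases):

```python
import whisperx

device = "cuda"             # or "cpu"
audio_file = "session.mp3"  # placeholder
batch_size = 16

# 1. Transcribe with the batched Whisper backend
model = whisperx.load_model("large-v2", device, compute_type="float16")
audio = whisperx.load_audio(audio_file)
result = model.transcribe(audio, batch_size=batch_size)

# 2. Align the output to get accurate word-level timestamps
model_a, metadata = whisperx.load_align_model(language_code=result["language"], device=device)
result = whisperx.align(result["segments"], model_a, metadata, audio, device)

# 3. Diarize and attach speaker labels (needs a Hugging Face token for the pyannote
#    models; newer releases may expose this under whisperx.diarize instead)
diarize_model = whisperx.DiarizationPipeline(use_auth_token="hf_xxx", device=device)
diarize_segments = diarize_model(audio)
result = whisperx.assign_word_speakers(diarize_segments, result)

for seg in result["segments"]:
    print(f'[{seg["start"]:7.1f}s] {seg.get("speaker", "SPEAKER_??")}: {seg["text"].strip()}')
```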

I’ve found it great for DMing TTRPGs — simply record your session with a conference mic, run a transcript with WhisperX, and pass the output to a long-context LLM for easy session summaries. It’s a great way to avoid slowing down the game by taking notes on minor events and NPCs.
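
The summary step is nothing exotic. Here's a minimal sketch using the openai client pointed at a local OpenAI-compatible server; the endpoint, model name, transcript file, and prompt are all placeholders for whatever you actually run:

```python
from openai import OpenAI

# Any OpenAI-compatible endpoint works (llama.cpp, Ollama, vLLM, or a hosted API).
# The URL and model name below are placeholders.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

with open("session-12-transcript.txt") as f:
    transcript = f.read()

resp = client.chat.completions.create(
    model="llama3.1:70b",  # placeholder; any long-context model
    messages=[
        {"role": "system", "content": "You summarize tabletop RPG session transcripts."},
        {"role": "user", "content": "Summarize the key events, NPCs, and loot from this "
                                    "session, grouped by scene:\n\n" + transcript},
    ],
)
print(resp.choices[0].message.content)
```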

I’ve also used it in a hacky script pipeline to bulk download podcast episodes with yt-dlp, create searchable transcripts, and scrub ads by having an LLM sniff out timestamps to cut with ffmpeg.
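
The ffmpeg half of that is the easy part. A rough sketch, assuming the LLM has already handed back a list of (start, end) ad spans in seconds (file names are made up):

```python
import subprocess

EPISODE = "episode.mp3"                        # placeholder input
AD_SPANS = [(312.0, 375.5), (1810.2, 1872.0)]  # (start, end) seconds flagged as ads

# Total duration, so we can keep everything after the last ad break.
duration = float(subprocess.check_output(
    ["ffprobe", "-v", "error", "-show_entries", "format=duration",
     "-of", "default=noprint_wrappers=1:nokey=1", EPISODE], text=True))

# Invert the ad spans into the spans we want to keep.
keep, cursor = [], 0.0
for start, end in sorted(AD_SPANS):
    if start > cursor:
        keep.append((cursor, start))
    cursor = max(cursor, end)
if cursor < duration:
    keep.append((cursor, duration))

# Cut each kept span (stream copy is fine for mp3), then stitch them back together.
parts = []
for i, (start, end) in enumerate(keep):
    part = f"part_{i}.mp3"
    subprocess.run(["ffmpeg", "-y", "-i", EPISODE, "-ss", str(start), "-to", str(end),
                    "-c", "copy", part], check=True)
    parts.append(part)

with open("parts.txt", "w") as f:
    f.writelines(f"file '{p}'\n" for p in parts)
subprocess.run(["ffmpeg", "-y", "-f", "concat", "-safe", "0",
                "-i", "parts.txt", "-c", "copy", "episode_clean.mp3"], check=True)
```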

Privacy-friendly, modest hardware requirements, and good at what it does. WhisperX, apply directly to the forehead.

  • Optional@lemmy.world · 30 days ago

    I’ve also used it in a hacky script pipeline to bulk download podcast episodes with yt-dlp, create searchable transcripts, and scrub ads by having an LLM sniff out timestamps to cut with ffmpeg.

    This is genius. Could you appify this and I’ll pay you in real or pretend currency as you prefer

    I’ve found it great for DMing TTRPGs — simply record your session with a conference mic, run a transcript with WhisperX, and pass the output to a long-context LLM for easy session summaries. It’s a great way to avoid slowing down the game by taking notes on minor events and NPCs.

    Okay that’s just crazy. ;)

    • Justin@lemmy.jlh.name · 30 days ago

      Probably not that hard to build a simple flask frontend around it.
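
      Something like this minimal sketch would cover the basics: upload a file, shell out to the whisperx CLI, return the text. Flags are from memory, so check whisperx --help; the HF_TOKEN env var is only needed for diarization:

      ```python
      import os
      import subprocess
      import tempfile

      from flask import Flask, request

      app = Flask(__name__)

      @app.route("/transcribe", methods=["POST"])
      def transcribe():
          # Save the uploaded audio, run whisperx on it, return the plain-text transcript.
          upload = request.files["audio"]
          with tempfile.TemporaryDirectory() as tmp:
              path = os.path.join(tmp, upload.filename)
              upload.save(path)
              subprocess.run(["whisperx", path, "--diarize",
                              "--hf_token", os.environ["HF_TOKEN"],
                              "--output_dir", tmp, "--output_format", "txt"], check=True)
              with open(os.path.splitext(path)[0] + ".txt") as f:
                  return f.read(), 200, {"Content-Type": "text/plain"}

      if __name__ == "__main__":
          app.run(host="0.0.0.0", port=5000)
      ```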

      Automatically processing files in an S3/WebDAV directory would also be useful.
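
      For the watch-folder part, a dumb polling loop over an rclone mount of the S3/WebDAV share would probably get you most of the way (paths are placeholders):

      ```python
      import subprocess
      import time
      from pathlib import Path

      WATCH_DIR = Path("/mnt/podcasts")            # e.g. an rclone mount of the remote share
      OUT_DIR = Path("/mnt/podcasts/transcripts")
      OUT_DIR.mkdir(exist_ok=True)

      while True:
          for audio in WATCH_DIR.glob("*.mp3"):
              if (OUT_DIR / (audio.stem + ".txt")).exists():
                  continue  # already transcribed
              subprocess.run(["whisperx", str(audio), "--output_dir", str(OUT_DIR),
                              "--output_format", "txt"], check=True)
          time.sleep(300)  # poll every 5 minutes
      ```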

  • irmadlad@lemmy.world · 30 days ago

    What would be some use cases for WhisperX? I’m struggling to envision how I would use that in a selfhosting/homelabbing environment.

    • fatalicus@lemmy.world · 29 days ago

      I’m personally looking at setting up Whisper or WhisperX with bazarr, to generate subtitles for movies and series that I can’t find any existing subs to download for.
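
      If the bazarr route doesn't pan out, wiring it up by hand isn't bad either; WhisperX's segments map almost directly onto the SRT format. Rough sketch:

      ```python
      def srt_time(seconds: float) -> str:
          # SRT timestamps look like 00:01:23,456
          total_ms = int(round(seconds * 1000))
          h, rem = divmod(total_ms, 3_600_000)
          m, rem = divmod(rem, 60_000)
          s, ms = divmod(rem, 1000)
          return f"{h:02}:{m:02}:{s:02},{ms:03}"

      def segments_to_srt(segments) -> str:
          # `segments` is the list WhisperX returns: dicts with "start", "end", "text".
          blocks = []
          for i, seg in enumerate(segments, 1):
              blocks.append(f"{i}\n{srt_time(seg['start'])} --> {srt_time(seg['end'])}\n"
                            f"{seg['text'].strip()}\n")
          return "\n".join(blocks)
      ```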

    • TheFogan@programming.dev · 30 days ago

      Half sarcastic, but there’s the overall premise of rigging something into a local voice assistant: when an argument starts, “OK Nabu, record this conversation”, then two weeks later, during another argument, “OK Nabu, search our last argument for the cabinet”. It would be like having a court transcriber on call.

      • irmadlad@lemmy.world · 30 days ago

        I have a lady friend who does a good enough job of that. LOL

        ‘You remember back in 1979…it was a Friday at 2:11 PM, and you said…’ ‘Babe, I don’t remember what I had for breakfast yesterday.’

          • irmadlad@lemmy.world · 30 days ago

            What kind of stupid-ass question is that? LOL All kidding aside, she’s a good soul. We’re not married, we’ve just known each other for 45+ years. It just kind of clicked. She lives in her house, and I in mine, and we get together as often as possible.

  • wise_pancake@lemmy.ca · 30 days ago

    That is cool! I’ve been wanting to use a model like this but haven’t really looked.

    Are you self-hosting the long-context LLM, or if not, what are you using?

    Context lengths are what kill a lot of my local LLM experiments.

    • dgdft@lemmy.world (OP) · 29 days ago

      Are you self-hosting the long-context LLM, or if not, what are you using?

      I did a lot of my exploration back when GPT-4 128K over the API was the only long-context game in town.

      I imagine the options are much better these days between Llama 3/4, Deepseek, and Qwen — but haven’t tried them locally myself.