
subgen's People

Contributors

actions-user, benjroy, clach04, ellisonpatterson, emalton, manifestfailure, mcclouds, pearlythepirate, reznakt, rikiar73574, sdspieg, sjafferali, xhzhu0628


subgen's Issues

Error: CUDA failed with error CUDA driver version is insufficient for CUDA runtime version

Hello,

I am trying to get this going on Windows to use with NVIDIA GPUs.

The default mccloud/subgen:cuda image fails with

subgen  | Installing whisper...
subgen  | Requirement already satisfied: whisper in /usr/local/lib/python3.10/dist-packages (1.1.10)
subgen  | Requirement already satisfied: six in /usr/local/lib/python3.10/dist-packages (from whisper) (1.16.0)
subgen  | WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
subgen  | whisper has been successfully installed.
subgen  | Traceback (most recent call last):
subgen  |   File "/subgen/./subgen.py", line 43, in <module>
subgen  |     import ffmpeg
subgen  | ModuleNotFoundError: No module named 'ffmpeg'

So I added ffmpeg-python to subgen.py and changed Dockerfile.cuda to point to ./subgen/subgen.py, which got me past that error, but I subsequently run into this CUDA error when attempting transcription:

CUDA failed with error CUDA driver version is insufficient for CUDA runtime version

I tried messing around with updates in the container, etc., but had no success.

Any advice on getting CUDA working? Is it related to building the container myself rather than using your image?

Thanks!
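For reference, "driver version is insufficient" usually means the host NVIDIA driver is older than the CUDA runtime baked into the image, so updating the host driver is the usual fix. A minimal compose sketch for handing an NVIDIA GPU to the container, assuming the NVIDIA Container Toolkit is installed on the host:

services:
  subgen:
    image: mccloud/subgen:cuda
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]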

Dependency Error

ERROR:fastapi:Form data requires "python-multipart" to be installed. 

You can install "python-multipart" with: 

pip install python-multipart

Almost working... on Unraid

What a wonderful app! I've tried to add it on my Unraid server, using Plex and Bazarr, to get French and Spanish subs.

What I've done so far:

1/ Used Docker Compose with the file below. Downloaded and installed.

#docker-compose.yml
version: '2'
services:
  subgen:
    container_name: subgen
    tty: true
    image: mccloud/subgen
    environment:
      - "WHISPER_MODEL=medium"
      - "WHISPER_THREADS=4"
      - "PROCADDEDMEDIA=True"
      - "PROCMEDIAONPLAY=False"
      - "NAMESUBLANG=aa"
      - "SKIPIFINTERNALSUBLANG=fra"
      - "PLEXTOKEN=tokenhere"
      - "PLEXSERVER=http://plexserver:32400"
      - "JELLYFINTOKEN=token here"
      - "JELLYFINSERVER=http://jellyfin:8096"
      - "WEBHOOKPORT=8090"
      - "CONCURRENT_TRANSCRIPTIONS=2"
      - "WORD_LEVEL_HIGHLIGHT=False"
      - "DEBUG=True"
      - "USE_PATH_MAPPING=False"
      - "PATH_MAPPING_FROM=/tv"
      - "PATH_MAPPING_TO=/Volumes/TV"
      - "TRANSCRIBE_DEVICE=cpu"
      - "CLEAR_VRAM_ON_COMPLETE=True"
      - "HF_TRANSFORMERS=False"
      - "HF_BATCH_SIZE=24"
      - "MODEL_PATH=./models"
      - "UPDATE=False"
      - "APPEND=False"
    volumes:
      - /mnt/user/video/films
      - /mnt/user/video/series
    ports:
      - "9000:9000"

I'm not 100% sure about the volumes (I added the complete path) or the path mapping from/to (my "series" volume is mapped under "TV" in Sonarr/Bazarr, but I don't know if subgen will find it).

2/ I've added it to Bazarr as a local server (127. ...) and opened a port on my router.
I've also added it as a webhook.

Anyway, when I search for a sub with Bazarr, nothing is found. Same thing on Plex.

Help?
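One note on the compose file above: Docker volume entries normally need an explicit host:container mapping, and subgen has to see the same container-side paths that Plex/Bazarr report (or have them translated via the PATH_MAPPING variables). A sketch, with illustrative container-side names:

    volumes:
      - "/mnt/user/video/films:/films"
      - "/mnt/user/video/series:/tv"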

[FEATURE] Jellyfin/bazarr/filesystem support

Having an option for Plex is cool; however, being more universal and supporting other media providers would be amazing.
Here are a couple of ways you could do that.

  1. Direct Jellyfin implementation. Jellyfin has an API (https://api.jellyfin.org/) that should be able to get what subgen needs to generate the files. (It might take a couple of calls, though.) (Jellyfin also has plugins for subs, which might be a good place to look.)
  2. Bazarr. It is a tool used to download human-made subs from multiple sites. It would be much better for users who need subs, because human-made ones are (mostly) higher quality, and people who use subs a lot already have it. Bazarr works by polling its providers, so you would have to expose an API call to generate subs and then have Bazarr download them. It would also require writing a Bazarr provider implementation.
  3. Filesystem watching. Adding an agent that watches the file system and generates subs for all media files in a folder (see the sketch below).
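A minimal sketch of option 3, using the Python watchdog package; generate_subtitles is a hypothetical stand-in for whatever subgen exposes as its transcription entry point:

import time
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

MEDIA_EXTENSIONS = (".mkv", ".mp4", ".avi")

class MediaHandler(FileSystemEventHandler):
    def on_created(self, event):
        # only react to new media files, not directories or sidecar files
        if not event.is_directory and event.src_path.endswith(MEDIA_EXTENSIONS):
            generate_subtitles(event.src_path)  # hypothetical entry point

observer = Observer()
observer.schedule(MediaHandler(), "/media", recursive=True)
observer.start()
try:
    while True:
        time.sleep(1)
finally:
    observer.stop()
    observer.join()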

[FYI for others] Getting language detected properly + Running on local files

Thanks for building this @McCloudS! This isn't an actual problem (and would probably be better for a wiki section of this repo if you ever add one). My hope is that this helps future travelers along this journey :)

Language Detection Issues

For anyone else who comes across this project and is having trouble getting Whisper to detect the proper language of the audio track: keep in mind that stable-ts uses the first 30 seconds of audio to determine the language. In my case that was music, which defaulted to English even though the video was in Japanese.

There are more elegant ways to do this, but if you want to get it working quickly, you can change the model call in gen_subtitles() in subgen.py to pass along the proper, hard-coded language code, like this:

result = model.transcribe_stable(file_path, language = "ja", task=transcribe_or_translate_str)

A list of all the language codes can be found at the bottom of subgen.py

A Dockerfile which uses your new, modified subgen.py might look like this:

FROM ubuntu:latest

WORKDIR subgen

ADD subgen/subgen.py /subgen/subgen.py

RUN apt-get update && apt-get -y install python3 python3-pip ffmpeg

ENV PYTHONUNBUFFERED 1

CMD [ "python3", "-u", "./subgen.py" ]

EXPOSE 8090

Running on local files without dealing with Plex/Emby/Jellyfin/etc

The author was kind enough to think about this use case and added an environment variable called TRANSCRIBE_FOLDERS, which just needs a path to files that need to be processed. You don't need to do anything with a media server or even set the PATH_MAPPING variables.

A sample docker-compose.yml that gets this working might look like this:

version: '2'
services:
  subgen:
    container_name: subgen
    tty: true
    image: subgencpu:1 #Note this is a local image I built from the Dockerfile mentioned in the issue above.
    environment:
       - "WHISPER_MODEL=medium"
       - "WHISPER_THREADS=4"
       - "PROCADDEDMEDIA=True"
       - "PROCMEDIAONPLAY=False"
       - "NAMESUBLANG=aa"
       - "SKIPIFINTERNALSUBLANG=eng"
       - "WEBHOOKPORT=8090"
       - "CONCURRENT_TRANSCRIPTIONS=2"
       - "WORD_LEVEL_HIGHLIGHT=False"
       - "USE_PATH_MAPPING=False"
       - "TRANSCRIBE_DEVICE=cpu"
       - "TRANSCRIBE_FOLDERS=/mnt/media/"
    volumes:
       - "/home/user/media:/mnt/media/"
  
    ports:
       - "8090:8090"

How do I mount multiple drives?

Hi,

So currently, I have Plex installed on Windows. I have Whisper ASR webservice installed in Docker. I use Bazarr, but I'd like to implement and play with this a bit.

That being said, I have multiple drives. All of them have a "Media" folder in them with subfolders "TV Shows", "Anime", "AnimeMovies", "Movies" and "Music". Plex is mapped to use TV Shows library with all of the "TV Shows" folders present inside "Media" on each drive.

So my question is how would I map this setup to a subgen Docker container in docker-compose.yml?

Thanks in advance for any help. Not sure if this is the right place for this.
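A sketch of one way to do this, assuming the drives are D:, E:, and F: and each carries a Media folder (drive letters and container-side paths are illustrative); if the container-side paths don't match what Plex reports, the PATH_MAPPING variables can translate them:

services:
  subgen:
    image: mccloud/subgen
    volumes:
      - "D:/Media:/media/d"
      - "E:/Media:/media/e"
      - "F:/Media:/media/f"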

Replicate Bazarr Whisper Functionality/Webhooks

I've attempted to replicate the existing capability from

https://wiki.bazarr.media/Additional-Configuration/Whisper-Provider/ which uses the code from

https://github.com/ahmetoner/whisper-asr-webservice/blob/51c6eceda0836d145048224693c69c2706d78f46/app/webservice.py#L61-L78

The code block called from Bazarr is

https://github.com/morpheus65535/bazarr/blob/a09cc34e09407b8a2338d1034de7f8ff8fc91b19/libs/subliminal_patch/providers/whisperai.py#L284-L295

The code block I came up with:

@app.route("/asr", methods=["POST"])
def asr():
    logging.debug("This hook is from the asr webhook!")
    logging.debug("Headers: %s", request.headers)
    logging.debug("Raw response: %s", request.data)
    task = request.args.get("task", default="transcribe")
    language = request.args.get("language")
    initial_prompt = request.args.get("initial_prompt")
    encode = request.args.get("encode", type=bool, default=True)
    output = request.args.get("output", default="txt")
    word_timestamps = request.args.get("word_timestamps", type=bool, default=False)
    audio_file = request.files.get("audio_file")

    if audio_file:
        # audio_file is a FileStorage object; use its filename when building paths
        filename = audio_file.filename
        print(f"Transcribing file: {filename}")
        start_time = time.time()
        result = model.transcribe_stable(audio_file)
        result.to_srt_vtt("/tmp/" + filename + ".srt", word_level=word_level_highlight)
        elapsed_time = time.time() - start_time
        minutes, seconds = divmod(int(elapsed_time), 60)
        print(f"Transcription of {filename} is completed, it took {minutes} minutes and {seconds} seconds to complete.")

        # Return the result as a file download
        return send_file(
            "/tmp/" + filename + ".srt",
            as_attachment=True,
            download_name=f"{filename}.{output}",
            mimetype="text/plain",
            conditional=True,
        )

    return "Audio file not provided."

In the Bazarr logs I'm getting 2023-10-24 22:22:49,787 - retry.api (14dee3942b38) : WARNING (api:40) - HTTPConnectionPool(host='192.168.111', port=8090): Max retries exceeded with url: /asr?task=transcribe&language=en&output=srt&encode=false (Caused by ConnectTimeoutError(<urllib3.connection.HTTPConnection object at 0x14dee0988610>, 'Connection to 192.168.111 timed out. (connect timeout=9000)')), retrying in 5 seconds...

I'm at a loss. I can't even get a webhook response from Bazarr when trying to manually generate a subtitle using Whisper as the provider. The machines can ping each other. I'm sure I'm missing something simple.
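One observation: the log above shows Bazarr trying to reach host '192.168.111', which is only three octets, so the provider address may simply be mistyped. A minimal sketch for exercising the /asr endpoint outside of Bazarr (host, port, and file name are illustrative):

import requests

# post a sample file to the endpoint the Bazarr provider would call
with open("sample.wav", "rb") as f:
    resp = requests.post(
        "http://192.168.1.111:8090/asr",
        params={"task": "transcribe", "language": "en", "output": "srt", "encode": "false"},
        files={"audio_file": f},
    )
resp.raise_for_status()
with open("sample.srt", "wb") as out:
    out.write(resp.content)  # the endpoint returns the generated subtitle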

Feature to just translate

Hi, is it possible to add an option to just translate?
A lot of media comes with proper English subtitles, or good ones are easy to find, so there is no need for transcribing; the AI only needs to translate into another language.

Setting up via Unraid

I'm a bit lost on how to set this up via Unraid. I tried to use Add Container, but it would not take any repository link. I get the idea of how the file directory is supposed to be laid out, but I'm just unable to add it via a container, if that makes sense.

Subgen Container HTTP 404 error

Me again! Thank you for your help last time. You were very helpful before!

So I updated my container and eliminated Tautulli

For some reason, when I start playing a file in Plex I get this:

I do have subgen running on the HOST network; that did not solve the issue :(

[screenshot omitted]

My docker compose:

#docker-compose.yml
version: '2'

services:
  subgen:
    container_name: subgen
    network_mode: host
    image: mccloud/subgen
    environment:
      - "WHISPER_MODEL=medium"
      - "WHISPER_SPEEDUP=False"
      - "WHISPER_THREADS=4"
      - "WHISPER_PROCESSORS=1"
      - "PROCADDEDMEDIA=True"
      - "PROCMEDIAONPLAY=True"
      - "NAMESUBLANG=AI-GENERATED"
      - "UPDATEREPO=True"
      - "SKIPIFINTERNALSUBLANG=eng"
      - "PLEXTOKEN=y9Axi5-Wyxxxxxxxx"
      - "PLEXSERVER=http://10.0.0.2:32400"
      - "WEBHOOKPORT=8090"
    volumes:
      - "/mnt/user/tv:/tv"
      - "/mnt/user/movies:/movies"
      - "/mnt/user/appdata/subgen:/whisper.cpp"

I don't understand how it works

I don't understand if something else needs to be configured. The logs are fine, the container receives the requests, and the path mapping is correct. It is set to debug=true and no errors appear. The webhook in Emby is configured with all options. I don't understand. Do I need to configure anything else?

Install issue "No module named 'fastapi'"

I've been trying to set up the Docker image on a Synology NAS. I keep getting this error (see below). I'm using Docker Compose and I've tried all sorts of variations of the repo's docker-compose.yml.

USER@SERVER:/volume1/docker/appdata$ sudo docker-compose up subgen
[+] Running 1/0
⠿ Container subgen Created 0.0s
Attaching to subgen
subgen | Traceback (most recent call last):
subgen | File "/subgen/./subgen.py", line 29, in
subgen | from fastapi import FastAPI, File, UploadFile, Query, Header, Body, Form, Request
subgen | ModuleNotFoundError: No module named 'fastapi'
subgen exited with code 1

Bazarr Connection aborted, timeout

Hi, I set the container up, and I get the "Not found" page if I open port 8090 in the web browser.

But in Bazarr I get this error in the logs when I try to use it as a provider:

2023-12-20T20:08:25.963304814Z 2023-12-20 21:08:25,962 - retry.api (7fc1fa395b38) : WARNING (api:40) - ('Connection aborted.', TimeoutError('timed out')), retrying in 5 seconds...
2023-12-20T20:08:36.145970006Z 2023-12-20 21:08:36,145 - retry.api (7fc1fa395b38) : WARNING (api:40) - ('Connection aborted.', TimeoutError('timed out')), retrying in 5 seconds...
2023-12-20T20:08:46.360092257Z 2023-12-20 21:08:46,359 - root (7fc1fa395b38) : INFO (get_providers:366) - Throttling whisperai for 24 hours, until 23/12/21 21:08, because of: ConnectionError. Exception info: "'('Connection aborted.', TimeoutError('timed out'))' ~ whisperai.py@233"

AA.srt not showing in Subtitle list in Plex.

Hello. I have this installed in Unraid. I can see in the live log it generating the subtitles, and I can see what the people are saying.

I can also see the file it generates next to the episode file (e.g. NameOfEpisode.mkv.output.wav).

However, in Plex on my Shield TV and in the Plex app on PopOS, I do not see the AA.srt subtitle listed for me to select.

Error In ffmpeg on docker container

Recently, when I try to do a docker-compose up -d, it gets stuck in the middle of the process; running the container log displays:

whisper has been successfully installed.
Traceback (most recent call last):
File "/subgen/./subgen.py", line 43, in
import ffmpeg
ModuleNotFoundError: No module named 'ffmpeg'

And it gets stuck here.

Any way to force a language in case of wrong Whisper autodetection?

Hi

Thanks for the great work and the very useful tool (I'm amazed that it works on my 2018 NAS).

Faster-whisper sometimes picks the wrong language.
Is there a way to force the language?

Here is an example: French audio wrongly detected as English (score of .56).

[screenshot omitted]

I should pass --language fr to faster-whisper. One way could be to add "FR" somewhere at the end of the file name, right before the file extension?
openai/whisper#529
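Worth noting: the subgen.py settings quoted in a later issue include a FORCE_DETECTED_LANGUAGE_TO variable, which looks like the intended knob for this. A compose-style sketch, assuming the variable behaves as its name suggests:

services:
  subgen:
    environment:
      - "FORCE_DETECTED_LANGUAGE_TO=fr"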

CUDA failed with error out of memory

Get the error:

INFO:root:Error processing or transcribing /movies/The Hollywood Revue of 1929 (1929)/The.Hollywood.Revue.of.1929.1929.DVDRip.XviD-BBM(iLC).avi: CUDA failed with error out of memory
My compose file includes:

- "TRANSCRIBE_DEVICE=gpu"

I request that if the transcription fails with this error, it fall back to using the CPU.

Otherwise, thanks so much for this program. I started to try Whisper on my own and was not happy with the results. Thank you so much for doing all the hard work for me.
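A sketch of what the requested fallback could look like, assuming stable-ts's load_faster_whisper loader and that CUDA out-of-memory surfaces as a RuntimeError (both are assumptions about the versions in use):

import stable_whisper

def transcribe_with_fallback(file_path, model_name="medium"):
    # try the GPU first, as TRANSCRIBE_DEVICE=gpu does today
    try:
        model = stable_whisper.load_faster_whisper(model_name, device="cuda")
        return model.transcribe_stable(file_path)
    except RuntimeError as e:
        if "out of memory" not in str(e).lower():
            raise
        # VRAM exhausted: retry the same file on the CPU
        model = stable_whisper.load_faster_whisper(model_name, device="cpu")
        return model.transcribe_stable(file_path)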

Getting error due to renamed gen_subtitles call

Getting the below error after setting this up for the first time. It seems like this function may have been accidentally renamed in the latest update: f2a874b

2024-03-20T01:53:11.356070262Z INFO:     172.25.10.1:60572 - "POST /jellyfin HTTP/1.1" 500 Internal Server Error
2024-03-20T01:53:11.360051873Z ERROR:    Exception in ASGI application
2024-03-20T01:53:11.360069320Z Traceback (most recent call last):
2024-03-20T01:53:11.360071443Z   File "/usr/local/lib/python3.10/dist-packages/uvicorn/protocols/http/h11_impl.py", line 407, in run_asgi
2024-03-20T01:53:11.360073498Z     result = await app(  # type: ignore[func-returns-value]
2024-03-20T01:53:11.360083352Z   File "/usr/local/lib/python3.10/dist-packages/uvicorn/middleware/proxy_headers.py", line 69, in __call__
2024-03-20T01:53:11.360085258Z     return await self.app(scope, receive, send)
2024-03-20T01:53:11.360087067Z   File "/usr/local/lib/python3.10/dist-packages/fastapi/applications.py", line 1054, in __call__
2024-03-20T01:53:11.360088799Z     await super().__call__(scope, receive, send)
2024-03-20T01:53:11.360090450Z   File "/usr/local/lib/python3.10/dist-packages/starlette/applications.py", line 123, in __call__
2024-03-20T01:53:11.360092174Z     await self.middleware_stack(scope, receive, send)
2024-03-20T01:53:11.360094362Z   File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/errors.py", line 186, in __call__
2024-03-20T01:53:11.360096155Z     raise exc
2024-03-20T01:53:11.360097720Z   File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/errors.py", line 164, in __call__
2024-03-20T01:53:11.360099590Z     await self.app(scope, receive, _send)
2024-03-20T01:53:11.360101244Z   File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/exceptions.py", line 62, in __call__
2024-03-20T01:53:11.360102909Z     await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
2024-03-20T01:53:11.360104732Z   File "/usr/local/lib/python3.10/dist-packages/starlette/_exception_handler.py", line 64, in wrapped_app
2024-03-20T01:53:11.360106470Z     raise exc
2024-03-20T01:53:11.360108017Z   File "/usr/local/lib/python3.10/dist-packages/starlette/_exception_handler.py", line 53, in wrapped_app
2024-03-20T01:53:11.360109904Z     await app(scope, receive, sender)
2024-03-20T01:53:11.360111549Z   File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 758, in __call__
2024-03-20T01:53:11.360113375Z     await self.middleware_stack(scope, receive, send)
2024-03-20T01:53:11.360115073Z   File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 778, in app
2024-03-20T01:53:11.360116757Z     await route.handle(scope, receive, send)
2024-03-20T01:53:11.360118480Z   File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 299, in handle
2024-03-20T01:53:11.360120208Z     await self.app(scope, receive, send)
2024-03-20T01:53:11.360121963Z   File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 79, in app
2024-03-20T01:53:11.360123684Z     await wrap_app_handling_exceptions(app, request)(scope, receive, send)
2024-03-20T01:53:11.360125840Z   File "/usr/local/lib/python3.10/dist-packages/starlette/_exception_handler.py", line 64, in wrapped_app
2024-03-20T01:53:11.360127686Z     raise exc
2024-03-20T01:53:11.360129381Z   File "/usr/local/lib/python3.10/dist-packages/starlette/_exception_handler.py", line 53, in wrapped_app
2024-03-20T01:53:11.360131029Z     await app(scope, receive, sender)
2024-03-20T01:53:11.360134626Z   File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 74, in app
2024-03-20T01:53:11.360136344Z     response = await func(request)
2024-03-20T01:53:11.360138011Z   File "/usr/local/lib/python3.10/dist-packages/fastapi/routing.py", line 278, in app
2024-03-20T01:53:11.360139862Z     raw_response = await run_endpoint_function(
2024-03-20T01:53:11.360141544Z   File "/usr/local/lib/python3.10/dist-packages/fastapi/routing.py", line 193, in run_endpoint_function
2024-03-20T01:53:11.360143390Z     return await run_in_threadpool(dependant.call, **values)
2024-03-20T01:53:11.360145082Z   File "/usr/local/lib/python3.10/dist-packages/starlette/concurrency.py", line 42, in run_in_threadpool
2024-03-20T01:53:11.360146735Z     return await anyio.to_thread.run_sync(func, *args)
2024-03-20T01:53:11.360148527Z   File "/usr/local/lib/python3.10/dist-packages/anyio/to_thread.py", line 56, in run_sync
2024-03-20T01:53:11.360150139Z     return await get_async_backend().run_sync_in_worker_thread(
2024-03-20T01:53:11.360151868Z   File "/usr/local/lib/python3.10/dist-packages/anyio/_backends/_asyncio.py", line 2144, in run_sync_in_worker_thread
2024-03-20T01:53:11.360153691Z     return await future
2024-03-20T01:53:11.360155371Z   File "/usr/local/lib/python3.10/dist-packages/anyio/_backends/_asyncio.py", line 851, in run
2024-03-20T01:53:11.360157111Z     result = context.run(func, *args)
2024-03-20T01:53:11.360158657Z   File "/subgen/subgen.py", line 266, in receive_jellyfin_webhook
2024-03-20T01:53:11.360160443Z     titles(path_mapping(fullpath), transcribe_or_translate, True)
2024-03-20T01:53:11.360162143Z NameError: name 'titles' is not defined

Missing random include

Error: Error processing or transcribing Bazarr audio_file: name 'random' is not defined

Resolution: add to the top of subgen.py: import random

Running in Kubernetes

For anyone looking to get this running in kubernetes. Here's a helm release you can play with:

apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: &appname subgen
  namespace: &namespace media
spec:
  releaseName: *appname
  chart:
    spec:
      chart: app-template
      version: 2.4.0
      sourceRef:
        kind: HelmRepository
        name: bjw-s-charts
        namespace: flux-system
  interval: 6m
  values:
    defaultPodOptions:
      securityContext:
        runAsUser: 0
        runAsGroup: 0
        fsGroup: 0
        fsGroupChangePolicy: OnRootMismatch

      nodeSelector:
        node-role.kubernetes.io/worker: 'true'
    controllers:
      main:
        type: statefulset
        annotations:
          reloader.stakater.com/auto: 'true'
        containers:
          main:
            image:
              repository: mccloud/subgen
              tag: cpu
              pullPolicy: Always
            env:
              TZ: ${TIMEZONE}
              TRANSCRIBE_DEVICE: "cpu"
              WHISPER_MODEL: "medium"
              CONCURRENT_TRANSCRIPTIONS: "2"
              WHISPER_THREADS: "4"
              PROCADDEDMEDIA: True # will gen subtitles for all media added regardless of existing external/embedded subtitles (based off of SKIPIFINTERNALSUBLANG)
              PROCMEDIAONPLAY: True # will gen subtitles for all played media regardless of existing external/embedded subtitles (based off of SKIPIFINTERNALSUBLANG)
              NAMESUBLANG: "AUTO" # allows you to pick what it will name the subtitle. Instead of using EN, I'm using AA, so it doesn't mix with existing external EN subs, and AA will populate higher on the list in Plex.
              SKIPIFINTERNALSUBLANG: 'eng' #Will not generate a subtitle if the file has an internal sub matching the 3 letter code of this variable
              PLEXSERVER: "http://plex.media.svc.cluster.local:32400"
            envFrom:
              - secretRef:
                  name: *appname
            probes:
              liveness: &probes
                enabled: false
                custom: true
                spec:
                  httpGet:
                    path: /
                    port: &port 8090

                  initialDelaySeconds: 0
                  periodSeconds: 10
                  timeoutSeconds: 1
                  failureThreshold: 3
              readiness: *probes
              startup:
                enabled: false


    service:
      main:
        ports:
          http:
            port: 8090
        type: ClusterIP

    volumeClaimTemplates:
      - name: config
        mountPath: /config
        accessMode: ReadWriteOnce
        size: 5Gi
        storageClass: longhorn
    ingress:
      main:
        enabled: true
        annotations:
          hajimari.io/enable: 'true'
          hajimari.io/icon: television-box
          hajimari.io/group: Media
          cert-manager.io/cluster-issuer: letsencrypt-production
          traefik.ingress.kubernetes.io/router.entrypoints: websecure
          traefik.ingress.kubernetes.io/router.middlewares: networking-chain-authelia@kubernetescrd
        hosts:
          - host: &uri subgen.${SECRET_DEV_DOMAIN}
            paths:
              - path: /
                pathType: Prefix
                service:
                  name: main
                  port: http
        tls:
          - hosts:
              - *uri
            secretName: *uri
        className: traefik
    persistence:
      omoikane:
        enabled: true
        type: custom
        volumeSpec:
          nfs:
            server: ${NAS_ADDR}
            path: /volume1/omoikane
        globalMounts:
          - path: /omoikane
      backups:
        enabled: true
        type: custom
        volumeSpec:
          nfs:
            server: ${NAS_ADDR}
            path: ${NFS_ARR}
        globalMounts:
          - path: /config/Backups
      downloads:
        enabled: true
        type: custom
        volumeSpec:
          nfs:
            server: ${NAS_ADDR}
            path: /volume2/downloads
        globalMounts:
          - path: /downloads
    podLabels:
      app: *appname

Docker-compose issue

Hello,
The Docker build does not seem to finish. Here are the last lines of the logs:

whisper has been successfully installed.

Traceback (most recent call last):

  File "/subgen/./subgen.py", line 40, in <module>

    import stable_whisper

ModuleNotFoundError: No module named 'stable_whisper'

Any idea ?

Could someone explain how to install?

Hi,
I am able to follow some of the instructions:
Plex and Tautulli are connected (both in Docker on Synology). I can create the webhook with no issues (I have another one working that sends what I watch on Plex to trakt.tv).
But for the rest I am not sure how to proceed; when I read this:
"The docker-compose/Dockerfile settings are relatively straightforward (and poorly commented)." I am lost.

I am thinking maybe without Docker, via the terminal, adding:
"wget https://raw.githubusercontent.com/McCloudS/subgen/main/subgen/subgen_nodocker.py
apt-get update && apt-get install -y ffmpeg git gcc python3
pip3 install webhook_listener
python3 -u subgen_nodocker.py"

But then how can I edit the volume locations?
Sorry if this is too obvious. If someone can point me in the right direction I will appreciate it.
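On the volume question: without Docker there are no container volumes; the script works on your real filesystem paths, so it is mostly a matter of pointing the environment variables at them before launching. A sketch (paths are illustrative; TRANSCRIBE_FOLDERS is the variable mentioned elsewhere in this tracker for processing local folders):

export TRANSCRIBE_FOLDERS=/volume1/video/movies
export USE_PATH_MAPPING=False
python3 -u subgen_nodocker.py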

Please add some parameters for standardizing/beautifying subtitle layout

Hey, I'm a Windows user, and I'm really grateful for Subgen, as it's the simplest way to get Whisper running with Bazarr on Windows without having to use Docker etc.

However, one thing I've noticed is that the subtitles aren't formatted the best, due to how faster-whisper operates. I've found that the standalone Faster-Whisper (https://github.com/Purfview/whisper-standalone-win) has a great optional argument called --standard, which does the following:

--standard: Quick hardcoded preset to split lines in standard way. 42 chars per 2 lines with max_comma_cent=70 and --sentence are activated automatically.

--sentence: Enables splitting lines into sentences for srt and vtt subs. Every sentence starts in a new segment. By default it is meant to output a whole sentence per line for better translations, but it is not limited to that; read about the '--max_...' parameters.

This gives the subtitles a much more standardized look, common across streaming services such as Netflix, the BBC, etc.

Is it possible to implement these into SubGen, please?

Setup Troubles on Windows 11 with Jellyfin

Description

Hey, I am new to this world of Docker and Python. I was trying to follow instructions on Windows 11 to set up Subgen.

Starting with Docker, the image worked fine and was waiting on a webhook. The IP of my container was 172.17.0.2, but unfortunately, I could not ping this IP.

I wanted to switch Docker to use the host network driver, but this feature isn't available on Windows. So, I switched to try and follow the standalone setup instructions for Subgen without Docker in the hopes that I can connect to the webhooks without any network routing. However, I ran into a ton of setup issues.

Console Log

Console Log on Pastebin

I used Conda 23.7.2 to create a new environment. Could this be a version issue with Python/Conda? Are there missing requirements? I don't know what to make of this log or where to begin.

Any help is appreciated. Thank you!

Request: Docker logging in stdout, add timestamp

Below are the last few lines of my docker logs -f subgen, but there is no way to tell when this error occurred. It might be nice to see a date + timestamp in front of each line?

INFO:     192.168.90.15:39570 - "POST /detect-language?encode=false HTTP/1.1" 500 Internal Server Error
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/uvicorn/protocols/http/h11_impl.py", line 412, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "/usr/local/lib/python3.10/dist-packages/uvicorn/middleware/proxy_headers.py", line 84, in __call__
    return await self.app(scope, receive, send)
  File "/usr/local/lib/python3.10/dist-packages/fastapi/applications.py", line 1054, in __call__
    await super().__call__(scope, receive, send)
  File "/usr/local/lib/python3.10/dist-packages/starlette/applications.py", line 123, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/errors.py", line 186, in __call__
    raise exc
  File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/errors.py", line 164, in __call__
    await self.app(scope, receive, _send)
  File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/exceptions.py", line 62, in __call__
    await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
  File "/usr/local/lib/python3.10/dist-packages/starlette/_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "/usr/local/lib/python3.10/dist-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 758, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 778, in app
    await route.handle(scope, receive, send)
  File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 299, in handle
    await self.app(scope, receive, send)
  File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 79, in app
    await wrap_app_handling_exceptions(app, request)(scope, receive, send)
  File "/usr/local/lib/python3.10/dist-packages/starlette/_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "/usr/local/lib/python3.10/dist-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 74, in app
    response = await func(request)
  File "/usr/local/lib/python3.10/dist-packages/fastapi/routing.py", line 278, in app
    raw_response = await run_endpoint_function(
  File "/usr/local/lib/python3.10/dist-packages/fastapi/routing.py", line 193, in run_endpoint_function
    return await run_in_threadpool(dependant.call, **values)
  File "/usr/local/lib/python3.10/dist-packages/starlette/concurrency.py", line 42, in run_in_threadpool
    return await anyio.to_thread.run_sync(func, *args)
  File "/usr/local/lib/python3.10/dist-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "/usr/local/lib/python3.10/dist-packages/anyio/_backends/_asyncio.py", line 2144, in run_sync_in_worker_thread
    return await future
  File "/usr/local/lib/python3.10/dist-packages/anyio/_backends/_asyncio.py", line 851, in run
    result = context.run(func, *args)
  File "/subgen/subgen.py", line 278, in detect_language
    return {"detected_language": get_lang_pair(whisper_languages, detected_lang_code), "language_code": detected_lang_code}
UnboundLocalError: local variable 'detected_lang_code' referenced before assignment
INFO:faster_whisper:Detected language 'en' with probability 0.98
Detected Language: english
Transcribe:   0%|          | 0/30.0 [00:00<?, ?sec/s]DEBUG:faster_whisper:Processing segment at 00:00.000
Transcribe:  38%|███▊      | 11.4/30.0 [00:14<00:24,  1.30s/sec]DEBUG:faster_whisper:Processing segment at 00:29.620
Transcribe: 100%|██████████| 30.0/30.0 [00:32<00:00,  1.07s/sec]
Adjustment: 100%|██████████| 24.8/24.8 [00:00<00:00, 5702.47sec/s]
DEBUG:root:Queue is empty, clearing/releasing VRAM
INFO:     192.168.90.15:55706 - "POST /detect-language?encode=false HTTP/1.1" 200 OK
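For reference, a sketch of the kind of change being requested, using the standard library's logging configuration (subgen already logs through Python's logging module, judging by the DEBUG:/INFO: prefixes above); uvicorn's own access log lines would need their log config adjusted separately:

import logging

# prefix every log record with a date + timestamp
logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)s %(levelname)s: %(message)s",
    datefmt="%Y-%m-%d %H:%M:%S",
)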

File doesn't have any Audio to transcribe

Hi,

I really appreciate this tool, but I (sometimes) have a weird issue.

Environment:

OS: Ubuntu 22.04 LTS
Runtime: Docker
Version: Latest (as of now)

Error:

Log Error: "Angry Birds (2016.mkv doesn't have any audio to transcribe!"
Log Error without Try-Catch in subgen.py: "An error occurred: [Errno 40] Too many levels of symbolic links: '/plexfiles/media/Movies/Automated/Sonarr/Angry Birds (2016)/Angry Birds (2016).mkv'"

Explanation:

This error happens when I recreate the container with "docker compose up --force-recreate". However, it can also happen that the script works normally for a few files and then shows the same error again, and I have to restart it a few times ("docker compose up --force-recreate") to fix it.

Have a Nice day.

Evaluate use of Whisper-Jax for more Speed

Hello,

I found your repo. Awesome idea; I was looking for something like this for some time.
This is a little suggestion: I came across the Whisper-Jax repo. The dev says it's fast, 70x faster than normal Whisper on a TPU. I know you normally have no TPU, but the benchmarks in the repo also show improvements on GPU. You can give it a shot on the Hugging Face space from the same dev.

Maybe you can look into it.
It would be awesome if you could make it even faster than it is now.
Thanks for your tool and the time you invest in it ;)

  • semigoro

Still trying to get it to work...

I decided to give this another try. Thanks for updating the Docker image, the setup instructions, and the setup procedure. But neither the new Docker option nor the new non-Docker option works for me.

  • for Docker (and I just re-installed the latest default image), I can install the image and I then also see it running, with the port properly mapped in Docker Desktop. But when I then try 'http://localhost:9000', I get '["You accessed this request incorrectly via a GET request. See https://github.com/McCloudS/subgen for proper configuration"]'
  • for non-docker, when I run 'python launcher.py -u -i -s', I get this in my Windows PowerShell terminal:
PS C:\Plex\subgen\subgen> python launcher.py -u -i -s
You will be prompted for several configuration values.
If you wish to use the default value for any of them, simply press Enter without typing anything.
The default values are shown in brackets [] next to the prompts.
Items can be the value of true, on, 1, y, yes, false, off, 0, n, no, or an appropriate text response.

Enter the Whisper model you want to run: tiny, tiny.en, base, base.en, small, small.en, medium, medium.en, large, distil-large-v2, distil-medium.en, distil-small.en [medium]: distil_large_v2
Default listening port for subgen.py [9000]:
Set as cpu or gpu [gpu]:
Enable debug logging [True]:
Attempt to clear VRAM when complete (Windows users may need to set this to False) [False]:
Append 'Transcribed by whisper' to generated subtitle [False]:
Environment variables have been saved to subgen.env
Environment variables have been loaded from subgen.env
File downloaded successfully to requirements.txt
Requirement already satisfied: numpy in c:\users\sdspi\miniconda3\lib\site-packages (from -r requirements.txt (line 1)) (1.26.4)
Requirement already satisfied: stable-ts in c:\users\sdspi\miniconda3\lib\site-packages (from -r requirements.txt (line 2)) (2.15.9)
Requirement already satisfied: fastapi in c:\users\sdspi\miniconda3\lib\site-packages (from -r requirements.txt (line 3)) (0.110.0)
Requirement already satisfied: requests in c:\users\sdspi\miniconda3\lib\site-packages (from -r requirements.txt (line 4)) (2.31.0)
Requirement already satisfied: faster-whisper in c:\users\sdspi\miniconda3\lib\site-packages (from -r requirements.txt (line 5)) (1.0.1)
Requirement already satisfied: uvicorn in c:\users\sdspi\miniconda3\lib\site-packages (from -r requirements.txt (line 6)) (0.29.0)
Requirement already satisfied: python-multipart in c:\users\sdspi\miniconda3\lib\site-packages (from -r requirements.txt (line 7)) (0.0.9)
Requirement already satisfied: python-ffmpeg in c:\users\sdspi\miniconda3\lib\site-packages (from -r requirements.txt (line 8)) (2.0.10)
Requirement already satisfied: whisper in c:\users\sdspi\miniconda3\lib\site-packages (from -r requirements.txt (line 9)) (1.1.10)
Requirement already satisfied: watchdog in c:\users\sdspi\miniconda3\lib\site-packages (from -r requirements.txt (line 10)) (4.0.0)
Requirement already satisfied: torch in c:\users\sdspi\miniconda3\lib\site-packages (from stable-ts->-r requirements.txt (line 2)) (2.2.1)
Requirement already satisfied: torchaudio in c:\users\sdspi\miniconda3\lib\site-packages (from stable-ts->-r requirements.txt (line 2)) (2.2.1)
Requirement already satisfied: tqdm in c:\users\sdspi\miniconda3\lib\site-packages (from stable-ts->-r requirements.txt (line 2)) (4.65.0)
Requirement already satisfied: openai-whisper==20231117 in c:\users\sdspi\miniconda3\lib\site-packages (from stable-ts->-r requirements.txt (line 2)) (20231117)
Requirement already satisfied: numba in c:\users\sdspi\miniconda3\lib\site-packages (from openai-whisper==20231117->stable-ts->-r requirements.txt (line 2)) (0.59.0)
Requirement already satisfied: more-itertools in c:\users\sdspi\miniconda3\lib\site-packages (from openai-whisper==20231117->stable-ts->-r requirements.txt (line 2)) (10.2.0)
Requirement already satisfied: tiktoken in c:\users\sdspi\miniconda3\lib\site-packages (from openai-whisper==20231117->stable-ts->-r requirements.txt (line 2)) (0.6.0)
Requirement already satisfied: pydantic!=1.8,!=1.8.1,!=2.0.0,!=2.0.1,!=2.1.0,<3.0.0,>=1.7.4 in c:\users\sdspi\miniconda3\lib\site-packages (from fastapi->-r requirements.txt (line 3)) (2.6.4)
Requirement already satisfied: starlette<0.37.0,>=0.36.3 in c:\users\sdspi\miniconda3\lib\site-packages (from fastapi->-r requirements.txt (line 3)) (0.36.3)
Requirement already satisfied: typing-extensions>=4.8.0 in c:\users\sdspi\miniconda3\lib\site-packages (from fastapi->-r requirements.txt (line 3)) (4.9.0)
Requirement already satisfied: charset-normalizer<4,>=2 in c:\users\sdspi\miniconda3\lib\site-packages (from requests->-r requirements.txt (line 4)) (2.0.4)
Requirement already satisfied: idna<4,>=2.5 in c:\users\sdspi\miniconda3\lib\site-packages (from requests->-r requirements.txt (line 4)) (3.4)
Requirement already satisfied: urllib3<3,>=1.21.1 in c:\users\sdspi\miniconda3\lib\site-packages (from requests->-r requirements.txt (line 4)) (1.26.16)
Requirement already satisfied: certifi>=2017.4.17 in c:\users\sdspi\miniconda3\lib\site-packages (from requests->-r requirements.txt (line 4)) (2024.2.2)
Requirement already satisfied: av==11.* in c:\users\sdspi\miniconda3\lib\site-packages (from faster-whisper->-r requirements.txt (line 5)) (11.0.0)
Requirement already satisfied: ctranslate2<5,>=4.0 in c:\users\sdspi\miniconda3\lib\site-packages (from faster-whisper->-r requirements.txt (line 5)) (4.1.0)
Requirement already satisfied: huggingface-hub>=0.13 in c:\users\sdspi\miniconda3\lib\site-packages (from faster-whisper->-r requirements.txt (line 5)) (0.21.4)
Requirement already satisfied: tokenizers<0.16,>=0.13 in c:\users\sdspi\miniconda3\lib\site-packages (from faster-whisper->-r requirements.txt (line 5)) (0.15.2)
Requirement already satisfied: onnxruntime<2,>=1.14 in c:\users\sdspi\miniconda3\lib\site-packages (from faster-whisper->-r requirements.txt (line 5)) (1.17.1)
Requirement already satisfied: click>=7.0 in c:\users\sdspi\miniconda3\lib\site-packages (from uvicorn->-r requirements.txt (line 6)) (8.1.7)
Requirement already satisfied: h11>=0.8 in c:\users\sdspi\miniconda3\lib\site-packages (from uvicorn->-r requirements.txt (line 6)) (0.14.0)
Requirement already satisfied: pyee in c:\users\sdspi\miniconda3\lib\site-packages (from python-ffmpeg->-r requirements.txt (line 8)) (11.1.0)
Requirement already satisfied: six in c:\users\sdspi\miniconda3\lib\site-packages (from whisper->-r requirements.txt (line 9)) (1.16.0)
Requirement already satisfied: colorama in c:\users\sdspi\miniconda3\lib\site-packages (from click>=7.0->uvicorn->-r requirements.txt (line 6)) (0.4.6)
Requirement already satisfied: setuptools in c:\users\sdspi\miniconda3\lib\site-packages (from ctranslate2<5,>=4.0->faster-whisper->-r requirements.txt (line 5)) (67.8.0)
Requirement already satisfied: pyyaml<7,>=5.3 in c:\users\sdspi\miniconda3\lib\site-packages (from ctranslate2<5,>=4.0->faster-whisper->-r requirements.txt (line 5)) (6.0.1)
Requirement already satisfied: filelock in c:\users\sdspi\miniconda3\lib\site-packages (from huggingface-hub>=0.13->faster-whisper->-r requirements.txt (line 5)) (3.13.1)
Requirement already satisfied: fsspec>=2023.5.0 in c:\users\sdspi\miniconda3\lib\site-packages (from huggingface-hub>=0.13->faster-whisper->-r requirements.txt (line 5)) (2024.2.0)
Requirement already satisfied: packaging>=20.9 in c:\users\sdspi\miniconda3\lib\site-packages (from huggingface-hub>=0.13->faster-whisper->-r requirements.txt (line 5)) (23.0)
Requirement already satisfied: coloredlogs in c:\users\sdspi\miniconda3\lib\site-packages (from onnxruntime<2,>=1.14->faster-whisper->-r requirements.txt (line 5)) (15.0.1)
Requirement already satisfied: flatbuffers in c:\users\sdspi\miniconda3\lib\site-packages (from onnxruntime<2,>=1.14->faster-whisper->-r requirements.txt (line 5)) (24.3.7)
Requirement already satisfied: protobuf in c:\users\sdspi\miniconda3\lib\site-packages (from onnxruntime<2,>=1.14->faster-whisper->-r requirements.txt (line 5)) (4.25.3)
Requirement already satisfied: sympy in c:\users\sdspi\miniconda3\lib\site-packages (from onnxruntime<2,>=1.14->faster-whisper->-r requirements.txt (line 5)) (1.12)
Requirement already satisfied: annotated-types>=0.4.0 in c:\users\sdspi\miniconda3\lib\site-packages (from pydantic!=1.8,!=1.8.1,!=2.0.0,!=2.0.1,!=2.1.0,<3.0.0,>=1.7.4->fastapi->-r requirements.txt (line 3)) (0.6.0)
Requirement already satisfied: pydantic-core==2.16.3 in c:\users\sdspi\miniconda3\lib\site-packages (from pydantic!=1.8,!=1.8.1,!=2.0.0,!=2.0.1,!=2.1.0,<3.0.0,>=1.7.4->fastapi->-r requirements.txt (line 3)) (2.16.3)
Requirement already satisfied: anyio<5,>=3.4.0 in c:\users\sdspi\miniconda3\lib\site-packages (from starlette<0.37.0,>=0.36.3->fastapi->-r requirements.txt (line 3)) (4.3.0)
Requirement already satisfied: networkx in c:\users\sdspi\miniconda3\lib\site-packages (from torch->stable-ts->-r requirements.txt (line 2)) (3.2.1)
Requirement already satisfied: jinja2 in c:\users\sdspi\miniconda3\lib\site-packages (from torch->stable-ts->-r requirements.txt (line 2)) (3.1.3)
Requirement already satisfied: sniffio>=1.1 in c:\users\sdspi\miniconda3\lib\site-packages (from anyio<5,>=3.4.0->starlette<0.37.0,>=0.36.3->fastapi->-r requirements.txt (line 3)) (1.3.1)
Requirement already satisfied: humanfriendly>=9.1 in c:\users\sdspi\miniconda3\lib\site-packages (from coloredlogs->onnxruntime<2,>=1.14->faster-whisper->-r requirements.txt (line 5)) (10.0)
Requirement already satisfied: MarkupSafe>=2.0 in c:\users\sdspi\miniconda3\lib\site-packages (from jinja2->torch->stable-ts->-r requirements.txt (line 2)) (2.1.5)
Requirement already satisfied: llvmlite<0.43,>=0.42.0dev0 in c:\users\sdspi\miniconda3\lib\site-packages (from numba->openai-whisper==20231117->stable-ts->-r requirements.txt (line 2)) (0.42.0)
Requirement already satisfied: mpmath>=0.19 in c:\users\sdspi\miniconda3\lib\site-packages (from sympy->onnxruntime<2,>=1.14->faster-whisper->-r requirements.txt (line 5)) (1.3.0)
Requirement already satisfied: regex>=2022.1.18 in c:\users\sdspi\miniconda3\lib\site-packages (from tiktoken->openai-whisper==20231117->stable-ts->-r requirements.txt (line 2)) (2023.12.25)
Requirement already satisfied: pyreadline3 in c:\users\sdspi\miniconda3\lib\site-packages (from humanfriendly>=9.1->coloredlogs->onnxruntime<2,>=1.14->faster-whisper->-r requirements.txt (line 5)) (3.4.1)
Packages installed successfully using pip3.
Downloading subgen.py from GitHub branch main...
File downloaded successfully to subgen.py
Launching subgen.py
2024-03-26 01:14:08,207 INFO: Subgen v2024.3.25.64
2024-03-26 01:14:08,207 INFO: Starting Subgen with listening webhooks!
2024-03-26 01:14:08,207 INFO: Transcriptions are limited to running 2 at a time
2024-03-26 01:14:08,207 INFO: Running 4 threads per transcription
2024-03-26 01:14:08,208 INFO: Using cuda to encode
2024-03-26 01:14:08,208 INFO: Using faster-whisper
INFO:     Started server process [52096]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:9000 (Press CTRL+C to quit)

And when I then do localhost:9000 (or 0.0.0.0:9000, or 127.0.0.1:9000), I only get errors
'This page isn’t working
localhost sent an invalid response.
ERR_INVALID_HTTP_RESPONSE'

Thoughts? Suggestions?

typo in docker-compose.yml for jellyfin

Hi

I'm trying to make it work with jellyfin but to no avail until now

I noticed a typo in the docker-compose.yml as one of the parameters is spelled JELYFINTOKEN instead of JELLYFINTOKEN (missing L)

This makes me think no one has actually tried the Jellyfin implementation yet. Has anyone tested that it works?

I've corrected the typo locally. I have a working Jellyfin server in one container and a working subgen container built with the corrected yml; the logs show that the uvicorn server launches and waits for the webhook, but then nothing happens. Both containers can be reached at their LAN addresses.

Thanks!

Jellyfin API Call is Implemented Poorly

Open to feedback, but after beating my head against the wall, the only way I could figure out how to get the filepath from a Jellyfin webhook was:

subgen/subgen/subgen.py

Lines 248 to 249 in b9f6903

userid = json.loads(requests.get(f"{jellyfin_url}/Users", headers=headers).content)[0]['Id']
response = requests.get(f"{jellyfin_url}/Users/{userid}/Items/{item_id}", headers=headers)

For some reason, their API doesn't allow calling /Items/{item_id} directly and still requires a userid even though an API key has been provided. The first call finds the Id of the first user on the server and uses that for subsequent Items calls to get the filepath.

HTTP 500. Exception on /webhook [POST]

When playing a media file I get an HTTP 500 Error in Subgen Logs.

Here is the log:

10.0.0.2 - - [04/Jul/2023 17:32:27] "POST /webhook HTTP/1.1" 500 -
Tautulli webhook received!
fullpath: /data/tv/Accused (US)/Season 1/Accused (US) - S01E01 - Scott's Story.mkv
filepath: /data/tv/Accused (US)/Season 1
file name with no extension: Accused (US) - S01E01 - Scott's Story
event: played
/data/tv/Accused (US)/Season 1/Accused (US) - S01E01 - Scott's Story.mkv: No such file or directory
[2023-07-04 17:32:33,942] ERROR in app: Exception on /webhook [POST]
Traceback (most recent call last):
  File "/usr/local/lib/python3.11/site-packages/flask/app.py", line 2525, in wsgi_app
    response = self.full_dispatch_request()
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/flask/app.py", line 1822, in full_dispatch_request
    rv = self.handle_user_exception(e)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/flask/app.py", line 1820, in full_dispatch_request
    rv = self.dispatch_request()
         ^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/flask/app.py", line 1796, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/subgen.py", line 65, in receive_webhook
    if skipifinternalsublang in str(subprocess.check_output("ffprobe -loglevel error -select_streams s -show_entries stream=index:stream_tags=language -of csv=p=0 \"{}\"".format(fullpath), shell=True)):
                                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/subprocess.py", line 466, in check_output
    return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/subprocess.py", line 571, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command 'ffprobe -loglevel error -select_streams s -show_entries stream=index:stream_tags=language -of csv=p=0 "/data/tv/Accused (US)/Season 1/Accused (US) - S01E01 - Scott's Story.mkv"' returned non-zero exit status 1.
10.0.0.2 - - [04/Jul/2023 17:32:33] "POST /webhook HTTP/1.1" 500 -

Tautulli Webhook:

JSON HEADERS
{ "source":"Tautulli" }

DATA:
{
"event": "played",
"file": "{file}",
"filename": "{filename}",
"mediatype": "{media_type}"
}
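A side note on the traceback: the ffprobe call is built as a single shell string, which makes quoting fragile; a list-based sketch avoids that (though the "No such file or directory" line suggests the path simply wasn't visible inside the container, e.g. a path-mapping problem):

import subprocess

def has_internal_sub_lang(fullpath, lang):
    # pass arguments as a list so apostrophes in filenames need no shell quoting
    output = subprocess.check_output([
        "ffprobe", "-loglevel", "error",
        "-select_streams", "s",
        "-show_entries", "stream=index:stream_tags=language",
        "-of", "csv=p=0",
        fullpath,
    ])
    return lang in output.decode(errors="ignore")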

alternative lenient bool converter

A drop-in replacement for convert_to_bool(input):

def force_bool_optimistic(in_bool):
    """Force a string value into a Python boolean.
    Everything is True with the exception of: false, off, and 0."""
    value = str(in_bool).lower()
    if value in ('false', 'off', '0'):
        return False
    else:
        return True

Fix threading/queuing

I have a suspicion that I'm not queuing/threading correctly and am dependent on some of whisper's settings to only run X tasks at a time. This could (maybe?) cause webhooks to be missed/dropped when several arrive quickly. I probably need to look at using threading or concurrent.futures with async webhooks.
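A sketch of the concurrent.futures route mentioned above; max_workers plays the role of CONCURRENT_TRANSCRIPTIONS, and gen_subtitles stands in for the existing worker function:

from concurrent.futures import ThreadPoolExecutor

# a bounded pool: at most two transcriptions run at once, later submissions queue up
executor = ThreadPoolExecutor(max_workers=2)

def enqueue_transcription(path, transcribe_or_translate, force):
    executor.submit(gen_subtitles, path, transcribe_or_translate, force)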

Is there a way to monitor progress

I'm running the Docker image, and it does seem to work: when I start playing something on my Plex, I see messages appear in my Docker log, e.g.

2024-03-11 18:57:33 INFO:     172.21.0.1:52488 - "POST /plex HTTP/1.1" 200 OK
2024-03-11 19:20:15 2024-03-11 18:20:15,616 DEBUG: Raw response: {"event":"media.play","user":true,"owner":true,"Account":{"id":12802,"thumb":"https://plex.tv/users/0a5e655b223f4ef9/avatar?c=1710124402","title":"***************"},"Server":{"title":"***************************-Office","uuid":"******************"},"Player":{"local":true,"publicAddress":"*********","title":"Chrome","uuid":"******************************},"Metadata":{"librarySectionType":"show","ratingKey":"6861","key":"/library/metadata/6861","parentRatingKey":"6807","grandparentRatingKey":"5381","guid":"local://6861","parentGuid":"plex://season/65df7ca15cef0dbfa3d335ba","grandparentGuid":"plex://show/5d9c0852705e7a001e6d8f1e","grandparentSlug":"seaside-hotel-2013","type":"episode","title":"Episode 2","grandparentKey":"/library/metadata/5381","parentKey":"/library/metadata/6807","librarySectionTitle":"TV Shows","librarySectionID":6,"librarySectionKey":"/library/sections/6","grandparentTitle":"Seaside Hotel","parentTitle":"Season 10","contentRating":"TV-14","summary":"","index":2,"parentIndex":10,"viewOffset":2003000,"lastViewedAt":1710181215,"year":2013,"thumb":"/library/metadata/6861/thumb/1710171669","art":"/library/metadata/5381/art/1709692391","grandparentThumb":"/library/metadata/5381/thumb/1709692391","grandparentArt":"/library/metadata/5381/art/1709692391","addedAt":1710171668,"updatedAt":1710171669}}

You mentioned in an issue that, when run on a CPU (and I haven't managed to get my Docker container to play nice with my GPU yet), it takes about as long to transcribe a file as it does to play it. But is there a way to see what subgen is actually doing at any given time? And am I to assume that the only way to find out whether a video file is fully transcribed is the appearance of an *.srt file in the media folder where that file is located? And that it then has to be manually selected in Plex for the subtitles to appear?

Thanks!

[request] Smaller image

It would be sweet to have the option / tag / branch for a smaller image again.

People use docker-compose to pull in their stack. Each time I do, and subgen has a new Docker image, I have to pull 6 GB. This makes me want to update my containers less often. It's not how Docker is supposed to work; Docker is meant for the smallest possible images. So it would be nice to have something without the CUDA images and associated packages for those running CPU only.

If you need any help with this, I've got some experience as well.

Add queue lock to force running only one job at a time

import threading
import time
from queue import Queue

# start queue
task_queue = Queue()

# start queue lock
task_queue_lock = threading.Lock()

def transcription_worker():
    while True:
        task = task_queue.get()

        gen_subtitles(task['path'], task['transcribe_or_translate'], task['force'])

        task_queue.task_done()
        time.sleep(5 * 60)
        # show queue
        with task_queue_lock:
            print(f"There are {task_queue.qsize()} tasks left in the queue.")

# activate thread
threading.Thread(target=transcription_worker, daemon=True).start()

Can't find media folder not named "movies"

My Plex instance runs using one single folder mounted to the container, with sub-directories for its "tv" and "movies" folders.

I mounted the subgen volume bind pointing at the same root folder, but it cannot see any of the files inside. Does it need to point at two different folders, named "tv" and "movies"? And does Plex also need to be looking for those two folders?

Thanks for watching! OpenAI issue

At the beginning of some media, it seems to be spamming "Thanks for watching!" over and over. Apparently this is a common issue (the hallucination problem), and I'm wondering how it could be remedied.

More information here: openai/whisper#679

Example of what I'm seeing:

0
00:00:00,000 --> 00:01:13,380
Thanks for watching! Thanks for watching! Thanks for watching! Thanks for watching!

1
00:01:14,820 --> 00:01:15,780
Thanks for watching!

2
00:01:16,220 --> 00:01:23,820
Thanks for watching! Thanks for watching! Thanks for watching! Thanks for watching!

3
00:01:25,840 --> 00:01:28,120
Thanks for watching!

4
00:01:28,560 --> 00:01:29,560
Thanks for watching!

5
00:01:41,260 --> 00:01:44,260
Thanks for watching!

6
00:01:57,100 --> 00:02:04,220
Thanks for watching! Thanks for watching! Thanks for watching! Thanks for watching!

7
00:02:05,060 --> 00:02:06,560
Thanks for watching!

8
00:02:28,740 --> 00:02:29,420
Thanks for watching! Thanks for watching! Thanks for watching!

9
00:02:35,960 --> 00:02:50,260
Thanks for watching! Thanks for watching!

10
00:02:54,500 --> 00:02:59,400
Thanks for watching!

11
00:03:04,220 --> 00:03:12,780
Thanks for watching! Thanks for watching!

12
00:03:25,980 --> 00:03:29,380
Thanks for watching!

13
00:03:38,600 --> 00:03:39,900
Thanks for watching!

14
00:03:40,340 --> 00:03:40,620
Thanks for watching!

15
00:03:42,180 --> 00:03:43,080
Thanks for watching!

16
00:03:46,180 --> 00:03:46,960
Thanks for watching!

17
00:03:48,280 --> 00:03:53,220
Thanks for watching! Thanks for watching! Thanks for watching! Thanks for watching! Thanks for watching! Thanks for watching! Thanks for watching!

18
00:03:53,820 --> 00:03:54,440
Thanks for watching!

19
00:03:57,420 --> 00:03:59,240
Thanks for watching!
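Two knobs that are commonly suggested for this hallucination, sketched against the transcribe call shown elsewhere in this tracker; whether both are accepted depends on the stable-ts/faster-whisper versions in use:

# skip non-speech audio (intro music is a classic trigger) and keep earlier
# output from feeding back into the decoder
result = model.transcribe_stable(
    file_path,
    vad=True,
    condition_on_previous_text=False,
)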

Allow adding a gap at the end of each sentence

Generated subtitles disappear incredibly quickly compared to the normal subtitles you would see with media.

Once the person is done speaking, the subtitle immediately disappears and it's a little jarring.

Could an option to add x amount of time at the end of each line be added, so they don't immediately disappear?

Thank you!
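A sketch of the requested padding as a post-processing step, assuming a list of segments with mutable start/end attributes like those stable-ts produces (function and parameter names are illustrative):

def pad_subtitle_ends(segments, pad_seconds=0.8):
    # extend each subtitle's end time, but never into the next subtitle
    for current, following in zip(segments, segments[1:]):
        current.end = min(current.end + pad_seconds, following.start)
    if segments:
        segments[-1].end += pad_seconds
    return segments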

[FEATURE] GPU Support?

Any chance there will be a mode that enables utilizing the CUDA cores of an NVIDIA GPU?
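For what it's worth, compose files quoted in other issues here show a TRANSCRIBE_DEVICE variable accepting cpu or gpu, and a cuda image tag appears in the first issue above; a compose-style sketch combining the two:

services:
  subgen:
    image: mccloud/subgen:cuda
    environment:
      - "TRANSCRIBE_DEVICE=gpu"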

How to get it working on Windows 11

I want to get this working with Jellyfin on Windows 11, with Dutch subtitles. These are my settings in subgen.py:
plextoken = os.getenv('PLEXTOKEN', 'token here')
plexserver = os.getenv('PLEXSERVER', 'http://192.168.1.111:32400')
jellyfintoken = os.getenv('JELLYFINTOKEN', 'apifromtheapiJellyfin i have added')
jellyfinserver = os.getenv('JELLYFINSERVER', 'http://192.168.1.241:8096')
whisper_model = os.getenv('WHISPER_MODEL', 'medium')
whisper_threads = int(os.getenv('WHISPER_THREADS', 4))
concurrent_transcriptions = int(os.getenv('CONCURRENT_TRANSCRIPTIONS', 2))
transcribe_device = os.getenv('TRANSCRIBE_DEVICE', 'cpu')
procaddedmedia = convert_to_bool(os.getenv('PROCADDEDMEDIA', True))
procmediaonplay = convert_to_bool(os.getenv('PROCMEDIAONPLAY', True))
namesublang = os.getenv('NAMESUBLANG', 'aa')
skipifinternalsublang = os.getenv('SKIPIFINTERNALSUBLANG', 'eng')
webhookport = int(os.getenv('WEBHOOKPORT', 9000))
word_level_highlight = convert_to_bool(os.getenv('WORD_LEVEL_HIGHLIGHT', False))
debug = convert_to_bool(os.getenv('DEBUG', True))
use_path_mapping = convert_to_bool(os.getenv('USE_PATH_MAPPING', False))
path_mapping_from = os.getenv('PATH_MAPPING_FROM', '/tv')
path_mapping_to = os.getenv('PATH_MAPPING_TO', 'C:/TV Shows')
model_location = os.getenv('MODEL_PATH', './models')
transcribe_folders = os.getenv('TRANSCRIBE_FOLDERS', '')
transcribe_or_translate = os.getenv('TRANSCRIBE_OR_TRANSLATE', 'transcribe')
force_detected_language_to = os.getenv('FORCE_DETECTED_LANGUAGE_TO', 'en')
hf_transformers = convert_to_bool(os.getenv('HF_TRANSFORMERS', False))
hf_batch_size = int(os.getenv('HF_BATCH_SIZE', 24))
clear_vram_on_complete = convert_to_bool(os.getenv('CLEAR_VRAM_ON_COMPLETE', True))
compute_type = os.getenv('COMPUTE_TYPE', 'auto')
append = convert_to_bool(os.getenv('APPEND', False))

I went to the subgen-main folder and started C:\subgen-main\subgen-main\subgen.py; when I start it I am getting:
INFO: Will watch for changes in these directories: ['C:\Users\hello\Desktop\subgen-main\subgen-main\subgen']
INFO: Uvicorn running on http://0.0.0.0:9000 (Press CTRL+C to quit)
INFO: Started reloader process [10856] using StatReload
INFO: Started server process [3244]
INFO: Waiting for application startup.
INFO: Application startup complete.

I installed it with Standalone/Without Docker: install python3 and ffmpeg, and run pip3 install numpy stable-ts fastapi requests faster-whisper uvicorn python-multipart python-ffmpeg whisper transformers optimum accelerate. Then run it: python3 subgen.py. You need to have matching paths relative to your Plex server/folders, or use USE_PATH_MAPPING.

First, you need to install the Jellyfin webhooks plugin. Then you need to click "Add Generic Destination", name it anything you want, webhook url is your subgen info (IE http://192.168.1.154:8090/jellyfin). Next, check Item Added, Playback Start, and Send All Properties. Last, "Add Request Header" and add the Key: Content-Type Value: application/json

I have added http://192.168.1.241:8096 as a webhook in Jellyfin. I ticked Playback Start, Key: Content-Type, and Value: application/json.

http://192.168.1.241:8096 is my jellyfin server ip.
Path mapping I don't know; I tried the folder of my TV shows, C:\TV Shows, but when I add that it gives an error.
I have the Jellyfin server correct and the Jellyfin token; I created a new API key and added that.
I am still lost as to what I need to do to get it working.
Hope someone can help me with this because I am totally new with this stuff.
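On the path-mapping part: a sketch based on the settings block above, assuming Jellyfin reports library paths beginning with /tv while the files actually live under C:\TV Shows; if Jellyfin already reports Windows paths, USE_PATH_MAPPING can stay False:

use_path_mapping = convert_to_bool(os.getenv('USE_PATH_MAPPING', True))
path_mapping_from = os.getenv('PATH_MAPPING_FROM', '/tv')
path_mapping_to = os.getenv('PATH_MAPPING_TO', 'C:/TV Shows')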
