
htk8s's Introduction


HTPC powered by k3s

htk8s diagrams

This is my current HTPC setup. It runs on k3s, a lightweight and easy-to-install Kubernetes distribution. It includes the following applications: Sonarr, Radarr, Bazarr, Jackett, Transmission and Emby.

Application state (settings / db) and media files are stored in a shared volume of type hostPath. It does not use PVCs and currently only works if the whole htpc namespace is scheduled on a single node.
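
For reference, a minimal sketch of how such a hostPath volume is typically mounted into one of the Deployments (the container name, sub-paths and image below are illustrative assumptions, not the repo's exact manifest):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sonarr
  namespace: htpc
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sonarr
  template:
    metadata:
      labels:
        app: sonarr
    spec:
      containers:
        - name: sonarr
          image: linuxserver/sonarr          # assumption: LinuxServer.io image, as noted below
          volumeMounts:
            - name: htpc
              subPath: sonarr                # app settings / db
              mountPath: /config
            - name: htpc
              subPath: media/tv              # media library
              mountPath: /tv
      volumes:
        - name: htpc
          hostPath:
            path: /opt/htpc                  # ties every pod to the node that has this directory
            type: DirectoryOrCreate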

Getting Started

Quickstart

# for x86_64
kubectl apply -f https://raw.githubusercontent.com/fabito/htk8s/v0.1/install_x86_64.yaml

# for raspberry pi (ARM)
kubectl apply -f https://raw.githubusercontent.com/fabito/htk8s/v0.1/install_armhf.yaml

The GitOps way

argocd htpc application

# x86_64 only
kubectl apply -f https://raw.githubusercontent.com/fabito/htk8s/v0.1/install_argocd.yaml

This alternative manifest installs Argo CD along with the htpc application. Argo CD then monitors this repo for changes and applies them to the cluster accordingly (more specifically, the overlays/x86 overlay).
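
Conceptually, the Application resource that drives this looks roughly as follows (repo path, revision and sync policy are assumptions based on the description above, not copied from the manifest):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: htpc
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/fabito/htk8s
    targetRevision: HEAD               # assumption: track the default branch
    path: overlays/x86                 # assumption: the overlay mentioned above
  destination:
    server: https://kubernetes.default.svc
    namespace: htpc
  syncPolicy:
    automated: {}                      # apply changes from the repo automatically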

You can access the Argo CD UI at https://localhost/argocd.

Verifying the installation

All resources are created in the htpc namespace. So if you run:

k3s kubectl get all -n htpc

You should get something similar to:

NAME                                READY   STATUS    RESTARTS   AGE
pod/bazarr-795f88c5c9-w75l7         1/1     Running   0          24h
pod/emby-6f457df664-fqbmc           1/1     Running   0          24h
pod/jackett-6bcf6cd8d6-lrh6j        1/1     Running   0          24h
pod/radarr-5c965c7678-zt8sq         1/1     Running   0          24h
pod/sonarr-b65c8956-mxng4           1/1     Running   0          24h
pod/transmission-5f7fdc6cb5-nrtbb   1/1     Running   0          24h

NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/bazarr         ClusterIP   10.43.43.224    <none>        6767/TCP   24h
service/emby           ClusterIP   10.43.212.198   <none>        8096/TCP   24h
service/jackett        ClusterIP   10.43.104.233   <none>        9117/TCP   24h
service/radarr         ClusterIP   10.43.141.101   <none>        7878/TCP   24h
service/sonarr         ClusterIP   10.43.35.98     <none>        8989/TCP   24h
service/transmission   ClusterIP   10.43.184.198   <none>        9091/TCP   24h

NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/bazarr         1/1     1            1           24h
deployment.apps/emby           1/1     1            1           24h
deployment.apps/jackett        1/1     1            1           24h
deployment.apps/radarr         1/1     1            1           24h
deployment.apps/sonarr         1/1     1            1           24h
deployment.apps/transmission   1/1     1            1           24h

NAME                                      DESIRED   CURRENT   READY   AGE
replicaset.apps/bazarr-795f88c5c9         1         1         1       24h
replicaset.apps/emby-6f457df664           1         1         1       24h
replicaset.apps/jackett-6bcf6cd8d6        1         1         1       24h
replicaset.apps/radarr-5c965c7678         1         1         1       24h
replicaset.apps/sonarr-b65c8956           1         1         1       24h
replicaset.apps/transmission-5f7fdc6cb5   1         1         1       24h

You should also be able to reach each component's UI using the links below. Don't forget to replace localhost with the IP address or hostname of the server running k3s.

App           URI
radarr        http://localhost/radarr
sonarr        http://localhost/sonarr
bazarr        http://localhost/bazarr
jackett       http://localhost/jackett
transmission  http://localhost/transmission
emby          http://localhost/

Check the ingress-route.yaml for more details.

Each module except for Emby is configured to respond on a custom base path (check the init containers' logic for more details).
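
As an illustration only (not the repo's actual init container logic), such an init container typically patches the app's config with the base path before the main container starts; the image, file and sed expression here are assumptions:

      initContainers:
        - name: set-base-url
          image: busybox
          command:
            - sh
            - -c
            - |
              # write the base path into the app's config if the file already exists
              if [ -f /config/config.xml ]; then
                sed -i 's|<UrlBase></UrlBase>|<UrlBase>/sonarr</UrlBase>|' /config/config.xml
              fi
          volumeMounts:
            - name: htpc
              subPath: sonarr
              mountPath: /config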

How it works (WIP)

It uses LinuxServer.io images.

It uses a hostPath volume to store configuration and media files, defaulting to the /opt/htpc directory:

/opt/htpc
├── bazarr
├── downloads
├── emby
├── jackett
├── media
│   ├── movies
│   └── tv
├── radarr
├── sonarr
└── transmission


htk8s's Issues

Pre-configure bazarr

Add init container logic to pre-configure bazarr (a sketch of one possible approach follows the sample config below):

  • add bazarr to ingress
  • create /config/config/config.ini
  • configure sonarr
  • configure radarr
  • configure base_url
[assrt]
token = 

[opensubtitles]
username = 
use_tag_search = False
skip_wrong_fps = False
ssl = False
vip = False
timeout = 15
password = 

[sonarr]
apikey = abc
full_update = Daily
ip = sonarr
only_monitored = False
base_url = /sonarr
ssl = False
port = 8989

[proxy]
username = 
url = 
exclude = localhost,127.0.0.1
password = 
type = None
port = 

[radarr]
apikey = abc
full_update = Daily
ip = radarr
only_monitored = False
base_url = /radarr
ssl = False
port = 7878

[addic7ed]
username = 
password = 
random_agents = True

[legendastv]
username = 
password = 

[auth]
username = 
password = 
type = None

[general]
movie_default_hi = False
movie_default_language = []
ip = 0.0.0.0
use_scenename = True
use_postprocessing = False
enabled_providers = subscene,tvsubtitles
auto_update = True
port = 6767
use_radarr = True
base_url = /bazarr/
page_size = 25
minimum_score_movie = 70
branch = master
single_language = False
use_sonarr = True
serie_default_hi = False
path_mappings_movie = [['/movies', '/movies'], ['', ''], ['', ''], ['', ''], ['', '']]
serie_default_enabled = False
movie_default_enabled = False
serie_default_language = []
path_mappings = [['/tv', '/tv'], ['', ''], ['', ''], ['', ''], ['', '']]
postprocessing_cmd = 
minimum_score = 90
debug = False
use_embedded_subs = True
adaptive_searching = False
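
One possible approach, sketched under the assumption that a config.ini like the above is shipped in a ConfigMap (resource names and paths are illustrative, not an agreed design):

# ConfigMap holding the seed config.ini (content as above)
apiVersion: v1
kind: ConfigMap
metadata:
  name: bazarr-config
  namespace: htpc
data:
  config.ini: |
    # ... the config.ini content listed above ...

# In the bazarr Deployment's pod spec:
initContainers:
  - name: init-config
    image: busybox
    command:
      - sh
      - -c
      - |
        mkdir -p /config/config
        # seed the file only if missing, so later user changes survive restarts
        if [ ! -f /config/config/config.ini ]; then
          cp /seed/config.ini /config/config/config.ini
        fi
    volumeMounts:
      - name: htpc
        subPath: bazarr
        mountPath: /config
      - name: bazarr-seed
        mountPath: /seed
volumes:
  - name: bazarr-seed
    configMap:
      name: bazarr-config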

bazarr cannot be accessed from the web

log from bazarr pod:
PermissionError: [Errno 13] Permission denied: '/config/bazarr.restart'

Maybe you need to add the chown part to the init container?
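
A hedged sketch of what that could look like (UID/GID 1000 is an assumption; the LinuxServer.io images pick the runtime user via PUID/PGID):

initContainers:
  - name: fix-permissions
    image: busybox
    command: ["sh", "-c", "chown -R 1000:1000 /config"]   # assumed PUID:PGID
    volumeMounts:
      - name: htpc
        subPath: bazarr
        mountPath: /config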

Volumes are not configured correctly for multi-node installation

The volumes are mounted per machine instead of using a PersistentVolumeClaim (PVC).
The issue is that when more than one node is running, radarr could, for example, run on machine 1 while the download is done on machine 2, meaning that radarr on machine 1 cannot copy the download from machine 2.

I think a starting size of 20-40 GB is good enough.
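
For illustration, a claim along these lines could replace the hostPath volume (storage class and access mode are assumptions; sharing across nodes needs an RWX-capable backend such as NFS):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: htpc
  namespace: htpc
spec:
  accessModes:
    - ReadWriteMany                # assumption: pods on different nodes must share the volume
  storageClassName: nfs-client     # assumption: any storage class that supports ReadWriteMany
  resources:
    requests:
      storage: 40Gi                # the suggested starting size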

Pending deprecation for Ingress extensions/v1beta1

Getting a deprecation warning when using a contemporary Kubernetes release. It's just a matter of reformatting the spec for the Ingress resources. The extensions/v1beta1 API was deprecated in 1.14 and will be removed in 1.22+.

I'd be happy to submit a patch for it if you want; it's a boring manual change. I'm fixing it in my fork, but that has a bunch of other implementation-specific changes.

If interested, how would you like this fix delivered? Separate fork or just a PR? Or do you not care (which is totally fair) :D
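
For reference, the reformatted spec looks roughly like this under networking.k8s.io/v1 (paths and service names here are illustrative, not copied from the repo's manifests):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: htpc
  namespace: htpc
spec:
  rules:
    - http:
        paths:
          - path: /sonarr
            pathType: Prefix
            backend:
              service:
                name: sonarr
                port:
                  number: 8989
          - path: /radarr
            pathType: Prefix
            backend:
              service:
                name: radarr
                port:
                  number: 7878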

Default transmission configuration is damaged

Was checking this project out for the first time. Forgive me, I'm a k8s/k3s layperson.

Ran the x86 install and was unable to access Transmission; the ingress returned 502 Bad Gateway. The logs indicated that settings.json was malformed. Inspecting the JSON shows:

"rpc-host-whitelist-enabled": false,

Obviously, something has been interpreted weird. So I scaled the replicaset to 0, nuked the file, and scaled back up:

kubectl scale --replicas=0 replicaset/transmission-5bf9655f77
rm /opt/htpc/transmission/settings.json
kubectl scale --replicas=1 replicaset/transmission-5bf9655f77

That restored the application default, which has the tragic side-effect of turning whitelisting back on. Back to the drawing board. Let's try a new deployment:

kubectl scale --replicas=0 replicaset/transmission-5bf9655f77
rm /opt/htpc/transmission/settings.json
kubectl rollout restart deployment/transmission

Isolated the issue to the deployment args by manually replacing the JSON being echo'd into the config file:

      - args:                                                                                                                                                                                          
        - echo starting; echo deadbeef > /config/settings.json; echo done;   

Success. The config file literally just says "deadbeef." So there seems to be something wrong with the way the JSON is being spit out into the config file as a deployment argument.

Running on a bare-metal Ubuntu 20.04 server, if that matters.
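
One hedged way around this (an illustrative sketch, not the project's actual fix): ship settings.json through a ConfigMap instead of echoing JSON in the container args, which sidesteps the shell-quoting problem entirely.

apiVersion: v1
kind: ConfigMap
metadata:
  name: transmission-settings
  namespace: htpc
data:
  settings.json: |
    {
      "rpc-host-whitelist-enabled": false
    }

# In the transmission Deployment's pod spec, copy the seed file into place:
initContainers:
  - name: init-settings
    image: busybox
    command:
      - sh
      - -c
      - |
        # only seed on first start so transmission can keep rewriting its own settings
        if [ ! -f /config/settings.json ]; then
          cp /seed/settings.json /config/settings.json
        fi
    volumeMounts:
      - name: settings-seed
        mountPath: /seed
      - name: htpc
        subPath: transmission
        mountPath: /config
volumes:
  - name: settings-seed
    configMap:
      name: transmission-settings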

VPN support

Hi,

This setup looks great. I am thinking of trying it out shortly once I get a new Pi 4. Is there any support for running a VPN inside the cluster?
