
fhem-docker's People

Contributors

cabal2k, cheanrod, cmpprk, cooltuxnet, dependabot[bot], donjonsn, git-developer, haraldr42, heinz-otto, joschamiddendorf, jpawlowski, renovate[bot], sidey79, stormmurdoc

fhem-docker's Issues

Timeout handling for /bin/bash /health-check.sh is missing

Describe the bug
I just noticed a problem: the Perl fhem process is at 100% CPU usage, and the health-check script, which is called from somewhere, seems to have no timeout/abort handling, so its processes pile up:

root      866907       0  0 09:59 ?        00:00:00 /bin/sh -c /health-check.sh
root      866912  866907  0 09:59 ?        00:00:00 /bin/bash /health-check.sh
root      866916  866912  0 09:59 ?        00:00:00 /bin/bash /health-check.sh
root      867567       0  0 09:59 ?        00:00:00 /bin/sh -c /health-check.sh
root      867572  867567  0 09:59 ?        00:00:00 /bin/bash /health-check.sh
root      867576  867572  0 09:59 ?        00:00:00 /bin/bash /health-check.sh
root      868227       0  0 10:00 ?        00:00:00 /bin/sh -c /health-check.sh
root      868234  868227  0 10:00 ?        00:00:00 /bin/bash /health-check.sh
root      868238  868234  0 10:00 ?        00:00:00 /bin/bash /health-check.sh
root      868889       0  0 10:00 ?        00:00:00 /bin/sh -c /health-check.sh
root      868899  868889  0 10:00 ?        00:00:00 /bin/bash /health-check.sh
root      868903  868899  0 10:00 ?        00:00:00 /bin/bash /health-check.sh
root      869554       0  0 10:01 ?        00:00:00 /bin/sh -c /health-check.sh
root      869559  869554  0 10:01 ?        00:00:00 /bin/bash /health-check.sh
root      869563  869559  0 10:01 ?        00:00:00 /bin/bash /health-check.sh
root      870214       0  0 10:01 ?        00:00:00 /bin/sh -c /health-check.sh
root      870219  870214  0 10:01 ?        00:00:00 /bin/bash /health-check.sh
root      870223  870219  0 10:01 ?        00:00:00 /bin/bash /health-check.sh
root      870874       0  0 10:02 ?        00:00:00 /bin/sh -c /health-check.sh
root      870880  870874  0 10:02 ?        00:00:00 /bin/bash /health-check.sh
root      870884  870880  0 10:02 ?        00:00:00 /bin/bash /health-check.sh
root      871535       0  0 10:02 ?        00:00:00 /bin/sh -c /health-check.sh
root      871552  871535  0 10:02 ?        00:00:00 /bin/bash /health-check.sh
root      871556  871552  0 10:02 ?        00:00:00 /bin/bash /health-check.sh
root      872197       0  0 10:03 ?        00:00:00 /bin/sh -c /health-check.sh
root      872213  872197  0 10:03 ?        00:00:00 /bin/bash /health-check.sh
root      872217  872213  0 10:03 ?        00:00:00 /bin/bash /health-check.sh
root      872861       0  0 10:03 ?        00:00:00 /bin/sh -c /health-check.sh
root      872874  872861  0 10:03 ?        00:00:00 /bin/bash /health-check.sh
root      872878  872874  0 10:03 ?        00:00:00 /bin/bash /health-check.sh
root      873529       0  0 10:04 ?        00:00:00 /bin/sh -c /health-check.sh
root      873535  873529  0 10:04 ?        00:00:00 /bin/bash /health-check.sh
root      873539  873535  0 10:04 ?        00:00:00 /bin/bash /health-check.sh
root      874190       0  0 10:04 ?        00:00:00 /bin/sh -c /health-check.sh
root      874195  874190  0 10:04 ?        00:00:00 /bin/bash /health-check.sh
root      874199  874195  0 10:04 ?        00:00:00 /bin/bash /health-check.sh
root      874859       0  0 10:05 ?        00:00:00 /bin/sh -c /health-check.sh
root      874865  874859  0 10:05 ?        00:00:00 /bin/bash /health-check.sh
root      874869  874865  0 10:05 ?        00:00:00 /bin/bash /health-check.sh
root      875523       0  0 10:05 ?        00:00:00 /bin/sh -c /health-check.sh
root      875528  875523  0 10:05 ?        00:00:00 /bin/bash /health-check.sh
root      875532  875528  0 10:05 ?        00:00:00 /bin/bash /health-check.sh
root      876183       0  0 10:06 ?        00:00:00 /bin/sh -c /health-check.sh
root      876192  876183  0 10:06 ?        00:00:00 /bin/bash /health-check.sh
root      876196  876192  0 10:06 ?        00:00:00 /bin/bash /health-check.sh
root      876851       0  0 10:06 ?        00:00:00 /bin/sh -c /health-check.sh
root      876858  876851  0 10:06 ?        00:00:00 /bin/bash /health-check.sh
root      876862  876858  0 10:06 ?        00:00:00 /bin/bash /health-check.sh
root      878179       0  0 10:07 ?        00:00:00 /bin/sh -c /health-check.sh
root      878185  878179  0 10:07 ?        00:00:00 /bin/bash /health-check.sh
root      878189  878185  0 10:07 ?        00:00:00 /bin/bash /health-check.sh
root      878840       0  0 10:08 ?        00:00:00 /bin/sh -c /health-check.sh
root      878853  878840  0 10:08 ?        00:00:00 /bin/bash /health-check.sh
root      878860  878853  0 10:08 ?        00:00:00 /bin/bash /health-check.sh
root      879500       0  0 10:08 ?        00:00:00 /bin/sh -c /health-check.sh
root      879517  879500  0 10:08 ?        00:00:00 /bin/bash /health-check.sh
root      879521  879517  0 10:08 ?        00:00:00 /bin/bash /health-check.sh
root      880172       0  0 10:09 ?        00:00:00 /bin/sh -c /health-check.sh
root      880178  880172  0 10:09 ?        00:00:00 /bin/bash /health-check.sh
root      880182  880178  0 10:09 ?        00:00:00 /bin/bash /health-check.sh
root      881496       0  0 10:10 ?        00:00:00 /bin/sh -c /health-check.sh
root      881504  881496  0 10:10 ?        00:00:00 /bin/bash /health-check.sh
root      881508  881504  0 10:10 ?        00:00:00 /bin/bash /health-check.sh
root      881743  874652  0 10:10 pts/0    00:00:00 grep health-check

To Reproduce
Steps to reproduce the behavior (the exact trigger is currently not clear):

  1. Block FHEM so that it no longer responds
  2. Call perl fhem.pl 7072 jsonlist2 TYPE=FHEMWEB:FILTER=TEMPORARY!=1:FILTER=DockerHealthCheck!=0
  3. The process never times out

Expected behavior
The call to
perl fhem.pl 7072 jsonlist2 TYPE=FHEMWEB:FILTER=TEMPORARY!=1:FILTER=DockerHealthCheck!=0
needs some timeout handling, for example:
timeout 20 perl fhem.pl 7072 jsonlist2 TYPE=FHEMWEB:FILTER=TEMPORARY!=1:FILTER=DockerHealthCheck!=0

Possibly the complete health-check.sh needs timeout handling as well.
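
A minimal sketch of what such a guard could look like inside health-check.sh (the 20-second limit and the exit handling are assumptions, not the project's actual implementation):

# abort the status query if FHEM hangs; coreutils `timeout` returns 124 on expiry
if ! timeout 20 perl fhem.pl 7072 jsonlist2 TYPE=FHEMWEB:FILTER=TEMPORARY!=1:FILTER=DockerHealthCheck!=0 >/dev/null; then
  echo "health check timed out or failed"
  exit 1
fi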

Additional context

root@1e65b9fd71b2:/opt/fhem# cat /image_info
org.opencontainers.image.created=2020-08-03T11:22:51+00:00
org.opencontainers.image.authors=Julian Pawlowski (Forum.fhem.de:@loredo, Twitter:@loredo)
org.opencontainers.image.url=https://hub.docker.com/r/fhem/fhem-amd64_linux
org.opencontainers.image.documentation=https://github.com/fhem/fhem-docker/blob/e96d817971ec68f3a191c0dfa7a27e87b4d2e8be/README.md
org.opencontainers.image.source=https://github.com/fhem/fhem-docker/
org.opencontainers.image.version=6.0-s22528_v2.2.4
org.opencontainers.image.revision=e96d817971ec68f3a191c0dfa7a27e87b4d2e8be
org.opencontainers.image.vendor=Julian Pawlowski
org.opencontainers.image.licenses=MIT
org.opencontainers.image.title=fhem-amd64_linux
org.opencontainers.image.description=A basic Docker image for FHEM house automation system, based on Debian Buster.
org.fhem.authors=https://fhem.de/MAINTAINER.txt
org.fhem.url=https://fhem.de/
org.fhem.documentation=https://fhem.de/#Documentation
org.fhem.source=https://svn.fhem.de/
org.fhem.version=6.0-s22528
org.fhem.revision=22528
org.fhem.vendor=FHEM e.V.
org.fhem.licenses=GPL-2.0
org.fhem.description=FHEM (TM) is a GPL'd perl server for house automation. It is used to automate some common tasks in the household like switching lamps / shutters / heating / etc. and to log events like temperature / humidity / power consumption.

Add cpm install filter

The extended CPAN 3rd-party layer tries to install the following packages. They should be filtered out:

#53 517.5 FAIL resolve ABFALL_getEvents
#53 517.5 FAIL resolve ABFALL_setUpdate
#53 517.5 FAIL resolve Blocking
#53 517.5 FAIL resolve Device::LIFX
#53 517.5 FAIL resolve Device::LIFX::Constants
#53 517.5 FAIL resolve FHEM
#53 517.5 FAIL resolve FHEM::Meta
#53 517.5 FAIL resolve GPUtils
#53 517.5 FAIL resolve HM485d::HM485_Protocol
#53 517.5 FAIL resolve HttpUtils
#53 517.5 FAIL resolve Slim::Plugin::Base
#53 517.5 FAIL resolve Slim::Utils::Log
#53 517.5 FAIL resolve Slim::Utils::Misc
#53 517.5 FAIL resolve Slim::Utils::Prefs
#53 517.5 FAIL resolve Slim::Utils::Strings
#53 517.5 FAIL resolve TradfriUtils
#53 517.5 FAIL resolve carp
#53 517.5 FAIL resolve encode
#53 517.5 FAIL resolve fhconverter
#53 517.5 FAIL resolve fhwebsocket
#53 517.5 FAIL resolve lib::HM485::ConfigurationManager
#53 517.5 FAIL resolve lib::HM485::Constants
#53 517.5 FAIL resolve lib::HM485::Device
#53 517.5 FAIL resolve lib::HM485::PeeringManager
#53 517.5 FAIL resolve lib::HM485::Util
#53 517.5 FAIL resolve lib::HM485::XmlConverter
#53 517.5 FAIL resolve lib::OWNet
#53 517.5 FAIL resolve lib::SD_Protocols
#53 517.5 FAIL resolve longer
#53 517.5 FAIL resolve myCtrlHAL
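
A possible approach is to filter the resolved module list against a denylist before handing it to cpm; a minimal sketch, assuming the names are collected in plain text files (the file names cpan-denylist.txt and modules-to-install.txt are hypothetical):

# drop FHEM-internal and otherwise unresolvable names before calling cpm
grep -v -x -f cpan-denylist.txt modules-to-install.txt > modules-filtered.txt
xargs -a modules-filtered.txt cpm install --global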

Docker container is always restarting after new setup via git clone

I am trying to integrate this container into my Git version control as described in the README. At first everything goes well: once I set up a docker-compose.yml in my /docker/home folder on a Raspberry Pi 3, I can successfully create the container. I am using the following docker-compose.yml:

version: '2.3'

networks:
  net:
    driver: bridge
    # enable_ipv6: true
    ipam:
      driver: default
      config:
        - subnet: 172.27.0.0/24
          gateway: 172.27.0.1
        # - subnet: fd00:0:0:0:27::/80
        #   gateway: fd00:0:0:0:27::1

services:

  fhem:
    image: fhem/fhem:latest
    container_name: fhem_slave
    restart: always
    networks:
      - net
    ports:
      - "8083:8083"
    volumes:
      - "./fhem/:/opt/fhem/"
    devices:
      - "/dev/ttyACM0:/dev/ttyACM0"
      - "/dev/ttyACM1:/dev/ttyACM1"
      - "/dev/ttyAMA0:/dev/ttyAMA0"
    environment:
      FHEM_UID: 6061
      FHEM_GID: 6061
      TIMEOUT: 10
      RESTART: 1
      TELNETPORT: 7072
      TZ: Europe/Berlin

After the first start of the container, FHEM is up and running. I then made some changes within the FHEM frontend, did an update all and then committed those changes to my online repo at Bitbucket. I wanted to check whether this works as a suitable backup solution. To test it, I stopped and removed the container and completely deleted the home folder using sudo rm -r home. Then I cloned my online repo back into the home folder. All files looked exactly the same as before I deleted the home folder. However, when trying to build and start the container via docker-compose up -d, the container no longer starts properly. Using docker ps, I can see that it repeatedly tries to start the container for a few seconds; it looks like the container is stuck in a restart loop.

[Screenshot: docker ps output showing the container in a restart loop]

What am I supposed to do in order to find out what causes these restart issues?
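
The container's own output is usually the first place to look; a couple of standard commands (container name taken from the compose file above):

docker logs --tail 100 fhem_slave     # last output of the entry script / FHEM before the restart
docker inspect --format '{{.State.ExitCode}} {{.State.Error}}' fhem_slave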

Endless loop

This causes endless looping, because $FOUND will not change during loop operation:

fhem-docker/src/entry.sh

Lines 472 to 475 in fcf2e72

until $FOUND; do
  sleep $SLEEPINTERVAL
  PrintNewLines "Server shutdown"
done
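
A defensive variant would bound the wait so the loop cannot spin forever even if $FOUND is never set; a sketch only, not the project's actual fix:

WAITED=0
until $FOUND || [ "$WAITED" -ge 300 ]; do   # give up after roughly 300 seconds
  sleep $SLEEPINTERVAL
  WAITED=$((WAITED + SLEEPINTERVAL))
  PrintNewLines "Server shutdown"
done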

Update nodejsVersion to 18.x or higher for support of npm 10.x

Using the latest provided version of dockerized FHEM, there are issues updating npm inside the container to 10.x: npm 10.x requires a nodejsVersion of 18.x or higher, whereas the image ships version 16.20.2.

What else is needed from my side?

Thanks in advance and best regards
Volker

Crypt::Cipher::AES is not installed

The Crypt::Cipher::AES package is not installed despite https://github.com/docker-home-automation-stack/fhem-docker/blob/master/Dockerfile#L173.

When I try to install it manually inside the container, the following error appears:

root@0d63d51780fa:/opt/fhem# sudo cpan Crypt::Cipher::AES
Loading internal null logger. Install Log::Log4perl for logging messages
Reading '/root/.cpan/Metadata'
   Database was generated on Tue, 23 Oct 2018 09:29:02 GMT
Running install for module 'Crypt::Cipher::AES'
Checksum for /root/.cpan/sources/authors/id/M/MI/MIK/CryptX-0.061.tar.gz ok
'YAML' not installed, will not store persistent state
Configuring M/MI/MIK/CryptX-0.061.tar.gz with Makefile.PL
Checking if your kit is complete...
Looks good
Generating a Unix-style Makefile
Writing Makefile for CryptX
Writing MYMETA.yml and MYMETA.json
   MIK/CryptX-0.061.tar.gz
   /usr/bin/perl Makefile.PL INSTALLDIRS=site -- OK
Running make for M/MI/MIK/CryptX-0.061.tar.gz
   MIK/CryptX-0.061.tar.gz
   make -- NOT OK

Apparently the build tools are required, so I manually installed them with
apt-get install build-essential

These tools are installed earlier in the Dockerfile and removed again afterwards via 'purge', but I cannot tell why Crypt::Cipher::AES is not installed in between (on an amd64 architecture).

On the first invocation of cpan (not cpanm) I noticed that it performed an initial setup. Maybe this is of interest.

In short:
Can you find out why Crypt::Cipher::AES is not installed on an amd64 platform?
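
A manual workaround inside a running container is to install the toolchain first and then build the module; a sketch (whether cpanm is available in the image is an assumption):

apt-get update && apt-get install -y build-essential   # compiler and make, needed to build CryptX
cpanm Crypt::Cipher::AES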

Default: gateway.docker.internal is not set

Describe the bug

To Reproduce
Steps to reproduce the behavior:

  1. Deploy a container "ghcr.io/fhem/fhem-minimal-docker:4.0.0-beta7-bullseye"
  2. Wait until entry.sh has prepared the environment
  3. Run cat /etc/hosts

Expected behavior

An entry with the following name is present:
gateway.docker.internal

Current behavior

The entry with the following name is missing:
gateway.docker.internal

Additional context

Interesting lines: the hostAddr handling.

The ip command is not found, so the following check fails:

if ip -4 addr show docker0 >/dev/null 2>&1 ; then

root@2a30104b5003:/opt/fhem# cat /etc/hosts
127.0.0.1       localhost
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.3      2a30104b5003
172.17.0.1      host.docker.internal

Source: https://forum.fhem.de/index.php?topic=137309.msg1308654#msg1308654
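
If the ip tool is missing, the default gateway can still be read from /proc/net/route; a minimal sketch of a possible fallback (not the project's actual code):

# /proc/net/route lists the default gateway as little-endian hex in column 3
gw_hex=$(awk '$2 == "00000000" { print $3; exit }' /proc/net/route)
if [ -n "$gw_hex" ]; then
  gw_ip=$(printf '%d.%d.%d.%d' "0x${gw_hex:6:2}" "0x${gw_hex:4:2}" "0x${gw_hex:2:2}" "0x${gw_hex:0:2}")
  grep -q 'gateway.docker.internal' /etc/hosts || echo "${gw_ip}  gateway.docker.internal" >> /etc/hosts
fi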

Empty new lines written to docker log file

Describe the bug
Docker logs get flooded with empty new lines, i.e. the file gets appended even though there is nothing new in the FHEM log file.
If the container is recreated, the Docker logs are deleted; otherwise they grow quite large (under /var/lib/docker/containers/<id>/<id>-json.log).

To Reproduce
Steps to reproduce the behavior:
Run sudo docker logs <name of container> -f and watch.

Expected behavior
No adding of blank new lines if there is nothing new in the FHEM log file.

Desktop (please complete the following information):

  • OS: Debian bookworm
  • Docker version: 24.0.6 (compose version 2.21.0)
  • Image version: 3.2.3-bullseye

Additional context
I investigated the issue and it seems like there is a problem in entry.sh where on line 459 printf '%s\n' "${logArray[@]}" is executed, even though $logArray is empty.

I fixed it by moving the contents of line 459 into the else-branch on line 464, i.e. PrintNewLines looks like this now:

function PrintNewLines {
  LOGFILENAME=$( date +"${LOGFILE}" )
  if [ -s "${LOGFILENAME}" ]; then
    mapfile -t logArray < <(tail -n "+$((${OLDLINES} + 1))" "${LOGFILENAME}" )
    [ -n "$1" ] && printf '%s\n' "${logArray[@]}" | grep -q -e "$1" && FOUND=true || FOUND=false
    if [ ${#logArray[@]} -eq 0 ]; then
      MAXLINES=$( wc -l < "${LOGFILENAME}" )
      [ ${OLDLINES} -gt ${MAXLINES} ] && OLDLINES=-1  # logfile rotation
    else
      printf '%s\n' "${logArray[@]}" # moved to here
      OLDLINES=$((${OLDLINES} + ${#logArray[@]} ))
    fi
  fi
}

While this does seem to fix the bug, I haven't tested it yet with log rotation and don't know if there are other implications from moving the printf statement …

CUL_HM version with bugs

Hey Folks,

the current version contains a buggy module: CUL_HM.pm.
This causes a problem when users try to peer virtual temperature sensors with Homematic thermostats.
To solve this problem, users must update the whole system as described here: https://wiki.fhem.de/wiki/Update

@ALL Contributors: Could you kindly update the whole container to a version with the newest modules? Just to avoid future questions and bug reports.
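
Until the image ships newer modules, an update can be triggered inside a running container; a sketch (the container name and the default telnet port 7072 are assumptions):

docker exec fhem sh -c 'cd /opt/fhem && perl fhem.pl 7072 update'
docker exec fhem sh -c 'cd /opt/fhem && perl fhem.pl 7072 "shutdown restart"'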

Thanks
Niko

Action Required: Fix Renovate Configuration

There is an error with this repository's Renovate configuration that needs to be fixed. As a precaution, Renovate will stop PRs until it is resolved.

Location: config
Error type: Invalid allowedVersions
Message: The following allowedVersions does not parse as a valid version or range: "^(5\.)([1-9][02468])(\.\d)?(.*)$"

Error in Dockerfile

There is an error in the Dockerfile; the folder referenced here does not exist:

COPY src/fhem/trunk/fhem/ /fhem/

best regards Jan

FHEM is killed if delayed shutdown is in progress

Describe the bug

To Reproduce
Steps to reproduce the behavior:

  1. Load a module which uses delayed restart
  2. Stop the container (not FHEM)
  3. Wait until the logfile prints "Server shutdown delayed ..."
  4. The error occurs: FHEM is killed by the entry script

Expected behavior

The container should wait until "Server shutdown" is reported.

Additional context

Reported via fhem forum:
https://forum.fhem.de/index.php?topic=133468.msg1276417#msg1276417
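
As an operator-side workaround until the entry script handles this case, the stop grace period can be raised so that Docker itself does not kill the container during a delayed shutdown (the 120-second value is an arbitrary example):

docker stop -t 120 fhem        # or set stop_grace_period: 120s in docker-compose.yml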

Rewrite of entry.sh / question on contributing

Dear maintainers,

I did a fairly massive rewrite of the entry.sh script. It mainly addresses:

  • In the past there were a number of regressions in sending the log to the console.
    With my new version, sending the log to the console is done by a background process heavily using "tail",
    but also taking care of line-buffering, changed log file etc.
    Performance-wise it also makes far fewer calls to the various Unix tools, since it no longer polls.
  • Checking for process startup and process termination is more robust.
  • General rewrite, making the script more modular and robust.

So my question is: should I make a pull request against the original entry.sh, resulting in a massive (and mostly useless) diff,
or should I request pulling a new file?

Thx in advance
Harald

Connection refused from 127.0.0.1:50562

Hello,

I have lots of messages like:

2021.07.17 13:08:27.623 1: Connection refused from 127.0.0.1:46166
2021.07.17 13:08:49.252 1: Connection refused from 127.0.0.1:46294
2021.07.17 13:09:10.682 1: Connection refused from 127.0.0.1:46452
2021.07.17 13:09:31.986 1: Connection refused from 127.0.0.1:46602
2021.07.17 13:09:53.233 1: Connection refused from 127.0.0.1:46726

I tried to track this down using tcpdump, but could not find the cause.

I have 2 fhem instances defined.
This one works fine without errors:
docker run --name=fhem-prod --ulimit nofile=98304:98304 -e TELNETPORT=7072 -e CPAN_PKGS="IO::File RPC::XML::Client RPC::XML::Server SubProcess JSON" -d -p 8083:8083 -p 7072:7072 -p 7411:7411 -p 7420:7420 -v /share/CACHEDEV1_DATA/Container/Volumes/fhem:/opt/fhem --restart always fhem/fhem

The second one with the connection attempt errors is defined like:
docker run --name=fhem-test --ulimit nofile=98304:98304 -e TELNETPORT=7072 -e CPAN_PKGS="IO::File RPC::XML::Client RPC::XML::Server SubProcess JSON" -d -p 8084:8083 -p 7074:7072 -p 7511:7511 -p 7520:7520 -p 14211:14211 -v /share/CACHEDEV1_DATA/Container/Volumes/fhem-test:/opt/fhem --restart always fhem/fhem

I also tried to set the telnet port to something other than 7072 -> same issue. (TELNETPORT=xxxx and -p xxxx:xxxx)
The other ports (except web) are for CCU RPC. (rpcserverport set to 5500)

Any idea how to debug this connection attempt?
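
One way to narrow this down is to check whether the container's health check is the source, since it connects to the telnet port roughly every 20 seconds, which matches the interval in the log above (container name taken from the docker run command; ps may not be present in every image variant):

docker inspect --format '{{json .State.Health}}' fhem-test   # last health check results
docker exec fhem-test ps -ef | grep health-check             # is a check running right now?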
Thanks

Abrupt daemon termination

A few days ago the file "entry.sh" was changed.

Since then FHEM is constantly restarting with the message "Abrupt daemon termination".

If I change line 535 from "export PERL_JSON_BACKEND= ..." to "PERL_JSON_BACKEND= ...", everything works.

Everything works without this change in a second FHEM container.

What could be the cause?

avahi-daemon not startable

Hi,
I need avahi-browse inside the FHEM Docker container,
so I need to install these packages:
apt-get install -y avahi-utils avahi-daemon libnss-mdns systemd

My Dockerfile:

FROM fhem/fhem:bullseye

RUN apt-get update -y &&  \
    apt-get upgrade -y && \
    apt-get install -y avahi-utils avahi-daemon libnss-mdns systemd && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

My docker compose file:

  fhem:
    build:
      context: fhem
    image: fhem:1.0
    container_name: fhem
    hostname: fhem
    volumes:
      - fhem:/opt/fhem
      - /var/run/dbus:/var/run/dbus
    ports:
      - 8083:8083
      - 7072:7072
    restart: unless-stopped

After starting the container, the daemon didn't start:

root@fhem:/tmp# avahi-browse -a
Failed to create client object: Daemon not running

So I have to perform the following steps to make it work inside Docker:

rm /run/dbus/pid
dbus-daemon --system
/etc/init.d/avahi-daemon start
avahi-browse -a
[.....]

Is it a problem inside the fhem-docker image that dbus cannot be started?
If not, how can I add a custom start script to execute the needed steps after each container start?
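
Regarding a custom start script: other issues in this list mention a pre-init.sh hook. Assuming that hook is executed by entry.sh at every container start and can be bind-mounted (please verify against the README), the manual steps above could be scripted like this:

#!/bin/bash
# hypothetical /pre-init.sh, mounted e.g. via: ./pre-init.sh:/pre-init.sh
rm -f /run/dbus/pid                 # clear a stale pid file from a previous run
dbus-daemon --system                # start the system bus
/etc/init.d/avahi-daemon start      # then start avahi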

Error in building fhem-docker image

Hello everybody,

maybe someone is experiencing the same issue while building the container image.

Step 59/68 : COPY src/fhem/trunk/fhem/ /fhem/
ERROR: Service 'fhem' failed to build: COPY failed: stat /var/lib/docker/tmp/docker-builder657142930/src/fhem/trunk/fhem: no such file or directory

$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.1 LTS
Release: 18.04
Codename: bionic

$ docker version
Client:
Version: 18.09.0
API version: 1.39
Go version: go1.10.4
Git commit: 4d60db4
Built: Wed Nov 7 00:49:01 2018
OS/Arch: linux/amd64
Experimental: false

Server: Docker Engine - Community
Engine:
Version: 18.09.0
API version: 1.39 (minimum version 1.12)
Go version: go1.10.4
Git commit: 4d60db4
Built: Wed Nov 7 00:16:44 2018
OS/Arch: linux/amd64
Experimental: false

If this trunk folder is related to the SVN repository of FHEM, is there any chance to include this step in the Dockerfile?

Just to mention: I don't want to use the pre-built Docker image.

Thanks / Cheers, Jens
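
The missing src/fhem/trunk/fhem directory looks like an SVN working copy of FHEM that has to exist before the build; a sketch of preparing it manually (the exact path expected by the Dockerfile is inferred from the COPY line above):

svn checkout https://svn.fhem.de/fhem/trunk ./src/fhem/trunk
docker-compose build fhem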

Configurable telnet port for healthcheck?

I am running several FHEM containers, so I have to forward the ports like this:

5083:8083 (FHEMWEB)
5072:7072 (Telnet)

Your health-check script uses the hardcoded telnet port 7072.

Could you please make this port configurable, perhaps as an environment variable?
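
A sketch of what that could look like inside health-check.sh, reusing the TELNETPORT variable that the container already accepts as an environment variable (this is a suggestion, not the current implementation):

TELNETPORT="${TELNETPORT:-7072}"   # fall back to the previous hardcoded default
perl fhem.pl "${TELNETPORT}" jsonlist2 TYPE=FHEMWEB:FILTER=TEMPORARY!=1:FILTER=DockerHealthCheck!=0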

Dependency Dashboard

This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.

Open

These updates have all been created already. Click a checkbox below to force a retry/rebase of any.

Detected dependencies

docker-compose
docker-compose.yml
dockerfile
Dockerfile-bullseye
  • docker/dockerfile 1@sha256:a57df69d0ea827fb7266491f2813635de6f17269be881f696fbfdf2d83dda33e
  • perl 5.36.3-slim-bullseye
  • perl 5.36.3-bullseye
Dockerfile-threaded-bullseye
  • docker/dockerfile 1@sha256:a57df69d0ea827fb7266491f2813635de6f17269be881f696fbfdf2d83dda33e
  • perl 5.36.3-slim-threaded-bullseye
  • perl 5.36.3-threaded-bullseye
github-actions
.github/workflows/build.yml
  • actions/checkout v4
  • shogo82148/actions-setup-perl v1
  • actions/cache v4
  • actions/upload-artifact v4
  • actions/upload-artifact v4
  • actions/checkout v4
  • docker/build-push-action v5
  • actions/checkout v4
  • rlespinasse/github-slug-action v4.5.0
  • actions/download-artifact v4
  • actions/download-artifact v4
  • docker/build-push-action v5
  • actions/checkout v4
  • rlespinasse/github-slug-action v4.5.0
  • actions/download-artifact v4
  • actions/download-artifact v4
  • docker/metadata-action v5
  • docker/build-push-action v5
  • docker/build-push-action v5
  • Wandalen/wretry.action v3.4.0
  • Wandalen/wretry.action v3.4.0
  • docker/build-push-action v5
  • docker/build-push-action v5
  • Wandalen/wretry.action v3.4.0
  • actions/checkout v4
  • rlespinasse/github-slug-action v4.5.0
  • actions/download-artifact v4
  • actions/download-artifact v4
  • docker/metadata-action v5
  • docker/build-push-action v5
  • docker/metadata-action v5
  • docker/build-push-action v5
.github/workflows/cacheCleanup.yml
.github/workflows/prepare-docker/action.yml
  • docker/setup-qemu-action v3
  • docker/setup-buildx-action v3
  • docker/login-action v3
  • docker/login-action v3
.github/workflows/prepare-svn/action.yml
  • actions/cache v4

  • Check this box to trigger a request for Renovate to run again on this repository

Compatible path between docker hub and ghcr.io

Is your feature request related to a problem? Please describe.
At the moment the image can be found on docker hub with fhem/fhem but on ghcr with fhem/fhem/fhem-docker.

Describe the solution you'd like
Please change the package name on ghcr to fhem so it can also be found under fhem/fhem.

Error docker build

Step 64/73 : COPY src/fhem/trunk/fhem/ /fhem/
lstat src/fhem/trunk/fhem/: no such file or directory

Missing packages for BOSEST module

Hello,

I just noticed that the Bose module BOSEST cannot be put into operation.
Packages are missing under Linux.

Example: libmojolicious-perl

BOSEST
BOSEST is used to control a BOSE SoundTouch system (one or more SoundTouch 10, 20 or 30 devices)

Note: The following libraries are required for this module:
libwww-perl
libmojolicious-perl
libxml-simple-perl
libnet-bonjour-perl
libev-perl
liburi-escape-xs-perl
sox
libsox-fmt-mp3

Could you help out here?
Does it actually make sense in the long run to include every conceivable package? Or could this be outsourced, as is done for Homebridge, for example (https://github.com/oznu/docker-homebridge#homebridge-plugins)?
Or can the packages already be added via "pre-init.sh" without building a custom image? Can the "pre-init.sh" be mounted in somehow? (See the sketch below.)
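
Regarding pre-init.sh: assuming the hook is mounted into the container and runs before FHEM starts (see the avahi-daemon issue above and verify against the README), the missing packages could be installed there; a sketch:

#!/bin/bash
# hypothetical /pre-init.sh installing the libraries listed by the BOSEST module
apt-get update
apt-get install -y --no-install-recommends \
  libwww-perl libmojolicious-perl libxml-simple-perl libnet-bonjour-perl \
  libev-perl liburi-escape-xs-perl sox libsox-fmt-mp3
rm -rf /var/lib/apt/lists/*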

fhemdebug memusage: Size.pm missing

Can't locate Devel/Size.pm in @INC (you may need to install the Devel::Size module) (@INC contains: . /etc/perl /usr/local/lib/x86_64-linux-gnu/perl/5.24.1 /usr/local/share/perl/5.24.1 /usr/lib/x86_64-linux-gnu/perl5/5.24 /usr/share/perl5 /usr/lib/x86_64-linux-gnu/perl/5.24 /usr/share/perl/5.24 /usr/local/lib/site_perl /usr/lib/x86_64-linux-gnu/perl-base ./FHEM) at (eval 16252) line 2.
BEGIN failed--compilation aborted at (eval 16252) line 2.
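
A possible workaround, assuming the CPAN_PKGS environment variable (used in other issues in this list) installs additional Perl modules at container start:

docker run -d --name fhem -e CPAN_PKGS="Devel::Size" -p 8083:8083 -v "$(pwd)/fhem:/opt/fhem" fhem/fhem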

should not touch populated volume by default

I gave a copy of my existing /opt/fhem directory to the Docker image as a volume, expecting it to more or less run as-is. To my surprise, running the image changed the ownership and permissions of all files and directories, ruining my setup. It's set up just so, with a couple of files, notably fhem.cfg, read-only to the fhem user by default, for a reason!

Easily restored of course, but I don't think the container setup should meddle with the external volume at all; that sort of defeats the point of keeping code and data/configuration separate.
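
Until the default behavior changes, one mitigation is to hand the container the UID/GID that already own the volume, via the FHEM_UID/FHEM_GID variables shown in the compose example above; note that this only addresses ownership, not read-only permission bits (a sketch):

stat -c '%u %g' ./fhem/fhem.cfg     # find the current owner, e.g. 6061 6061
docker run -d -e FHEM_UID=6061 -e FHEM_GID=6061 -v "$(pwd)/fhem:/opt/fhem" -p 8083:8083 fhem/fhem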

HEALTHCHECK Error

Step 69/72 : HEALTHCHECK --interval=20s --timeout=10s --start-period=60s --retries=5 CMD /health-check.sh
Unknown flag: start-period
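
The --start-period flag is only understood by newer Docker releases, so this error usually indicates that the engine on the build host is too old; the versions can be checked like this (the exact minimum release is not verified here):

docker version --format 'client: {{.Client.Version}}  server: {{.Server.Version}}'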

Memory leak with Perl 5.32

I migrated my FHEM installation from an old desktop PC (Debian Buster with Perl 5.28.1) to the current Docker image with Perl 5.32
and ran into the Perl memory leak described here:
https://forum.fhem.de/index.php/topic,112649.msg1069721.html#msg1069721

I need to restart about every 60 h to prevent my FHEM instance from crashing.

According to
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=994834#10
the bug is fixed with Perl version 5.34.0.1.
An imminent release of a new image based on Debian Bookworm would solve the problem,
since that release uses Perl 5.36.0-7+deb12u.
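
Until a Bookworm-based image is available, a scheduled restart on the Docker host can serve as a stopgap; a sketch for a root crontab entry (container name and interval are assumptions):

# restart the FHEM container every second day at 04:00 to work around the leak
0 4 */2 * * /usr/bin/docker restart fhem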

FAIL install Image-Magick-7.1.1-28

Installation of Image-Magick fails


#55 539.9 2024-04-14T11:55:47,26,Image-Magick-7.1.1-28| Magick.xs:56:10: fatal error: MagickCore/MagickCore.h: No such file or directory
#55 539.9 2024-04-14T11:55:47,26,Image-Magick-7.1.1-28|    56 | #include <MagickCore/MagickCore.h>
#55 539.9 2024-04-14T11:55:47,26,Image-Magick-7.1.1-28|       |          ^~~~~~~~~~~~~~~~~~~~~~~~~
#55 539.9 2024-04-14T11:55:47,26,Image-Magick-7.1.1-28| compilation terminated.
#55 539.9 2024-04-14T11:55:47,26,Image-Magick-7.1.1-28| make: *** [Makefile:351: Magick.o] Error 1
#55 539.9 2024-04-14T11:55:47,26,Image-Magick-7.1.1-28| Failed to install distribution
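
The failure points to missing MagickCore development headers; installing the ImageMagick development package into the build stage before the cpm run might resolve it (the Debian package name libmagickcore-dev is an assumption, not verified against the Dockerfile):

apt-get update && apt-get install -y --no-install-recommends libmagickcore-dev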

FAIL install Net-DBus-1.2.0

#55 539.9 2024-04-14T11:54:55,26,Net-DBus-1.2.0| ! Retrying (you can turn off this behavior by --no-retry)
#55 539.9 2024-04-14T11:54:55,26,Net-DBus-1.2.0| Executing /usr/local/bin/perl Makefile.PL
#55 539.9 2024-04-14T11:54:56,26,Net-DBus-1.2.0| DBus >= 1.3.0 is required
#55 539.9 2024-04-14T11:54:56,26,Net-DBus-1.2.0| Failed to configure distribution

Update speedtest-cli: Buggy version is installed

Hello,

in the container, the installed speedtest-cli version (2.0.2) contains a bug that makes the measured upload speed wrong (too low).
The bug is described here: GitHub: sivel/speedtest-cli/issues/575
The current version is 2.1.2.

Exec into the container and check the speedtest-cli version and the measured uplink speed with:
docker exec -ti fhem_fhem_1 speedtest-cli --version
docker exec -ti fhem_fhem_1 speedtest-cli

The buggy version of speedtest-cli has been included in the Debian repos for over 1.5 years.

A better way to install the speedtest-cli is by installing it directly from the git repo or via pip:

  • Via pip: pip install speedtest-cli
  • Directly via git: git clone https://github.com/sivel/speedtest-cli.git && cd speedtest-cli && python setup.py install

Python link missing

The FHEM module speedtest requires the Python script speedtest.

This throws the following error in FHEM:

/usr/bin/env: ‘python’: No such file or directory

In bash it works.

A link from python to python3 is needed here.
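
A possible fix inside the container: either install Debian's python-is-python3 package (assumed to be available in the image's release) or create the symlink directly:

apt-get update && apt-get install -y python-is-python3
# or, without an extra package:
ln -s /usr/bin/python3 /usr/bin/python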

FAIL install HiPi-0.92

Installation of HiPi fails

#55 539.9 2024-04-14T11:55:54,24,HiPi-0.92| I2C.c:22:10: fatal error: i2c/smbus.h: No such file or directory

Jabber is broken for certain servers (e.g. jabber.de)

Bug description

  • When this Docker image is used for a FHEM instance containing a Jabber client connecting to jabber.de, Jabber fails to authenticate and remains Disconnected.

Steps to reproduce

  1. Prepare an account at jabber.de
  2. Start a container from this Dockerfile.
  3. Create a Jabber module instance, e.g.
    define jabber Jabber jabber.de 5222 <username> <password> 1 0
  4. Look at the module's internal CONNINFO:
    it contains the server response message error invalid-mechanism

Expected behavior

  • The device state should be Connected
  • The internal CONNINFO should contain the message Connected to jabber.de with username <username>

Environment

  • Raspberry Pi 3B+
  • Raspbian Buster
  • Docker 18.09.1

Additional context

  • The Dockerfile bundles package libauthen-sasl-cyrus-perl which enables support for the SASL authentication mechanism SCRAM-SHA-1. This triggers the problem.
  • Root cause for the problem is issue dap/XML-Stream#27.
  • Summary: when both server and client support more than 1 SASL authentication mechanism, the XML-Stream library that is used within 70_Jabber.pm creates an invalid authentication request.

Workaround

  • The problem does not occur when package libauthen-sasl-cyrus-perl is removed from this Dockerfile. This may be done by a tiny pre-init.sh script containing the line
    apt-get remove -y libauthen-sasl-cyrus-perl

Suggestion

  • Depending on whether libauthen-sasl-cyrus-perl is required for other purposes, it may be reasonable to remove this package from the Dockerfile and so avoid the problem without the need for a pre-init script.

Reduce size of v4 minimal image

(Migrated from #115)

This is a request to reduce the size of the v4 minimal image.

The minimal image has grown significantly between v3 and v4:

$ docker images | grep fhem
ghcr.io/fhem/fhem-minimal-docker   3.3.1-bullseye                       c9b1f0c873e7   4 months ago    635MB
ghcr.io/fhem/fhem-minimal-docker   dev-bullseye                         cf28244b08a3   12 hours ago    905MB

Maybe a switch to the slim base image helps (not verified).

$ docker image ls | grep perl
perl                 5.36.3-slim-bullseye   52a305760451   3 weeks ago     132MB
perl                 5.36.3-bullseye        61325f916a7e   3 weeks ago     692MB

Jabber module does not work

Describe the bug
libauthen-sasl-cyrus-perl prevents the jabber module from connecting properly to a jabber server. (Bullseye Image)

To Reproduce
Steps to reproduce the behavior:
Try to connect to a jabber server within a FHEM-Bullseye container.
(If libauthen-sasl-cyrus-perl is removed a connection can be established.)

Expected behavior
A connection to a jabber server should be established properly.

ERROR: Service 'habridge' failed to build

I am using a Raspberry Pi and I get this error when I run docker-compose up.
I followed the steps in the YouTube video.

this is the output:

Building habridge
Step 1/7 : FROM java:8-jdk
 ---> d23bdf5b1b1b
Step 2/7 : MAINTAINER Matthias Kleine <[email protected]>
 ---> Using cache
 ---> 0d7b74e373c4
Step 3/7 : ENV BRIDGE_VERSION 5.2.1
 ---> Using cache
 ---> ededec52eb44
Step 4/7 : RUN mkdir -p /opt/habridge && wget https://github.com/bwssytems/ha-bridge/releases/download/v${BRIDGE_VERSION}/ha-bridge-${BRIDGE_VERSION}.jar -O /opt/habridge/ha-bridge-${BRIDGE_VERSION}.jar
 ---> Running in a48683746bf2
standard_init_linux.go:207: exec user process caused "exec format error"
ERROR: Service 'habridge' failed to build: The command '/bin/sh -c mkdir -p /opt/habridge && wget https://github.com/bwssytems/ha-bridge/releases/download/v${BRIDGE_VERSION}/ha-bridge-${BRIDGE_VERSION}.jar -O /opt/habridge/ha-bridge-${BRIDGE_VERSION}.jar' returned a non-zero code: 1
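
An "exec format error" typically means the base image was built for a different CPU architecture than the Raspberry Pi's ARM processor; the architecture of the pulled image can be checked like this:

docker image inspect java:8-jdk --format '{{.Os}}/{{.Architecture}}'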

Device::SerialPort is missing in v4 arm images

(Migrated from #115)

The v4 arm/v7 image does not contain the Device::SerialPort module which is required for many USB devices:

$ docker run --rm -ti --entrypoint /bin/sh ghcr.io/fhem/fhem-minimal-docker:3.3.1-bullseye -c "perl -e 'use Device::SerialPort'"

$ docker run --rm -ti --entrypoint /bin/sh ghcr.io/fhem/fhem-minimal-docker:dev-bullseye -c "perl -e 'use Device::SerialPort'"
Can't locate Device/SerialPort.pm in @INC (you may need to install the Device::SerialPort module) (@INC contains: /usr/local/lib/perl5/site_perl/5.36.3/arm-linux-gnueabihf-64int /usr/local/lib/perl5/site_perl/5.36.3 /usr/local/lib/perl5/vendor_perl/5.36.3/arm-linux-gnueabihf-64int /usr/local/lib/perl5/vendor_perl/5.36.3 /usr/local/lib/perl5/5.36.3/arm-linux-gnueabihf-64int /usr/local/lib/perl5/5.36.3) at -e line 1.
BEGIN failed--compilation aborted at -e line 1.

The arm64 build is probably affected, too.

Related PR: #196
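
Until the module is part of the arm images again, installing it at container start might work as a stopgap, assuming the CPAN_PKGS mechanism from other issues here and that the image is able to compile XS modules (not verified):

docker run -d --name fhem -e CPAN_PKGS="Device::SerialPort" -v "$(pwd)/fhem:/opt/fhem" ghcr.io/fhem/fhem-minimal-docker:dev-bullseye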

FAIL install GDTextUtil-0.86

#55 539.9 2024-04-14T12:02:58,11,GDTextUtil-0.86| Failed to install distribution, because of installing some dependencies failed

FAIL install Net-Bluetooth-0.41

The installation of Net-Bluetooth is failing:

#53 517.5 2024-04-14T11:55:18,25,Net-Bluetooth-0.41| Bluetooth.xs:11:10: fatal error: bluetooth/bluetooth.h: No such file or directory
#53 517.5 2024-04-14T11:55:18,25,Net-Bluetooth-0.41|    11 | #include <bluetooth/bluetooth.h>
#53 517.5 2024-04-14T11:55:18,25,Net-Bluetooth-0.41|       |          ^~~~~~~~~~~~~~~~~~~~~~~
#53 517.5 2024-04-14T11:55:18,25,Net-Bluetooth-0.41| compilation terminated.
#53 517.5 2024-04-14T11:55:19,25,Net-Bluetooth-0.41| make: *** [Makefile:342: Bluetooth.o] Error 1
#53 517.5 2024-04-14T11:55:19,25,Net-Bluetooth-0.41| Failed to install distribution
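
The missing bluetooth/bluetooth.h header should be provided by Debian's libbluetooth-dev package (to the best of my knowledge); installing it in the build stage before the cpm run should let the compilation proceed:

apt-get update && apt-get install -y --no-install-recommends libbluetooth-dev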

Updating of NPM packages not possible through FHEM UI

Hello,

after starting the Docker container and logging into the FHEM UI there are pending NPM updates indicated by fhemServerNpm. When calling "set update all" through the UI an error occurs:

Error code E403
Summary:
Forbidden - passwordless sudo permissions required

Detail:
sudo: sorry, you are not allowed to set the following environment variables: NODE_ENV

The required commands to update are part of /etc/sudoers.d/fhem-docker; unfortunately, the user is not allowed to set/retain environment variables, which causes the update to fail.

I have solved this by adding

Defaults !env_reset

to /etc/sudoers.d/fhem-docker. Afterwards updating the packages through the UI works fine.
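
A narrower alternative to disabling env_reset entirely might be to whitelist only NODE_ENV (or to add the SETENV: tag to the relevant command entry); not verified, sketched here for comparison:

# addition to /etc/sudoers.d/fhem-docker (hypothetical)
Defaults env_keep += "NODE_ENV"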

I am not sure if this is an issue that only occurs for me but I thought I better report it.

Thank you,
Andre

No credits on Travis CI for build job

Describe the bug
Travis CI has migrated to a credit-based service.
There may be a chance of getting credits from Travis for this open-source project, but on the other hand the build process could be migrated to GitHub Actions, which integrates better with GitHub overall.


To Reproduce
Steps to reproduce the behavior:

  1. Go to https://travis-ci.com/github/fhem/fhem-docker
  2. Look at the banner and the build process

