
docker's Introduction

Workflow status badges: base · test · bookworm · vunit · ext · cosim · mirror

This repository contains scripts and YAML workflows for GitHub Actions (GHA) to build and to deploy the container images that are used and/or published by the GHDL GitHub organization. All of them are pushed to hub.docker.com/u/ghdl.


ATTENTION: Some images related to synthesis and PnR were moved to hdl/containers and hub.docker.com/u/hdlc. See DEPRECATED.


Images for development (i.e., building and/or testing ghdl):

  • ghdl/build: images include development/build dependencies for ghdl.
  • ghdl/run: images include runtime dependencies for ghdl.
  • ghdl/pkg: images include the content of ghdl tarballs built in ghdl/build images.
  • ghdl/debug: the image is based on ghdl/build:buster-mcode and ghdl/pkg:buster-mcode; it includes Python pip, GNAT GPS, Graphviz and GDB.

Ready-to-use images:

  • ghdl/ghdl: images, which are based on the corresponding ghdl/run images, include ghdl along with minimal runtime dependencies.
  • ghdl/vunit: images, which are based on ghdl/ghdl:bookworm-* images, include ghdl along with VUnit.
    • *-master variants include the latest VUnit (master branch), while the others include the latest stable release (installed through pip).
  • ghdl/ext: images include GHDL and complements (ghdl-language-server, GtkWave, VUnit, etc.).
  • ghdl/cosim: images include GHDL and other tools for co-simulation, such as SciPy, Xyce or GNU Octave.

See USE_CASES.md if you are looking for usage examples from a user perspective.
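
As a minimal sketch of how the ready-to-use images are meant to be consumed (the tag and the design file below are only examples):

docker pull ghdl/ghdl:bookworm-mcode
docker run --rm ghdl/ghdl:bookworm-mcode ghdl --version
# Analyse a design mounted from the host; 'my_design.vhd' is a placeholder.
docker run --rm -v "$(pwd)":/work -w /work ghdl/ghdl:bookworm-mcode ghdl -a my_design.vhd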

GHA workflows

· base

Build and push all the ghdl/build:* and ghdl/run:* Docker images:

  • A pair of images is created in one job for [ ls ].
  • One job is created for each of [ fedora (37 | 38), debian (buster | bullseye | bookworm), ubuntu (20 | 22)], and six images are created in each job; two (ghdl/build:*, ghdl/run:*) for each supported backend [ mcode, llvm*, gcc ].
    • ghdl/debug:base is created in the debian buster job.
    • ghdl/build:doc is created in the debian bookworm job.
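
Conceptually, each matrix job builds and pushes one such pair of images. The sketch below is illustrative only; the dockerfile name, stage names and tags are placeholders for the ones defined in this repository:

# One (distro, backend) combination, e.g. Debian Bookworm with the mcode backend.
docker build --target build -t ghdl/build:bookworm-mcode -f debian.dockerfile .
docker build --target run   -t ghdl/run:bookworm-mcode   -f debian.dockerfile .
docker push ghdl/build:bookworm-mcode
docker push ghdl/run:bookworm-mcode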

· test

Build and push almost all the ghdl/ghdl:* and ghdl/pkg:* images. A pair of images is created in one job for each combination of:

  • [ fedora: [37, 38], debian: [bullseye], ubuntu: [20, 22] ] and [mcode, llvm*].
  • [ fedora: [37, 38], debian: [bullseye] ] and [gcc*].
  • For Debian only, [bullseye, bookworm] and [mcode] and [--gpl].
  • For Debian Buster, only [mcode].
    • ghdl/debug is created in this job.

The procedure in each job is as follows:

  • Repo ghdl/ghdl is cloned.
  • ghdl is built in the corresponding ghdl/build:* image.
  • A ghdl/ghdl:* image is created based on the corresponding ghdl/run:* image.
  • The testsuite is executed inside the ghdl/ghdl:* image created in the previous step.
  • If successful, a ghdl/pkg:* image is created from scratch, with the content of the tarball built in the first step.
  • ghdl/ghdl:* and ghdl/pkg:* images are pushed to hub.docker.com/u/ghdl.
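
A rough shell equivalent of one job; every path, script name and tag below is a placeholder for the logic actually implemented in the workflow and helper scripts:

git clone https://github.com/ghdl/ghdl
# 1. Build ghdl (and its tarball) inside the corresponding build image.
docker run --rm -v "$(pwd)/ghdl":/src -w /src ghdl/build:bookworm-mcode ./dist/build.sh       # hypothetical script
# 2. Create the ghdl/ghdl image (FROM the matching ghdl/run image) and run the testsuite in it.
docker build -t ghdl/ghdl:bookworm-mcode ghdl/
docker run --rm -v "$(pwd)/ghdl":/src -w /src/testsuite ghdl/ghdl:bookworm-mcode ./testsuite.sh
# 3. Wrap the tarball content in a scratch-based image and push both.
docker build -t ghdl/pkg:bookworm-mcode -f pkg.dockerfile ghdl/                               # hypothetical dockerfile
docker push ghdl/ghdl:bookworm-mcode && docker push ghdl/pkg:bookworm-mcode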

NOTE: images with GCC backend include lcov for code coverage analysis.

· bookworm [scheduled daily]

Complement of ghdl.yml, to be run after each successful run of the main workflow in ghdl/ghdl. One job is scheduled for each combination of [ bookworm ] and [ mcode, llvm-14 , gcc-12.3.0 ].

· vunit [triggered after workflow 'bookworm']

Build and push all the ghdl/vunit:* images, which are based on the ones created in the 'bookworm' workflow.

  • Two versions are published for each backend: one with latest stable VUnit (from PyPI) and one with the latest master (from Git).
  • Images with GCC backend include lcov and gcovr for code coverage analysis.
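
For instance, such an image can run a VUnit project mounted from the host; a minimal sketch, where the tag and the script name are only examples:

docker run --rm -v "$(pwd)":/work -w /work ghdl/vunit:llvm python3 run.py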

· ext [triggered after workflow 'vunit']

Build and push all the ghdl/ext:* images:

  • ls: ghdl/ext:ls-debian and ghdl/ext:ls-ubuntu (a job for each of them). These include ghdl/ghdl, the ghdl/ghdl-language-server backend and the vscode-client (precompiled but not preinstalled).
  • gui:
    • ghdl/ext:gtkwave: includes GtkWave (gtk3) on top of ghdl/vunit:llvm-master.
    • ghdl/ext:broadway: adds a script to ghdl/ext:gtkwave in order to launch a Broadway server that allows using GtkWave from a web browser.
    • ghdl/ext:ls-vunit: includes VUnit (master) on top of ghdl/ext:ls-debian.
    • ghdl/ext:latest: includes GtkWave on top of ghdl/ext:ls-vunit.

· cosim

See ghdl/ghdl-cosim: docker and ghdl.github.io/ghdl-cosim/vhpidirect/examples/vffi_user.

  • ghdl/cosim:mcode: based on ghdl/ghdl:bookworm-mcode, includes GCC.
  • ghdl/cosim:py: based on ghdl/ghdl:bookworm-llvm-7, includes Python.
    • ghdl/cosim:vunit-cocotb: based on ghdl/cosim:py, includes VUnit, cocotb and g++ (required by cocotb).
      • ghdl/cosim:matplotlib: based on ghdl/cosim:vunit-cocotb, includes pytest, matplotlib, numpy and Imagemagick.
      • ghdl/cosim:octave: based on ghdl/cosim:vunit-cocotb, includes GNU Octave.
      • ghdl/cosim:xyce: based on ghdl/cosim:vunit-cocotb, includes Xyce.

NOTE: *-slim variants of matplotlib, octave and xyce images are provided too. Those are based on ghdl/cosim:py, instead of ghdl/cosim:vunit-cocotb.

Packaging

Multiple artifacts of GHDL are generated in these workflows. For example, each job in test.yml generates a tarball that is then installed in a ghdl/ghdl:* image, and the content is published in a ghdl/pkg:* image. These resources might be useful for users/developers who:

  • Want to use a base image which is compatible but different from the ones we use. E.g., use python:3-slim-bookworm instead of debian:bookworm-slim.
  • Do not want to build and test GHDL every time.

However, it is discouraged to use these pre-built artifacts to install GHDL on host systems.
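
The first use case essentially amounts to copying the content of a ghdl/pkg image into a different (but compatible) base. A minimal sketch, with illustrative tags and an assumed list of runtime packages:

docker build -t my/ghdl - <<'EOF'
FROM python:3-slim-bookworm
COPY --from=ghdl/pkg:bookworm-mcode / /usr/local
# Runtime dependencies; the exact packages depend on the base image and the backend (assumption).
RUN apt-get update -qq && apt-get install -y --no-install-recommends libgnat-12 zlib1g \
 && rm -rf /var/lib/apt/lists/*
EOF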

docker's People

Contributors

antonblanchard · eine · lukasvik · munken · rodrigomelo9 · tgingold · umarcor


docker's Issues

nextpnr GUI

Project nextpnr supports a GUI based on Qt to visually explore the P&R procedure and/or the result. Currently available ghdl/synth:* images include nextpnr without the GUI, since those are to be used in CI environments.

However, we already provide other images with GUI tools, such as GtkWave. These can be easily used with either x11docker or runx, on GNU/Linux or Windows 10 hosts. Hence, should there be interest in having a ghdl/synth:nextpnr-gui, we might add it.

Ensure that artifacts are traceable

Currently, some artifacts are built from tarballs. As a result, users of docker images cannot know exactly which version of GHDL is included. We should add a proper identifier.

Also, we should add a test/check before pushing images to dockerhub, in order to ensure that this is fulfilled by any image that might be added in the future.
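
In the meantime, the version string reported by the binary itself is the first place to look (illustrative tag); depending on how the artifact was built, it may or may not include the exact commit, which is precisely the concern above:

docker run --rm ghdl/ghdl:bookworm-mcode ghdl --version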

Docker Resource Consumption Updates

https://www.docker.com/pricing/resource-consumption-updates

What are the rate limits for pulling Docker images from the Docker Hub Registry?
Rate limits for Docker image pulls are based on the account type of the user requesting the image - not the account type of the image’s owner. These are defined on the pricing page.

The highest entitlement a user has, based on their personal account and any orgs they belong to, will be used. Unauthenticated pull requests are “anonymous” and will be rate limited based on IP address rather than user ID. For more information on authenticating image pulls, please see this docs page.

How is a pull request defined for purposes of rate limiting?
A pull request is up to two GET requests to the registry URL path ‘/v2//manifests/’.

This accounts for the fact that pull requests for multi-arch images require a manifest list to be downloaded followed by the actual image manifest for the required architecture. HEAD requests are not counted.

Note that all pull requests, including ones for images you already have, are counted by this method. This is the trade-off for not counting individual layers.

Are anonymous (unauthenticated) pulls rate-limited based on IP address?
Yes. Pull rates are limited based on individual IP address (e.g., for anonymous users: 100 pulls per 6 hours per IP address).

What about CI systems where pulls will be anonymous?
We recognize there are some circumstances where many pulls will be made that can not be authenticated. For example, cloud CI providers may host builds based on PRs to open source projects. The project owners may be unable to securely use their project’s Docker Hub credentials to authenticate pulls in this scenario, and the scale of these providers would likely trigger the anonymous rate limits. We will unblock these scenarios as necessary and continue iterating on our rate limiting mechanisms to improve the experience, in cooperation with these providers. Please reach out to [email protected] if you are encountering issues.

Will Docker offer dedicated plans for open source projects?
Yes, as part of Docker’s commitment to the open source community, we will be announcing the availability of new open source plans. To apply for an open source plan, complete our application at: https://www.docker.com/community/open-source-application.

For now, we should be safe. However, we might want to apply for an open source plan in the future.

/cc @tmeissner

Docker image for the whole design flow

Hi,
is there a docker image, which contains the tools for the whole design flow (ghdl, ghdlsynth, yosys, nextpnr and icestorm)? I could only find images containing single tools.

Runtime dependencies for libboost

As commented in #22:

I now added libboost-all-dev to image trellis. (...) that's the same package that is installed in the build image. Ideally, there should be a (smaller) package with runtime dependencies only. For example, there are libomp-dev and libomp5-7. Are you aware of any other package (or set of packages) that we can use instead of libboost-all-dev. Note that this is not only for trellis, but also for images nextpnr, nextpnr-ice40 and nextpnr-ecp5, since all of them depend on boost.

We should be able to install the specific boost images instead of the catch all one. If I get a chance I'll take a look.

Cool. Note that this does not affect any of the features we provide; it'd just be an interesting enhancement. Hence, rather than trying to guess it ourselves, it'd be ok to just remember to ask it whenever we get the chance to talk to someone who is used to developing with boost.

Missing installation instructions

It would be great to have some quick instructions for getting started for people like me who haven't used docker all that much. I tried running docker pull ghdl/ghdl as it was mentioned here but I just get

Using default tag: latest
Error response from daemon: manifest for ghdl/ghdl:latest not found

so I guess there's something I'm missing
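
For reference, ghdl/ghdl does not seem to publish a latest tag; the images are tagged per distribution and backend (see the list above), so pulling an explicit tag avoids the missing-manifest error. For example (illustrative tag):

docker pull ghdl/ghdl:bookworm-mcode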

Feature Request: Add Git and GitPython

I am trying to use ghdl/ext:vunit-master and am running into issues because my VUnit run.py scripts expect git and GitPython to be present. In my use case, I am using a python function that uses GitPython to figure out where my git root directory is to enable each run.py script to grab the correct libraries/packages/source .vhd files for each test. E.g., below is an example of a run.py script that needs to know where the git root directory is to find all my .vhd files without hard-coding every file path:

import git

def get_git_root():
    repo = git.Repo('.', search_parent_directories=True)
    return repo.working_tree_dir

...

lib = vu.add_library("<some_lib>")
lib.add_source_files(join(root, "src/utils/src/*.vhd"))
lib.add_source_files(join(root, "src/misc/src/*.vhd"))
lib.add_source_files(join(root, "src/memory/src/*.vhd"))
lib.add_source_files(join(root, "src/bert/src/*.vhd"))
lib.add_source_files(join(root, "src/bert/test/*.vhd"))

However, because git is not present, I am unable to use this function. I tried another method where I used subprocess.Popen() and ran into the same issue because the git rev-parse --show-toplevel command I was issuing did not work because git was not present.

If you do not want to add GitPython, that is fine as I can do a pip3 install or just use subprocess in the standard package. However, I think it would be really useful to at least have git.

The issue I see when I change anything about my repo is that this creates a build-nightmare to debug with all hard-coded paths across a huge repository for HDL cores I am creating. Once my repo is more stable, I will be using a different method for finding libraries and it will be extremely important for my build environment to know where the git root is.

For reference, this is what I am doing now when I run docker as suggested by VUnit developers:

#!/bin/sh

cd $(dirname $0)

if [ -d "vunit_out" ]; then rm -rf vunit_out; fi

docker run --rm -t \
  -v /$(pwd)://work \
  -w //work \
  ghdl/ext:vunit-master sh -c ' \
    VUNIT_SIMULATOR=ghdl; \
    apt-get update -qq; \
    apt-get install git; \
    pip3 install GitPython; \
    for f in $(find ./ -name 'run.py'); do python3 $f -v; done \
  '

When the apt-get install git is executed, it hangs while waiting for the [Y/n] user input, but when this is forced, it is still unable to install git.
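
For reference, apt-get can be told to assume "yes", which avoids the [Y/n] prompt inside the container; a minimal sketch of the in-container install:

apt-get update -qq && apt-get install -y git && pip3 install GitPython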

Lastly, if you see a compelling reason for me to avoid this method, I am interested in hearing suggestions. I am fairly new to CI with docker, so this was just the method I thought would be easiest to integrate and maintain.

libgnarl is not expected to be used

As commented in #8, libgnat is a dependency of GHDL, but libgnarl is not.

We should check which piece of source code causes libgnarl to be added as a dependency.
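
A quick way to confirm whether the ghdl binary in a given image links against libgnarl is to inspect its dynamic dependencies; a sketch, with an illustrative tag:

docker run --rm ghdl/ghdl:bookworm-mcode sh -c 'ldd "$(command -v ghdl)" | grep -i gnarl'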

GitHub Actions

These are some notes about features provided by GitHub Actions that may be useful for us:


For example, you can have your workflow run on push events to master and release branches, or only run on pull_request events that target the master branch, or run every day of the week from Monday to Friday at 02:00.

https://help.github.com/en/articles/events-that-trigger-workflows


GitHub Actions provides hosted runners for Linux, Windows and macOS. To change the operating system for your job, simply specify a different virtual machine. The available virtual machine types are:

  • ubuntu-latest, ubuntu-18.04, or ubuntu-16.04
  • windows-latest, windows-2019, or windows-2016
  • macOS-latest or macOS-10.14

You can run workflows directly on the virtual machine or in a Docker container.

Each job in a workflow executes in a fresh instance of the virtual environment. All steps in the job execute in the same instance of the virtual environment, allowing the actions in that job to share information using the filesystem.

https://help.github.com/en/articles/virtual-environments-for-github-actions


With the matrix strategy GitHub Actions can automatically run your jobs across a set of different values of your choosing.

GitHub Actions supports conditions on steps and jobs based on data present in your workflow context. To run a step only as part of a push and not in a pull_request, you simply specify a condition in the if: property based on the event name.

https://help.github.com/articles/workflow-syntax-for-github-actions#jobsjob_idstepsif


Artifacts are the files created when you build and test your code. For example, artifacts might include binary or package files, test results, screenshots, or log files. When a run is complete, these files are removed from the virtual environment that ran your workflow and archived for you to download.

https://help.github.com/en/articles/managing-a-workflow-run#downloading-logs-and-artifacts



Fix permissions of /opt/* directories

[@tmeissner]
I've tested the ghdl/synth:formal Docker image. In the image, the tool directories under /opt/ are only readable by root. So, if I run the image as non-root user, I can't use the tools as intended

I can live with installing in /opt, a simple change of permissions would be sufficient for me

[@1138-4eb]
755 is ok?

[@tmeissner]
I think so
The subdirectories of ghdl and the other tools have correct permissions
So, only the 3 directories ghdl, yosys & z3 have to be changed to 755

[@1138-4eb]
That makes sense. The 'wrong' command is just the mkdir before running tar. The content extracted by tar is correct.

Oh. I'm not using tar anymore. I forgot that. Neither mkdir. The directory is implicitly created by COPY in the dockerfiles:

Although it was reported for ghdl/synth:formal, other images are likely to be affected too. At least ghdl/synth:beta and ghdl/synth:latest.
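
A minimal sketch of the agreed fix, to be applied when the affected images are built (directory names taken from the thread above):

chmod 755 /opt/ghdl /opt/yosys /opt/z3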

gtkwave does not run for the expected time within a testbench

When I run python run.py -g I am expecting to see the output of my unit test displayed within gtkwave.

Test Bench

LIBRARY vunit_lib;
CONTEXT vunit_lib.vunit_context;

LIBRARY ieee;
USE ieee.std_logic_1164.ALL;

LIBRARY src;
USE src.mux_2i_1o;

ENTITY tb_mux_2i_1o IS
  GENERIC (runner_cfg : STRING);
END ENTITY;

ARCHITECTURE tb OF tb_mux_2i_1o IS
  SIGNAL button : std_logic := 'X';
  SIGNAL input_1 : std_logic := '0';
  SIGNAL input_2 : std_logic := '1';
  SIGNAL output : std_logic;
BEGIN
  mux : ENTITY mux_2i_1o PORT MAP (
    button => button,
    input_1 => input_1,
    input_2 => input_2,
    output => output
    );

  main : PROCESS
  BEGIN
    test_runner_setup(runner, runner_cfg);

    WHILE test_suite LOOP

      IF run("button press to switch inputs") THEN
        button <= '0';
        check(output = input_1);
        wait for 20 ns;
        button <= '1';
        wait for 20 ns;
        check(output = input_2, "Input did not change when button was pressed");
      END IF;

    END LOOP;

    test_runner_cleanup(runner); -- Simulation ends here
    WAIT;
  END PROCESS;
END ARCHITECTURE;

As you can see, there are two 20 ns waits within the test bench, which means I am expecting the gtkwave GUI to show me a sim for a total of 40 ns. But when I run python run.py -g I get the following output.

(screenshot of the GtkWave output omitted)

Steps to reproduce on Linux desktop environment

cd $HOME
git clone https://github.com/seanybaggins/ben_eaters_8_bit_computer.git
cd ben_eaters_8_bit_computer

docker run --volume="$HOME/.Xauthority:/root/.Xauthority:rw" --volume="$HOME/ben_eaters_8_bit_computer/:$HOME/ben_eaters_8_bit_computer" -w="$HOME/ben_eaters_8_bit_computer"  --env="DISPLAY" --net=host ghdl/ext python run.py -g

docker start -a <default name>

Variable encryption in Travis CI

I am a member, but not the owner, of the ghdl GitHub organization. In this repo (ghdl/docker) docker-owners Team has Admin level access. I am a member of the team.

However, encrypting docker credentials with travis/enc-dockerhub.sh (as I did in my fork 1138-4EB/ghdl) seems not to work. See travis-ci/travis-ci#9670


When I tried it yesterday, I could not see the Settings tab when browsing this repo. It is possible that I did it too early. Should try it again.

Right now, credentials are set as hidden variables through the Travis CI GUI.

ghdl/synth:beta image a691ca20ad4b broken

The latest image a691ca20ad4b on docker.io is broken with this:

******************** GHDL Bug occurred ***************************
Please report this bug on https://github.com/ghdl/ghdl/issues
GHDL release: 1.0-dev (v0.37.0-794-ge854f72b@buster-mcode) [Dunoon edition]
Compiled with unknown compiler version
Target: x86_64-linux-gnu
/src/
Command line:

Exception SYSTEM.ASSERTIONS.ASSERT_FAILURE raised
Exception information:
raised SYSTEM.ASSERTIONS.ASSERT_FAILURE : vhdl-annotations.adb:1401
******************************************************************
ERROR: vhdl import failed.

The last revision I tried cb1eeee4ff96 worked fine.

Remove GCC from run/* images

Related to #32. Specifically #32 (comment) and #32 (comment)

Background/summary

115 MB could be saved in mcode images by not installing gcc and libc6-dev in run_debian.dockerfile. This does however affect the testsuite of ghdl. The following tests fail without gcc:

testsuite/gna/bug097
testsuite/gna/issue1226
testsuite/gna/issue1228
testsuite/gna/issue1233
testsuite/gna/issue1256
testsuite/gna/issue1326
testsuite/gna/issue450
testsuite/gna/issue531
testsuite/gna/issue98
testsuite/vpi/vpi001
testsuite/vpi/vpi002
testsuite/vpi/vpi003

Saving 115 MB from the mcode image is very tempting. That would make it very minimal, and perfect for CI.

GCC is added to mcode images for co-simulation purposes. When LLVM or GCC backends are used, GHDL can build C sources "internally". This allows providing VHDL sources and C sources, and let GHDL do the magic. With mcode, that's not possible, because it can only interact with pre-built shared libraries. Hence, a C compiler is required for users to convert their co-simulation C/C++ sources into a shared library. Providing it in the runtime image is convenient because it ensures that any image can be used for co-simulation with foreign languages.
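
As a minimal sketch of that flow (caux.c is a hypothetical user source file), the foreign C sources are compiled into a shared library, which is the kind of pre-built object that mcode can interact with:

gcc -shared -fPIC -o caux.so caux.c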

We should not force all the users to download GCC, unless they need/want to.

Roadmap

  • Skip tests in GHDL's testsuite if a C compiler is not available. (ghdl/ghdl#1510)
  • Create ghdl/cosim:mcode, ghdl/cosim:py and ghdl/cosim:vunit-cocotb for replacing current ghdl/ghdl:*-mcode and ghdl/vunit:llvm images.
  • Remove GCC from ghdl/run:* images.
  • Remove make and curl from ghdl/vunit:* images.

VUnit Docker Image not working

I am trying to run my tests on ghdl/vunit images. Calling GHDL directly works, but both VUnit and Makefile fail.
Make output:

root@5ab5d015f669:/wrk# make file
ghdl -a --std=08  file_driver.vhdl
make: ghdl: Operation not permitted
make: *** [Makefile:14: file] Error 127

VUnit output:

root@466176e28305:/wrk/LHCbAurora# python3 run_ci.py *aurora_loopback_tb*
WARNING - /wrk/LHCbAurora/UVVM/uvvm_util/src/rand_pkg.vhd: failed to find library 'cyclic_queue_pkg'
WARNING - /wrk/LHCbAurora/UVVM/bitvis_vip_scoreboard/src/generic_sb_pkg.vhd: failed to find library 'sb_queue_pkg'
Re-compile not needed

Starting aurora_lib.aurora_loopback_tb.loopback_encode_decode
Output file: /wrk/LHCbAurora/vunit_out/test_output/aurora_lib.aurora_loopback_tb.loopback_encode_decode_466902a15048931ee0c2a30e712006ed37783cf8/output.txt
Traceback (most recent call last):
  File "/opt/venv/lib/python3.11/site-packages/vunit/test/runner.py", line 244, in _run_test_suite
    results = test_suite.run(output_path=output_path, read_output=read_output)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/venv/lib/python3.11/site-packages/vunit/test/list.py", line 105, in run
    test_ok = self._test_case.run(*args, **kwargs)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/venv/lib/python3.11/site-packages/vunit/test/suites.py", line 72, in run
    results = self._run.run(*args, **kwargs)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/venv/lib/python3.11/site-packages/vunit/test/suites.py", line 178, in run
    sim_ok = self._simulate(output_path)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/venv/lib/python3.11/site-packages/vunit/test/suites.py", line 237, in _simulate
    return self._simulator_if.simulate(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/venv/lib/python3.11/site-packages/vunit/sim_if/ghdl.py", line 353, in simulate
    proc = Process(cmd, env=gcov_env)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/venv/lib/python3.11/site-packages/vunit/ostools.py", line 134, in __init__
    self._reader.start()
  File "/usr/lib/python3.11/threading.py", line 957, in start
    _start_new_thread(self._bootstrap, ())
RuntimeError: can't start new thread
Exception ignored in: <function Process.__del__ at 0x7fc97f993740>
Traceback (most recent call last):
  File "/opt/venv/lib/python3.11/site-packages/vunit/ostools.py", line 240, in __del__
    self.terminate()
  File "/opt/venv/lib/python3.11/site-packages/vunit/ostools.py", line 234, in terminate
    self._reader.join()
  File "/usr/lib/python3.11/threading.py", line 1107, in join
    raise RuntimeError("cannot join thread before it is started")
RuntimeError: cannot join thread before it is started
fail (P=0 S=0 F=1 T=1) aurora_lib.aurora_loopback_tb.loopback_encode_decode (0.1 seconds)

==== Summary ================================================================
fail aurora_lib.aurora_loopback_tb.loopback_encode_decode (0.1 seconds)
=============================================================================
pass 0 of 1
fail 1 of 1
=============================================================================
Total time was 0.1 seconds
Elapsed time was 0.1 seconds
=============================================================================
Some failed!

Some build actions are not running due to inactivity

It looks like some of the build actions that are schedule-based are not running due to inactivity in this repository.

As a result, some of the docker images on docker hub are now several months out of date.

pip issue in ghdl/vunit:gcc-master (and others?)

There seems to be an issue with python packages in ghdl/vunit:gcc-master, and possibly/probably others. The vunit package does not appear to be installed in the python environment:

master [/home/lukas/work/repo/docker]$ docker run --rm --interactive --tty ghdl/vunit:gcc-master /bin/bash
root@4d2bca5f6a11:/# python3 -c "import vunit"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'vunit'
root@4d2bca5f6a11:/# 

This issue appeared only recently. CI pipelines started failing in the tsfpga project after about 06:00 GMT+2 this morning.

To me it looks like some fatal python/pip problem is present:

master [/home/lukas/work/repo/docker]$ docker run --rm --interactive --tty ghdl/vunit:gcc-master /bin/bash
root@b5b80c839da3:/# python3 -m pip list
Package    Version
---------- -------
gcovr      4.2
Jinja2     2.11.2
lxml       4.5.2
MarkupSafe 1.1.1
pip        20.2.2
setuptools 50.0.0
root@b5b80c839da3:/# python3 -m pip install vunit_hdl
Collecting vunit_hdl
  Downloading vunit_hdl-4.4.0.tar.gz (6.3 MB)
     |████████████████████████████████| 6.3 MB 8.6 MB/s 
Collecting colorama
  Downloading colorama-0.4.3-py2.py3-none-any.whl (15 kB)
Using legacy 'setup.py install' for vunit-hdl, since package 'wheel' is not installed.
Installing collected packages: colorama, vunit-hdl
    Running setup.py install for vunit-hdl ... done
Successfully installed colorama-0.4.3 vunit-hdl
root@b5b80c839da3:/# python3 -m pip list
Package    Version
---------- -------
colorama   0.4.3
gcovr      4.2
Jinja2     2.11.2
lxml       4.5.2
MarkupSafe 1.1.1
pip        20.2.2
setuptools 50.0.0
root@b5b80c839da3:/# python3 -c "import vunit"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'vunit'
root@b5b80c839da3:/# 

So the vunit_hdl package does not appear to be installed according to pip. And even after a manual install it is not listed, and can not be imported.

I tried the same thing in the python:3-slim-buster (upon which ghdl/vunit:gcc-master is based? I had a hard time following the scripts) and the problem was not present:

master [/home/lukas/work/repo/docker]$ docker run --rm --interactive --tty python:3-slim-buster /bin/bash
root@29e906d4848a:/# python3 -m pip list
Package    Version
---------- -------
pip        20.2.2
setuptools 49.3.1
wheel      0.34.2
root@29e906d4848a:/# python3 -m pip install vunit_hdl
Collecting vunit_hdl
  Downloading vunit_hdl-4.4.0.tar.gz (6.3 MB)
     |████████████████████████████████| 6.3 MB 10.0 MB/s 
Collecting colorama
  Downloading colorama-0.4.3-py2.py3-none-any.whl (15 kB)
Building wheels for collected packages: vunit-hdl
  Building wheel for vunit-hdl (setup.py) ... done
  Created wheel for vunit-hdl: filename=vunit_hdl-4.4.0-py3-none-any.whl size=6581190 sha256=d20b5911b017cc6d4214492934acd65fd9d6de96671f0ecd4031bc6fde87a72f
  Stored in directory: /root/.cache/pip/wheels/a9/e2/17/e5b8e2569e52742b550213746e0e0042138036acb7aef13e52
Successfully built vunit-hdl
Installing collected packages: colorama, vunit-hdl
Successfully installed colorama-0.4.3 vunit-hdl-4.4.0
root@29e906d4848a:/# python3 -c "import vunit"
root@29e906d4848a:/# 

Pulling from Docker Hub fails.

docker pull ghdl/ghdl

Using default tag: latest
Error response from daemon: manifest for ghdl/ghdl:latest not found: manifest unknown: manifest unknown

Size of GCC packages

Sizes of Buster images:

ghdl/run:buster-gcc        96MB
ghdl/run:buster-llvm-7     116MB
ghdl/run:buster-mcode      84.6MB

ghdl/ghdl:buster-gcc-8.3.0 303MB - 96MB   = 207MB
ghdl/ghdl:buster-llvm-7    123MB - 116MB  = 7MB
ghdl/ghdl:buster-mcode     88MB  - 84.6MB = 3.4MB

It is surprising that GHDL tarballs with mcode or LLVM backends require less than 10MB, but GCC requires >200MB! I think we might be doing something wrong, such as adding build artifacts to the GCC tarball (which should not be there).

Image ghdl/ghdl:buster-gcc-8.3.0 can be inspected with wagoodman/dive:

docker run --rm -it \
  -v /var/run/docker.sock:/var/run/docker.sock \
  wagoodman/dive \
  ghdl/ghdl:buster-gcc-8.3.0

(screenshots dive_gcc and dive_gcc2 omitted)

@tgingold, in the first screenshot I'm concerned with lib/ghdl/*/*/*.o files. Should all of those be there? With LLVM or mcode backend only lib/ghdl/*/*/*.cf files exist. Regarding the second screenshot, libexec/gcc/x86_64-pc-linux-gnu/8.3.0/cc1 requires 239MB, and ghdl1 in the same dir requires 237MB! Is this ok?

Last, as seen in the second screenshot, info and man pages for ghdl are added. However, I believe this is not the same man doc that we generate with sphinx (see ghdl/ghdl#733). Where does it come from?

Update info about running GUI

I recently asked this in the Gitter:

I'm using the ghdl/ext:latest image and would like to run the gtkwave GUI. I am SSH'd to a server with the image via MobaXTerm and can run GUI apps directly on that server. However, when I pass DISPLAY to the container, gtkwave gives an error. Is there a way to get this to work? The x11docker script that is mentioned seems like it's specifically for MSYS2?

I ended up figuring this out. For reference, I was able to accomplish this by volume-mounting my user's home directory and passing the HOME and DISPLAY env vars to the container. Note, home is passed to the container for the ~/.Xauthority file.

For example, after I SSH to Linux server using MobaXTerm, I run something along the lines of this to open the GUI:

docker run --user=$(id -u):$(id -g) --interactive $TTY --rm \
  --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --network=host \
  -e HOME=$HOME \
  -e DISPLAY=$DISPLAY \
  -v $HOME:$HOME \
  ... \
  <image> <command>

where command might be python3 run.py -g. It might be good to add this to the USE_CASES since it took me a lot of digging to get it working right. Probably not very secure, though.

Code coverage stopped working after switch to Debian Bullseye and gcc 9.1

I have a CI system setup that automatically runs simulation tests for our codebase and generates artifacts with code coverage reports.
Today we've noticed that all of our pipelines started failing all of a sudden. Quick investigation suggests that it's most likely caused by the new version of ghdl/vunit:gcc images that we use.
Errors vary wildly between testbenches, but all are caused by the ghdl-gcc inability to compile VHDL, run testbench or generate code coverage data. A few log extracts below (we use Vunit btw.):

Compiling into sci_master_lib: common/vhd/interfaces/wishbone/sci_master/rtl/sci_master_top.vhd                                                      failed
=== Command used: ===
/usr/local/bin/ghdl -a --workdir=/builds/cce/cce/vunit_out/ghdl/libraries/sci_master_lib --work=sci_master_lib --std=08 -P/builds/cce/cce/vunit_out/ghdl/libraries/vunit_lib -P/builds/cce/cce/vunit_out/ghdl/libraries/osvvm -P/builds/cce/cce/vunit_out/ghdl/libraries/sci_master_lib -frelaxed -fprofile-arcs -ftest-coverage /builds/cce/cce/common/vhd/interfaces/wishbone/sci_master/rtl/sci_master_top.vhd
=== Command output: ===
during IPA pass: profile
/builds/cce/cce/common/vhd/interfaces/wishbone/sci_master/rtl/sci_master_top.vhd: In function ‘sci_master_lib__sci_master_top__ARCH__rtl__P2__PROC’:
/builds/cce/cce/common/vhd/interfaces/wishbone/sci_master/rtl/sci_master_top.vhd:165: internal compiler error: in coverage_begin_function, at coverage.c:656
  165 |   sci_mode_o                   <= (sci_clk_i or master_command_clk_high) and (not master_command_clk_low);
      | 
0x60ead2 coverage_begin_function(unsigned int, unsigned int)
	../../gcc-srcs/gcc/coverage.c:656
0xb01d14 branch_prob(bool)
	../../gcc-srcs/gcc/profile.c:1233
0xc34b62 tree_profiling
	../../gcc-srcs/gcc/tree-profile.c:793
0xc34b62 execute
	../../gcc-srcs/gcc/tree-profile.c:898
Please submit a full bug report,
with preprocessed source if appropriate.
Please include the complete backtrace with any bug report.
See <https://gcc.gnu.org/bugs/> for instructions.
/usr/local/bin/ghdl: exec error
l_serdes_stop_bit_9770f3ec736766095851a7d3f5f1fec2ae269065/coverage
WARNING - Missing coverage directory: /builds/cce/cce/vunit_out/test_output/uut.aol_serdes_vunit_tb.aol_serdes_zero_bits_607788854b84d43782763aab602c90277ff3345e/coverage
WARNING - Missing coverage directory: /builds/cce/cce/vunit_out/test_output/uut.aol_serdes_vunit_tb.aol_serdes_analogue_patterns_461895921170e375859f53d6a68dd9be965fc4c2/coverage
WARNING - Missing coverage directory: /builds/cce/cce/vunit_out/test_output/uut.aol_serdes_vunit_tb.aol_serdes_gated_transmission_a4d899af41db12640e66feea19a4c2568eae9de0/coverage
WARNING - Missing coverage directory: /builds/cce/cce/vunit_out/test_output/uut.aol_serdes_vunit_tb.aol_serdes_gated_to_normal_transmission_99cc1732306fbbbf9906b29a21db3e6b28fc45bd/coverage
WARNING - Missing coverage directory: /builds/cce/cce/vunit_out/test_output/uut.aol_serdes_vunit_tb.aol_serdes_address_patterns_6cb5d8aa3912c9a556294a7746c1c9f11a9c1979/coverage
WARNING - Missing coverage directory: /builds/cce/cce/vunit_out/test_output/uut.aol_serdes_vunit_tb.aol_serdes_parity_bits_a98bccb3b3c40fb4dba71942347a6558dd8ad898/coverage
WARNING - Missing coverage directory: /builds/cce/cce/vunit_out/test_output/uut.aol_serdes_vunit_tb.aol_serdes_break_sequence_133e6f90afa3241856d0fbd9fcc54bb706740063/coverage
WARNING - Missing coverage directory: /builds/cce/cce/vunit_out/test_output/uut.aol_serdes_vunit_tb.aol_serdes_start_bit_082961e4c64492079a2092f47e66325b09c7d3ce/coverage
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/vunit/ui/__init__.py", line 726, in main
    all_ok = self._main(post_run)
  File "/usr/local/lib/python3.9/dist-packages/vunit/ui/__init__.py", line 772, in _main
    all_ok = self._main_run(post_run)
  File "/usr/local/lib/python3.9/dist-packages/vunit/ui/__init__.py", line 819, in _main_run
    post_run(results=Results(self._output_path, simulator_if, report))
  File "/builds/cce/cce/analog_optical_link/common/tb/run.py", line 33, in post_run
    results.merge_coverage(file_name="coverage_data")
  File "/usr/local/lib/python3.9/dist-packages/vunit/ui/results.py", line 33, in merge_coverage
    self._simulator_if.merge_coverage(file_name=file_name, args=args)
  File "/usr/local/lib/python3.9/dist-packages/vunit/sim_if/ghdl.py", line 439, in merge_coverage
    assert len(gcda_dirs) == 1, "Expected exactly one folder with gcda files"
AssertionError: Expected exactly one folder with gcda files

Closed

Looking at it again, this does not seem to be the issue. Closing.

Packing artifacts

From ghdl/ghdl#477

Packing, integration with Appveyor and RTD

There is no built-in feature in Travis to merge all the artifacts and deploy to GitHub just once, instead of having each job edit the release. An external storage service, such as S3, is required to have all of them merged in a dir and then deployed all at once. See https://docs.travis-ci.com/user/build-stages/#Data-persistence-between-stages-and-jobs Luckily, #489 includes using DockerHub, and we can reuse it as a replacement for S3. This avoids the requirement to handle credentials for a fourth service. The scheme is as follows:

  • Each job in stage 1 (see #489) builds ghdl/pkg after tests are successfully executed. The difference compared to ghdl/ghdl is that the latter is based on ghdl/run (as explained above), while ghdl/pkg is based on scratch. I.e., ghdl/pkg contains only ghdl-*-stretch-mcode.tgz.

Travis stage 3, named Pack artifacts

This stage mimics the nested loops of stage 0 (see #489). However, instead of building anything, it pulls the ghdl/pkg images which were created in the modified stage 1 and merges all the tarballs into a single directory. Then, a single deploy can be triggered in this stage.

This stage is especially useful for the following reasons:

  • Right now tarballs are generated 'raw', i.e., there is no build, license, copying... This stage allows enhancing the tarballs with missing information/files prior to pushing them to GitHub. @tgingold 2017-02-14
  • I think it is better to handle this here, and not in the build/test stage. This is because the tarballs in stage 1 (and, therefore, ghdl/ghdl images) are generated before the tests are run.
  • Additional meta-information can be added here:
    • The versions of the dependencies which were used to build the tarball: make, gcc, gnat, clang...
    • The date the tarball/image was created.
    • The size of ghdl binaries, ghdl libraries and images.
    • Expecting that AppVeyor builds will not take much longer than all the Linux builds plus the mac build, AppVeyor artifacts can be retrieved here and added to the release deploy. The same applies to PDFs generated in RTD.
    • ...
  • The info gathered here can be used to enhance the downloads section of the documentation.

Related to nightly builds mentioned in USE_CASES.md, ghdl/pkg images will always have tarballs corresponding to the latest successful build. It is kind of stupid to require docker in order to download a tarball, even if a 3-4 line long shell script can be used to make it straightforward. Once again, play-with-docker can be used to execute the script, but this requires a Docker ID. Yet, acquiring the tgz is just a hopefully useful side effect, not the main feature.

Support versioning

Currently, all images are rolling releases, i.e. all of them are automatically updated if the corresponding travis job succeeds.

On the one hand, the 'buildtest' branches (mcode, mcodegpl, llvm and gcc) run the default testsuite before pushing the images. This ensures that fundamentally broken images are not pushed to dockerhub. Still, uncaught regressions might find their way in.

On the other hand, most of the images created in branches master, synth and ext are unversioned. Some of them are very experimental (most of synth), but some others are expected to be used in "production" (i.e. vunit job in ext, see ghdl/ghdl#883).

Therefore, even though users can use digests to pin their scripts to specific images, it would be desirable to include some kind of versioning. We need to think it through thoroughly, though; supporting images for simulation, synthesis and/or LSP is starting to get complex!

libgnarl-7 missing in latest docker (ghdl/ext) builds

Description
Latest docker (ghdl/ext) builds are not usable anymore due to the missing libgnarl-7 library.

Expected behaviour
If I try to run the Docker image of any ghdl/ext build:
docker run -it --rm ghdl/ext:vunit ghdl

I get this error:

ghdl: error while loading shared libraries: libgnarl-7.so.1: cannot open shared object file: No such file or directory

And not the ghdl version info.

How to reproduce?
Simply run:
docker run -it --rm ghdl/ext:vunit ghdl

Suggested fix
Install libgnat-7 in the dockerfile:
apt-get install -y libgnat-7

Added ghdl/vunit:* images for CI environments

Coming from ghdl/ghdl#883

I've been reworking the images in ghdl/docker (see the README and the repository list at dockerhub). Here are some notes which are still to be properly documented, and which are related to the conversation in ghdl/ghdl#883:

  • All the ghdl/run:*gcc images include lcov now. As a result, all ghdl/ghdl:*gcc* images should be ready-to-use for coverage analysis now.
  • A new repository has been added, ghdl/vunit:*, which includes six images: mcode, mcode-master, llvm, llvm-master, gcc and gcc-master. These are built on top of ghdl/ghdl:buster-mcode, ghdl/ghdl:buster-llvm-7 and ghdl/ghdl:buster-gcc-8.3.0, respectively. *-master variants include latest VUnit (master branch), while others include the latest stable release (installed through pip).
  • In case GtkWave is needed, it is available in images ghdl/ext:gtkwave, ghdl/ext:broadway or ghdl/ext:latest. All of them include VUnit too. The first two of them are based on ghdl/vunit:llvm-master, and the last one is based on ghdl/ext:ls-debian (which includes GHDL with LLVM backend too).

@sjaeckel, these changes should allow you to rewrite your dockerfile as:

FROM ghdl/vunit:gcc

RUN apt-get update -qq \
 && apt-get -y install git vim \
 && apt-get autoclean -y && apt-get clean -y && apt-get autoremove -y

RUN mkdir /work && chmod 777 /work

Which makes me wonder: do you really need git and vim inside the container? You might have a good reason to do so. I'm just asking so I can help you rethink your workflow to get rid of those dependencies, should you want to do so.

[@sjaeckel]
and btw. I'd happily also test it!

I'd really appreciate if you could test this, since I have never used the coverage feature. There are five ghdl/ghdl:*gcc* tags and six tags in ghdl/vunit. I'm not going to ask you to test all of them! Should you want to try a few, I think the priority for your use case is:

  • ghdl/vunit:gcc
  • ghdl/vunit:gcc-master
  • ghdl/ghdl:buster-gcc-8.3.0 (if either of the two previous ones works, this will work for sure)
  • ghdl/ghdl:sid-gcc-9.1.0 (might still fail, that's why we didn't close this issue yet)
  • ...

Overall, please do not hesitate to request changes/features, such as including lcov in images with GCC (this was, honestly, so stupid of me) or providing images with VUnit and coverage support (i.e. GCC backend).

Use middle stages in dockerfiles

Now all the stages declared in the Dockerfiles are built as images. For example:

# [run] Fedora 28

FROM fedora:28 AS mcode

RUN dnf --nodocs -y install libgnat gcc \
 && dnf clean all --enablerepo=\*

#---

FROM mcode AS llvm

RUN dnf --nodocs -y install zlib-devel \
 && dnf clean all --enablerepo=\*

RUN dnf --nodocs -y install llvm-libs \
 && dnf clean all --enablerepo=\*

#---

FROM mcode AS gcc-8.1.0

RUN dnf --nodocs -y install zlib-devel \
 && dnf clean all --enablerepo=\*

RUN dnf --nodocs -y install libstdc++* libstdc++*.i686 \
 && dnf clean all --enablerepo=\*

This could be cleaner if the installation of zlib-devel was defined in an intermediate 'dummy' stage:

# [run] Fedora 28

FROM fedora:28 AS do-mcode

RUN dnf --nodocs -y install libgnat gcc \
 && dnf clean all --enablerepo=\*

#---

FROM do-mcode AS zlib

RUN dnf --nodocs -y install zlib-devel \
 && dnf clean all --enablerepo=\*

#---

FROM zlib AS do-llvm

RUN dnf --nodocs -y install llvm-libs \
 && dnf clean all --enablerepo=\*

#---

FROM zlib AS do-gcc-8.1.0
RUN dnf --nodocs -y install libstdc++* libstdc++*.i686 \
 && dnf clean all --enablerepo=\*
