Comments (18)

sjaeckel commented on August 16, 2024

I've stripped it down to an MWE:

https://gist.github.com/sjaeckel/e6ace0cb5df3eac87b95cf9f587b1bf0

from docker.

sjaeckel commented on August 16, 2024

A probably meaningful difference between your gist/example and the MWE is that in the MWE the coverage-related sim/compile options are added to a LIBRARY object (lib), NOT to the VUnit object (vu, ui). Did you spot that?

Yep, I've seen that and I've tried both, but the result is the same.

I'll open a fresh issue as soon as I've had the time to create an MWE.

sjaeckel commented on August 16, 2024

Would you mind providing an image with lcov, gcov and VUnit? Then I can try it out in the coming days.

sjaeckel commented on August 16, 2024

Sorry, btw, for the long silence!

eine commented on August 16, 2024

Would you mind providing an image with lcov, gcov and VUnit? Then I can try it out in the coming days.

The images mentioned above already include all those tools. ghdl/vunit:gcc (latest stable VUnit) or ghdl/vunit:gcc-master (latest VUnit master) are the ones you want to try.

Sorry, btw, for the long silence!

It's absolutely ok!

sjaeckel commented on August 16, 2024

Thanks a lot! I've rebuilt as described and it works, except that gcovr is missing.

...
All passed!
Traceback (most recent call last):
  File "PeIMC6/run.py", line 27, in <module>
    subprocess.call(["gcovr", "-r", os.getcwd(), "-f", srcs])
  File "/usr/lib/python3.7/subprocess.py", line 323, in call
    with Popen(*popenargs, **kwargs) as p:
  File "/usr/lib/python3.7/subprocess.py", line 775, in __init__
    restore_signals, start_new_session)
  File "/usr/lib/python3.7/subprocess.py", line 1522, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'gcovr': 'gcovr'

Do you want to install it in the base image, or should I add it in my custom image?

I think it'd make sense to have it by default, as it's the coverage tool that is "supported" by GitLab more or less OOTB.

sjaeckel commented on August 16, 2024

FYI: the information that gcovr was installed got lost when I created the minimal example... in the original GHDL issue.

Thanks again for your work, btw!

eine commented on August 16, 2024

Yes, I think it's a fair request. The images ghdl/vunit:gcc and ghdl/vunit:gcc-master do include gcovr now. I didn't include it in the ghdl/ghdl:*gcc* images, which do not have Python in the first place. You should be able to test it by just pulling the image again.

Thank you for testing! Do you have any repo (here or on GitLab) where you are actually using these images in a CI workflow/pipeline?

sjaeckel commented on August 16, 2024

by just pulling the image again.

Yep, that worked and I rebuilt my image, but I'm not sure what is wrong now... it seems like gcov never finishes.

It stays at:

$ ps ax | grep gcov
32630 pts/0    R+    16:56 gcov /work/sync_pkg-body.gcda --branch-counts --branch-probabilities --preserve-paths --object-directory /work

and after I killed the process I get

Traceback (most recent call last):
  File "/usr/local/bin/gcovr", line 10, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.7/dist-packages/gcovr/__main__.py", line 248, in main
    collect_coverage_from_gcov(covdata, options, logger)
  File "/usr/local/lib/python3.7/dist-packages/gcovr/__main__.py", line 301, in collect_coverage_from_gcov
    contexts = pool.wait()
  File "/usr/local/lib/python3.7/dist-packages/gcovr/workers.py", line 146, in wait
    w.join(timeout=1)
  File "/usr/lib/python3.7/threading.py", line 1036, in join
    self._wait_for_tstate_lock(timeout=max(timeout, 0))
  File "/usr/lib/python3.7/threading.py", line 1048, in _wait_for_tstate_lock
    elif lock.acquire(block, timeout):
KeyboardInterrupt
Traceback (most recent call last):
  File "PeIMC6/run.py", line 27, in <module>
    subprocess.call(["gcovr", "-r", os.getcwd(), "-f", srcs])
  File "/usr/lib/python3.7/subprocess.py", line 325, in call
    return p.wait(timeout=timeout)
  File "/usr/lib/python3.7/subprocess.py", line 990, in wait
    return self._wait(timeout=timeout)
  File "/usr/lib/python3.7/subprocess.py", line 1624, in _wait
    (pid, sts) = self._try_wait(0)
  File "/usr/lib/python3.7/subprocess.py", line 1582, in _try_wait
    (pid, sts) = os.waitpid(self.pid, wait_flags)
KeyboardInterrupt

The relevant parts of run.py are as follows (imports of os, subprocess and os.path.join/dirname omitted):

ui.set_compile_option("ghdl.flags", ["-g", "-O2", "-Wc,-fprofile-arcs", "-Wc,-ftest-coverage"])
ui.set_sim_option("ghdl.elab_flags", ["-Wl,-lgcov", "-Wl,--coverage"])

# Run the VUnit main function; it exits via SystemExit
retval = 0
try:
    ui.main()
except SystemExit as exc:
    retval = exc.code

# Only collect coverage when all tests passed
if retval == 0:
    srcs = join(dirname(__file__), "src/rtl/*").lstrip("/")
    subprocess.call(["gcovr", "-r", os.getcwd(), "-f", srcs])

exit(retval)

Do you have any repo (here or at GitLab) where you are actually using these images in a CI workflow/pipeline?

Sorry, only in a private GitLab instance.

eine commented on August 16, 2024

Are you using exactly the same repo/commit as before? Did it work before? I ask because all I did in this regard was to add a Python package through pip. It should be unrelated to gcov/lcov.

You might try using ghdl/ghdl:buster-gcc* as a base instead. It does not include Python or VUnit, but it does include GHDL, gcov and lcov.

Nonetheless, please provide an MWE. It is very hard to guess what's going on otherwise. This might be an issue/bug in VUnit.

About the run.py: you should not need to use an exception. There is support for passing a post_run function to main: VUnit/vunit#578.
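As a hedged sketch of that suggestion (the post_run keyword comes from VUnit/vunit#578; check your VUnit version for the exact signature), the gcovr step from the run.py in this thread could move into a hook like this:

```python
import os
import subprocess


def post_run(results):
    """Hook for VUnit's ui.main(post_run=...): run gcovr once all tests pass.

    The src/rtl/* glob mirrors the run.py quoted in this thread; using the
    current working directory instead of dirname(__file__) is a
    simplification for this sketch.
    """
    srcs = os.path.join(os.getcwd(), "src/rtl/*")
    subprocess.call(["gcovr", "-r", os.getcwd(), "-f", srcs])


# In run.py this replaces the try/except around ui.main():
#     ui.main(post_run=post_run)
```

With post_run, VUnit only invokes the hook after a successful run, so the manual retval check is no longer needed.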

sjaeckel commented on August 16, 2024

Are you using exactly the same repo/commit as before? Did it work before?

Of ghdl/vunit, or my own repo? If you meant ghdl/vunit: I used ghdl/vunit:gcc as the base.

Now I re-tried based on the latest ghdl/ghdl:buster-gcc-7.4.0 + python3 etc., and that one works as expected! I also have the impression that the tests based on ghdl/vunit:gcc took a lot longer to compile & execute than when running them based on ghdl/ghdl:buster-gcc-7.4.0 ...

Nonetheless, please provide an MWE

I'll see if I find the time to create one

About the run.py: you should not need to use an exception. There is support for passing a post_run function to main: VUnit/vunit#578.

I'll change that as soon as it's working again ;-)

eine commented on August 16, 2024

Are you using exactly the same repo/commit as before? Did it work before?

Of ghdl/vunit, or my own repo? If you meant ghdl/vunit: I used ghdl/vunit:gcc as the base.

I meant your own repo. Anyway, if some of the images that we generate here work and others don't, the issue is clearly unrelated to your repo. Let's focus on hunting down the differences on our side.

Now I re-tried based on the latest ghdl/ghdl:buster-gcc-7.4.0 + python3 etc., and that one works as expected! I also have the impression that the tests based on ghdl/vunit:gcc took a lot longer to compile & execute than when running them based on ghdl/ghdl:buster-gcc-7.4.0 ...

This is interesting. That image is 5 months old; the latest is ghdl/ghdl:buster-gcc-8.3.0. I'd be grateful if you could run the same test with that one. Performance-wise, there is no reason for ghdl/vunit:gcc to be slower than ghdl/ghdl:buster-gcc-8.3.0 + Python added by you. However, either GCC, GHDL or gcov may have become slower in the last five months.

This is just for you to understand the context: the ghdl/vunit images are built on top of the GHDL image. Hence, we have that:

ghdl/ghdl:buster-gcc-8.3.0 -> ghdl/vunit:gcc
ghdl/ghdl:buster-gcc-8.3.0 -> ghdl/vunit:gcc-master

I'll see if I find the time to create one

As long as you can consistently test which images work and which do not, it's fine if you just run the tests.

sjaeckel commented on August 16, 2024

I have now tried based on ghdl/ghdl:buster-gcc-8.3.0 and ghdl/vunit:gcc-master, and both show the same behavior: gcov hangs ...

eine commented on August 16, 2024

It might be because of the GCC version. Unfortunately, I cannot reproduce it. I modified the coverage example from VUnit according to your comment above: https://github.com/eine/vunit/blob/test-coverage/examples/vhdl/coverage/run.py. I can successfully run it on ghdl/vunit:gcc. A bunch of *.gcda and *.gcno files are created, and the following output is produced:

All passed!
------------------------------------------------------------------------------
                           GCC Code Coverage Report
Directory: /src/examples/vhdl/coverage
------------------------------------------------------------------------------
File                                       Lines    Exec  Cover   Missing
------------------------------------------------------------------------------
tb_coverage.vhd                                7       7   100%   
------------------------------------------------------------------------------
TOTAL                                          7       7   100%
------------------------------------------------------------------------------

May I know whether you set the sim option enable_coverage to True in your run.py file? Without it, only *.gcda files are created, not *.gcno.
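For reference, a minimal sketch of that option, wrapped in a small helper so it can be exercised even without VUnit installed (the "enable_coverage" sim option name is taken from the comment above; ui would be the object returned by VUnit.from_argv()):

```python
def enable_gcov(ui):
    """Set VUnit's enable_coverage sim option (option name per this thread).

    With it, VUnit takes care of the gcov bookkeeping so that the *.gcno
    files are produced alongside the *.gcda files, as described above.
    """
    ui.set_sim_option("enable_coverage", True)
```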

sjaeckel commented on August 16, 2024

FYI, it's not limited to gcovr; lcov hangs as well.

eine commented on August 16, 2024

I added the MWE, with a few modifications to run.py and the Makefile, as a commit in eine/vunit@b2ba5f3. As you can see, test.sh executes the example in ghdl/vunit:gcc and ghdl/vunit:gcc-master. In both cases, execution seems successful and coverage results are reported by gcovr. See the log of a CI run: https://github.com/eine/vunit/runs/399915453#step:3:412

sjaeckel commented on August 16, 2024

I've rebuilt the docker image with the updated ghdl/vunit:gcc base and the MWE now succeeds!

After running my real code, gcov still hangs ... let's see if I can find the time to create another MWE ... :-\

eine commented on August 16, 2024

I've rebuilt the docker image with the updated ghdl/vunit:gcc base and the MWE now succeeds!

Good! Unfortunately, we don't have a completely automated sequence for updating all the images in order, so when we push a breaking change, a couple of days might be needed for everything to settle. Nevertheless, it works! I'm closing this issue, because its purpose is fulfilled (we now have images with GCC, VUnit and/or coverage available). But let's continue hunting your issue!

After running my real code, gcov still hangs ... let's see if I can find the time to create another MWE ... :-\

A probably meaningful difference between your gist/example and the MWE is that in the MWE the coverage-related sim/compile options are added to a LIBRARY object (lib), NOT to the VUnit object (vu, ui). Did you spot that?
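To make the two placements concrete, here is a hedged sketch using the flags from the run.py earlier in the thread; that both the VUnit object and library objects expose set_compile_option/set_sim_option is an assumption based on this discussion:

```python
def apply_coverage_flags(target):
    """Apply the gcov compile/elab flags from this thread's run.py.

    target may be the global VUnit object (vu/ui) or a single library
    object (lib); the comment above suggests trying the LIBRARY placement.
    """
    target.set_compile_option(
        "ghdl.flags", ["-g", "-O2", "-Wc,-fprofile-arcs", "-Wc,-ftest-coverage"]
    )
    target.set_sim_option("ghdl.elab_flags", ["-Wl,-lgcov", "-Wl,--coverage"])
```

i.e. calling apply_coverage_flags(lib) instead of apply_coverage_flags(ui) switches between the two variants being compared.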
