

vmordan commented on May 23, 2024

My questions regarding the extends property still need to be answered.

It was assumed that extends refers to one file on disk, for example:

extends: "../MemSafety-Arrays.yaml"

Such files should contain general information (architecture, category, language, etc.), which can be reused in other YAML files.

If we keep this information out of the YAML for now, then the extends tag is also redundant. Of course, if we add the corresponding tags to the YAML format, then extends could be useful.

I also think you should avoid - and use _ instead.

Property names (for example, valid-deref) are taken for the YAML file from the SV-COMP rules. I think it is better to use the same single-string identifier for property names.

I am not sure what makes more sense, but probably in most cases we need a single-string identifier for properties anyway?

Using a single-string identifier for property names has the following advantages:

  1. The YAML file schema will be simpler: the hierarchy of expected_results_for_property is not needed, and sub-tags (such as error_function_1) will also not be required, for example:
expected_results:
  valid-deref:
    correct: true
  unreach-call(error_function_1):
    correct: false
  2. The same string (valid-deref or unreach-call(error_function_1)) can be used everywhere (correct results in the YAML file, output of the verification tool, the XML file with BenchExec results, results of table-generator). Therefore, fewer actions for processing property names are required.

Do you agree to use single-string identifiers as property names?


delcypher commented on May 23, 2024

@PhilippWendler I don't have free time to do it this week due to impending paper deadlines but next week I could have a shot at changing the svcomp-build-mockup to reflect the decisions made in this PR. This would allow people to try out the proposal. Removing variants would greatly simplify the implementation :)


vmordan commented on May 23, 2024

In order to introduce a new format for benchmark descriptions, we suggest a hierarchy of YAML configuration files.
Some configuration files should describe categories of verification tasks, for example:

architectures:
  - x86_64
categories:
  - examples
language: c99
memory_model: precise
schema_version: 0
comments: |
  comment

Inside each category, each verification task should inherit the base category configuration file and have its own YAML configuration file, which is required for providing the correct verdicts, for example:

extends:
  - <category_config_file>
name:
  - <...>
sources:
  - <...>
correct results:
  property1: true|false|unknown
  ...
  propertyN: true|false|unknown

This format can be used both for single-property and for multi-property verification. An example of this format in the case of checking multiple properties (in accordance with the sosy-lab/benchexec#217 suggestion):

name:
  - linux-4.0-rc1---drivers--atm--fore_200e.ko_false-unreach-call.cil
sources:
  - ldv-multiproperty/linux-4.0-rc1---drivers--atm--fore_200e.ko_false-unreach-call.cil.c
correct results:
  unreach-call,__VERIFIER_error_linux_alloc_irq: true
  unreach-call,__VERIFIER_error_linux_arch_io: false

An example of this format in the case of checking a single property:

name:
  - linux-4.0-rc1---drivers--atm--fore_200e.ko_false-unreach-call.cil
sources:
  - ldv-linux-4.0-rc1-mav/linux-4.0-rc1---drivers--atm--fore_200e.ko_false-unreach-call.cil.c
correct results:
  unreach-call: false

In this case, the property name is taken from the .prp file and can take the following values: unreach-call, valid-free, valid-deref, valid-memtrack, overflow, or termination (in accordance with the SV-COMP rules).


PhilippWendler commented on May 23, 2024

@delcypher Does this match what your suggestion would be?

@dbeyer What is your opinion about how such a format should look?

@mdangl I am not sure: is there something to think about for this format with regard to witnesses?


mdangl commented on May 23, 2024

@mdangl I am not sure: is there something to think about for this format with regard to witnesses?

I cannot currently think of anything, but will keep it in mind.

On an unrelated note, however, I am surprised to see memory_model: precise.
Since we got rid of the "simple" memory model, do we need that property at all?


vmordan commented on May 23, 2024

What do you think about passing the YAML configuration file to BenchExec directly? In this case, which information should this configuration file contain?

We suggest adding an option to BenchExec for reading such YAML files as input (for example, in the <include> tag or by means of some new tag); these files should contain the following required attributes:

sources:
  - <verification task>
correct results:
  property1: true|false
  ...
  propertyN: true|false

Thus BenchExec can obtain the correct results for each passed verification task (for example, a C file) and then use this information to create a new Run. This could be very helpful for passing correct results in multi-property verification.

All other attributes, which are not explicitly required by BenchExec, can be placed in the YAML configuration files that describe categories of verification tasks (as I understand it, instead of the Category.cfg files). Those files will be used by the SV-COMP scripts.

Do you agree with this suggestion?


delcypher commented on May 23, 2024

@PhilippWendler

@delcypher Does this match what your suggestion would be?

Not exactly. I have been using my sv-comp mockup in the form of fp-bench for the last few months, so this has given me some time to reflect on the pros/cons of the design decisions I made. @vmordan's proposal seems to be partially inspired by the work I've done.

Here are some high-level comments

extends category config file

Having a separate file for category information solves the duplication problem, but it creates new ones.

  • What is the namespace of extends? Does it refer to a file on disk, or is it just
    a category name and the reader needs to figure out where the file is for themselves?
    If it is a file on disk, we need to be careful that categories can still be modular
    (i.e. we have the ability to add/remove benchmarks without breaking anything).

  • You have made extends map to a list. This implies you intend to allow
    a benchmark to belong to multiple categories. This is problematic because
    the multiple categories could have conflicting properties (e.g. one could be
    c99 and the other could be c11). If you have conflicts what are the semantics?
    Does it mean there are two variants of the benchmark (i.e. a c99 and c11 variant)
    or is this invalid?

Category configuration file

  • memory_model should be dropped as this is no longer relevant

  • categories should be dropped. This does not belong here. This file
    is supposed to describe a single category so what is a list of categories
    supposed to mean here?

You should have a name property that maps to a string that gives the category's
name.

Tags

In fp-bench we used the categories: property like tags. I found this incredibly
convenient because it allowed me to "tag" benchmarks. For example, I had a tag that
indicated that a benchmark was "synthetic" and a tag that indicated that a benchmark
was part of a study for a paper I was working on.

We should add an optional tags: property to both the benchmark specification file
and the category specification file. This allows us to add arbitrary tags, which, as I said
above, is incredibly useful.
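
For illustration, a minimal sketch of such an optional tags: property (the tag names below are just placeholder examples, not part of any agreed schema):

tags:
  - synthetic
  - paper-case-study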

Compiler Defines

Your example is missing compiler defines. In fp-bench I found these incredibly useful.
The scheme I used is that defines: should map to a dictionary (object) where the key
is the define name. Here's an example:

defines:
  A: "0"
  B: NULL

This defines A to be the string "0" and defines B but does not give it a value (i.e. this
is like passing -DB to gcc).

Dependencies

In fp-bench, benchmarks can also state their dependencies (e.g. "C Math library",
"Pthreads", "openmp"). Declaring library/runtime dependencies serves two
purposes. First, it tells the build system which libraries to link the benchmark
against. Second, it communicates to a verifier that the benchmark depends on a
particular library, which the verifier may find useful.

In fp-bench, dependencies: maps to a dictionary (object) where each key is a dependency
name (e.g. cmath), which itself maps to a dictionary (object) that contains optional
information (e.g. the library version).

For example:

dependencies:
  openmp:
    version: 4.0

Runtime environment

In fp-bench (https://github.com/delcypher/fp-bench), benchmarks can also state what the expected runtime environment is
(i.e. environment variables and command line flags). It also supports substitutions
(e.g. @spec_dir@) that make it easy to refer to a file in the same directory as the
specification file.

If this is not specified then it is assumed that the environment variables and command
line arguments are empty (except for the implicit 0th command line argument).
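
As a rough sketch of the idea (the key names below are just for illustration and are not necessarily fp-bench's actual schema), such a runtime environment block might look like:

runtime_environment:
  environment_variables:
    DATA_FILE: "@spec_dir@/constants.dat"
  command_line_arguments:
    - "--input"
    - "@spec_dir@/constants.dat"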

This is something I found very useful in my own work. I'm not sure if SV-COMP should
adopt this.

correct_results (a.k.a. verification_tasks; I prefer this name)

This needs more work. In fp-bench this field is called verification_tasks and is
far more expressive. Here's an example.

I find your key names (e.g. unreach-call,__VERIFIER_error_linux_alloc_irq)
hard to read. The use of the , seems very strange to me. We have a structured data
format, so we should use it appropriately.

In fp-bench, each verification task maps to a dictionary (object) that contains information
about the task rather than just true or false. For example, in fp-bench there is a correct
property in the dictionary, and it can be true, false, or null (correctness is not known).

In fp-bench if a verification task is not listed it is implicitly assumed that the benchmark
is correct with respect to that verification task.

fp-bench also lists expected error locations (referred to as counter examples). This format
may not be suitable for SV-COMP but the point here is we should choose a sensible and flexible
way to describe verification tasks.

Variants

fp-bench has the ability for a benchmark specification file to actually describe multiple
benchmarks; each one is called a variant. Each variant can have different compiler defines, verification tasks, dependencies, and runtime environment.

I found this very useful because it allowed me to make minor variants of a benchmark.
While this was very convenient, it comes at a significant cost:

  • We have to decide what the semantics of variants are when properties are stated
    both globally for all variants and for an individual variant. For example, if a benchmark
    has a set of global defines and a variant has its own defines, what are the semantics?
    For fp-bench we take the union of the defines, but for other properties (e.g. verification_tasks)
    it's less clear what the semantics should be.
  • I found it necessary to implement, as part of the build system, a pass that would walk over
    benchmark specification files that contained variants and generate a benchmark specification
    file for each variant. This made it easier for external tools to consume the benchmark
    specification format.

The SV-COMP community needs to decide whether having the ability to specify variants
of a benchmark is worth it. My suggestion would be to not do this and potentially
add it in a future version of the schema if there is a desire for the feature.

Misc data

fp-bench supports an optional misc property which maps to a dictionary (object)
containing any data.

This is potentially useful to people who wish to annotate the benchmark with
additional data that is more sophisticated than tags.

This is better than having YAML comments because they will not survive
deserialization/serialization.
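
For example, a misc: block could carry arbitrary structured data such as the following (the keys are purely illustrative):

misc:
  origin: "extracted from project X"
  notes:
    reviewed: true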

Schema and schema version

Having a schema version proved very useful in my work. It allows the format
to be described unambiguously.

Given that the proposal is to have both a category specification and a benchmark
specification format, I would recommend having a schema for each and therefore
a version number for each.

I would encourage you to look at the schema for the benchmark specification format:

https://github.com/delcypher/fp-bench/blob/master/svcb/svcb/schema.yml


mutilin commented on May 23, 2024

correct_results

This needs more work. In fp-bench this field is called verification_tasks and is
far more expressive. Here's an example.

@delcypher, how are correct_results passed to BenchExec?


delcypher commented on May 23, 2024

@mutilin

@delcypher, how are correct_results passed to BenchExec?

This isn't my concern. I was just commenting on the design proposed here and comparing it to the design decisions I made when creating my own benchmark specification format.

Although it's important that BenchExec be able to consume the verification tasks, I don't think BenchExec should force a rigid design for the benchmark specification language. Instead, the design should be made as flexible (and forward-looking, i.e. trying to anticipate likely changes we might want to make in the future) as possible. BenchExec can be taught to consume the new format, or some sort of shim can be put in place to do the necessary translation.


mutilin commented on May 23, 2024

I want to have a format for correct_results that can be used for sosy-lab/benchexec#217.
Ideally, I think it should be a subset of the design proposed here,
so I would be happy to work out a solution to be implemented in BenchExec.


PhilippWendler commented on May 23, 2024

@delcypher Thanks.

It looks like together you can get a nice proposal into shape; this is really great!

A few comments from me:

Tags, compiler defines, and dependency declarations sound like optional features that we can add later on if we need them; we can probably skip them for now to keep things simple.

The runtime environment seems like it would always contain tool-specific information. So it should not be part of the specification of verification tasks. Or are there useful examples that are not specific to which tool is going to be executed on the task?

I agree that dropping variants is probably good, and it makes everything easier.

How is your verification task with dictionaries containing field correct different from correct_results with a mapping from property to expected results? (If we ignore name differences, and do not have the possibility to specify multiple variants per file.) What kinds of things can it express that are not expressible with this proposal?

I think that we must not implicitly assume anything about whether a property holds. If something isn't specified, it should be treated either as error or unknown.


delcypher commented on May 23, 2024

@PhilippWendler

Tags, compiler defines, and dependency declarations sound like optional features that we can add later on if we need them; we can probably skip them for now to keep things simple.

It's up to the community. In fp-bench these are all optional fields. I would say that for dependencies there is already a need for this feature because of the MPI benchmarks.

The runtime environment seems like it would always contain tool-specific information. So it should not be part of the specification of verification tasks. Or are there useful examples that are not specific to which tool is going to be executed on the task?

I think I need to explain this more clearly. Every userspace C program runs with a set of environment variables and command line arguments.
The runtime information is about the environment that the benchmark is intended to run in; it is not the environment that a verifier should be run in.

A concrete example where I needed this was a unit conversion library (e.g. converting meters to feet) that had an external file containing all the conversion constants. The benchmark needed to know where to find this file, and it could be specified either in an environment variable or on the command line.

In fp-bench only concrete values for the runtime environment can be specified. This is problematic for SV-COMP because we may want the verifier to assume non-deterministic values for some parts of the environment.

How is your verification task with dictionaries containing field correct different from correct_results with a mapping from property to expected results? (If we ignore name differences, and do not have the possibility to specify multiple variants per file.)

Here is a concrete example from fp-bench.

verification_tasks:
  no_assert_fail:
    correct: false
    counter_examples:
      -
        description: 'NaN can cause property not to hold.'
        locations:
          -
            file: 'vector_example.c'
            line: 56
  no_reach_error_function:
    correct: true

The idea is to map a verification task name to a dictionary containing any information we want to hold about the verification task with respect to the benchmark. @vmordan's proposal was to effectively map the verification task to a boolean (true, false, or unknown), which leaves no room for easy expansion of the format.

In the particular example above verification_tasks maps to two verification tasks no_assert_fail and no_reach_error_function.

no_assert_fail maps to a dictionary containing two properties.

  • correct. This indicates whether or not the benchmark is expected to be correct with respect to the parent property (i.e. in this case no_assert_fail). It is either true, false or null (i.e. unknown).
  • counter_examples. This is a list of expected bugs in the benchmarks. It lists the known source locations along with a description.

no_reach_error_function maps to a dictionary containing a single property correct which maps to true (i.e. we expect the benchmark to be correct
with respect to the no_reach_error_function verification task). This illustrates multiple verification tasks being declared for the benchmark.

I am not saying that I think counter_examples should be part of what SV-COMP uses in its format. It is just here to illustrate the sort of thing that can be done. counter_examples was useful for the project I was working on, but I think SV-COMP would need something more sophisticated (e.g. a list of witness files or something like that).

It is up to the community to decide what information to associate with a verification task, but I strongly believe that using the scheme I suggest (mapping verification_tasks to a set of verification tasks, where each task maps to a dictionary describing properties of that task) is superior to just mapping to a boolean value.

What kinds of things can it express that are not expressible with this proposal?

See above. Basically anything that we can describe in YAML can be expressed for each verification task.

I think that we must not implicitly assume anything about whether a property holds. If something isn't specified, it should be treated either as error or unknown.

Sure. This was a decision that was convenient for the project I was working on. It is not something I would recommend for SV-COMP.


delcypher commented on May 23, 2024

@PhilippWendler @vmordan
Something else I should note is that the SV-COMP community needs to be very careful when introducing multiple verification tasks.

It caused me a lot of trouble when I did so for fp-bench.

Consider this example.

extern void __VERIFIER_error(void);

int main() {
  char A[4];

  int x = (int) A[4]; /* out-of-bounds read: A only has valid indices 0..3 */
  if (x > 0)
    return 0;

  __VERIFIER_error();
}

And now let's imagine that there are two verification tasks one that is concerned with memory safety and the other that is concerned with the reachability of __VERIFIER_error().

The reachability of __VERIFIER_error() depends on reading data out of bounds which is what the other verification task is concerned about.

The point is that the two verification tasks are intertwined and it's difficult to look at one in isolation.

If the SV-COMP benchmark format allows for multiple verification tasks then it needs to be very clear what assumptions a verifier should make when considering each task individually.


PhilippWendler commented on May 23, 2024

@delcypher Thanks.

You are right about the MPI tasks; the people interested in #163 should discuss what they need (@ziqingluo, @yihaoyan).

For the runtime environment, this makes sense of course. However, given the potential problem you see, I would suggest keeping it out for now (when preparing a verification task, one can also easily add some wrapper code that sets up args and env, which is more flexible because it allows any kind of logic). And of course we can also add it later easily.

For the mapping per property, your example sounds convincing. I could also easily envision this as a place for storing test vectors that trigger the counterexample. And this is probably also interesting for witnesses.

So @vmordan, what is your opinion on this? Could you answer the open questions from @delcypher? Maybe prepare an updated proposal that includes the changes you agree with for easier discussion?

@dbeyer Any comments? Is this going in a direction you find useful for SV-Comp?


vmordan commented on May 23, 2024

Something else I should note is that the SV-COMP community needs to be very careful when introducing multiple verification tasks.

I think this is a problem for the verification tool (whether it is able to search for such intertwined property violations or not) rather than for the repository metadata. Currently it is suggested that the result of multi-property verification should be the same as the result of verifying each property in isolation.

We have a question about the suggested format (https://github.com/delcypher/fp-bench/blob/master/svcb/svcb/schema.yml): which tags should be mandatory for both SV-COMP and BenchExec? Most likely, these are sources (to pass the YAML file to BenchExec), verification_tasks (to pass correct results), and, maybe, architectures. Do you think some other tags should also be mandatory, or should they be optional features (perhaps only for the SV-COMP scripts)?

I agree that the format for verification_tasks suggested earlier can be extended, but it should contain the property name in some established format (for example, the SV-COMP categories) and the correctness status (true or false). If a property is not specified in verification_tasks, it is treated as unknown. Do you agree to use the following constants as property names:
valid-deref,
valid-free,
valid-memtrack,
no-overflow,
no-deadlock,
termination,
unreach-call(<function_name>) (for example, unreach-call(__VERIFIER_error_linux_alloc_irq))?


PhilippWendler commented on May 23, 2024

I think at the beginning only sources and verification tasks would be necessary. However, these could be named differently to be more generic, e.g. (input) files and expected results or something similar.


vmordan commented on May 23, 2024

We suggest the following format of mandatory tags for both SV-COMP and BenchExec:

input_files:
  type: array
  minItems: 1
  uniqueItems: true
  items:
    type: string
expected_results_for_property: &expected_results_for_property
  type: object
  properties:
    correct:
      type: boolean
expected_results:
  type: object
  minProperties: 0
  additionalProperties: false
  properties:
    "valid-deref": *expected_results_for_property
    "valid-free": *expected_results_for_property
    "valid-memtrack": *expected_results_for_property
    "no-overflow": *expected_results_for_property
    "no-deadlock": *expected_results_for_property
    "termination": *expected_results_for_property
    "unreach-call": *expected_results_for_property

For example:

input_files:
  - test_false-unreach-call.c
expected_results:
  valid-deref:
    correct: true
  no-overflow:
    correct: false
  unreach-call:
    error_function_1:
      correct: true
    error_function_2:
      correct: false

Of course, other tags can be mandatory only for the SV-COMP scripts; these will be ignored by BenchExec.

Do you agree with the suggested format of mandatory tags for both SV-COMP and BenchExec?


delcypher commented on May 23, 2024

@vmordan

I think this is a problem for the verification tool (whether it is able to search for such intertwined property violations or not) rather than for the repository metadata. Currently it is suggested that the result of multi-property verification should be the same as the result of verifying each property in isolation.

Sure, it is a problem for a verification tool. The point I was trying to make is that if we have multiple properties, it may be desirable to document how these should be handled by a verifier.

We have a question about the suggested format (https://github.com/delcypher/fp-bench/blob/master/svcb/svcb/schema.yml): which tags should be mandatory for both SV-COMP and BenchExec? Most likely, these are sources (to pass the YAML file to BenchExec), verification_tasks (to pass correct results), and, maybe, architectures. Do you think some other tags should also be mandatory, or should they be optional features (perhaps only for the SV-COMP scripts)?

sources, verification_tasks, category, architecture should be mandatory.

I agree that the format for verification_tasks suggested earlier can be extended, but it should contain the property name in some established format (for example, the SV-COMP categories) and the correctness status (true or false). If a property is not specified in verification_tasks, it is treated as unknown. Do you agree to use the following constants as property names:
valid-deref,
valid-free,
valid-memtrack,
no-overflow,
no-deadlock,
termination,
unreach-call(<function_name>) (for example, unreach-call(__VERIFIER_error_linux_alloc_irq))?

I mostly agree, but unreach-call(<function_name>) is not a good idea. This means every client of the specification has to do some extra parsing, which is completely unnecessary. Instead, the task should be called unreach_call, and the dictionary it maps to should contain the name of the call. We can debate whether it should assume __VERIFIER_error() by default. I also think you should avoid - and use _ instead: - is not a valid identifier character in many programming languages, and this may cause problems for clients.
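
As a sketch of what this structured alternative could look like (the key name function here is only an illustration, not an agreed part of the format):

expected_results:
  valid_deref:
    correct: true
  unreach_call:
    function: __VERIFIER_error_linux_alloc_irq
    correct: false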


delcypher commented on May 23, 2024

@vmordan
The schema you've sketched looks mostly right, except that you need to enforce that some properties are required (e.g. correct must be specified).

Rather than sketching the schema any further here, what I would suggest is to modify the svcomp-build-mockup so that we have a working demo of what we want. This makes it much easier to validate design decisions.

I need to do a bunch of cleanup on that project (e.g. remove variants and backport some features from fp-bench). Once that's done, we can start doing the modification.


delcypher commented on May 23, 2024

@PhilippWendler @vmordan
Do we want to have anything in the spec to indicate whether the source file is pre-processed or not? Alternatively, we could follow a convention of just looking at the extension of the source files (i.e. *.i means the file was pre-processed).


delcypher commented on May 23, 2024

@vmordan
My questions regarding the extends property still need to be answered.


PhilippWendler commented on May 23, 2024

@vmordan In the example, the properties unreach-call and valid-deref/no-overflow are expressed differently, but I do not find this in the schema. The need to handle these differently would go away if we accepted unreach-call(f) as a property name. I am not sure what makes more sense, but probably in most cases we need a single-string identifier for properties anyway?

I think @delcypher mentioned once that there should be some version identifier in such files?

For now, I would keep architecture and category out of the file. Currently this would create redundant information, because this information is already defined by the category-specific files. It is probably good to move this information at some point, but we can do this step-wise.

Similarly for preprocessing, I would delay this. We could add it later on with some language key, for example. However, this is more complex anyway: there is no clear line between preprocessed files and non-preprocessed files. After all, a lot of files would satisfy both labels. So some more thought needs to go into this anyway.

What kind of demo would you develop in the svcomp-build-mockup repository? Sounds interesting. So far I have only thought of having a schema file somewhere, some documentation, a few examples (we can use the already existing ldv-multiproperty tasks for this), and some code in check.py that checks for validity. This could be done directly in this repository.


vmordan commented on May 23, 2024

Currently we propose the following schema for the YAML file:

title: "SV-COMP benchmark description schema"
__version__: 0
type: object
additionalProperties: false
properties:
  input_files:
    type: array
    minItems: 1
    uniqueItems: true
    items:
      type: string
  expected_results_for_property: &expected_results_for_property
    type: object
    properties:
      correct:
        type: boolean
    required:
      - correct
  expected_results:
    type: object
    minProperties: 1
    additionalProperties: false
    patternProperties:
      # Single-string identifier.
      "^[a-zA-Z0-9_-]+$": *expected_results_for_property
required:
  - input_files
  - expected_results

Potential example of YAML file:

input_files:
  - test_false-unreach-call.c
expected_results:
  valid-deref:
    correct: true
  unreach-call(error_function_1):
    correct: false
  unreach-call(error_function_2):
    correct: true


mutilin commented on May 23, 2024

Where should the YAML schema be placed in the end?
Should we place it in the sv-benchmarks repository, in BenchExec, or somewhere else?


PhilippWendler commented on May 23, 2024

I would tend to put it in this repository. The check.py script will need it for checking the validity of the YAML files.


PhilippWendler commented on May 23, 2024

In sosy-lab/benchexec#226 (comment) I have explained a few suggested changes. In addition to what is described there, we think that a possibility to include/extend other files is indeed a good idea, and we should add this as soon as we have more information than just source files and properties.

The reason for this is that we think it is better to have one YAML file per program, without the possibility to define multiple variants of the program in a single YAML file (i.e., no variants), because the metadata format should be easy to use. Having multiple variants in a single file significantly complicates parsing and handling the files (for example, everybody handling these files now needs to carry around both the name of the YAML file and the name of the variant as a tuple, instead of a single identifier). Using a composite name like <name>_<build_variant_name> as an identifier creates other problems; for example, backwards mapping can become impossible. So we prefer to avoid variants.

Without variants, extends/include is useful to avoid duplicating redundant information across YAML files if there are several variants of a program, e.g. with different values of defines. It still needs to be discussed how this key should be named, and whether it should be allowed to override information from the included file in the including file, or whether contradicting information would be an error.
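
For illustration only (the key names extends and defines are taken from the proposals above and are not settled), the duplication could be avoided with one shared file plus one small file per program variant:

# common.yml: shared information
input_files:
  - test.c
expected_results:
  unreach-call:
    correct: false

# variant_a.yml: only the variant-specific defines
extends: common.yml
defines:
  A: "1"

# variant_b.yml
extends: common.yml
defines:
  A: "2"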


PhilippWendler commented on May 23, 2024

We have had task metadata in machine-readable files for some time now.

