
common-workflow-language's Introduction

Common Workflow Language

Main website: https://www.commonwl.org

GitHub repository for www.commonwl.org: https://www.github.com/common-workflow-language/cwl-website

CWL v1.0.x: https://github.com/common-workflow-language/common-workflow-language (this repository)

CWL v1.1.x: https://github.com/common-workflow-language/cwl-v1.1/

CWL v1.2.x: https://github.com/common-workflow-language/cwl-v1.2/


[Video] Common Workflow Language explained in 64 seconds

The Common Workflow Language (CWL) is a specification for describing analysis workflows and tools in a way that makes them portable and scalable across a variety of software and hardware environments, from workstations to cluster, cloud, and high performance computing (HPC) environments. CWL is designed to meet the needs of data-intensive science, such as Bioinformatics, Medical Imaging, Astronomy, Physics, and Chemistry.

CWL is developed by a multi-vendor working group consisting of organizations and individuals aiming to enable scientists to share data analysis workflows. The CWL project is maintained on GitHub and we follow the Open-Stand.org principles for collaborative open standards development. Legally, CWL is a member project of Software Freedom Conservancy and is formally managed by the elected CWL leadership team; however, everyday project decisions are made by the CWL community, which is open to participation by anyone.

CWL builds on technologies such as JSON-LD for data modeling and Docker for portable runtime environments.

User Guide

The CWL user guide provides a gentle introduction to learning how to write CWL command line tool and workflow descriptions.

CWLの日本語での解説ドキュメント (a Japanese-language explanation of CWL) is a 15-minute introduction to the CWL project.

CWL Recommended Practices


A series of video lessons about CWL is available in Russian as part of the Управление вычислениями (Computation Management) free online course.

Citation

To reference the CWL project in a scholarly work, please use the following citation:

Michael R. Crusoe, Sanne Abeln, Alexandru Iosup, Peter Amstutz, John Chilton, Nebojša Tijanić, Hervé Ménager, Stian Soiland-Reyes, Bogdan Gavrilović, Carole Goble, and The CWL Community. (2022): Methods Included: Standardizing Computational Reuse and Portability with the Common Workflow Language. Commun. ACM 65, 6 (June 2022), 54–63. https://doi.org/10.1145/3486897

To cite version 1.0 of the CWL standards specifically, please use the following citation, inclusive of the DOI:

Peter Amstutz, Michael R. Crusoe, Nebojša Tijanić (editors), Brad Chapman, John Chilton, Michael Heuer, Andrey Kartashov, Dan Leehr, Hervé Ménager, Maya Nedeljkovich, Matt Scales, Stian Soiland-Reyes, Luka Stojanovic (2016): Common Workflow Language, v1.0. Specification, Common Workflow Language working group. https://w3id.org/cwl/v1.0/ doi:10.6084/m9.figshare.3115156.v2

A collection of existing references to CWL can be found at https://zotero.org/groups/cwl

Code of Conduct

The CWL Project is dedicated to providing a harassment-free experience for everyone, regardless of gender, gender identity and expression, sexual orientation, disability, physical appearance, body size, age, race, or religion. We do not tolerate harassment of participants in any form. This code of conduct applies to all CWL Project spaces, including the Google Group, the Gitter chat room, the Google Hangouts chats, both online and off. Anyone who violates this code of conduct may be sanctioned or expelled from these spaces at the discretion of the leadership team.

For more details, see our Code of Conduct.

For the following content:

  • Support, Community and Contributing
  • CWL Implementations
  • Repositories of CWL Tools and Workflows
  • Software for working with CWL
    • Editors and viewers
    • Utilities
    • Converters and code generators
    • Code libraries
  • Projects the CWL community is participating in
  • Participating Organizations
  • Individual Contributors
  • CWL Advisors
  • CWL Leadership team

Please see https://www.commonwl.org

common-workflow-language's People

Contributors

bogdang989, boysha, chapmanb, cure, denis-yuen, dleehr, gijzelaerr, guillermo-carrasco, hmenager, jmchilton, kapilkd13, kellrott, manabuishii, manu-chroma, mdmiller53, michael-kotliar, mr-c, otiai10, portah, porterjamesj, psaffrey-illumina, psafont, sersorrel, sinisa88, stain, tetron, thomashickman, tjelvar-olsson, tom-tan, wgerlach


common-workflow-language's Issues

Galaxy-Inspired Features to Consider

Very preliminary list to consider (and unfortunately probably biased toward the kinds of things @jmchilton thinks about on the Galaxy project). In order of increasing controversy:

  • DOIs for citing underlying application on the tool.
  • Annotating inputs and outputs with platform-specific type extensions (URLs were mentioned at the 2015/03/17 meeting - this is fine, but documentation or an example would be ideal).
  • Version for the tool (not the underlying application).
  • Requirement list - combinations of abstract package name and version that can be used by platforms to configure the environment for a particular tool. (This would be if Docker is not available).

Some more ideas of things I would like for Galaxy users but might not make sense to put in the draft - platform specific extensions (#26) may be better for these and advice on how to do that would be most appreciated.

  • BibTeX for citing the underlying application on the tool.
  • Way to build a command for fetching the version of the underlying application (e.g. application --version).

Platform-specific UI hints would also be huge.

There are existing issues for some other features - in particular

  • Annotating datatypes on inputs and outputs - #7
  • Conditional outputs - #14 (the proposed flexibility will be a real challenge to represent in Galaxy - but Galaxy's variant is tied to Python and Galaxy data structures, so it is not appropriate).

multiple inbound link handling

The spec says we should handle multiple inbound links by making an array, unless all links carry List<T> data (where T is the same for all links), in which case the arrays should be concatenated.

It's hard to determine if data is of same type, since types can be defined in different places. Perhaps it would be better to always create arrays (so value of a port with N inbound links would be a list with N elements for N>1), and introduce an explicit "merge strategy" property to allow easy concatenation where needed.
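For illustration, here is roughly the shape this idea took when it was later standardized (CWL v1.0 calls the property linkMerge, with merge_nested and merge_flattened strategies, gated by MultipleInputFeatureRequirement); the step and port names below are illustrative:

```yaml
# Two upstream array outputs feed one port; merge_flattened concatenates them,
# while merge_nested (the default) makes each source one element of a new list.
in:
  sequences:
    source: ["#step1/out", "#step2/out"]
    linkMerge: merge_flattened
```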

File objects should include type

Current way of representing file objects:

path: /path/to/file.ext
size: 42
metadata:
  key: value
secondaryFiles:
 - "@type": "File"
   path: /path/to/file.ext.ndx
 - "@type": "File"
   path: /path/to/file.replaced_ext

It makes us rely on type definitions to tell if something is a file or not, which is bad since type defs can be unions and thus ambiguous.

I'd suggest we include type explicitly, add a "name" property, and represent secondary files with the naming convention. Also, metadata should hopefully be prefixed with ontology base. Example:

class: File
path: /path/to/file.ext
name: file.ext
size: 42
metadata:
  prefix:key: value
secondaryFiles:
 - ".ext"
 - "^.replaced"

Alternative way to encode wiring (dataLinks array)

While I prefer the connect property, many people I talked with said that it feels more natural to encode data links as a separate list of source+destination pairs (exactly as wf4ever originally does). Perhaps it is worth introducing this as an alternative? In any case, duplicate links should be ignored.

Move streamable flag from bindings to ports?

If the streamable property remains on input/output bindings, we cannot handle a very common case: input files bound to stdin should be streamable. Since the flag is in the binding, it means the binding exists (hehe) and therefore the file would be passed through the command line (rather than e.g. stdin).

It is highly unlikely there will be cases where a port has complex type which contains files, and some of these files are streamable while others are not. Because of this, I think it makes sense to simply have streamable be the property of ports rather than bindings.
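A sketch of the proposed shape, with streamable on the parameter (port) itself rather than on its binding, which is also where the flag ended up in CWL v1.0; names are illustrative:

```yaml
inputs:
  - id: reads
    type: File
    streamable: true            # property of the port, not of an inputBinding
stdin: $(inputs.reads.path)     # no binding needed: the file is consumed on stdin
```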

lightweight java reference implementation

A lightweight CWL Java stack along the lines of the existing Python tools, without any support for complex execution environments.

It should have an API providing:

  • methods for processing tool descriptions (phase 0)
  • constructing command lines based on tool descriptions and file locations/parameter values (phase 1)
  • ability to discover tool descriptions matching specific input/output/functional criteria (phase 2)
  • facilitate connection with heavyweight CWL enabled systems to allow workflow discovery, data staging and execution of complex workflows (phase 3)

Proposed for topic during BOSC Codefest 2015 (g+hangout 23rd June https://docs.google.com/document/d/1Ye8I35EHbyFXJD7h_M-5FMu21FsEXvE4DrxHiB3WIOE/edit?usp=sharing )

Initial use case
Java UI auto-generation for a CWL tool available as binary on local file system which has specific input / output data requirements.

Demonstration
integrate with Jalview MSA workbench to provide autogenerated UI from tool descriptions for:
a. MSA tool (input unaligned sequences, output is aligned sequences and optionally phylogenetic tree, similarity matrix, alignment quality file)
b. sequence database search tool (input sequence + search parameters, output search report, profiles, other data)
c. phylogeny command line tools (input alignment + tree building parameters, output tree(s), similarity matrices).

Goal
Lightweight component library with OSGi descriptors suitable for use in Cytoscape, Jalview, and ultimately included in BioJava.

Add AlternateRequirement

Meta-requirement which is a list of requirements where at most one requirement needs to be satisfied.
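A minimal sketch of what such a meta-requirement could look like. AlternateRequirement is only a proposal here; the class name and the alternatives field below are hypothetical:

```yaml
requirements:
  - class: AlternateRequirement   # hypothetical meta-requirement
    alternatives:                 # hypothetical field: satisfying one entry suffices
      - class: DockerRequirement
        dockerPull: "quay.io/example/tool:1.0"
      - class: EnvVarRequirement
        envDef:
          TOOL_HOME: /opt/tool
```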

expressionDefs should be more generic

Since we stopped using JS in the spec, it makes more sense for the expressionDefs property to be more generic and pass any information to expression engines.

Perhaps we can rename it to e.g. expressionEngineConfig or expressionEngineContext or such and allow any data to be passed.

document how to model a one-or-the-other parameter

From https://groups.google.com/d/msg/common-workflow-language/t7GEjKwJHys/2BvtkhsjXxMJ

> I'm not seeing how to model a boolean so that both 'true' and 'false' result
> in differents strings being added to the command line.

One way to do this is to define an "enum" where the symbols are the
two alternate command line flags.  Another way is to use an expression
that overrides the default handling of boolean.  A third way would be
to define two types with the command line flag in inputBinding.prefix
and use a union.
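The enum approach from the first suggestion can be sketched as follows: the symbols are themselves the two alternate command-line flags, so selecting a value selects the flag (identifiers are illustrative):

```yaml
inputs:
  - id: strand_mode
    type:
      type: enum
      symbols: ["--stranded", "--unstranded"]
    inputBinding:
      position: 1   # the chosen symbol is emitted verbatim on the command line
```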

ShellCommandRequirement

Propose ShellCommandRequirement, which enables the following two features:

  • shell (default false) on command line tools. If true, this builds the command line as normal but runs it as a shell command instead of directly exec()ing it.
  • shellquote (default true) on commandLineBinding, to indicate whether a text fragment should be quoted or included in the shell command as-is, allowing pass-through of shell metacharacters such as pipes.
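This proposal was later adopted (CWL v1.0 has ShellCommandRequirement with a shellQuote flag on individual bindings); a rough sketch of the pass-through behavior, with illustrative input names:

```yaml
requirements:
  - class: ShellCommandRequirement
arguments:
  - { valueFrom: "cat", shellQuote: false }
  - { valueFrom: $(inputs.infile.path), shellQuote: true }   # quoted: treated as a single word
  - { valueFrom: "| wc -l", shellQuote: false }              # unquoted: the pipe reaches the shell
```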

Tag releases in git

Continued from #28

It would be nice if CWL releases were tagged to give users a way to fall back easily to a previous, known-to-work release. Currently every build gets pushed to PyPI (correct me if I'm wrong), so in a way every build is a release, and tagging them does not make sense since every commit leads to a build (again, correct me if I am wrong).

Here's what I do: to get stable releases, only selected builds are pushed to PyPI. On master, the package version number has a .devN suffix except when a release is prepared. Then the .devN suffix is removed (as the only change in a commit) and a release build is triggered on CI, the only difference from normal builds being that it pushes to PyPI. The commit that removed the dev suffix is tagged manually, and another commit is made to bump the package version to the next dev release (the minor version is incremented and a .dev1 suffix is added).

Anyways, there are a thousand ways to skin this cat.

BTW: pip supports installing a tagged release directly from GitHub via

pip install git+https://github.com/foo/[email protected]

Processes can declare they explicitly implement a certain input/output signature

Various groups have expressed the desire to be able to define input/output signatures for abstract operations that may be implemented by multiple containers or workflows. Possible implementation would be a field like "implementsInterface" on a process which references an external resource (by URI?) which defines the input and output signature.
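A hypothetical shape for such a field, using the variant-calling signature discussed elsewhere in this tracker; implementsInterface and the interface URI are illustrative, not adopted syntax:

```yaml
class: CommandLineTool
implementsInterface: "https://example.org/interfaces/variant-calling"  # hypothetical field, referencing a signature by URI
inputs:
  - { id: reference, type: File }
  - { id: fastq_mates, type: { type: array, items: File } }
outputs:
  - { id: vcf, type: File }
baseCommand: varcall   # illustrative
```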

WorkflowStepInput.param vs naming convention

Currently, according to spec, step ports are connected with "implementation" ports using an IRI from param property of WorkflowStepInput. An alternative option we discussed was to use naming convention where step inputs should be named #<step_id>/<impl_port_id>.

Example with IRI:

steps:
  - inputs:
      - { param: "revtool.cwl#input", connect: { source: "#input" } }
    outputs:
      - { id: "#reversed", param: "revtool.cwl#output" }
    run: { import: revtool.cwl }

Example with naming convention:

steps:
  - id: "#step1"
    inputs: 
      - {id: "#step1/input", connect: { source: "#input" }}
    outputs:
      - { id: "#step1/output"}
    run: { import: revtool.cwl }

The key difference is that naming convention allows us to recognize an "interface" of a step and easily switch out one implementation for another. For example, if we had a "variant calling" step which has inputs "reference" and "fastq_mates" and a "vcf" output, one can create subworkflows matching that interface and switch implementations by just modifying the run property.

I suggest we at least keep the second option as fallback: if param is not specified, match ports using the naming convention.

Define way to access the output directory in a tool definition

To make it easier for tool definitions to conform to the spec's requirement that:

Output files produced by tool execution must be written to the designated output directory.

My suggestion for an implementation is to make the output directory accessible via the job object, so that a JsonPointer expression can pull it out, e.g.:

arguments:
  - position: 1
    prefix: '--log-dir'
    valueFrom:
      engine: JsonPointer
      script: 'job/output_dir'

I'm happy to take a stab at this if you want.

Add successCodes property to CommandLineTool

Currently, we recognize success or failure by checking if exit code is zero or not. Some tools (e.g. grep) have successful runs with non-zero exit code. To accommodate this, we should add a successCodes property to CommandLineTools (a set of integers with [0] being default).
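This proposal was adopted in CWL v1.0. A sketch for the grep case, where exit code 1 ("no lines matched") should count as success:

```yaml
class: CommandLineTool
baseCommand: grep
successCodes: [0, 1]   # treat grep's "no match" exit code as success
inputs:
  - id: pattern
    type: string
    inputBinding: { position: 1 }
  - id: infile
    type: File
    inputBinding: { position: 2 }
outputs: []
```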

Have scatter and scatterMethod be properties of WorkflowStep

If requirements of the workflow step are interpreted as mixing with or overriding requirements of the step implementation, it doesn't feel consistent to have the scatter configuration in the requirement.

I'd suggest we revert to having scatter and scatterMethod be properties of steps and, if we want to keep scattering an optional feature, require that a class: ScatterFeatureRequirement (or equivalent) be listed in workflow requirements if workflow uses scatter.
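This is the shape that CWL v1.0 settled on: scatter and scatterMethod are step properties, and ScatterFeatureRequirement must be listed in the workflow's requirements. Step and port names below are illustrative:

```yaml
class: Workflow
requirements:
  - class: ScatterFeatureRequirement
steps:
  align:
    run: align.cwl              # illustrative tool
    scatter: [reads, reference]
    scatterMethod: dotproduct   # pair the i-th reads with the i-th reference
    in:
      reads: all_reads
      reference: all_references
    out: [bam]
```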

Should we require ports to always define type or always specify "depth" property?

(or perhaps neither of those?)

Avro is pretty strict when defining types. If port schemas are required and data dimensions determined from the schema (as is the case with the reference implementation at the moment), this disallows generic or flexible (e.g. "this input is a dict of whatever configuration") tools. Example:

class: ExpressionTool
description: Flatten a list of lists of things to a single list of things.
inputs:
  - id: "#list"
outputs:
  - id: "#flattened"
script:
  class: JavascriptExpression
  value: "return {flattened: [].concat.apply([], $job.inputs.list)}"

I don't think there's a way to specify the input/output type in Avro for the above example (List<List<T>> and List<T>), but the dimensionality needs to be known for implicit parallel-for-each.

One way to resolve this would be to explicitly define a "depth" property of each input/output and not require a schema at all. Any other suggestions?

Installation fails since 1.0.20150318015654

Steps to reproduce:

virtualenv FOO
FOO/bin/pip install cwltool
FOO/bin/cwltool

and I get

Traceback (most recent call last):
  File "FOO/bin/cwltool", line 9, in <module>
    load_entry_point('cwltool==1.0.20150318015654', 'console_scripts', 'cwltool')()
  File "/Users/hannes/FOO/lib/python2.7/site-packages/pkg_resources/__init__.py", line 474, in load_entry_point
    return get_distribution(dist).load_entry_point(group, name)
  File "/Users/hannes/FOO/lib/python2.7/site-packages/pkg_resources/__init__.py", line 2582, in load_entry_point
    return ep.load()
  File "/Users/hannes/FOO/lib/python2.7/site-packages/pkg_resources/__init__.py", line 2265, in load
    return self._load()
  File "/Users/hannes/FOO/lib/python2.7/site-packages/pkg_resources/__init__.py", line 2268, in _load
    module = __import__(self.module_name, fromlist=['__name__'], level=0)
  File "/Users/hannes/FOO/lib/python2.7/site-packages/cwltool/main.py", line 3, in <module>
    import draft1tool
  File "/Users/hannes/FOO/lib/python2.7/site-packages/cwltool/draft1tool.py", line 23, in <module>
    with open(jsonschemapath) as f:
IOError: [Errno 2] No such file or directory: '/Users/hannes/FOO/lib/python2.7/site-packages/cwltool/schemas/draft-1/json-schema-draft-04.json'

cwltool generates a TypeError

When I run

$ python ./reference/cwltool/main.py examples/bwa-mem-tool.json examples/bwa-mem-job.json 

This error is generated.

bwa mem -t4 -I1,2,3,4 -m 3 /tmp/job983894154_test-files/chr20.fa /tmp/job983894154_test-files/example_human_Illumina.pe_1.fastq /tmp/job983894154_test-files/example_human_Illumina.pe_2.fastq > output.sam
Traceback (most recent call last):
  File "./reference/cwltool/main.py", line 53, in <module>
    sys.exit(main())
  File "./reference/cwltool/main.py", line 44, in main
    print job.run(dry_run=args.dry_run)
  File "/home/jkern/src/common-workflow-language/reference/cwltool/job.py", line 23, in run
    subprocess.call("docker", "pull", self.container["uri"])
  File "/usr/lib/python2.7/subprocess.py", line 522, in call
    return Popen(*popenargs, **kwargs).wait()
  File "/usr/lib/python2.7/subprocess.py", line 659, in __init__
    raise TypeError("bufsize must be an integer")
TypeError: bufsize must be an integer

Submitting PR.

Interpretation of WorkflowStep.requirements

It's not clear how the requirements property of workflow steps should be interpreted. Are they simply added to the requirements of the step "implementation"? What happens if they conflict (e.g. require different docker images)?

add conditionals to inputs and outputs

Based on the value of some inputs, some other inputs/outputs/groups of inputs/groups of outputs may or may not be relevant.
I suggest representing conditionals as a property of an input item like this e.g.:

        "properties": {
            "sam": {
                "adapter": {
                    "glob": "output.sam"
                },
                "type": "file",
                "when": {"$expr": "job.inputs.produce_a_sam_option==true"}
            }
        }

Here I state that this output is produced only if the input produce_a_sam_option has a value set to true. The same syntax can also be applied to inputs, specifying that a given input is to be taken into account only if the result of the $expr is true.

Encode some features as Requirements

To keep the core spec simple, we can encode certain features as Requirements. For example, instead of specifying defined schemas, files and env vars through top-level properties, they can be encoded as:

requirements:
  - class: SchemaDefRequirement
    types: [{"type": "record", "name": "MyType", "fields": []}]
  - class: EnvVarsRequirement
    vars: [{"key": "SOME_VAR", "value": "string or expression"}]
  - class: CreateFileRequirement
    fileName: "script.sh"
    fileContent: "./something $@"

Document avro-ld

As I noted on the mailing list, the spec makes reference to avro-ld but does not describe what it is or how to use it, which makes it difficult for an implementation to include validation.

suggestion: using env to pass evars to shell script

Hello,

I was reviewing run_test.sh. It accepts environment variables as arguments and uses eval to parse these args into environment variables (Evars).

$ ./run_test.sh CWLTOOL=../reference RABIX=$HOME/work/rabix/rabix

Have you considered inverting this command?

env CWLTOOL=../reference RABIX=$HOME/work/rabix/rabix ./run_test.sh

This way there is no need for the script to process the args into Evars.

https://github.com/common-workflow-language/common-workflow-language/blob/master/conformance/run_test.sh#L22

-jk

outputBinding in workflow ports

It's likely an artifact of entity inheritance, but should be made clear - is it meaningful to keep output binding in workflow output ports only to allow embedded transformations or do we remove the property?

I'd vote for the latter since transformations can be done with a downstream component and "binding" doesn't make much sense in context of a workflow.

Draft 2 changes

  • stdin and stdout become flags on File input parameters.
  • Use valueFrom to specify deterministic output filenames via string literal or expression. Name from valueFrom will be used for stdout.
  • Simple syntax for constructing filenames of secondary files based on the primary file name. Can also use an expression to generate secondary file names.
  • Instead of separator which is a character value, have a flag separate (default true) which indicates whether the prefix and value are separate argv entries or concatenated into a single entry.
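The separate flag from the last bullet can be sketched as follows (this is how it appears in later drafts): with separate: false, the prefix and the value are joined into a single argv entry:

```yaml
inputs:
  - id: threads
    type: int
    inputBinding:
      prefix: "-t"
      separate: false   # emits "-t4" as one argv entry rather than "-t" "4"
```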

reference implementation install fails

I tried a clean install of cwltool today, and it seems to fail because "sandboxjs" lib is missing:

~/cwl$ cwltool
Traceback (most recent call last):
  File "/home/hmenager/cwl/bin/cwltool", line 9, in 
    load_entry_point('cwltool==1.0.20150525010411', 'console_scripts', 'cwltool')()
  File "/home/hmenager/cwl/local/lib/python2.7/site-packages/pkg_resources.py", line 353, in load_entry_point
    return get_distribution(dist).load_entry_point(group, name)
  File "/home/hmenager/cwl/local/lib/python2.7/site-packages/pkg_resources.py", line 2321, in load_entry_point
    return ep.load()
  File "/home/hmenager/cwl/local/lib/python2.7/site-packages/pkg_resources.py", line 2048, in load
    entry = __import__(self.module_name, globals(),globals(), ['__name__'])
  File "/home/hmenager/cwl/local/lib/python2.7/site-packages/cwltool/main.py", line 3, in 
    import draft1tool
  File "/home/hmenager/cwl/local/lib/python2.7/site-packages/cwltool/draft1tool.py", line 4, in 
    import sandboxjs
ImportError: No module named sandboxjs

any pointers?

Add the ability to configure step input value without exposing it as workflow port

It would be really useful to be able to "hardcode" some step inputs without exposing them as workflow ports. If they are exposed as inputs with default value, there is no way to distinguish between ports meant to be configured by the "workflow user" and values that are not meant to change.

Having a default property on WorkflowInputParameter and WorkflowStepInput should handle this nicely - if no value is supplied in JobOrder or through data links, input value should be content of default or null.
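A sketch of the default property on a step input, written in the draft syntax used elsewhere in this tracker (step and port ids are illustrative); the value applies only when neither a data link nor the job order supplies one:

```yaml
steps:
  - id: "#step1"
    inputs:
      - id: "#step1/min_quality"
        default: 30              # hardcoded; not exposed as a workflow port
    outputs:
      - { id: "#step1/filtered" }
    run: { import: filter.cwl }
```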

published cwltool on pip missing schemas

Trying to run an example from conformance with the published version of cwltool on pip results in the following:

(cwl)vagrant@localhost /vagrant/common-workflow-language/conformance $ cwltool draft-2/cat1-tool.cwl draft-2/cat-job.json
Traceback (most recent call last):
  File "/home/vagrant/cwl/bin/cwltool", line 9, in <module>
    load_entry_point('cwltool==1.0.20150527172438', 'console_scripts', 'cwltool')()
  File "/home/vagrant/cwl/local/lib/python2.7/site-packages/cwltool/main.py", line 100, in main
    t = workflow.makeTool(from_url(args.workflow), basedir)
  File "/home/vagrant/cwl/local/lib/python2.7/site-packages/cwltool/workflow.py", line 35, in makeTool
    return draft2tool.CommandLineTool(toolpath_object, docpath)
  File "/home/vagrant/cwl/local/lib/python2.7/site-packages/cwltool/draft2tool.py", line 224, in __init__
    super(CommandLineTool, self).__init__(toolpath_object, "CommandLineTool", docpath)
  File "/home/vagrant/cwl/local/lib/python2.7/site-packages/cwltool/process.py", line 39, in __init__
    self.names = get_schema()
  File "/home/vagrant/cwl/local/lib/python2.7/site-packages/cwltool/process.py", line 22, in get_schema
    with open(cwl_avsc) as f:
IOError: [Errno 2] No such file or directory: '/home/vagrant/cwl/local/lib/python2.7/site-packages/cwltool/schemas/draft-2/cwl-avro.yml'

somehow some of the schemas are not being included. Weirdly enough it appears to be only the .yml files that are missing:

(cwl)vagrant@localhost /vagrant/common-workflow-language/conformance $ ls /home/vagrant/cwl/local/lib/python2.7/site-packages/cwltool/schemas/draft-2/
cwl-context.json  cwl-rdfs.jsonld

I'm not sure what's wrong, the relevant part of setup.py looks right to me, but then again setuptools seems to have an inexhaustible supply of behaviors that surprise and frustrate me.

www.commonwl.org redesign + logo

Top page should contain:

  • One sentence description
  • who is providing support ($/effort)
  • use cases with snippets

A logo would be nice as well
