
air-script's Issues

Refactor Periodic columns handling in IR and codegen

  1. Based on @bobbinth's comments on PR #40 here, it might be better to change the periodic columns from Vec<Vec<u64>> to Vec<Vec<Felt>> at the IR level, as this could be helpful in the future when we may want to evaluate constraints at a random point.

  2. Also, we could refactor the to_string() method for periodic columns as proposed by @bobbinth here.

A thought for the future: instead of doing conversions from u64 to Felt here, we could do the following (see the sketch below):

  1. Define static arrays for all periodic columns such that these arrays already contain Felt values.
  2. Inside get_periodic_column_values(), convert these arrays to vectors (which should be just memory copy operations) and return them.
    The performance benefit of the above is probably very minor, so this is definitely not a high-priority item.
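
A minimal sketch of the static-array approach in Rust (the Felt type below is a toy stand-in with a const constructor; the real field element type would come from the math library):

#[derive(Clone, Copy)]
pub struct Felt(u64);

impl Felt {
    pub const fn new(value: u64) -> Self {
        Felt(value)
    }
}

// Periodic column values are baked in as Felts at compile time, so no
// u64-to-Felt conversion is needed at runtime.
pub const CYCLE_MASK: [Felt; 4] = [Felt::new(1), Felt::new(0), Felt::new(0), Felt::new(0)];

pub fn get_periodic_column_values() -> Vec<Vec<Felt>> {
    // Turning a fixed-size array into a vector is just a memory copy.
    vec![CYCLE_MASK.to_vec()]
}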

Throw error when boundary_constraints is empty or omitted

We should make it a requirement to have boundary constraints. When this is updated, the mdbook docs section about boundary_constraints should also be updated.

  • update in the parser: boundary_constraints sections should have at least one boundary constraint
  • update in the IR: at least one boundary_constraints source section should be defined
  • update in mdbook docs as noted here: #36 (comment)

Add support for grouping of trace columns to parser

We should allow users to name trace columns in a group as an array. For example:

trace_columns:
    main: [x, y, a[4], z]

Here the main trace contains 7 columns, 4 of which are grouped under a. These columns can be accessed using their local indices inside a (e.g., a[1] refers to the second column in group a).

Allow auxiliary columns to be defined before main trace columns inside trace_columns section

Currently, inside the trace_columns section, the parser expects the main trace columns to be defined before the auxiliary columns. We should consider allowing users to define the auxiliary columns before the main columns. However, if it is preferable to keep the convention that main trace columns are defined first, we should show users a better error message about it.

Consider making trace reference more general in the IR

Currently, in the Operation enum the way we reference trace columns is via MainTraceCurrentRow and MainTraceNextRow (and similar for auxiliary trace). This hard-codes the evaluation frame to work only with two consecutive rows. Instead, we could make the structure more general by putting the row offset into the enum itself. For example, the enum could look like this:

pub enum Operation {
    Const(u64),
    MainTraceRow(usize, usize),
    AuxTraceRow(usize, usize),
    ...
}

In MainTraceRow and AuxTraceRow, the first usize refers to the column index and the second to the row offset. So, for example, MainTraceRow(1, 0) means column 1 at the current row, while MainTraceRow(1, 1) means column 1 at the next row.
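
Resolving such a reference against the usual two-row evaluation frame could then look like this (a hedged sketch; names are illustrative):

// frame[0] is the current row, frame[1] is the next row.
fn resolve_main_trace_row(frame: &[Vec<u64>; 2], col: usize, row_offset: usize) -> u64 {
    // MainTraceRow(col, 0) -> current row; MainTraceRow(col, 1) -> next row.
    frame[row_offset][col]
}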

Formatting issue

Formatting issue in:

  1. Syntax overview/ built-in variables
  2. Syntax overview/ Delimiters and special characters
  3. AirScript example

Implement codegen option for generic constraint evaluation format

The output of this codegen option should be a JSON file with the following structure:

{
    num_polys: integer,
    num_variables: integer,
    constants: [array of field elements in the Goldilocks field],
    expressions: [array of expression nodes],
    outputs: [array of node references],
}

In the above, a node reference has the following structure:

{
  type: "POL | POL_NEXT | VAR | CONST | REF",
  index: integer,
}

Where:

  • POL refers to the value in the column at the specified index in the current row.
  • POL_NEXT refers to the value in the column at the specified index in the next row.
  • VAR refers to a public input or a random value at the specified index.
  • CONST refers to a constant at the specified index.
  • REF refers to a previously defined expression at the specified index.

An expression has the following structure:

{
  op: "ADD | SUB | MUL",
  lhs: node_reference,
  rhs: node_reference, 
}

Where ADD, SUB, and MUL are the corresponding operations in the Goldilocks base field.
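
For reference, a minimal sketch of this schema as Rust serde types (the names are illustrative, not an existing API in this repo):

use serde::Serialize;

#[derive(Serialize)]
#[serde(rename_all = "SCREAMING_SNAKE_CASE")]
enum NodeType {
    Pol,     // serialized as "POL"
    PolNext, // serialized as "POL_NEXT"
    Var,
    Const,
    Ref,
}

#[derive(Serialize)]
struct NodeReference {
    #[serde(rename = "type")]
    node_type: NodeType,
    index: usize,
}

#[derive(Serialize)]
#[serde(rename_all = "UPPERCASE")]
enum Op {
    Add,
    Sub,
    Mul,
}

#[derive(Serialize)]
struct Expression {
    op: Op,
    lhs: NodeReference,
    rhs: NodeReference,
}

#[derive(Serialize)]
struct ConstraintEvaluation {
    num_polys: usize,
    num_variables: usize,
    constants: Vec<u64>, // Goldilocks elements in canonical u64 form
    expressions: Vec<Expression>,
    outputs: Vec<NodeReference>,
}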

Code generation

To generate the code in the above format, we would need to do the following:

  1. Merge main and auxiliary trace columns together into a single list of columns. For example, if we have 5 main trace columns and 2 aux trace columns, we should end up with a list of either 9 or 11 columns, depending on the extension degree specified (i.e., 9 columns for quadratic extension and 11 columns for cubic extension; see the sketch after this list). This also means that the field extension degree should be an input parameter for this codegen option.
  2. Flatten public inputs into a single array of values and merge it with random values array. This array will represent the variables array for the expressions.
  3. Reduce all operations to the 3 supported operations in the base field. This also means that all operations over the extension field must be transformed into equivalent operations in the base field.
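
The column-count arithmetic from step 1, as a minimal sketch:

// Each auxiliary (extension field) column is flattened into ext_degree
// base field columns.
fn total_columns(num_main: usize, num_aux: usize, ext_degree: usize) -> usize {
    num_main + num_aux * ext_degree
}

// total_columns(5, 2, 2) == 9 (quadratic); total_columns(5, 2, 3) == 11 (cubic)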

In addition to the above, we also need to add constraint merging logic to the constraint evaluation description. There should be only 3 entry points in the outputs section:

  • One entry point to compute merged boundary constraints against the first step.
  • One entry point to compute merged boundary constraints against the last step.
  • One entry point to compute merged transition constraints.

The logic for merging transition constraints is described here and the logic for computing and merging boundary constraints is described here.

A few other things to consider:

  • How to represent periodic columns? I am not yet sure if we can use constants or variables to represent them. We might have to introduce a new type of column at the top level.
  • We might need to include a column which represents successive powers of the generator (i.e., $\omega^0, \omega^1, \ldots, \omega^{n-1}$). This might be implicit though.
  • For merging columns, we will need to assume that extra randomness is provided by the verifier. These random values would be included in the variables section. We may also need to add degree adjustment factors to the variables array.

Implementation approach

It would probably make sense to split the implementation of this into several steps. The first step should probably not involve constraint merging and should have one output for each boundary and transition constraint. Only after this is working should we implement constraint merging.

Throw error for boundary constraints using `$rand` against main trace

The use of $rand random values in boundary constraints should be limited to auxiliary constraints, i.e., to boundary constraints against columns from the auxiliary trace.

We need to throw an error if $rand is used in boundary constraints defined against main trace columns. This should be handled in the IR.

For example, this should be an error:

...
trace_columns:
    main: [a, b]
...
boundary_constraints:
    enf b.first = a + $rand[0]
...

See:
#36 (comment)
#36 (comment)

Tracking issue: core functionality for Miden VM constraints

Goal(s)

  • functionality for defining ~90% of constraints for Miden VM and Polygon Zero
  • codegen for hardware acceleration
  • improved testing for code correctness

Details

Functionality for Miden VM & Polygon Zero

In order to define the constraints on the stack overflow column in Miden VM, we need a way to declare public input vectors of unknown size.

We need to make the process of defining boundary constraints more flexible for Polygon Zero. We can do this by making it possible for them to define Lagrange polynomials directly. This requires:

  • enabling declaration of validity constraints (constraints against a single row that don't follow the boundary constraint convenience syntax used by Miden VM)
  • giving access to the x value in each row via a new built-in $x

Codegen for hardware acceleration

  1. Add Sub to the IR
  2. #56

Improved testing

We need unit tests for the IR and the codegen modules. We should come up with a good setup (possibly using an external tool) that keeps our tests simple and readable and can be used for all codegen cases.

Tasks

  1. IR v0.2
    Overcastan
  2. grjte
  3. enhancement v0.2
    grjte
  4. IR parser v0.2
    tohrnii

Tasks

  1. IR good first issue
    Overcastan
  2. codegen
    Overcastan

Working group:

@Al-Kindi-0, @grjte, @Overcastan, @tohrnii

Workflow
  • Discussion should happen here or in the related sub-issues.
  • PRs should only be merged by the coordinator, to ensure everyone is able to review.
  • Aim to complete reviews within 24 hours.
  • When a related sub-issue is opened:
    • add it to the list of sub-issues in this tracking issue
  • When opening a related PR:
    • request review from everyone in this working group
  • When a sub-issue is completed:
    • close the related issue with a comment that links to the PR where the work was completed

Coordinator: @tohrnii

The working group coordinator ensures scope & progress tracking are transparent and accurate. They will:

  • Merge approved PRs after all working group members have completed their reviews.
    • add the PR # to the relevant section of the current tracking PR.
    • close any completed sub-issue(s) with a comment that links to the PR where the work was completed
  • Monitor workflow items and complete anything that slips through the cracks.
  • Monitor scope to see if anything is untracked or unclear. Create missing sub-issues or initiate discussion as required.
  • Monitor progress to see if there's anything which isn't moving forward. Initiate discussion as required.
  • Identify PRs with especially significant changes and add @grjte and @bobbinth for review.

Throw error when `public_inputs` is empty or omitted

At least one public input should be required, so this source section should be required and non-empty.

We should add tests and errors for the following cases:

  • no public_inputs section - this error & test should probably happen in the IR
  • public_inputs section is empty - this error & test can happen in the parser

Initial parsing of constraints against the `clk` column

To start the project very simply, we should define the constraints and then the grammar to parse the following constraints against Miden VM's clock cycle column using LALRPOP:

  • One boundary constraint asserting that clk at the first step is 1.
  • One transition constraint enforcing that clk' = clk + 1.

Throw error when `trace_columns` do not contain `main` declaration

Currently it's possible to declare a trace columns source section without declaring the shape of the main execution trace. This shouldn't be possible. A new test should be added when it is fixed.

This is wrong but currently transpiles without error:

trace_columns:
    aux: [a, b]

add support for random values

We need access to random values when evaluating transition constraints over the auxiliary trace. The main complication with these is that they may be values in the extension field (and will be, in Miden's case), whereas everything that's been implemented so far (for the main trace) only requires values from the base field.

We also need to determine how random values are going to be referenced. Based on the previous discussion, it seems like we should refer to them by $rand[0], $rand[1], etc.

  • lexer
  • parser
  • IR
  • codegen

Throw error for unsatisfiable transition constraint due to base vs ext field

This is an example of a problematic constraint:

enf b = a + $rand[0]

Context from Bobbin:
This probably should be invalid as this constraint should never be satisfied: the right hand side is an extension field element, while the left hand side is a base field element. If the value in $rand[0] is truly random, these will never be equal.

We'll need to think about a general rule which can describe this.
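
One candidate rule, sketched in Rust (hypothetical types, not the repo's actual implementation): infer bottom-up whether each expression can involve extension field values, and reject constraints whose two sides live in different fields.

#[derive(Clone, Copy, PartialEq)]
enum FieldDomain {
    Base,
    Extension,
}

// Any subexpression touching the extension field (e.g. $rand) lifts the
// whole expression into the extension field.
fn combine(lhs: FieldDomain, rhs: FieldDomain) -> FieldDomain {
    if lhs == FieldDomain::Extension || rhs == FieldDomain::Extension {
        FieldDomain::Extension
    } else {
        FieldDomain::Base
    }
}

// For `enf b = a + $rand[0]` over the main trace: lhs is Base, rhs is
// Extension, so the constraint is rejected. The precise general rule
// still needs discussion.
fn check_constraint(lhs: FieldDomain, rhs: FieldDomain) -> Result<(), String> {
    if lhs != rhs {
        return Err("constraint equates base field and extension field expressions".into());
    }
    Ok(())
}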

Originally posted by @bobbinth in #36 (comment)

Consider adding sections for random values and validity constraints

For some backends it may be useful to have additional sections defined which could make constraint descriptions cleaner.

Random values

One such section could be random_values. This section would enable naming/grouping of random values, similar to how the trace_columns section does for trace columns. The syntax for this section could look as follows:

random_values:
    rand: [15]

The above specifies that there should be 15 random values available for use in auxiliary trace constraints, and that these random values will be available under the variable $rand.

Similar to trace columns, we could name various values as follows:

random_values:
    rand: [a, b]

The above specifies that the names a and b could be used to refer to specific random values, and also that the $rand variable could be used to refer to the entire vector.

Validity constraints

Another such section could be validity_constraints. These constraints would be similar to transition constraints, with the only exception that they would work only with the current row of the trace. For example:

trace_columns:
    main: [a, b, c]

validity_constraints:
    enf a^2 - a = 0

Another option is to determine automatically whether a constraint is a validity constraint or a transition constraint, in which case we don't need an additional section. Though, we would then need to rename the transition_constraints section to something more general.

Add handling of public inputs

We need to add a way to specify boundary constraints that rely on public inputs.

  • clarify syntax for declaring & referring to public inputs in the Air DSL
  • handle parsing of public input declarations and references
  • add public inputs to IR
  • handle codegen of public inputs and boundary constraints that depend on public inputs

add support for $main and $aux built-ins

Currently, we have the built-in $rand which allows accessing a random value by index, e.g. $rand[0].

It would be convenient to do the same thing with the main and auxiliary execution traces and allow columns to be accessed by index rather than exclusively by their column identifier.

This requires:

  • add lexing/parsing for $main[n], $aux[n], $main[n]', and $aux[n]' where n is a number. This should only be allowed in transition constraint expressions. The AST needs to be updated to allow the new column representation as well.
  • update the IR to process and validate columns accessed this way when building the transition constraints graph

Tracking issue: Improved ergonomics for defining constraints in AirScript

Goal(s)

  • better ergonomics when defining constraints in AirScript

Details

To improve the ergonomics of the AirScript language, we should add support for:

  • defining intermediate variables with the let keyword with these 3 types: scalars, vectors, matrices
  • declaring named constants with these 3 types: scalars, vectors, matrices
  • grouping columns (e.g.: a[4]) for
    • declarations in the trace_columns section
    • codegen referencing groups of columns (e.g. for generating one of the structs Polygon Zero uses when defining constraints)

Tasks

  1. IR codegen parser v0.2
    tohrnii
  2. IR codegen parser v0.2
    tohrnii
  3. parser v0.2
    tohrnii
  4. IR v0.2
    tohrnii

Tasks

  1. enhancement v0.3
    tohrnii
  2. enhancement v0.2
    Overcastan
  3. v0.2
    tohrnii
  4. enhancement v0.2
    Overcastan

Working group:

@Al-Kindi-0, @grjte, @Overcastan, @tohrnii

Workflow
  • Discussion should happen here or in the related sub-issues.
  • PRs should only be merged by the coordinator, to ensure everyone is able to review.
  • Aim to complete reviews within 24 hours.
  • When a related sub-issue is opened:
    • add it to the list of sub-issues in this tracking issue
  • When opening a related PR:
    • request review from everyone in this working group
  • When a sub-issue is completed:
    • close the related issue with a comment that links to the PR where the work was completed

Coordinator: @tohrnii

The working group coordinator ensures scope & progress tracking are transparent and accurate. They will:

  • Merge approved PRs after all working group members have completed their reviews.
    • add the PR # to the relevant section of the current tracking PR.
    • close any completed sub-issue(s) with a comment that links to the PR where the work was completed
  • Monitor workflow items and complete anything that slips through the cracks.
  • Monitor scope to see if anything is untracked or unclear. Create missing sub-issues or initiate discussion as required.
  • Monitor progress to see if there's anything which isn't moving forward. Initiate discussion as required.
  • Identify PRs with especially significant changes and add @grjte and @bobbinth for review.

Add section for random values and allow declaring names for accessing random values

For some backends it may be useful to have a random_values section. This section would enable naming/grouping of random values, similar to how the trace_columns section does for trace columns. The syntax for this section could look as follows:

random_values:
    rand: [15]

The above specifies that there should be 15 random values available for use in auxiliary trace constraints, and that these random values will be available under the variable $rand.

Similar to trace columns, we could name various values as follows:

random_values:
    rand: [a, b]

The above specifies that the names a and b could be used to refer to specific random values, and also that the $rand variable could be used to refer to the entire vector.

Originally posted by @bobbinth in #53.

Add unit tests for codegen

The initial codegen is missing some things and does not yet produce valid code, since it was primarily targeting generation of constraints.

The following should be added:

  • any missing required codegen in order for output files to be valid
  • unit tests for each of the generated methods/structs

refactor IR for periodic columns

  • add the cycle length of the periodic column to the IdentifierType::PeriodicColumn enum variant (see the sketch below)
  • update the insertion of periodic columns into the graph to include the cycle length
  • change the computation of degrees so that nothing needs to be passed to the method
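
An illustrative shape for the first item (hypothetical, not the repo's actual definition):

pub enum IdentifierType {
    // Carrying the cycle length directly in the variant lets degree
    // computation read it without any extra parameters.
    PeriodicColumn { index: usize, cycle_len: usize },
    // ... other identifier kinds elided
}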

Add support for declaring named constants (scalars, vectors, matrices)

We should add support for constants. The syntax was defined by @bobbinth in the original Constraints Description Language discussion here. Taken directly from @bobbinth's original post:

Constants could be of the following three types:

  • scalars
  • vectors
  • matrices

The constants should be declared in a separate section.

Declaring constants could be done like so:

constants:
  a: 123 // a scalar constant
  b: [1, 2, 3] // a vector constant
  c: [
    [1, 2, 3],
    [4, 5, 6],
  ] // a matrix constant

Referring to vector constants inside expressions could be done with index notation:

let x = b[1]
let y = c[0][2]

Something like this could also be possible:

let x = c[1][1..3] // this sets x to [5, 6]

Other open suggestions by @grjte and @bobbinth to easily distinguish between trace columns and constants:

  • Use CAPITALS for constant names. If we do this, we should decide whether to show the user an error or a warning when the convention is not followed.
  • Another option is to prefix trace columns with some special symbol.

This task involves making changes to the parser, IR and codegen.

  • parser
  • IR
  • codegen

Add validity constraints

We should add support for defining validity constraints. These constraints would be similar to transition constraints, with the only exception that they would work only with the current row of the trace. For example:

trace_columns:
    main: [a, b, c]

integrity_constraints:
    enf a^2 - a = 0    # this is a validity constraint

These constraints should be in the same section as transition constraints; however, we should change the name of the combined section to integrity_constraints (or something similar). Also, in the IR we should probably still have a single algebraic DAG, but we could mark entry points as transition vs. validity constraints.
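
A sketch of how entry points could be tagged in a single DAG (hypothetical names):

enum ConstraintKind {
    Validity,   // applies to a single row
    Transition, // applies to two consecutive rows
}

struct EntryPoint {
    root: usize, // index of the constraint's root node in the DAG
    kind: ConstraintKind,
}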

Originally posted by @bobbinth in #53.

Improve IR tests

The IR unit tests currently only check that errors were not thrown when building the AirIR from the parsed source. We should also ensure that the resulting IR is as expected, and more rigorous testing should be added for edge cases.

Replace exponentiations with multiplications whenever possible

Comment for the future: exponentiations could be really expensive, especially for constant-time implementations (like the one we are currently using). So, whenever possible, we should replace them with multiplications or more specialized operations.

For example, here, instead of doing (current[0]).exp(E::PositiveInteger::from(2_u64)) we should be doing (current[0] * current[0]) or current[0].square().
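
A hedged sketch of this rewrite at the IR level (the Expr type is a simplified stand-in for the real operation graph):

#[derive(Clone)]
enum Expr {
    Const(u64),
    Mul(Box<Expr>, Box<Expr>),
    Exp(Box<Expr>, u64),
}

fn lower_small_exp(base: Expr, exp: u64) -> Expr {
    match exp {
        0 => Expr::Const(1),
        1 => base,
        // x^2 becomes x * x, avoiding a constant-time exponentiation.
        2 => Expr::Mul(Box::new(base.clone()), Box::new(base)),
        // Larger powers could use square-and-multiply; fall back for now.
        _ => Expr::Exp(Box::new(base), exp),
    }
}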

Also, if we know that a constant value fits in a u32, we should try to use conversions from u32. For example: E::from(2_u32) instead of E::from(2_u64), as we may come up with a more efficient reduction for smaller values later on.

Originally posted by @bobbinth in #58 (comment)

Throw error when transition_constraints is empty or omitted

We should make it a requirement to have transition constraints. When this is updated, the mdbook docs section about transition_constraints should also be updated.

  • update in the parser: transition_constraints sections should have at least one transition constraint
  • update in the IR: at least one transition_constraints source section should be defined
  • update in mdbook docs as noted here: #36 (comment)

Add handling of periodic columns

Periodic columns are used for transition constraints. The syntax has already been defined, but we need to add support for them:

  • add parsing of periodic columns
  • add periodic columns to IR (and IR's AlgebraicGraph)
  • update degree calculation for periodic columns
  • add codegen of periodic columns
  • update codegen of transition constraints to use periodic columns

Clarify syntax for core language in example constraints file

Once we implement a core subset of everything discussed here, we will be able to define the majority of Miden VM's constraints using this DSL.

To identify & settle questions for this first core version, it's useful to see what our constraints actually look like in this language for a reasonably self-contained set of constraints.

@tohrnii has defined the constraints for Miden's bitwise chiplet here according to the previously referenced discussion.

Let's discuss/adjust & agree on any questions related to this file, then possibly expand a bit to match the scope of the first milestone for this project. (We will want to include boundary_constraints, and possibly the aux trace_columns and use imports as well.)

This will give us a guiding example as we move forward and ensure that the language is clear, consistent, and usable.

add auxiliary trace handling

Once #14 is handled, we'll need to add the IR and codegen for the auxiliary trace:

  • add auxiliary trace to the IR
  • add aux trace boundary constraint handling
  • add aux trace transition constraint handling with random values in algebraic graph and handling of base vs. extension field
  • add aux trace codegen

Add support for intermediate variables (scalars, vectors, matrices)

We should add support for variables. The syntax was defined by @bobbinth in the original Constraints Description Language discussion here. Taken from @bobbinth's original post:

A variable is defined using the let keyword, e.g., let a = b * c.

Variables could be of the following three types:

  • scalars
  • vectors
  • matrices

Declaring variables could be done like so:

trace_columns:
  main: [a, b, c]
transition_constraints:
  let x = a + 123 // a scalar variable
  let y = [a + 1, b + 2, c + 3] // a vector variable
  let z = [
    [a + 1, b + 2, c + 3],
    [a + 4, b + 5, c + 6],
  ] // a matrix variable

Referring to vector variables inside expressions could be done with index notation:

let a = [m, n]
let x = a[0] + 1

We could build variables from column references as well:

trace_columns:
  main: [a, b]

let x = [a, b]   // x is a vector with current values from a and b
let y = [a', b'] // y is a vector with next values from a and b

Variables should only be defined in boundary_constraints and transition_constraints sections.

This task involves making changes to the parser, IR and codegen.

  • parser
  • IR
  • codegen

Add `Sub` to IR

Currently, the IR has the operations Add and Neg, but not Sub. For future changes, it will be easier if the IR has Sub as well.

This requires changes to:

  • IR
  • codegen
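
A minimal sketch of the change (NodeIndex is illustrative; the variant shape mirrors the existing binary operations):

type NodeIndex = usize;

pub enum Operation {
    Add(NodeIndex, NodeIndex),
    Sub(NodeIndex, NodeIndex), // new: a - b without building Add(a, Neg(b))
    Neg(NodeIndex),
    // ... other variants elided
}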

identify & warn for identical periodic columns

    Comment for the future: what should we do if two periodic columns are identical (same cycle length and values)? Should it be an error? A warning? Or something else?

Originally posted by @bobbinth in #24 (comment)

This should probably be handled as a warning.

We could also consider doing an optimization to identify duplicate periodic columns and always reference the one that was declared first instead, then remove the duplicate from the generated Air. This may be more work than it's worth though, and is lower priority than issuing a warning.
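
A hedged sketch of that deduplication pass (assuming periodic column values are compared as raw u64s):

use std::collections::HashMap;

// Map every periodic column to the index of its first identical occurrence,
// warning on duplicates. Identical values imply identical cycle lengths.
fn canonicalize_periodic(columns: &[Vec<u64>]) -> Vec<usize> {
    let mut first_seen: HashMap<&[u64], usize> = HashMap::new();
    let mut canonical = Vec::with_capacity(columns.len());
    for (i, col) in columns.iter().enumerate() {
        let first = *first_seen.entry(col.as_slice()).or_insert(i);
        if first != i {
            eprintln!("warning: periodic column {i} duplicates column {first}");
        }
        canonical.push(first);
    }
    canonical
}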

add new built-in $x for accessing x value at each row

In order to define some constraints, we may want to be able to define Lagrange polynomials directly. In such cases, the domain value that is used for interpolation is needed, so we need a shortcut for referring to this x-value at each row.

We can do this by adding a new built-in value $x that represents x at any given row.

Here's an example from @dlubarov at Polygon Zero of how this could give more flexibility in defining constraints:

// Boundary constraint.
a * l_0(x) = 0

// Every other row.
a * odd_row(x) = 0

def odd_row(x) = (x^(n/2) - 1)

def l_0(x) = (x^n - 1) / (x - 1)

Required changes:

  • parser
  • IR
  • codegen

This adds flexibility to AirScript, but Miden VM does not need this at the moment, so we could keep updates to the miden codegen very minimal (i.e. just throw an error if this is used)

List Comprehension

It would be nice to have Python-style list comprehensions supported in AirScript. For example:

trace_columns:
  main: [a, b, c[4]]

# raise value in the current row to power 7
let x = [col^7 for col in c]

# raise value in the next row to power 7
let y = [col'^7 for col in c]

In both cases, the result would be a vector of 4 elements.

We could also add support for iterating over two lists simultaneously like:

trace_columns:
  main: [a, b, c[4], d[4]]

let diff = [x - y for (x, y) in (c, d)]

We could also add support for iterating and enumerating like:

trace_columns:
  main: [a, b, c[4]]

let x = [2^i * c for (i, c) in (0..3, c)]

We could also support list folding. There are two possible syntaxes:

trace_columns:
  main: [a, b, c[4], d[4]]

# compute sum of products of values in c and d
let x += c * d for (c, d) in (c, d)

# compute product of sums of values in c and d
let y *= c + d for (c, d) in (c, d)

OR

trace_columns:
  main: [a, b, c[4], d[4]]

# compute sum of products of values in c and d
let x = sum([c * d for (c, d) in (c, d)])

# compute product of sums of values in c and d
let y = prod([c + d for (c, d) in (c, d)])

The syntax was proposed by @bobbinth here.

Tracking issue: AirScript modularity

Goal(s)

  • add modules & imports
  • support for functions
  • support for evaluators
  • support for selectors

Details

These features are described in the original AirScript discussion.

Modules & imports

We want to be able to define functionality in one file and use it in another, e.g.:

use: bar

trace_columns:
  main: [a, b, c, d]
  aux: [e, f]

enf bar(main[0..2], aux[0..1])

Functions

Add function support as described here.

Evaluator functions

Add support for evaluator functions as described here.

Selectors

Add selector support as described here.

Tasks

  1. enhancement v0.3
    bitwalker
  2. enhancement
    tohrnii
  3. enhancement v0.3
    tohrnii
  4. enhancement v0.3
    tohrnii

Tasks

No tasks being tracked yet.

Working group:

@tohrnii, @grjte, @Overcastan, @Al-Kindi-0

Workflow
  • Discussion should happen here or in the related sub-issues.
  • PRs should only be merged by the coordinator, to ensure everyone is able to review.
  • Aim to complete reviews within 24 hours.
  • When a related sub-issue is opened:
    • add it to the list of sub-issues in this tracking issue
  • When opening a related PR:
    • request review from everyone in this working group
  • When a sub-issue is completed:
    • close the related issue with a comment that links to the PR where the work was completed

Coordinator: @tohrnii

The working group coordinator ensures scope & progress tracking are transparent and accurate. They will:

  • Merge approved PRs after all working group members have completed their reviews.
    • add the PR # to the relevant section of the current tracking PR.
    • close any completed sub-issue(s) with a comment that links to the PR where the work was completed
  • Monitor workflow items and complete anything that slips through the cracks.
  • Monitor scope to see if anything is untracked or unclear. Create missing sub-issues or initiate discussion as required.
  • Monitor progress to see if there's anything which isn't moving forward. Initiate discussion as required.
  • Identify PRs with especially significant changes and add @grjte and @bobbinth for review.

Rename this DSL

We need to come up with a new name for this DSL that will ideally tick these boxes:

  1. not in use
  2. easy to pronounce
  3. distinct enough to search for
  4. no negative connotations
