
Ash Framework

Welcome! Here you will find everything you need to know to get started with and use Ash. This documentation is best viewed on hexdocs.

Dive In

About the Documentation

Tutorials walk you through a series of steps to accomplish a goal. These are learning-oriented, and are a great place for beginners to start.


Topics provide a high-level overview of a specific concept or feature. These are understanding-oriented, and are perfect for discovering design patterns, features, and tools related to a given topic.


How-to guides are goal-oriented recipes for accomplishing specific tasks. These are also good to browse to get an idea of how Ash works and what is possible with it.


Reference documentation is produced automatically from our source code. It comes in the form of module documentation and DSL documentation. This documentation is information-oriented. Use the sidebar and the search bar to find relevant reference information.

Tutorials


Topics

About Ash

Resources

Actions

Security

Development

Advanced


Cookbook


Reference

Packages

The Ash ecosystem consists of numerous packages, all of which have their own documentation. If you can't find something in this documentation, don't forget to search in any potentially relevant package.

Data Layers

API Extensions

Web

Finance

Resource Utilities

  • AshOban | Background jobs and scheduled jobs for Ash, backed by Oban
  • AshArchival | Archive resources instead of deleting them
  • AshStateMachine | Create state machines for resources
  • AshPaperTrail | Keep a history of changes to resources
  • AshCloak | Encrypt attributes of a resource

Admin & Monitoring

Testing

  • Smokestack | Declarative test factories for Ash resources


ash's Issues

Proposal: text filter parser

It would be pretty easy to support, in ash core, a text filter parser. It could also potentially be implemented as an extension/external tool.

We should be able to parse a string like (name == "Zach" or id in [1, 2, 3])
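
As a very rough sketch of one approach (all names below are hypothetical, not a committed design), Elixir's own parser could handle tokenizing, with a small translation layer on top:

defmodule TextFilterParser do
  # Parses a boolean filter string into a nested filter structure.
  # Code.string_to_quoted/1 parses without evaluating anything.
  def parse(string) do
    with {:ok, ast} <- Code.string_to_quoted(string) do
      {:ok, to_filter(ast)}
    end
  end

  defp to_filter({:or, _, [left, right]}), do: {:or, to_filter(left), to_filter(right)}
  defp to_filter({:and, _, [left, right]}), do: {:and, to_filter(left), to_filter(right)}
  defp to_filter({:==, _, [{field, _, nil}, value]}), do: {field, [eq: value]}
  defp to_filter({:in, _, [{field, _, nil}, values]}), do: {field, [in: values]}
end

TextFilterParser.parse("(name == \"Zach\" or id in [1, 2, 3])")
# => {:ok, {:or, {:name, [eq: "Zach"]}, {:id, [in: [1, 2, 3]]}}}

A real implementation would want its own grammar (or at least validation against the resource's attributes), but the shape would be similar.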

Validate that the data layer supports composite primary keys

Is your feature request related to a problem? Please describe.
Currently we just assume that the data layer supports composite primary keys.

Describe the solution you'd like
composite_primary_key needs to be added to the can?/2 functionality of data layers, and validated at compile time.
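
A hypothetical sketch of what that could look like for a data layer (other behaviour callbacks elided):

defmodule MyApp.SimpleDataLayer do
  # The proposed feature flag: data layers that can't handle multi-field
  # primary keys return false, so resources that use them with a composite
  # primary key fail at compile time instead of at runtime.
  def can?(_resource, :composite_primary_key), do: false
  def can?(_resource, _other_feature), do: true
end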

Document data layer callbacks

The Ash.DataLayer behaviour is very complex, and will only get more and more complex over time. We need to document its callbacks well.

Support lazy actor evaluation

If we make it part of the check behavior for each check to express which parts of the actor it needs to access, and once we support arbitrary calculated fields in ash core, we could support lazy actor evaluation when the actor is a resource. This would allow users to start a request pipeline with just a user id, and the authorizer would request actor data as it becomes necessary.

Support calculated attributes

The default is for this information to not be populated, and to be populated on demand. So in the same vein as %Ecto.Association.NotLoaded{}, we're going to want %Ash.Attribute.NotLoaded{} (and frankly we may want to create our own %Ash.Relationship.NotLoaded{}). The query will support specifying a list of calculated fields to load. Additionally, we will want a way to load attributes after the fact, via MyApi.load_attributes(records, [:full_name]).

We will most likely also need to have a configurable option to load some fields by default (full_name, for example, is very cheap to load).

There are four scenarios we would eventually need to support:

1.) Embedding statements into the query. This will need to be implemented as a data layer feature. These would be usable in filters/sorts. This is out of scope for this issue, but you might see something like:

postgres do
  source :full_name, fragment("? || ' ' || ?", record.first_name, record.last_name)
end

2.) Something derived functionally, receiving a resource and returning a value. This would be the first one we implement and would be quite easy to do.

attributes do
  calculated_attribute :full_name, function: &MyMod.user_full_name/1
end

defmodule MyMod do
  def user_full_name(%{first_name: first_name, last_name: last_name}) do
    first_name <> " " <> last_name
  end
end

3.) An intermediate representation that each data layer will have some amount of support for. Ash will use this to make smart optimizations about how to generate metadata. For instance, if a value is used in filters/sorts, it will pass it to the data layer to attach to the query. Otherwise, the engine can generate it (in parallel with all the other calculated attributes that weren't needed in the query) after the query is run. If relationships are referenced in the attribute, all attributes that require related values can be batched together, to ensure that no unnecessary data is queried and to mitigate the expense of joining/fetching related data.

This would look like elixir code, but it would only support a specific set of syntax, and there would be special bound variables available (just record to start). The data layer would express which expressions it supports as well.

attributes do
  calculated_attribute :full_name do
    record.first_name <> " " <> record.last_name
  end

  calculated_attribute :comment_count do
    count(record.comments, filter: [archived: false]) 
  end
end

4.) Inverting this by allowing it to be specified in the query.

query
|> calculate(:comment_count, count(record.comments, filter: [archived: false]))

This would have the added benefit of allowing the front end extensions to define their own method of passing this information in. For example, in JSON:API, we could support calculate[user][comment_count][type]=count&calculate[user][comment_count][relationship]=record.comments&calculate[user][comment_count][filter][archived]=false. We'd have to properly escape it, of course. That's ugly, but you'd typically just URL encode a JSON object to make a query like that, like so:

{
  "calculate": {
    "comment_count": {
      "type": "count",
      "relationship": "record.comments",
      "filter": {
        "archived": false
      }
    }
  }
}

Figure out handling engine failure

For ash_json_api we will be able to stop on the first error, but for ash_graphql we'll want all successful paths to return and to show errors at the path they occurred. Engine errors are generally just not handled well right now.

Support cross data layer filtering

We have the tools to support cross data layer filtering, but it will take a lot of extra rigging and there are some unknowns. This should be done after filters are revamped. Specifically, what we need to do is take the subsections of the filters that apply to other resources, fetch those resources, and rewrite the filter to reference the ids that apply.
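
As an illustrative sketch of that rewrite (all names hypothetical), suppose comments live in one data layer and their authors in another:

# 1. Run the author-specific subsection of the filter against the
#    authors' own data layer.
author_ids =
  :authors
  |> MyApi.query()
  |> filter(name: "Zach")
  |> MyApi.read!()
  |> Enum.map(& &1.id)

# 2. Rewrite the original comment filter to reference only ids, which
#    the comments' data layer can evaluate entirely on its own.
:comments
|> MyApi.query()
|> filter(author_id: [in: author_ids])
|> MyApi.read!()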

Proposal: Refactor `Filter` to be simpler, joined boolean expressions

Is your feature request related to a problem? Please describe.
The current implementation is basically the first stab I took a while back, and the strategy of %{ands: [], ors: [], not: nil} just doesn't make sense. Instead, we should just have a simple nested expression.

Describe the solution you'd like
For example:

%Ash.Filter{
  resource: resource,
  api: api,
  expr: %Ash.Filter.BooleanExpr{
    op: :and,
    left: %Ash.Filter.BooleanExpr{...},
    right: %Ash.Filter.BooleanExpr.Not{
      expr: %Ash.Filter.BooleanExpr{...}
    }
  }
}

Add an `effect` or `commit` step to the engine

Is your feature request related to a problem? Please describe.
Currently, the "result" of the engine is data, but for destroy actions, for instance, that really doesn't make sense. We want the data to be "fetch the record to delete", and then an "effect/commit" step which you can use to actually commit said changes. For updates/creates you want to generate the changeset and then the "effect/commit" would actually perform that change.

This is a really important change to make because currently authorization doesn't really work for creates/updates/destroys that aren't in transactions (the changes are made, even if the check part of authorization fails)
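
Conceptually, a destroy request might then be split like this (struct and field names are purely illustrative pseudocode, not an existing API):

# Phase 1 ("data"): fetch the record to delete; safe to run before
# authorization completes.
# Phase 2 ("commit"): performs the actual change; runs only after every
# authorization check has passed.
%Request{
  data: fn -> MyApi.get!(:post, post_id) end,
  commit: fn post -> data_layer_destroy(post) end
}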

Document the Interface

Right now the interface functions have no documentation (just the auto-generated options documentation)

Increase the elixir/erlang matrix we use for CI

We want to make sure we support more than just the latest elixir/erlang. Additionally, in our extension projects, we want to do the same thing, and also add the ash version to the matrix

Allow for expressing the behavior of related items when a record is deleted.

Is your feature request related to a problem? Please describe.

There are two versions of this, one implemented at the data layer, e.g ON DELETE CASCADE options in postgres, and one handled by application logic. Generally, implementing it in the database is the best way to ensure integrity. We don't currently support deleting with a query, but when we do it would be difficult (and maybe just completely unreasonable) to work these rules into that process.

However, we want to leverage that information to make smarter decisions, especially when we start adding caching layers. Additionally, we want to do it manually for data layers that don't support it.

The difficulty here is that if we have:

relationships do
  has_many :comments, MyApp.Comments, on_delete: :cascade
end

But they've configured it in their database as well, we need some way to reconcile those two configurations so we know that there isn't actually anything to do when something is deleted.

Support transactions

Is your feature request related to a problem? Please describe.
Currently, we don't support transactions. What we do support is splitting up the requests given to Ash.Engine and running some synchronously, and others asynchronously. Supporting transactions means three things:

1.) When the engine runs, get the unique data layers involved. Ask that data layer if it is in a transaction (will need to be added to the data layer behaviour). If it is, then all requests for that data layer need to be included in the synchronous request list.

2.) We will want to support some actions automatically creating a transaction when it is supported. For instance, creates and updates should run in a transaction when relationship changes are included.

3.) Potentially, a way to start transactions manually via ash, perhaps something like:
Ash.transaction(Ash.data_layer(resource), fn -> end)
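
For point 1, the data layer additions might look roughly like this (callback names are assumptions based on the description above); a data layer backed by Ecto could simply delegate to its repo:

defmodule MyApp.EctoDataLayer do
  # Tells the engine whether requests for this data layer must be run
  # synchronously (i.e. inside the caller's transaction).
  def in_transaction?(_resource), do: MyApp.Repo.in_transaction?()

  # Used by actions that automatically wrap their work in a transaction.
  def transaction(_resource, fun), do: MyApp.Repo.transaction(fun)
end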

Remove `name` and `type` from ash core

Currently, ash_postgres uses it to guess a default table name. We will just remove that default guess. ash_json_api uses it to guess a base route, and uses the type in its response/request parsing. We will make base_route explicit, and move the type configuration to json_api.

Validate that all related resources are in any given API

Is your feature request related to a problem? Please describe.
Right now you can only list a subset of your resources in the resources list on an API. The big problem that stems from that is what to do with resources that are related to one of the resources in an API but not included in the resource list.

Describe the solution you'd like
Validate that all relationships on each resource in the API have their destination also in the API.
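
A rough sketch of that check (assuming hypothetical helpers for listing an API's resources and a resource's relationships):

defp validate_related_resources!(api_resources) do
  # For every relationship on every resource in the API, ensure the
  # destination resource is also listed in the API.
  for resource <- api_resources,
      relationship <- relationships(resource),
      relationship.destination not in api_resources do
    raise CompileError,
      description:
        "#{inspect(resource)}.#{relationship.name} points at " <>
          "#{inspect(relationship.destination)}, which is not in the API's resource list"
  end

  :ok
end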

Proposal: implement limiting fields/relationships at the action level, so the extensions don't have to do it

Is your feature request related to a problem? Please describe.

Currently, ash_json_api supports fields in its DSL that limit the fields/attributes that appear in the JSON:API resource. By pushing this to the web layer, we lose the context of why the fields are limited in that specific case. Additionally, the extension needs to do all the work on its own to determine the relevant attributes, as opposed to just using information stored on the action.

Describe the solution you'd like

We accept options to include/exclude certain portions of the resource in an action. To start with, we should just keep it simple and use a keyword list like in the example below. Eventually, overriding resource behavior for specific actions may get its own DSL, as it will get much more complex if we lean into that pattern.

Describe alternatives you've considered
The alternative is leaving this declaration up to the extension using the resource

Express the feature either with a change to resource syntax, or with a change to the resource interface

actions do
  read :public_read, include: [
    attributes: [:first_name],
    relationships: [:friends]
  ]

  read :standard_read, exclude: [
    attributes: [:secret_field],
    relationships: [:family_members]
  ]

  read :admin_only_read  
end

Support expressing unique constraints in the resource

We want to support expressing unique constraints, which is as easy as unique?: true on an attribute when it's a single attribute. When it's multiple attributes, we will want something like this:

attributes do
  unique_constraint [:first_name, :last_name]
end

We will use this information to inform the changeset of these conditions, which will get us better error messages from the data layer. Additionally, we can validate certain assumptions that should hold, for instance that the destination_field of a has_one is unique?: true on the destination resource (otherwise it's a has_many). Let's make sure to do that validation as part of this change, or we can make a separate ticket.

Optimize manual policy check running

There are more than a few things we can do to support optimizations of manual checks:

1.) Batch preparations. Perhaps we can have each manual check expose a prepare function that returns things like side_load: [:relationship], so we can fetch all necessary data in one go (see the sketch after this list).
2.) Run checks in parallel if we know that we need to run more than one to invalidate/validate a scenario.
3.) Be smarter about choosing which check to run. Currently, we just choose the first unknown one; instead, we should choose the fact that will invalidate the most scenarios (which would be the same as the fact that could validate the most scenarios).
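
The prepare idea from point 1 might look like this (callback shapes hypothetical):

defmodule MyApp.Checks.AuthorIsActor do
  # Declares data requirements up front, so the engine can batch a single
  # side-load for all manual checks instead of one fetch per check.
  def prepare(_opts), do: [side_load: [:author]]

  def check(actor, records, _context, _opts) do
    # :author is already loaded thanks to the batched prepare step.
    Enum.filter(records, fn record -> record.author.id == actor.id end)
  end
end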

Figure out how to explain/fully utilize reverse relationships

Currently, the only way to involve other resources in a filter is to use the relationships from one resource to the other. However, this leads to an issue in the case of many to many relationships, when we can't turn a relationship filter into a simple field = value statement.

For example, in this request where students has a many_to_many relationship to classrooms:

:students
|> MyApi.query()
|> side_load(:classrooms)
|> MyApi.read()

We have to make a query to the classrooms table and include a filter for only classrooms that the student is in. The naive approach is to just get the ids off of the join table, and make a request to classrooms like so:

:classrooms
|> MyApi.query()
|> filter(id: [in: [1, 2, 3]])

However, due to the kinds of things we are doing in AshPolicyAccess, we want to use filters for authorization. Additionally, caching will want to leverage these filters. If we are informed of the reverse_relationship of a given relationship (in this case, the reverse of classrooms on the student resource would be students on the classroom resource), then we can make a much clearer statement:

:classrooms
|> MyApi.query()
|> filter(students: [id: [in: [1, 2, 3]]])

So if authorization for the classrooms resource is saying that you can see the classroom if you are a student in the classroom, it will know just from the statement that you are.

Support descriptions at every level of the DSL

We want to make sure that everything supports text descriptions, which can be used when scaffolding front end layers and building documentation (see the example after this list).

  • actions
  • relationships
  • identities
  • validations
  • calculations
  • aggregates
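
For illustration (the exact option placement is still to be designed), descriptions might be attached like this:

actions do
  read :default, description: "Returns every record the actor is allowed to see"
end

relationships do
  has_many :comments, MyApp.Comment,
    description: "All comments on the post, including archived ones"
end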

Look into supporting many to many relationships with duplicates in the join resource

Right now we don't support having duplicates in many_to_many relationships, so we'll need to document the limits around that: the primary key of the join resource must be (at least as it is in ash) the join keys. If they don't want to do that, then they should at a minimum define a unique constraint on the two join keys.

We also might want to figure out how to support duplicates (having multiple relationships to the same destination)

Make a public function that takes a list of resources and a filter, and applies it

This could be immensely useful, both for users of ash and for ash internals. This makes things like lifting joins easy. An example is offloading simple filters out of the database (perhaps for performance reasons).

This might look like:

query = 
  :user
  |> MyApi.query()
  |> MyApi.filter(id: 1)

[%User{id: 1}, %User{id: 2}] |> Ash.Filter.apply(query)
# => [%User{id: 1}]

Consider supporting richer relationship filters

Right now, a filter of related_thing: [admin?: true] means "there exists a related thing with admin?: true". Although it may be difficult to implement at the data layer level, we may want to support a few richer statements along those lines:

related_things: [all: [admin?: true]] and related_things: [exists: [admin?: true]] (with exists being assumed if you just say related_things: [admin?: true]).

Additionally, we may want to specify the behavior of certain filters that, in a database like postgres, would return true even if there was no related thing like related_things: [admin?: nil]. We may want to configure the behavior of that nil. Not sure exactly how it would play out, perhaps something like:

related_things: [exists_or_not: [admin?: nil]], which would be the equivalent of left joining and saying IS NULL, as in it would work even if the destination doesn't exist.

Those aren't perfect syntaxes, but it's enough to get the conversation started.

Figure out `impossible` in terms of filters

Right now we use a special key in filters, __impossible__: true when we've determined that it is not possible for any records to meet those criteria. We need to determine if it is safe/sensible to continue to use this, and make sure it is honored wherever possible.

Flesh out the data layer `can?` pattern

We need to figure out what kinds of things the data layer can? pattern needs to encompass. A few things that aren't accounted for now, but should be, are

  • data types
  • filter predicate types
  • various constraints

Support bulk creates/updates/destroy

We'll have to think through this, and making sure it is consistent with the rest of the framework capabilities will be difficult.

Preliminarily, I imagine we will support versions of update/destroy that take a query instead of a record. In the case of create, it would have to be something like a list of %{attributes: %{}, relationships: %{}} (see the sketch after the checklist below).

  • bulk creates
  • atomics on update
  • where clauses on atomic changes
  • bulk updates
  • bulk destroys
  • atomics on inserts
  • default to get and lock when update action can't be atomic
  • make managed relationships support bulk actions (and use bulk actions) #303
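
The query-based version mentioned above might look something like this (API shape entirely hypothetical):

# update_all/destroy_all would accept a query instead of a single record
:user
|> MyApi.query()
|> filter(active: false)
|> MyApi.update_all(%{archived: true})

# creates would accept a list of attribute/relationship maps
MyApi.bulk_create(:user, [
  %{attributes: %{name: "Zach"}, relationships: %{}},
  %{attributes: %{name: "Alice"}, relationships: %{}}
])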

Create a `predicate` behaviour, and implement it for all predicates

Right now, only equals and in filters get special treatment, but we want to make a "predicate" behavior and implement it for all of the current predicates. This will include a callback that takes an instance of the given predicate and another predicate and expresses mutual inclusion/exclusion. We can use this to make smarter filter subset logic with fewer false negatives (for instance, during strict check).
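
A minimal sketch of the behaviour (callback name and return values are placeholders):

defmodule Ash.Filter.Predicate do
  # Given two predicate structs, state their logical relationship so the
  # filter-subset logic can reason about them during strict check.
  @callback compare(predicate :: struct(), other :: struct()) ::
              :mutually_exclusive | :mutually_inclusive | :unknown
end

defmodule Ash.Filter.Eq do
  @behaviour Ash.Filter.Predicate
  defstruct [:field, :value]

  # eq: 1 and eq: 2 on the same field can never both hold
  def compare(%__MODULE__{field: field, value: v1}, %__MODULE__{field: field, value: v2})
      when v1 != v2,
      do: :mutually_exclusive

  def compare(_predicate, _other), do: :unknown
end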

DSL building utilities in ash that the extensions can use

Is your feature request related to a problem? Please describe.
All of the DSL building code looks roughly the same, and writing it manually in each extension/for each component of the DSL is error-prone and annoying.

Describe the solution you'd like
We should be able to put the DSL building code in core, and just call into it with a NimbleOptions schema (and some additional options related to building the DSL) and have it build it automatically.

Express the feature either with a change to resource syntax, or with a change to the resource interface

For example

# in Ash.Resource.Dsl.Attribute
...
defstruct [:type, :name, :allow_nil?, ...]
def schema() do
  [
    name: [
      type: :atom,
      required: true # required options in the schema would become positional arguments in the builder, so 'attribute name, type, ...'
    ],
    type: [
      type: {:custom, Ash.Type, :validate_type, []},
      required: true
    ],
    allow_nil?: [
      type: :boolean,
      required: false
    ]
  ...
  ]
end

# In the DSL code
dsl_spec = 
%Ash.Dsl.Section{
  name: :attributes,
  components: [
    %Ash.Dsl.Item{
      name: :attribute,
      creates: Ash.Resource.Dsl.Attribute,
      schema: Ash.Resource.Dsl.Attribute.schema()
    }
  ] 
}


Ash.DSLBuilder.build(dsl_spec)

Additionally, we should see if we can find a non-manual way to automatically include all of the DSL components/builders in the .formatter.exs exports.

Finally, this issue should involve implementing/supporting validations of DSL-created objects and DSL usage, for each project.

Expand the example documentation

The example on the readme should include

  • a short description of getting started, basically just a link to mix new or mix phx.new documentation.
  • a file path to their resources
  • an example of an API to contain the resources
  • an example of using the code api, e.g. `:user |> MyApi.query() |> filter(name: "zach") |> MyApi.read!()`

Use the option schemas to validate interface options

Is your feature request related to a problem? Please describe.
We define schemas for the options in Ash.Api.Interface. We use those for documentation currently but do not actually validate the options with them.

Describe the solution you'd like
It should be easy to put a step in each interface function to validate the opts before continuing.
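
Since the schemas already exist, each interface function could validate up front; a minimal sketch, assuming NimbleOptions-style schemas (schema contents and do_read/2 are hypothetical):

@read_opts_schema [
  filter: [type: :any],
  sort: [type: :any]
]

def read(resource, opts \\ []) do
  # Fail fast with a descriptive error before the engine does any work.
  case NimbleOptions.validate(opts, @read_opts_schema) do
    {:ok, validated} -> do_read(resource, validated)
    {:error, error} -> {:error, error}
  end
end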

Build/validate filter based policy checks at compile time

Currently, filter-based checks are built at runtime and can return errors. However, if we add provisions for the templating that we do (e.g. {:_actor, field}) to the validation that the filter parser does, we could validate the filters and build a partial filter at compile time. This would help spot errors quickly.

Runtime configuration

We need a story for runtime configuration. Generally speaking, we won't ever be able to support changing the details of a resource at runtime, so runtime configuration will most likely be relegated to specified portions of extensions, like maybe the prefix in ash_json_api, or table in ash_postgres. We are currently trying something out in ash_postgres that augments the repo pattern with additional configuration, and that may be a good strategy going forward.

Proposal: switch from the satsolver to a rules engine

Is your feature request related to a problem? Please describe.
The satsolver is used for filter subsets and for authorization, but a few things have become clear:

1.) filter subset logic requires the expression of constraints in the two filters provided in order to be accurate, due to the nature of predicates that are mutually exclusive/inclusive e.g [id: [not_in: [1, 2]]] and [id: [not_eq: 2, not_eq: 1]]. The naive choice for this example is to turn not_in into a set of not_eq and not_eq and in into a set of eq or eq. However, this pattern breaks down when you start talking about things like greater_than: 10 and its relationship to eq: 11. We can't turn greater_than: 10 into not_eq for all values less than 10. If we use a rules engine instead of a sat solver, we should be able to express many/most of these variations much more easily. Specifically, it can be part of the work to implement #18, essentially allowing predicates to state their rules/relationships to other predicates as part of their behavior. This shouldn't logically lead to false positives, because two unrelated predicates are assumed to be unrelated facts. But it would lead to false negatives.

2.) When building policy authorization, having to transpile the policy expression to a boolean statement is difficult, and often requires that the emitted scenarios lose some of the context that was used to generate them (in effect, policy -> boolean statement is a lossy translation). We want users to be able to choose between multiple methods of applying each individual policy (e.g. this policy can be applied as a filter to the data, vs. this policy must be expressed as a filter by the caller, vs. this policy can be figured out at runtime after fetching data), and because you can have clauses that are the same except for that detail, you lose the context when translating to boolean.

Describe the solution you'd like
Find and use/create a rules engine tailored to these use cases

The challenge will be setting up a rules engine that can partially evaluate/provide the scenarios so we can transform them into filter statements.

Extension Proposal: ash_twirp

**What is the purpose of the extension?**
twirp is an RPC framework. Ash resources ought to be capable of generating the protobuf definitions that twirp runs off of.

**How would it extend the DSL?**
I haven't looked into how we'd leverage it, but it was recommended by @keathley as a potentially good fit for Ash.

Get all `ash-project` repositories cleaned up and ready to be worked on

We should refer back to this list when setting up any new repos:

  • Delete/archive any current issues/tickets
  • Remove any unnecessary GH repos
  • Descriptions on all repos (under the repo name)
  • Topics for all repos (under description)
  • Remove Ashton, replace it with https://github.com/dashbitco/nimble_options, delete ashton repo
  • Add git_ops to each repo, create initial release
  • Publish to hex
  • Make the hex documentation the repo home-page
  • Licenses, with a badge
  • Contributor guidelines
  • Code of Conduct
  • Pull request template with commit name requirements
  • Issue templates
  • code coverage
  • Readme with at minimum a short summary and an example of usage (tickets created to this effect)
  • Any long-form writing moved to in-code documentation.
  • All todos into GH issues/triaged/removed if out of date
  • CI - with all of the steps runnable by mix check https://github.com/karolsluszniak/ex_check, running on a matrix of different elixir/downstream dependency versions (e.g Ecto, Phoenix)
  • Badges for all CI checks/anything relevant
  • CI automatically deploys when a new release is pushed
  • Ensuring only the maintaining team can push to master
  • Requiring PR approvers
  • Requiring PRs pass a Continuous Integration build
  • guidelines on using issues
  • Uniform GitHub labels
  • All public interfaces at minimum specced, but ideally with function/module docstrings. The primary one in ash was done, but I think we can push off more documentation for later.
  • All private modules with @doc false (we can be overeager with this for now, and do a more comprehensive documentation pass later) - done for ash
  • Logo?
  • Ensure reasonable test coverage for all repos (this shouldn't be part of the initial cleanup).
  • Ensure ash.formatter --check is being run on CI

Eliminate usage of/need for `primary_action!/2`

Is your feature request related to a problem? Please describe.
Currently, certain engine behaviors require that a primary action of some type exists on a specific resource, and this is only checked at runtime. Instead, we should find all of those cases and validate at compile time that we can't reach them without a default action. For instance, if a relationship to another resource exists, we should require that the other resource has a primary read. If that relationship is editable (all relationships are editable at the time of writing, but that will likely not always be the case), then we should require a primary create/update/destroy.

Proposal: Add pagination support in read actions

Is your feature request related to a problem? Please describe.
In the initial design of ash, pagination was included in core. Currently it's being done in extensions, as a simplification.

Describe the solution you'd like
Describe an interface for an "Ash.Paginator", and support a simple limit/offset paginator in core. Then, let read actions configure a paginator.

Describe alternatives you've considered
Pagination could just be done external to core, but I think there are enough "engine relevant" components to justify core support.

Express the feature either with a change to resource syntax, or with a change to the resource interface

For example

  actions do
    read :default, paginator: :simple # The default
  end
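
The Ash.Paginator interface itself might be as small as this (callback shape hypothetical):

defmodule Ash.Paginator do
  # Turns pagination options into query modifications; the simple core
  # implementation would apply limit/offset to the query.
  @callback paginate(query :: term(), opts :: Keyword.t()) ::
              {:ok, term()} | {:error, term()}
end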

