
bash's Introduction

For support, please post in the new Exercism forum. New posts here will be closed.


Welcome to Exercism

Where to open issues

For the time being we are triaging all issues from our forum. Please start a new topic there for your issue (presuming there isn't one already). Issues opened here will be automatically closed and you will receive a message redirecting you to the forum.

Feeling uncomfortable?

If you need to report a code of conduct violation, please email us at [email protected] and include [CoC] in the subject line. We will follow up with you as a priority.

Where to find the code

The code for the website lives in exercism/website. The code for the old website is in this repository, in the v1.exercism.io branch.

Who's behind Exercism?

Read about our Team on the site: https://exercism.org/team

bash's People

Contributors

alireza-taher, alirezaghey, bkhl, budmc29, dependabot[bot], dkinzer, edwin0258, ee7, erikschierboom, exercism-bot, glennj, guygastineau, isaacg, jaggededgedjustice, joeltaylor, kenden, kotp, kytrinyx, ljsr, nywilken, philosoft, quartzinquartz, rpalo, rpdelaney, sjwarner, sjwarner-bp, smarticles101, yamb00, zapanton, zengjiapei3000


bash's Issues

Invalid UUIDs in config.json

Looking over config.json, it seems like a couple of the UUIDs are invalid. This is mentioned in configlet issue 113, as older versions of configlet used an invalid UUID library.

triangle and atbash-cipher seem to be the only two invalid UUIDs.
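A quick way to spot them is to scan config.json for uuid values that do not match the expected 8-4-4-4-12 hexadecimal shape. A rough sketch, assuming GNU grep and that the key is spelled "uuid":

# List any "uuid" values that do not look like a well-formed UUID.
grep -oE '"uuid": *"[^"]*"' config.json \
  | grep -viE '"[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}"$'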

Topics: what is two-fer about?

In preparation for the launch of the Exercism redesign, we need to decide what topics to list for two-fer.
See #61 for context.

What topics would you say that two-fer touches upon?

Topics: what is difference-of-squares about?

In preparation for the launch of the Exercism redesign, we need to decide what topics to list for difference-of-squares.
See #61 for context.

@patbl - Having solved this, what would you say that difference-of-squares is about? Did you struggle with anything in particular? What did you learn?

Topics: what is word-count about?

In preparation for the launch of the Exercism redesign, we need to decide what topics to list for word-count.
See #61 for context.

What topics would you say that word-count touches upon?

Two-fer readme talks about tdd like it's the second exercise

So two-fer is meant to be the second exercise, but it's a little too difficult to be the second one on bash (986b124), so gigasecond is. However, because the READMEs are generated, the two-fer README explains test-driven development as though it were the second exercise. Is there a fix for this on the generation side, @kytrinyx? @exercism/configlet?

Verify contents and format of track documentation

Each language track has documentation in the docs/ directory, which gets included on the site
on each track-specific set of pages under /languages.

We've added some general guidelines about how we'd like the track to be documented in exercism/exercism#3315
which can be found at https://github.com/exercism/exercism.io/blob/master/docs/writing-track-documentation.md

Please take a moment to look through the documentation about documentation, and make sure that
the track is following these guidelines. Pay particularly close attention to how to use images
in the markdown files.

Lastly, if you find that the guidelines are confusing or missing important details, then a pull request
would be greatly appreciated.

Add helpful information to the SETUP.md

The contents of the SETUP.md file get included in
the README.md that gets delivered when a user runs the exercism fetch
command from their terminal.

At the very minimum, it should contain a link to the relevant
language-specific documentation on
help.exercism.io.

It would also be useful to explain in a generic way how to run the tests.
Remember that this file will be included with all the problems, so it gets
confusing if we refer to specific problems or files.

Some languages have very particular needs in terms of the solution: nested
directories, specific files, etc. If this is the case here, then it would be
useful to explain what is expected.


Thanks, @tejasbubane for suggesting that we add this documentation everywhere.
See exercism.io#2198.

How to set up a local dev environment

See exercism/problem-specifications#28

See issue #2092 for an overview of Operation Welcome Contributors.


Provide instructions on how to contribute patches to the exercism test suites
and examples: dependencies, running the tests, what gets tested on Travis-CI,
etc.

The contributing document
in the x-api repository describes how all the language tracks are put
together, as well as details about the common metadata, and high-level
information about contributing to existing problems, or adding new problems.

The README here should be language-specific, and can point to the contributing
guide for more context.

From the OpenHatch guide:

Here are common elements of setting up a development environment you’ll want your guide to address:

Preparing their computer
Make sure they’re familiar with their operating system’s tools, such as the terminal/command prompt. You can do this by linking to a tutorial and asking contributors to make sure they understand it. There are usually great tutorials already out there - OpenHatch’s command line tutorial can be found here.
If contributors need to set up a virtual environment, access a virtual machine, or download a specific development kit, give them instructions on how to do so.
List any dependencies needed to run your project, and how to install them. If there are good installation guides for those dependencies, link to them.

Downloading the source
Give detailed instructions on how to download the source of the project, including common missteps or obstacles.

How to view/test changes
Give instructions on how to view and test the changes they’ve made. This may vary depending on what they’ve changed, but do your best to cover common changes. This can be as simple as viewing an html document in a browser, but may be more complicated.

Installation will often differ depending on the operating system of the contributor. You will probably need to create separate instructions in various parts of your guide for Windows, Mac and Linux users. If you only want to support development on a single operating system, make sure that is clear to users, ideally in the top-level documentation.
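To make this concrete for the Bash track, here is a minimal sketch of the steps such a guide might cover; the package names, repository URL, and file names below are assumptions based on the conventions discussed in these issues:

# Download the source
$ git clone https://github.com/exercism/bash.git && cd bash

# Install the bats test runner (pick the line for your platform)
$ sudo apt-get install bats      # Debian/Ubuntu
$ brew install bats-core         # macOS with Homebrew

# View/test changes: run one exercise's tests against its example solution
$ cd exercises/two-fer
$ cp example.sh two_fer.sh
$ bats two_fer_test.sh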

Error Handling exercise planning

What should the error handling exercise cover for bash? The current list I can think of would be:

  • Argument validation
  • set -e
  • set -u

Are there others?

I'm not sure how to test usage of the set flags, though. Perhaps the argument validation would be enough; that could resolve the remaining concerns from #29.
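For illustration, a minimal sketch of the kind of solution such an exercise might expect; the exercise name, usage message, and output are all hypothetical:

#!/usr/bin/env bash
# error_handling.sh (hypothetical): greet exactly one person, failing loudly otherwise.
set -o errexit   # same as set -e: abort if any command fails
set -o nounset   # same as set -u: abort on use of an unset variable

main() {
    if (( $# != 1 )); then
        echo "Usage: error_handling.sh <person>" >&2
        exit 1
    fi
    echo "Hello, $1"
}

main "$@"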

Override probot/stale defaults, if necessary

Per the discussion in exercism/discussions#128 we
will be installing the probot/stale integration on the Exercism organization on
April 10th, 2017.

By default, probot will comment on issues that are older than 60 days, warning
that they are stale. If there is no movement in 7 days, the bot will close the issue.
By default, anything with the labels security or pinned will not be closed by
probot.

If you wish to override these settings, create a .github/stale.yml file as described
in https://github.com/probot/stale#usage, and make sure that it is merged
before April 10th.
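For reference, overriding is a matter of committing a small YAML file before that date; a sketch, written as shell commands, with key names taken from the probot/stale usage documentation and values that simply restate the defaults described above:

$ mkdir -p .github
$ cat > .github/stale.yml <<'EOF'
# Number of days of inactivity before an issue is marked stale
daysUntilStale: 60
# Number of days of inactivity after being marked stale before it is closed
daysUntilClose: 7
# Issues with these labels will never be considered stale
exemptLabels:
  - security
  - pinned
EOF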

If the defaults are fine for this repository, then there is nothing further to do.
You may close this issue.

Core exercises

Looking through config.json, I have noticed we have a few core exercises, but all exercises have an unlocked_by. I believe all 'optional' exercises need to be unlocked by core exercises (that's how it works on the Java track).

@exercism/bash Let me know if there is a reason we wouldn't have this policy, otherwise I'll adopt it and work on a fix.

We need a rough estimate of difficulty for each exercise

We've never really taken a pass through this track to figure out the relative difficulties of each exercise on a scale from 1 - 10 (1 == easy, 10 == hard).

If we could order the exercises so that the difficulty ramps up more gently, we can provide a better experience to those who are working through the exercises.

These are the exercises that we have

  • hello-world
  • gigasecond
  • leap
  • hamming
  • rna-transcription
  • raindrops
  • word-count
  • bob
  • difference-of-squares
  • anagram
  • pangram
  • two-fer
  • phone-number

Note that hello-world was pretty difficult, and we've fixed this by simplifying hello world and adding a new exercise, two-fer, which is equivalent to the old one.

/cc @patbl, @dantiel, @mkrehbs, @deepbsd, @canel-rom1, @rpalo, @ConstantlyLost, @fwten, @fbi1714, @timmyjose, @mattj-io, @nielssorensen
You have all done at least some of these. Do you have any ideas about relative difficulties?

Bash linting

I've noticed that there isn't linting as part of the track yet. In my experience contributing to the Python track, they have implemented linting, and it allows for greater track consistency.

I'd be interested in hearing others' thoughts around linting of code (and if anyone here has any particular favourites when it comes to linters for Bash).

A cursory glance points me towards shellcheck, which seems to integrate with Travis alright!
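A rough sketch of how that could be wired into the existing Travis run, assuming the exercises/ layout and example solutions named example.sh:

# shellcheck exits non-zero when it finds problems, which is enough to fail a build step.
$ shellcheck exercises/*/example.sh

# Or lint every shell file in the repository:
$ find . -name '*.sh' -not -path './.git/*' -print0 | xargs -0 shellcheck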

Investigate track health and status of the track

I've used Sarah Sharp's FOSS Heartbeat project to generate stats for each of the language track repositories, as well as the x-common repository.

The Exercism heartbeat data is published here: https://exercism.github.io/heartbeat/

When looking at the data, please disregard any activity from me (kytrinyx), as I would like to get the language tracks to a point where they are entirely maintained by the community.

Please take a look at the heartbeat data for this track, and answer the following questions:

  • To what degree is the track maintained?
  • Who (if anyone) is merging pull requests?
  • Who (if anyone) is reviewing pull requests?
  • Is there someone who is not merging pull requests, but who comments on issues and pull requests, has thoughtful feedback, and is generally helpful? If so, maybe we can invite them to be a maintainer on the track.

I've made up the following scale:

  • ORPHANED - Nobody (other than me) has merged anything in the past year.
  • ENDANGERED - Somewhere between ORPHANED and AT RISK.
  • AT RISK - Two people (other than me) are actively discussing issues and reviewing and merging pull requests.
  • MAINTAINED - Three or more people (other than me) are actively discussing issues and reviewing and merging pull requests.

It would also be useful to know if there is a lot of activity on the track, or just the occasional issue or comment.

Please report the current status of the track, including your best guess on the above scale, back to the top-level issue in the discussions repository: exercism/discussions#97

We need topics for each exercise for the new prototype

We are working on a redesign of Exercism, which will have a much richer way of structuring the exercises.

The short version is that we'll have a handful of "core" exercises that are essential for learning about the language (typically 15-20), where completing each exercise unlocks a bunch of optional exercises that let you dive more deeply into different topics and practice different aspects of the language.

You can read more about this design choice in https://github.com/exercism/docs/blob/master/about/conception/progression.md

In order to make this possible, we need to figure out which topics each individual exercise covers. We have a list of potential topics that we can draw from, but this is by no means an exhaustive list, and the topics might not be relevant to Bash at all.

As an experiment I'm creating a separate issue for each exercise so that I can ping people who have solved it to help figure this out.

Topics: what is bob about?

In preparation for the launch of the Exercism redesign, we need to decide what topics to list for bob.
See #61 for context.

@dantiel, @mkrehbs, @patbl - Having solved this, what would you say that bob is about? Did you struggle with anything in particular? What did you learn?

Run exercise tests through Travis

As mentioned in #4, for each pull request, a run through all exercise tests would help maintainers quickly identify if the submission causes the tests to fail.

Travis is currently used to lint submissions. (Correction: only configlet lint is pulled in; #116 talks about code linting.)
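A hedged sketch of what such a step could look like, assuming the exercises/<slug>/ layout, example solutions named example.sh, and test files ending in _test.sh (all conventions mentioned elsewhere in these issues):

#!/usr/bin/env bash
# Run every exercise's bats suite against its example solution.
set -e
for test_file in exercises/*/*_test.sh; do
    dir=$(dirname "$test_file")
    solution="${test_file%_test.sh}.sh"     # e.g. two_fer_test.sh -> two_fer.sh
    cp "$dir/example.sh" "$solution"        # stand in for a user's submission
    ( cd "$dir" && bats "$(basename "$test_file")" )
    rm "$solution"
done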

Topics: what is hello-world about?

In preparation for the launch of the Exercism redesign, we need to decide what topics to list for hello-world.
See #61 for context.

What topics would you say that hello-world touches upon?

Move exercises to subdirectory

The problems api (x-api) now supports having exercises collected in a subdirectory
named exercises.

That is to say that instead of having a mix of bin, docs, and individual exercises,
we can have bin, docs, and exercises in the root of the repository, and all
the exercises collected in a subdirectory.

In other words, instead of this:

x{TRACK_ID}/
├── LICENSE
├── README.md
├── bin
│   └── fetch-configlet
├── bowling
│   ├── bowling_test.ext
│   └── example.ext
├── clock
│   ├── clock_test.ext
│   └── example.ext
├── config.json
├── docs
│   ├── ABOUT.md
│   └── img
... etc

we can have something like this:

x{TRACK_ID}/
├── LICENSE
├── README.md
├── bin
│   └── fetch-configlet
├── config.json
├── docs
│   ├── ABOUT.md
│   └── img
├── exercises
│   ├── bowling
│   │   ├── bowling_test.ext
│   │   └── example.ext
│   └── clock
│       ├── clock_test.ext
│       └── example.ext
... etc

This has already been deployed to production, so it's safe to make this change whenever you have time.
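The mechanical part of the move is small; a sketch, assuming the exercise directories currently sit at the repository root next to bin and docs:

# Create the new top-level directory and move each exercise into it,
# leaving the infrastructure directories where they are.
mkdir exercises
for dir in */; do
    case "$dir" in
        bin/|docs/|exercises/|img/) continue ;;
    esac
    git mv "${dir%/}" exercises/
done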

Improve ABOUT.md

Currently, our About the Bash Track page is looking pretty empty in comparison to other tracks.

At the moment, it is just a quote and reference. For starters, I think the formatting could be improved, but overall the whole content could be improved. Paragraphs could include information about the history of Bash, where Bash is used (e.g. which systems), about the Bash community etc.

Exercise README insert

In #130 and #131, changes to READMEs have been merged in that alter the test command from bats whatever_test.sh to a test command that includes the exercise slug (as it will appear in each exercise directory).

However, if this is the convention others have chosen to go with, this now means that generating READMEs using configlet will not work properly, as configlet will pull the insert in automatically.

So, in order to stay consistent across the READMEs, I suggest either changing the command to something more obviously a placeholder for the slug name (e.g. bats <slug_name>_test.sh) or using the .Spec.SnakeCaseName found here. I recommend either one of these, as the current method would mean automatically generating most of the README, then manually changing parts, which could lead to error/inconsistency.

Enhance introductory copy

As part of #4, the introductory copy should be improved.

This should give a more comprehensive overview of the language than there currently is.

Topics: what is pangram about?

In preparation for the launch of the Exercism redesign, we need to decide what topics to list for pangram.
See #61 for context.

What topics would you say that pangram touches upon?

Hard test cases for hello-world

I found the last two test cases for the hello world example to be rather difficult.

I don't consider myself proficient in bash by any means, but I have written my share of shell scripts. Figuring out how to detect if there are zero arguments vs a blank argument in bash seems like an intermediate challenge.

Could they be moved to a different exercise later on?
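For reference, the distinction those two cases hinge on is "no arguments at all" versus "one empty argument", which comes down to checking $# rather than looking at $1. A sketch of one way to handle it (not necessarily the example solution):

#!/usr/bin/env bash
# "$#" is the number of positional parameters: 0 when the script is called
# with no arguments, 1 when it is called with a single (possibly empty) one.
# "$*" joins whatever arguments were given with spaces.
if (( $# == 0 )); then
    echo "Hello, World!"
else
    echo "Hello, $*!"
fi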

Topics: what is phone-number about?

In preparation for the launch of the Exercism redesign, we need to decide what topics to list for phone-number.
See #61 for context.

What topics would you say that phone-number touches upon?

Where to keep HINTS.md

In hello-world the HINTS.md file is kept at the top-level of the hello-world directory, but in gigasecond the HINTS.md file is kept in a .meta folder inside of the gigasecond exercise directory.

We should decide the best place to keep these, and unify the code 🙂
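Whichever location wins, a quick way to audit the current state from the repository root (it will list both the top-level and the .meta placements described above):

$ find exercises -name HINTS.md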

Git workflow advice for contributing.md

#123 brought out the start of a discussion about the --force-with-lease option on git push. We should decide if this is the place for that advice, or if it should be removed, allowing the contributor to decide how to manage their branch, or if it should be suggested as a practice that is good for more than just the Bash team.

Specifically this comment in review.
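For context, the command under discussion is the safer variant of a force push: it refuses to overwrite the remote branch if someone else has pushed to it since your last fetch. For example (branch name hypothetical):

$ git push --force-with-lease origin my-topic-branch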

Track policies (with a view to create POLICIES.md)

It would be most useful to have a succinct list of the current track policies, to allow future contributors (and reviewers) to facilitate the most desirable contributions.

This would also be a great reference tool, and would be a useful part of a template for PRs.

Before this PR, I'd like a discussion to take place regarding current policies (older members of @exercism/bash may have a better idea of longer-standing policies) and policies we should strive to implement/achieve.

Update config.json to match new specification

For the past three years, the ordering of exercises has been done based on gut feelings and wild guesses. As a result, the progression of the exercises has been somewhat haphazard.

In the past few months maintainers of several tracks have invested a great deal of time in analyzing what concepts various exercises require, and then reordering the tracks as a result of that analysis.

It would be useful to bake this data into the track configuration so that we can adjust it over time as we learn more about each exercise.

To this end, we've decided to add a new key exercises in the config.json file, and deprecate the problems key.

See exercism/discussions#60 for details about this decision.

Note that we will not be removing the problems key at this time, as this would break the website and a number of tools.

The process for deprecating the old problems array will be:

  • Update all of the track configs to contain the new exercises key, with whatever data we have.
  • Simultaneously change the website and tools to support both formats.
  • Once all of the tracks have added the exercises key, remove support for the old key in the site and tools.
  • Remove the old key from all of the track configs.

In the new format, each exercise is a JSON object with three properties:

  • slug: the identifier of the exercise
  • difficulty: a number from 1 to 10 where 1 is the easiest and 10 is the most difficult
  • topics: an array of strings describing topics relevant to the exercise. We maintain
    a list of common topics at https://github.com/exercism/x-common/blob/master/TOPICS.txt. Do not feel like you need to restrict yourself to this list;
    it's only there so that we don't end up with 20 variations on the same topic. Each
    language is different, and there will likely be topics specific to each language that will
    not make it onto the list.

The difficulty rating can be a very rough estimate.

The topics array can be empty if this analysis has not yet been done.

Example:

"exercises": [
  {
    "slug": "hello-world" ,
    "difficulty": 1,
    "topics": [
        "control-flow (if-statements)",
        "optional values",
        "text formatting"
    ]
  },
  {
    "difficulty": 3,
    "slug": "anagram",
    "topics": [
        "strings",
        "filtering"
    ]
  },
  {
    "difficulty": 10,
    "slug": "forth",
    "topics": [
        "parsing",
        "transforming",
        "stacks"
    ]
  }
]

It may be worth making the change in several passes:

  1. Add the exercises key with the array of objects, where difficulty is 1 and topics is empty.
  2. Update the difficulty settings to reflect a more accurate guess.
  3. Add topics (perhaps one-by-one, in separate pull requests, in order to have useful discussions about each exercise).

hello_world.sh tests do not pass

The tests for hello_world.sh do not pass.

 $ bats hello_world_test.sh 
 ✗ When given no name, it should greet the world!
   (in test file hello_world_test.sh, line 4)
     `[ "$status" -eq 0 ]' failed
 ✗ When given "Alice" it should greet Alice!
   (in test file hello_world_test.sh, line 11)
     `[ "$status" -eq 0 ]' failed
 ✗ When given "Bob" it should greet Bob!
   (in test file hello_world_test.sh, line 18)
     `[ "$status" -eq 0 ]' failed
 ✗ When given an empty string it should have a space and punctuation, though admittedly this is strange.
   (in test file hello_world_test.sh, line 25)
     `[ "$status" -eq 0 ]' failed
 ✗ When given "Alice and Bob" it greets them both
   (in test file hello_world_test.sh, line 32)
     `[ "$status" -eq 0 ]' failed

5 tests, 5 failures

The content of hello_world.sh comes from
https://github.com/exercism/xbash/blob/master/hello-world/example.sh
and looks correct:

$ env bash hello_world.sh 
Hello, World!
$ env bash hello_world.sh Alice
Hello, Alice!
$ env bash hello_world.sh Bob
Hello, Bob!
$ env bash hello_world.sh ""
Hello, !
$ env bash hello_world.sh Alice and Bob
Hello, Alice and Bob!

Versions of software used:

$ bats --version
Bats 0.4.0

$ uname -a
Linux machinename 0 3.19.0-32-generic #37~14.04.1-Ubuntu SMP Thu Oct 22 09:41:40 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

$ env bash --version
GNU bash, version 4.3.11(1)-release (x86_64-pc-linux-gnu)
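For reference, each failing assertion above corresponds to a test of roughly this shape; this is a reconstruction from the bats output, not the exact contents of hello_world_test.sh:

@test "When given no name, it should greet the world!" {
    run bash hello_world.sh
    [ "$status" -eq 0 ]
    [ "$output" == "Hello, World!" ]
}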

Exercise test structure

My question here, exercism/problem-specifications#1070 (comment), was whether we need to enforce the rules of the problem-specifications in our tests.

Requiring that property does not look like something critical that should be enforced; my reasoning is that everyone should be free to structure their code as they wish: some people will write one big procedure with no functions, others will write multiple functions.

If you look at: https://github.com/exercism/bash/blob/master/exercises/armstrong-numbers/armstrong_numbers_test.sh, you will see that we require the user to create a function in the tests.
In contrast, for an exercise like this one, a function is pretty much required: https://github.com/exercism/bash/blob/master/exercises/triangle/triangle_test.sh

What do you think about not forcing the user to add functions unless the exercises are dependent on them, ex: triangle, and let the user decide how to structure their implementations?
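To make the contrast concrete, here are the two styles as a hedged sketch (file, function, and test names are illustrative, not the actual test suites):

# Style 1: run the script as a whole and inspect its output; the solver is
# free to structure the solution however they like.
@test "produces the expected output" {
    run bash some_exercise.sh "input"
    [ "$status" -eq 0 ]
    [ "$output" == "expected output" ]
}

# Style 2: source the solution and call a named function, which forces the
# solver to define that function.
@test "defines the required function" {
    source ./some_exercise.sh
    run some_function "input"
    [ "$output" == "expected output" ]
}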

Travis CI master build

While PR branches are building, it looks like master hasn't been building for the last 5 months.

I think I know why, and where to enable that - I've just opened this as a placeholder, and a suitable place to raise any problems if they do come up. If there aren't problems, I'll close it.

Decision: Plain Bash or POSIX compliance?

It might be a good idea to have the discussion about this in this separate issue.

As @kenden stated here: #4 (comment), we need to decide what the requirements for this track are.

Kenden: I vote for: only bash, no extra utilities needed, no posix/sh/zsh compatibility required.

I agree to that, we should just teach plain Bash with no extra requirements or POSIX compliance in the code or in the tests.

Learning Bash is difficult for most users (especially the target users for Exercism v2), and the majority of use cases for Bash are scripts that aid in development, which means the environment is similar or at least known.

What do you think?
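To illustrate what the decision affects in practice, a few constructs that are fine in plain Bash but are not POSIX sh (purely illustrative):

#!/usr/bin/env bash
# Bash-only features that a "plain Bash, no POSIX requirement" policy would allow:
words=(one two three)               # arrays do not exist in POSIX sh
if [[ ${words[0]} == o* ]]; then    # [[ ]] and pattern matching are bashisms
    echo "starts with o"
fi
echo "${#words[@]} words"           # ${#array[@]} is also Bash-specific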

Topics and difficulty

As per issues #62 - #74, a list of topics needs to be added to config.json for each exercise.

Looking over config.json, it seems that most exercises have a minimum of 2 topics listed. Should these open issues now be closed, or are they left open for any particular reason? I think that these have all been resolved and would like to close them.

@Smarticles101 would you agree? I like to minimise clutter so we are able to focus on the matters at hand. Additionally, in #4 you mentioned the range of difficulty in exercises didn't seem right. Do you think we should amend this by reviewing the current difficulties, or just seek to implement some more difficult exercises? 🙂

Edit: regarding reviewing difficulties, both two-fer and word-count are rated 2 here, but word-count's example implementation seems to be a bit more complex than two-fer's.
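If it helps the review, a quick way to list the exercises that still have fewer than two topics (assuming jq is available and the exercises key described in the config.json issue above):

$ jq -r '.exercises[] | select((.topics | length) < 2) | .slug' config.json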

Topics: what is anagram about?

In preparation for the launch of the Exercism redesign, we need to decide what topics to list for anagram.
See #61 for context.

What topics would you say that anagram touches upon?

Copy track icon into language track repository

Right now all of the icons used for the language tracks (which can be seen at http://exercism.io/languages) are stored in the exercism/exercism.io repository in public/img/tracks/. It would make a lot more sense to keep these images along with all of the other language-specific stuff in each individual language track repository.

There's a pull request that is adding support for serving up the track icon from the x-api, which deals with language-specific stuff.

In order to support this change, each track will need to copy its icon into an img/ directory at the root of this repository.

In other words, at the end of it you should have the following file:

./img/icon.png

See exercism/exercism#2925 for more details.

word-count: test suite is missing which causes Travis CI build to fail

From Travis CI logs:

travis_time:end:03762840:start=1502581992494811503,finish=1502581992868551731,duration=373740228
The command "bin/fetch-configlet" exited with 0.
travis_time:start:0f0d1e20
$ bin/configlet lint .
-> The implementation for 'word-count' is missing a test suite.

travis_time:end:0f0d1e20:start=1502581992873047268,finish=1502581992880369493,duration=7322225
The command "bin/configlet lint ." exited with 1.

Done. Your build exited with 1.

We should add a word_count_test.sh test suite ASAP to fix this issue.
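As a starting point, a minimal word_count_test.sh could look roughly like the sketch below; the solution file name and expected output format are assumptions and should be checked against the canonical data before merging:

#!/usr/bin/env bats

@test "count one word" {
    run bash word_count.sh "word"
    [ "$status" -eq 0 ]
    # Assumed output format: "<word>: <count>"
    [ "$output" == "word: 1" ]
}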

Add test versions in order to keep exercises up to date

I think it would be worthwhile putting a version file in each exercise, indicating which version of the problem-specifications canonical data it is based off. This would be useful as the problem specs are updated fairly frequently with new test cases. We'd then be able to keep our exercises up to date!

Coming from the Java track myself, we stick a version file into each exercise, and then occasionally will run a script that checks against the problem specifications to indicate if an exercise is out of date. I think this is a really good way to do it, and would pose this as my suggestion.

To be more exact, we could use this script almost as is, and put a version file in a .meta folder per exercise. Each version check / PR could even be opened as good-first-patch issues.

This would not be for 'user' benefit as much as it would be for the @exercism/bash team.
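A small helper in that spirit, assuming a .meta/version file per exercise as suggested above; it only reports what each exercise claims, and comparing those values against problem-specifications would still be a separate step:

#!/usr/bin/env bash
# Print the recorded canonical-data version for every exercise that has one.
for version_file in exercises/*/.meta/version; do
    [ -e "$version_file" ] || continue     # skip if the glob matched nothing
    exercise=$(basename "$(dirname "$(dirname "$version_file")")")
    printf '%s\t%s\n' "$exercise" "$(cat "$version_file")"
done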

Verify that nothing links to help.exercism.io

The old help site was deprecated in December 2015. We now have content that is displayed on the main exercism.io website, under each individual language on http://exercism.io/languages.

The content itself is maintained along with the language track itself, under the docs/ directory.

We decided on this approach since the maintainers of each individual language track are in the best position to review documentation about the language itself or the language track on Exercism.

Please verify that nothing in docs/ refers to the help.exercism.io site. It should instead point to http://exercism.io/languages/:track_id (unfortunately, the various tabs are not linkable at the moment; we may need to reorganize the pages in order to fix that).

Also, some language tracks reference help.exercism.io in the SETUP.md file, which gets included into the README of every single exercise in the track.

We may also have referenced non-track-specific content that lived on help.exercism.io. This content has probably been migrated to the Contributing Guide of the x-common repository. If it has not been migrated, it would be a great help if you opened an issue in x-common so that we can remedy the situation. If possible, please link to the old article in the deprecated help repository.

If nothing in this repository references help.exercism.io, then this can safely be closed.
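A quick way to carry out that check from the repository root (adjust the paths if SETUP.md lives elsewhere):

$ grep -rn "help.exercism.io" docs/ SETUP.md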
