exercism / nim
Exercism exercises in Nim.
Home Page: https://exercism.org/tracks/nim
License: MIT License
This issue is part of the migration to v3. You can read full details about the various changes here.
Exercism v3 introduces a new type of exercise: Concept Exercises. All existing (V2) exercises will become Practice Exercises.
Concept Exercises and Practice Exercises are linked to each other via Concepts. Concepts are taught by Concept Exercises and practiced in Practice Exercises. Each Exercise (Concept or Practice) has prerequisites, which must be met to unlock an Exercise - once all the prerequisite Concepts have been "taught" by a Concept Exercise, the exercise itself becomes unlocked.
For example, in some languages completing the Concept Exercises that teach the "String Interpolation" and "Optional Parameters" concepts might then unlock the two-fer Practice Exercise.
Each Practice Exercise has two fields containing concepts: a `practices` field and a `prerequisites` field.
The `practices` key should list the slugs of Concepts that this Practice Exercise actively allows a student to practice. Some Concepts apply to a very wide range of exercises (e.g. `strings`). In those cases we recommend choosing a few good exercises that make people think about those Concepts in interesting ways. For example, exercises that require UTF-8, string concatenation, char enumeration, etc., would all be good examples.

The `prerequisites` key lists the Concept Exercises that a student must have completed in order to access this Practice Exercise.
For example, a Practice Exercise might list prerequisites of `strings`, `optional-params`, and `implicit-return`.

Where an exercise could be solved using multiple approaches (e.g. `loops` or `recursion`), the maintainer should choose the one approach that they would like to unlock the Exercise, considering the student's journey through the track. For the loops/recursion example, they might think this exercise is good early practice of `loops`, or they might prefer to leave it until later to teach `recursion`. They can also make use of an analyzer to prompt the student to try an alternative approach: "Nice work on solving this via loops. You might also like to try solving this using recursion."

Although ideally all Concepts should be taught by Concept Exercises, we recognise that it will take time for tracks to achieve that. Any Practice Exercises that have prerequisites which are not taught by Concept Exercises will become unlocked once the final Concept Exercise has been completed.
The `"practices"` field of each element in the `"exercises.practice"` field in the `config.json` file should be updated to contain the practice concepts. See the spec.
To help with identifying the practice concepts, the `"topics"` field can be used (if it has any contents). Once the practices have been defined for a Practice Exercise, the `"topics"` field should be removed.
Each practice concept should have its own entry in the top-level `"concepts"` array. See the spec.
The `"prerequisites"` field of each element in the `"exercises.practice"` field in the `config.json` file should be updated to contain the prerequisite concepts. See the spec.
To help with identifying the prerequisites, the `"topics"` field can be used (if it has any contents). Once prerequisites have been defined for a Practice Exercise, the `"topics"` field should be removed.
Each prerequisite concept should have its own entry in the top-level `"concepts"` array. See the spec.
```json
{
  "exercises": {
    "practice": [
      {
        "uuid": "8ba15933-29a2-49b1-a9ce-70474bad3007",
        "slug": "leap",
        "name": "Leap",
        "practices": ["if-statements", "numbers", "operator-precedence"],
        "prerequisites": ["if-statements", "numbers"],
        "difficulty": 1
      }
    ]
  }
}
```
Each language track has documentation in the docs/
directory, which gets included on the site
on each track-specific set of pages under /languages.
We've added some general guidelines about how we'd like the track to be documented in exercism/exercism#3315
which can be found at https://github.com/exercism/exercism.io/blob/master/docs/writing-track-documentation.md
Please take a moment to look through the documentation about documentation, and make sure that
the track is following these guidelines. Pay particularly close attention to how to use images
in the markdown files.
Lastly, if you find that the guidelines are confusing or missing important details, then a pull request
would be greatly appreciated.
Looking at many other language tracks in Exercism, most of them seem to have stubbed out solution files. This helps people know what the name of the file should be, along with the method names and parameters. I think this would help people learning Nim and make this track overall better!
For this to happen, we need to update our test runner. Right now our test runner renames `example.nim` to the correct file name to run the test for the exercise. We would need the test runner to keep track of the stubbed solution file and rename it back after the tests are done running.
After this is done, we can start adding stubbed-out solution files to all the exercises.
Nimble is the package manager for the Nim programming language, but it also provides a directory structure for projects along with the ability to build them. As Nimble comes with Nim, we could use this structure to make the tests easier to run. We could set up the project to have a "test" task so people would just need to run `nimble test`, as described at https://github.com/nim-lang/nimble#tests
I think this would take some work to get to this point, and I'm not fully sure what this would look like as far as the test runner and CI system we have in place right now, but I think it would provide a better user experience for someone learning Nim.
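For illustration, a minimal `.nimble` file with a "test" task might look like the sketch below. The package metadata and test file name are assumptions for the example, not the track's actual setup:

```nim
# Hypothetical sketch of a minimal .nimble file defining a "test" task.
version       = "0.1.0"
author        = "Exercism"
description   = "Example exercise package"
license       = "MIT"

task test, "Run the exercise's test suite":
  # The test file name here is illustrative only.
  exec "nim c -r word_count_test.nim"
```

With this in place, `nimble test` from the exercise directory would compile and run the test file.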
We're about to start a big push towards version 3 (v3) of Exercism. This is going to be a really exciting step forward for Exercism, with in-browser coding, new Concept Exercises with automated feedback, improved mentoring and much more.
This is going to be a big community effort, with the work spread out among hundreds of volunteers across Exercism. One key thing is going to be each track having enough maintainers who have the time to manage that community effort. We are therefore putting out a call for new maintainers to bolster our numbers. We're hoping that our existing maintainers will be able to act as mentors to the newer maintainers we add, and take on a parental role in the tracks.
If you are an existing maintainer, could you please reply to this letting us know that you think you'll have time (2-3hrs/week) to help with this over the next 6 months. If you won't have that time, but still want to be a maintainer and just help where you can instead, please tell us that too. If you have come to the end of the road as a maintainer, then we totally understand that and appreciate all your effort, so just let us know.
For anyone new who's interested in becoming a maintainer, thanks for your interest! Being an Exercism maintainer is also a great opportunity to work with some other smart people, learn more about your language of choice, and gain skills and experience that are useful for growing your career in the direction of technical leadership. Please write a comment below introducing yourself along with your Exercism handle, telling us why you're interested in becoming a maintainer, and listing any relevant experience. We will then evaluate every application and contact you using your Exercism email address once we have finished the evaluation process.
Thank you!
See also exercism/exercism#5161
This issue is part of the migration to v3. You can read full details about the various changes here.
There are several new features in Exercism v3 for tracks to build. To selectively enable these features on the Exercism v3 website, each track must keep track of the status of the following features:
The status of these features is specified in the top-level `"status"` field in the track's `config.json`, as specified in the spec.
The `"status"` field in the `config.json` file should be updated to indicate the status of the features for this track. The list of features is defined in the spec.
```json
{
  "status": {
    "concept_exercises": true,
    "test_runner": true,
    "representer": false,
    "analyzer": false
  }
}
```
Right now all of the icons used for the language tracks (which can be seen at http://exercism.io/languages) are stored in the exercism/exercism.io repository in `public/img/tracks/`. It would make a lot more sense to keep these images along with all of the other language-specific stuff in each individual language track repository.
There's a pull request that is adding support for serving up the track icon from the x-api, which deals with language-specific stuff.
In order to support this change, each track will need to:

- create an `img/` directory at the root of this repository if it doesn't already exist, then
- add the track icon to the `img/` directory, and importantly
- name it `icon.png`

In other words, at the end of it you should have the following file: `./img/icon.png`
See exercism/exercism#2925 for more details.
At present, I need to read the test suite to figure out the complete problem statement.
Each problem README should contain information like:

- The expected file name: e.g. `wordcount.nim`, not `WordCount.nim` or anything else, because the test suite imports it using `import wordcount`.
- The expected proc names: e.g. `isLeapYear`, while the library name expected is `leap`. There is no way for a user to guess this; they have to read and understand the test suite.

Basically, submission-ready code should be able to be created by reading just the README. At the moment, the READMEs provide almost no information, and I need to reverse engineer the tests.
Add this to the resources section of the Nim track:

- To install, use choosenim
- Tutorial:
- Video Tutorial:
- Book:
- Awesome List:
- Rosetta Code
- Chat/Forum
- Just a review from some guys
Each track needs a file that contains track-specific instructions on how to manually run the tests. The contents of this document are only presented to the student when using the CLI. This file lives at `exercises/shared/.docs/tests.md`. You almost certainly already have this information, but need to move it to the correct place.
For v2 tracks, this information was (usually) included in the readme template found at `config/exercise_readme.go.tmpl`. As such, tracks can extract the test instructions from the `config/exercise_readme.go.tmpl` file to the `exercises/shared/.docs/tests.md` file.
See https://github.com/exercism/csharp/pull/1557/files for an example PR.
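As a sketch, a minimal `exercises/shared/.docs/tests.md` for this track might contain something like the following. The exact wording and command are assumptions, not the track's actual file:

```markdown
# Tests

To run the tests for an exercise, compile and run the test file from the
exercise directory with the Nim compiler:

    nim c -r example_test.nim

replacing `example_test.nim` with the name of the exercise's test file.
```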
Hello 👋
Over the last few months we've been transferring all our CI from Travis to GitHub Actions (GHA). We've found that GHA are easier to work with, more reliable, and much much faster.
Based on our success with GHA and increasing intermittent failures on Travis, we have now decided to try and remove Travis from Exercism's org altogether and shift everything to GHA. This issue acts as a call to action if your track is still using Travis.
For most CI checks this should be a matter of transposing Travis' syntax to GHA syntax, and hopefully quite straightforward (see this PR for an example). However, if you do encounter any issues doing this, please ask on Slack where lots of us now have experience with GHA, or post a comment here and I'll tag relevant people. This would also make a good Hacktoberfest issue for anyone interested in making their first contribution.
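For illustration, a minimal GHA workflow for a Nim track might look like the sketch below. The file path, action names/versions, and the test script name are assumptions, not this track's actual setup:

```yaml
# .github/workflows/ci.yml (hypothetical path)
name: CI
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # Any step that installs a Nim toolchain works here; this action is one option.
      - uses: jiro4989/setup-nim-action@v1
      - name: Run the exercise tests
        run: ./bin/run-tests   # hypothetical script name
```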
If you've already switched this track to GHA, please feel free to close this issue and ignore it.
Thanks!
Each Concept Exercise will have to define a blurb, which is a short description of the exercise.
In an earlier PR, a placeholder blurb has been added to the .meta/config.json file of each Concept Exercise. The placeholder blurbs should be replaced with an actual blurb for each Concept Exercise. Blurbs must be limited to 350 chars and will be truncated in some views.
For forked exercises, it might be useful to check how other tracks have defined their blurb.
The blurb will be displayed on a track's exercises page and on exercise tooltips. For example:
See the Concept Exercise spec for more information.
This would greatly cut down the time when creating new exercises: being able to run the tests just for the exercise you are working on and not the entire test suite.
This issue is part of the migration to v3. You can read full details about the various changes here.
In Exercism v3, each track must specify exactly six "key features". Exercism uses these features to highlight the most interesting, unique or "best" features of a language to a student.
Key features are specified in the top-level `"key_features"` field in the track's `config.json` file and are defined as an array of objects, as specified in the spec.
The `"key_features"` field in the `config.json` file should be updated to describe the six "key features" of this track. See the spec.
```json
{
  "key_features": [
    {
      "icon": "features-oop",
      "title": "Modern",
      "content": "C# is a modern, fast-evolving language."
    },
    {
      "icon": "features-strongly-typed",
      "title": "Cross-platform",
      "content": "C# runs on almost any platform and chipset."
    },
    {
      "icon": "features-functional",
      "title": "Multi-paradigm",
      "content": "C# is primarily an object-oriented language, but also has lots of functional features."
    },
    {
      "icon": "features-lazy",
      "title": "General purpose",
      "content": "C# can be used for a wide variety of workloads, like websites, console applications, and even games."
    },
    {
      "icon": "features-declarative",
      "title": "Tooling",
      "content": "C# has excellent tooling, with linting and advanced refactoring options built-in."
    },
    {
      "icon": "features-generic",
      "title": "Documentation",
      "content": "Documentation is excellent and exhaustive, making it easy to get started with C#."
    }
  ]
}
```
We have decided to require all file-based tracks to provide stubs for their exercises.
The lack of stub files generates an unnecessary pain point within Exercism, contributing a significant proportion of support requests, making things more complex for our students, and hindering our ability to automatically run test suites and provide automated analysis of solutions.
We believe that it's essential to understand error messages, know how to use an IDE, and create files. However, getting this right as you're just getting used to a language can be a frustrating distraction, as it can often require a lot of knowledge that tends to seep in over time. At the start, it can be challenging to google for all of these details: what file extension to use, what needs to be included, etc. Getting people up to speed with these things is not Exercism's focus, and we've decided that we are better served by removing this source of confusion, letting people get on with actually solving the exercises.
The original discussion for this is at exercism/discussions#238.
Therefore, we'd like this track to provide a stub file for each exercise.
Per the discussion in exercism/discussions#128 we
will be installing the probot/stale integration on the Exercism organization on
April 10th, 2017.
By default, probot will comment on issues that are older than 60 days, warning
that they are stale. If there is no movement in 7 days, the bot will close the issue.
By default, anything with the labels `security` or `pinned` will not be closed by probot.
If you wish to override these settings, create a .github/stale.yml file as described
in https://github.com/probot/stale#usage, and make sure that it is merged
before April 10th.
If the defaults are fine for this repository, then there is nothing further to do.
You may close this issue.
There are a number of things we're going to want to check before the v2 site goes live (the blurb `TODO`, the `core` exercise list, and `auto_approve: true` for hello-world). There are notes below that flesh out all the checklist items.
The v2 site has a landing page for each track, which should make people want to join it. If the track page is missing, ping @kytrinyx to get it added.
If the header of the page starts with `TODO`, then submit a pull request to https://github.com/exercism/nim/blob/master/config.json with a `blurb` key. Remember to get configlet and run `configlet fmt .` from the root of the track before submitting.
If the "About" section feels a bit dry, then submit a pull request to https://github.com/exercism/nim/blob/master/docs/ABOUT.md with suggested tweaks.
In order to work well with the design of the new site, we're restricting the formatting of the `ABOUT.md`. It can use:
Additionally:
`<br/>` can be used to split a paragraph into lines without spacing between them, however this is discouraged.

If the code example is too short or too wide or too long or too uninteresting, submit a pull request to https://github.com/exercism/nim/blob/master/docs/SNIPPET.txt with a suggested replacement.
Where the v1 site has a long, linear list of exercises, the v2 site has organized exercises into a small set of required exercises ("core").
If you update the track config, remember to get configlet and run `configlet fmt .` from the root of the track before submitting.
Core exercises unlock optional additional exercises, which can be filtered by topic and difficulty; however, that will only work if we add topics and difficulties to the exercises in the track config, which is in https://github.com/exercism/nim/blob/master/config.json
We've currently made any hello-world exercises auto-approved in the backend of v2. This means that you don't need mentor approval in order to move forward when you've completed that exercise.
Not all tracks have a hello-world, and some tracks might want to auto approve other (or additional) exercises.
There are no bullet points for this one :)
As we move towards the launch of the new version of Exercism we are going to be ramping up on actively recruiting people to help provide feedback. Our goal is to get to 100%: everyone who submits a solution and wants feedback should get feedback. Good feedback.
If you're interested in helping mentor the track, check out http://mentoring.exercism.io/
When all of the boxes are ticked off, please close the issue.
Tracking progress in exercism/meta#104
Each track needs a file that contains track-specific instructions on how to get help. The contents of this document are only presented to the student when using the CLI. This file lives at `exercises/shared/.docs/help.md`. You almost certainly already have this information, but need to move it to the correct place.
For v2 tracks, this information was (usually) included in the readme template found at `config/exercise_readme.go.tmpl`. As such, tracks can extract the help instructions from the `config/exercise_readme.go.tmpl` file to the `exercises/shared/.docs/help.md` file.
See https://github.com/exercism/csharp/pull/1557/files for an example PR.
In the triangle exercise, you note that "...the sum of the lengths of any two sides must be greater than or equal to the length of the third side.".
In the test "triangles violating triangle inequality are illegal 2", the expected answer is to raise `ValueError` for the test `kind(2, 4, 2)`.
I think the answer for this test must be Isosceles, and not an error, because `2 + 4 > 2`, `4 + 2 > 2`, and `2 + 2 == 4`.
As we move towards the launch of the new version of Exercism we are going to be ramping up on actively recruiting people to help provide feedback.
Our goal is to get to 100%: everyone who submits a solution and wants feedback should get feedback. Good feedback. You can read more about this aspect of the new site here: http://mentoring.exercism.io/
To do this, we're going to need a lot more information about where we can find language enthusiasts.
In other words: where do people care a lot and/or know a lot about Nim?
This is part of the project being tracked in exercism/meta#103
I've used Sarah Sharp's FOSS Heartbeat project to generate stats for each of the language track repositories, as well as the x-common repository.
The Exercism heartbeat data is published here: https://exercism.github.io/heartbeat/
When looking at the data, please disregard any activity from me (kytrinyx), as I would like to get the language tracks to a point where they are entirely maintained by the community.
Please take a look at the heartbeat data for this track, and answer the following questions:
I've made up the following scale:
It would also be useful to know if there is a lot of activity on the track, or just the occasional issue or comment.
Please report the current status of the track, including your best guess on the above scale, back to the top-level issue in the discussions repository: exercism/discussions#97
This issue is part of the migration to v3. You can read full details about the various changes here.
The configlet tool has a `lint` command that checks if a track's configuration files are properly structured - both syntactically and semantically. Misconfigured tracks may not sync correctly, may look wrong on the website, or may present a suboptimal user experience, so configlet's guards play an important part in maintaining the integrity of Exercism.
We're updating configlet to work with v3 tracks, which have a different set of requirements than v2 tracks.
The full list of rules that will be checked by the linter can be found in this spec.
Note that only a subset of the linting rules has been implemented at this moment. This means that while your track may be passing the checks now, it might fail later. We thus strongly suggest you keep this issue open until we let you know otherwise.
Ensure that the track passes all the (v3 track) checks defined in `configlet lint`.
To help verify that the track passes all the linting rules, the v3 preparation PR has added a GitHub Actions workflow that automatically runs `configlet lint`.
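A sketch of such a workflow, assuming the standard `bin/fetch-configlet` script is present in the track repository (the file path and action version are assumptions):

```yaml
# .github/workflows/configlet.yml (hypothetical path)
name: Configlet
on: [push, pull_request]

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Fetch configlet
        run: ./bin/fetch-configlet
      - name: Lint the track configuration
        run: ./bin/configlet lint
```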
It is also possible to run `configlet lint` locally by running the `./bin/fetch-configlet` (or `./bin/fetch-configlet.ps1`) script to download a local copy of the configlet binary. Once downloaded, you can then run `./bin/configlet lint` to run the linting on your own machine.
The contents of the SETUP.md file get included in the README.md that gets delivered when a user runs the `exercism fetch` command from their terminal.
At the very minimum, it should contain a link to the relevant
language-specific documentation on
help.exercism.io.
It would also be useful to explain in a generic way how to run the tests.
Remember that this file will be included with all the problems, so it gets
confusing if we refer to specific problems or files.
Some languages have very particular needs in terms of the solution: nested
directories, specific files, etc. If this is the case here, then it would be
useful to explain what is expected.
Thanks, @tejasbubane for suggesting that we add this documentation everywhere.
See exercism.io#2198.
The "react" exercise comes currently after the triangle exercise and before the binary exercise.
In my opinion, the "react" exercise is the most difficult of all the existing exercises. It's not only hard as an exercise in itself, but it probably also requires a lot of advanced Nim knowledge.
There is a reason why other tracks like http://exercism.io/languages/rust/exercises place it at the very end. Due to the lack of a borrow checker in Nim, it might be easier than in Rust, but it's probably still the most difficult exercise.
See issue exercism/exercism#2092 for an overview of operation welcome contributors.
Provide instructions on how to contribute patches to the exercism test suites
and examples: dependencies, running the tests, what gets tested on Travis-CI,
etc.
The contributing document
in the x-api repository describes how all the language tracks are put
together, as well as details about the common metadata, and high-level
information about contributing to existing problems, or adding new problems.
The README here should be language-specific, and can point to the contributing
guide for more context.
From the OpenHatch guide:
Here are common elements of setting up a development environment you'll want your guide to address:

Preparing their computer

Make sure they're familiar with their operating system's tools, such as the terminal/command prompt. You can do this by linking to a tutorial and asking contributors to make sure they understand it. There are usually great tutorials already out there - OpenHatch's command line tutorial can be found here.

If contributors need to set up a virtual environment, access a virtual machine, or download a specific development kit, give them instructions on how to do so.

List any dependencies needed to run your project, and how to install them. If there are good installation guides for those dependencies, link to them.

Downloading the source

Give detailed instructions on how to download the source of the project, including common missteps or obstacles.

How to view/test changes

Give instructions on how to view and test the changes they've made. This may vary depending on what they've changed, but do your best to cover common changes. This can be as simple as viewing an html document in a browser, but may be more complicated.

Installation will often differ depending on the operating system of the contributor. You will probably need to create separate instructions in various parts of your guide for Windows, Mac and Linux users. If you only want to support development on a single operating system, make sure that is clear to users, ideally in the top-level documentation.
Try running `git clone https://github.com/exercism/nim.git`. Does it finish instantly, or does it take a long time? It takes a long time: the repo download is about 5 MiB, which is roughly 20x larger than it should be.
Here is a useful shell script that finds large commits in git history:

```shell
git rev-list --objects --all \
  | git cat-file --batch-check='%(objecttype) %(objectname) %(objectsize) %(rest)' \
  | sed -n 's/^blob //p' \
  | sort --numeric-sort --key=2 -r \
  | cut -c 1-12,41- \
  | $(command -v gnumfmt || echo numfmt) --field=2 --to=iec-i --suffix=B --padding=7 --round=nearest \
  | head -10
```
The output:

```
4f22d18ccb2c 3.4MiB bin/configlet-linux-amd64
a4caae4f1ea3 3.3MiB bin/configlet-darwin-amd64
24734c89d647 3.2MiB bin/configlet-windows-amd64.exe
c4f3977347d3 2.8MiB bin/configlet-linux-386
aef8742a53d0 2.8MiB bin/configlet-darwin-386
8900361e1956 2.7MiB bin/configlet-windows-386.exe
32b71edb6b5a  38KiB img/icon.png
26bd0d630e4b 9.9KiB config.json
197aad6fdd6d 9.6KiB config.json
f40466484bcc 9.4KiB config.json
```
The cause is #6. It was successful in rewriting the history on `master`, but there are still branches based on the old history.
Here is the early repo history:

```
* 42dbce8 2014-07-17 Katrina Owen Initialize skeleton repository <--- initial commit on current master
* 33dd085 2014-07-24 Simon Jakobi Add etl exercise (upstream/etl)
* d29eb02 2014-07-21 Simon Jakobi Add word-count exercise
* 56cfedb 2014-07-21 Simon Jakobi Travis: Fix build test script
* 9599b55 2014-07-21 Simon Jakobi Fix .gitignore
* d105b53 2014-07-21 Simon Jakobi Add anagram exercise
* 4317f81 2014-07-20 Simon Jakobi Add bob exercise
* d60655d 2014-07-20 Simon Jakobi Add leap exercise
* 2d76b23 2014-07-20 Simon Jakobi Add Travis build tests
* c0484c8 2014-07-20 Simon Jakobi Add .gitignore
| * 63c0915 2014-07-19 Simon Jakobi accumulate: First version, assert-based tests (upstream/accumulate)
|/
* 07dfe39 2014-07-17 Katrina Owen Initialize skeleton repository <--- initial commit on old master (contains configlet binaries)
```
We see the initial commit on current `master` is `42dbce8`:
```
commit 42dbce82f3deeaeb7a26b987a7e626e60b7fde85
Author: Katrina Owen
Date:   Thu Jul 17 20:48:10 2014 -0700

    Initialize skeleton repository

 .travis.yml |  4 ++++
 README.md   | 13 +++++++++++++
 config.json | 18 ++++++++++++++++++
 3 files changed, 35 insertions(+)
```
and the initial commit on the old (rewritten) `master` is `07dfe39`:
```
commit 07dfe39b7804ae46ee519d713cb9f6acbf8fec91
Author: Katrina Owen
Date:   Thu Jul 17 20:48:10 2014 -0700

    Initialize skeleton repository

 .travis.yml                     |  4 ++++
 README.md                       | 13 +++++++++++++
 bin/configlet-darwin-386        | Bin 0 -> 2899432 bytes
 bin/configlet-darwin-amd64      | Bin 0 -> 3498272 bytes
 bin/configlet-linux-386         | Bin 0 -> 2935096 bytes
 bin/configlet-linux-amd64       | Bin 0 -> 3568968 bytes
 bin/configlet-windows-386.exe   | Bin 0 -> 2790912 bytes
 bin/configlet-windows-amd64.exe | Bin 0 -> 3396096 bytes
 config.json                     | 18 ++++++++++++++++++
 9 files changed, 35 insertions(+)
```
Delete the stale `accumulate` and `etl` branches. `etl` is now implemented, and `accumulate` has no `canonical-data.json`. If desired, we should implement `list-ops` instead. See exercism/problem-specifications#553 (comment)
That should fix the problem. Note that you can already download only the `master` branch with `git clone --single-branch https://www.github.com/exercism/nim.git`, and it doesn't download the bad history.
In fact, I think you can delete all the non-`master` branches (except maybe `normalize-travis`). All the other branches are merged.
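A minimal sketch of the cleanup, using the branch names mentioned above. It is demonstrated here against a throwaway local "remote" so it can be run safely anywhere; against the real repository only the final `git push origin --delete ...` line would be needed:

```shell
# Sketch: removing stale remote branches (names taken from this issue).
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/origin.git"      # stand-in for the GitHub remote
git init -q "$tmp/work"
cd "$tmp/work"
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m init
branch=$(git symbolic-ref --short HEAD)   # master or main, depending on git version
git branch accumulate
git branch etl
git remote add origin "$tmp/origin.git"
git push -q origin "$branch" accumulate etl

# The actual cleanup command: delete the stale branches on the remote.
git push -q origin --delete accumulate etl

remaining=$(git ls-remote --heads origin)
echo "$remaining"                          # only the default branch should remain
```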
I'm grateful that the fix is a lot simpler than for the OCaml track. See exercism/ocaml#300
The test suite is missing a test for this spec:
He answers 'Calm down, I know what I'm doing!' if you yell a question at him.
Note: the launch checklist was woefully outdated, since this got created over two years ago, so I deleted it.
If you are interested in leading the charge on getting this track launched, then comment below, and I'll help get this party started based on whatever is current at the time.
We have a launch guide that we'll keep up to date with changes here: https://github.com/exercism/docs/blob/master/language-tracks/launch/README.md
Some tracks have added assertions to the exercise test suites that ensure that the solution has a hard-coded version in it.
In the old version of the site, this was useful, as it let commenters see what version of the test suite the code had been written against, and they wouldn't accidentally tell people that their code was wrong, when really the world had just moved on since it was submitted.
If this track does not have any assertions that track versions in the exercise tests, please close this issue.
If this track does have this bookkeeping code, then please remove it from all the exercises.
See exercism/exercism#4266 for the full explanation of this change.
We've recently started a project to find the best way to design our tracks, in order to optimize the learning experience of students.
As a first step, we'll be examining the ways in which languages are unique and the ways in which they are similar. For this, we'd really like to use the knowledge of everyone involved in the Exercism community (students, mentors, maintainers) to answer the following questions:
Could you spare 5 minutes to help us by answering these questions? It would greatly help us improve the experience students have learning Nim :)
Note: this issue is not meant as a discussion, just as a place for people to post their own, personal experiences.
Want to keep your thoughts private but still help? Feel free to email me at [email protected]
Thank you!
Just to save @petertseng from having to create this issue.
See exercism/problem-specifications#1501
I'll handle this (and the other recent changes in the `problem-specifications` repo) after the new exercises get merged.
I made a really stupid choice a while back to commit the cross-compiled binaries for configlet (the tool that sanity-checks the `config.json` against the implemented problems) into the repository itself.
Those binaries are HUGE, and every time they change the entire 4 or 5 megs get
recommitted. This means that cloning the repository takes a ridiculously long
time.
I've added a script that can be run on travis to grab the latest release from
the configlet repository (bin/fetch-configlet), and travis is set up to run
this now instead of using the committed binary.
I would really like to thoroughly delete the binaries from the entire git
history, but this will break all the existing clones and forks.
The commands I would run are:

```shell
# ensure this happens on an up-to-date master
git checkout master && git fetch origin && git reset --hard origin/master

# delete from history
git filter-branch --index-filter 'git rm -r --cached --ignore-unmatch bin/configlet-*' --prune-empty

# clean up
rm -rf .git/refs/original/
git reflog expire --all
git gc --aggressive --prune

# push up the new master, force override existing master branch
git push -fu origin master
```
If we do this everyone who has a fork will need to make sure that their master
is reset to the new upstream master:
git checkout master
git fetch upstream master
git reset --hard upstream/master
git push -fu origin master
We can at-mention (@) all the contributors and everyone who has a fork here in this
issue if we decide to do it.
The important question though, is: Is it worth doing?
Do you have any other suggestions of how to make sure this doesn't confuse people and break their
repository if we do proceed with this change?
I am stuck at wordcount, so I downloaded this repository to run the example, and tried to run the tests with nim c -r word_count_test.nim.
An error occurred: word_count_test.nim(3, 8) Error: cannot open 'wordcount', because the code is in example.nim.
If I rename example.nim to wordcount.nim and run the tests again, 4 tests fail...
Hint: /Users/antonin/exercism/nim/nim/exercises/word-count/word_count_test [Exec]
[OK] count one word
[OK] count one of each
[OK] count multiple occurrences
word_count_test.nim(41,15): Check failed: result == expected
result was {: 12, as: 1, car: 1, carpet: 1, java: 1, javascript: 1}
expected was {as: 1, car: 1, carpet: 1, java: 1, javascript: 1}
[FAILED] ignore punctuation
word_count_test.nim(47,15): Check failed: result == expected
result was {: 2, 1: 1, 2: 1, testing: 2}
expected was {1: 1, 2: 1, testing: 2}
[FAILED] include numbers
[OK] normalize case
word_count_test.nim(59,15): Check failed: result == expected
result was {: 6, 1: 1, 2: 1, testing: 2}
expected was {1: 1, 2: 1, testing: 2}
[FAILED] prefix punctuation
word_count_test.nim(66,15): Check failed: result == expected
result was {: 1, broken: 1, hey: 1, is: 1, my: 1, spacebar: 1}
expected was {broken: 1, hey: 1, is: 1, my: 1, spacebar: 1}
[FAILED] symbols are separators
Error: execution of an external program failed: '/Users/antonin/exercism/nim/nim/exercises/word-count/word_count_test '
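Every failing case above shows a spurious empty-string key (e.g. {: 12, as: 1, ...}), which is the classic symptom of keeping the empty tokens produced when splitting on punctuation. I don't have the example.nim source at hand, but the logic at fault can be illustrated with a short Python analogue; the function name and regex below are mine, not from the exercise, and this simplified version also splits contractions, which the full exercise may want to keep intact:

```python
import re
from collections import Counter

def count_words(sentence: str) -> dict[str, int]:
    # Lowercase, then split on any run of non-alphanumeric characters.
    # re.split can yield empty strings at the edges of the input, so we
    # filter them out -- keeping them is exactly what produces the
    # spurious "" key seen in the failing test output above.
    tokens = re.split(r"[^a-z0-9]+", sentence.lower())
    return Counter(t for t in tokens if t)
```

The same filtering step, applied after the split in the Nim solution, should make the four failing cases pass.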
The problems api (x-api) now supports having exercises collected in a subdirectory named exercises.
That is to say that instead of having a mix of bin, docs, and individual exercises, we can have bin, docs, and exercises in the root of the repository, and all the exercises collected in a subdirectory.
In other words, instead of this:
x{TRACK_ID}/
├── LICENSE
├── README.md
├── bin
│   └── fetch-configlet
├── bowling
│   ├── bowling_test.ext
│   └── example.ext
├── clock
│   ├── clock_test.ext
│   └── example.ext
├── config.json
├── docs
│   ├── ABOUT.md
│   └── img
... etc
we can have something like this:
x{TRACK_ID}/
├── LICENSE
├── README.md
├── bin
│   └── fetch-configlet
├── config.json
├── docs
│   ├── ABOUT.md
│   └── img
├── exercises
│   ├── bowling
│   │   ├── bowling_test.ext
│   │   └── example.ext
│   └── clock
│       ├── clock_test.ext
│       └── example.ext
... etc
This has already been deployed to production, so it's safe to make this change whenever you have time.
With version 0.10.2 the Nimrod language was renamed to Nim.
Can this repo also be renamed to xnim, @kytrinyx? It seems like only the owner can change the repo settings, and that's you…
This issue is part of the migration to v3. You can read full details about the various changes here.
In Exercism v3, tracks can be annotated with tags. This allows searching for tracks with a certain tag combination, making it easy for students to find an interesting track to join.
Tags are specified in the top-level "tags" field in the track's config.json file and are defined as an array of strings, as specified in the spec.
The "tags" field in the config.json file should be updated to contain the tags that are relevant to this track. The list of tags that can be used is listed in the spec.
{
"tags": [
"runtime/jvm",
"platform/windows",
"platform/linux",
"paradigm/declarative",
"paradigm/functional",
"paradigm/object_oriented"
]
}
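A quick sanity check for the format shown above can be sketched in a few lines of Python. The category names in the set below are assumptions inferred from the example tags, not the authoritative list from the spec, so verify them against the spec before use:

```python
# Hypothetical tag categories, inferred from the example above -- the
# authoritative list lives in the spec.
KNOWN_CATEGORIES = {"paradigm", "typing", "execution_mode", "platform", "runtime", "used_for"}

def invalid_tags(tags: list[str]) -> list[str]:
    """Return the tags that don't look like '<known-category>/<value>'."""
    bad = []
    for tag in tags:
        category, _, value = tag.partition("/")
        if not value or category not in KNOWN_CATEGORIES:
            bad.append(tag)
    return bad
```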
For the past three years, the ordering of exercises has been done based on gut feelings and wild guesses. As a result, the progression of the exercises has been somewhat haphazard.
In the past few months maintainers of several tracks have invested a great deal of time in analyzing what concepts various exercises require, and then reordering the tracks as a result of that analysis.
It would be useful to bake this data into the track configuration so that we can adjust it over time as we learn more about each exercise.
To this end, we've decided to add a new key exercises in the config.json file, and deprecate the problems key.
See exercism/discussions#60 for details about this decision.
Note that we will not be removing the problems key at this time, as this would break the website and a number of tools.
The process for deprecating the old problems array will be:
In the new format, each exercise is a JSON object with three properties:
The difficulty rating can be a very rough estimate.
The topics array can be empty if this analysis has not yet been done.
Example:
"exercises": [
{
"slug": "hello-world",
"difficulty": 1,
"topics": [
"control-flow (if-statements)",
"optional values",
"text formatting"
]
},
{
"difficulty": 3,
"slug": "anagram",
"topics": [
"strings",
"filtering"
]
},
{
"difficulty": 10,
"slug": "forth",
"topics": [
"parsing",
"transforming",
"stacks"
]
}
]
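A small validator for the new array can catch malformed entries before configlet does. This is a sketch against the three properties shown in the example above; the assumption that difficulty is an integer from 1 to 10 is mine, drawn from the example values and the difficulty tree elsewhere in this thread:

```python
# Hedged sanity check for the new "exercises" array. Field names follow
# the example above; the 1-10 difficulty range is an assumption.
def check_exercises(config: dict) -> list[str]:
    errors = []
    for i, ex in enumerate(config.get("exercises", [])):
        if not isinstance(ex.get("slug"), str):
            errors.append(f"entry {i}: missing or non-string 'slug'")
        if not isinstance(ex.get("difficulty"), int) or not 1 <= ex["difficulty"] <= 10:
            errors.append(f"entry {i}: 'difficulty' should be an integer from 1 to 10")
        if not isinstance(ex.get("topics"), list):
            errors.append(f"entry {i}: 'topics' should be an array (it may be empty)")
    return errors
```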
It may be worth making the change in several passes:
This issue is part of the migration to v3. You can read full details about the various changes here.
In Exercism v3, we're introducing a new (optional) tool: the representer. The goal of the representer is to take a solution and return a representation: an extraction of a solution to its essence, with normalized names, comments, spacing, etc., that still uniquely identifies the approach taken. Two different ways of solving the same exercise must not have the same representation.
Each representer is track-specific. When a new solution is submitted, we run the track's representer, which outputs two JSON files that describe the representation.
Once we have a normalized representation for a solution, a team of vetted mentors will look at the solution and comment on it (if needed). These comments will then automatically be submitted to each new solution with the same representation. A notification will be sent for old solutions with a matching representation.
Each track should build a representer according to the spec. For tracks building a representer from scratch, we have a starting guide.
The representer is an optional tool though, which means that if a track does not have a representer, it will still function normally.
In Exercism v3, we are making increased use of our v2 analyzers. Analyzers automatically assess students' submissions and provide mentor-style commentary. They can be used to catch common mistakes and/or do complex solution analysis that can't easily be done directly in a test suite.
Each analyzer is track-specific. When a new solution is submitted, we run the track's analyzer, which outputs a JSON file that contains the analysis results.
In v2, analyzer comments were given to a mentor to pass to a student. In v3, the analyzers will normally output directly to students, although we have added an extra key to output suggestions to mentors. If your track already has an analyzer, the only requisite change is updating the outputted copy to be student-facing.
Each track should build an analyzer according to the spec. For tracks building an analyzer from scratch, we have a starting guide.
The analyzer is an optional tool though, which means that if a track does not have an analyzer, it will still function normally.
Build a representer for your track according to the spec. Check this page to help you get started with building a representer.
Note that the simplest representer is one that merely returns the solution's source code.
It can be very useful to check how other tracks have implemented their representer.
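Going one step past the "merely return the source code" baseline, a normalizing representer strips comments and renames identifiers so that solutions differing only in naming map to the same representation. The sketch below is my own illustration, not the spec's implementation; the keyword set is a tiny illustrative subset of Nim's, and a real representer would use a proper parser rather than regexes:

```python
import re

# Illustrative subset of Nim keywords/builtins that must NOT be renamed.
# A real representer would use a full parser and keyword list.
KEYWORDS = frozenset({"proc", "func", "var", "let", "const", "return", "result", "int", "string"})

def represent(source: str) -> str:
    # Drop Nim-style '#' line comments.
    source = re.sub(r"#[^\n]*", "", source)
    names: dict[str, str] = {}

    def normalize(match: re.Match) -> str:
        word = match.group(0)
        if word in KEYWORDS:
            return word
        # First-seen order gives stable placeholder numbering.
        return names.setdefault(word, f"PLACEHOLDER_{len(names)}")

    source = re.sub(r"[A-Za-z_]\w*", normalize, source)
    # Collapse all whitespace so formatting differences vanish.
    return " ".join(source.split())
```

With this, two solutions that differ only in an identifier name and a comment produce identical representations, which is the property the mentor-comment reuse described above depends on.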
Build an analyzer for your track according to the spec. Check this page to help you get started with building an analyzer.
It can be very useful to check how other tracks have implemented their analyzer.
If you want to build both, we recommend starting by building the representer for the following reasons:
TL;DR: At the end of Jan 2021, all tracks will enter v3 staging mode. Updates will no longer sync with the current live website, but instead sync with the staging website. The Nim section of the v3 repo will be extracted and PR'd into this track (if appropriate). Further issues and information will follow over the coming weeks to prepare Nim for the launch of v3.
Over the last 12 months, we've all been hard at work developing Exercism v3. Up until this point, all v3 tracks have been under development in a single repository - the v3 repository. As we get close to launch, it is time for us to explode that monorepo back into the normal track repos. Therefore, at the end of this month (January 2021), we will copy the v3 tracks contents from the v3 repository back to the corresponding track repositories.
As v3 tracks are structured differently than v2 tracks, the current (v2) website cannot work with v3 tracks. To prevent the v2 website from breaking, we'll disable syncing between track repositories and the website. This will effectively put v2 in maintenance mode, where any changes in the track repos won't show up on the website. This will then allow tracks to work on preparing for the Exercism v3 launch.
Where possible, we will script the changes needed to prepare tracks for v3. For any manual changes that need to be happening, we will create issues on the corresponding track repositories. We will be providing lots of extra information about this in the coming weeks.
We're really excited to enter the next phase of building Exercism v3, and to finally get it launched!
This issue is part of the migration to v3. You can read full details about the various changes here.
In Exercism v3, one of the biggest changes is that we'll automatically check if a submitted solution passes all the tests.
We'll check this via a new, track-specific tool: the Test Runner. Each test runner is track-specific. When a new solution is submitted, we run the track's test runner, which outputs a JSON file that describes the test results.
The test runner must be able to run the tests suites of both Concept Exercises and Practice Exercises. Depending on the test runner implementation, this could mean having to update the Practice Exercises to the format expected by the test runner.
Build a test runner for your track according to the spec.
If you are building a test runner from scratch, we have a starting guide and a generic test runner that can be used as the base for the new test runner.
If a test runner has already been built for this track, please check if it works on both Concept Exercises and Practice Exercises.
It can be very useful to check how other tracks have implemented their test runner.
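The JSON file the test runner emits is just a structured summary of per-test outcomes. The sketch below shows one plausible shape; the field names ("version", "status", "tests", "name", "message") follow my reading of the test-runner results format and should be verified against the spec before relying on them:

```python
# Hedged sketch: build a test-runner results document from a list of
# (test name, passed?, failure message) outcomes. Field names are
# assumptions to be checked against the spec.
def make_results(outcomes: list[tuple[str, bool, str]]) -> dict:
    tests = [
        {"name": name, "status": "pass" if ok else "fail", "message": message or None}
        for name, ok, message in outcomes
    ]
    overall = "pass" if all(t["status"] == "pass" for t in tests) else "fail"
    return {"version": 2, "status": overall, "tests": tests}
```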
For simplicity, new exercises are being implemented as bonus exercises, meaning that unlocked_by is set to null in config.json.
Therefore after the new exercises are merged, we should probably consider:
- Adding more core exercises (to include some of the current bonus exercises, or some of the new exercises).
- Moving some bonus exercises into side exercises.
- Reordering config.json to follow this proposed ordering, which may become part of configlet in the future. This should help reduce diff noise when reordering exercises later.
- Making topics more useful and specific to Nim.
I don't currently have strong opinions on the track structure, but other tracks have put significant thought into it. We should consider (or steal) their ideas, at least as a starting point. I'll write some suggestions later. Let me know if you have any thoughts.
Some recent work by other tracks:
Regarding difficulty: most of our currently implemented exercises are relatively easy. Some tracks use a restricted set of ratings (such as: 1, 4, 7 and 10). If we do this, I think nearly all of our current exercises should be difficulty 1 so that we can have a clear difference later between easy and medium difficulty.
It's also easier to implement good solutions in Nim than in many languages.
configlet tree config.json --with-difficulty
core
----
├─ hello-world [1]
│  ├─ pangram [1]
│  └─ hamming [1]
│
├─ two-fer [1]
│  ├─ isogram [1]
│  └─ acronym [1]
│
├─ leap [1]
│  ├─ difference-of-squares [1]
│  └─ triangle [1]
│
├─ bob [1]
│  ├─ word-count [3]
│  ├─ anagram [1]
│  └─ bracket-push [1]
│
├─ allergies [1]
│
├─ sum-of-multiples [1]
│  ├─ armstrong-numbers [1]
│  ├─ grains [2]
│  ├─ collatz-conjecture [1]
│  └─ scrabble-score [1]
│
├─ grade-school [1]
│  └─ atbash-cipher [1]
bonus
-----
reverse-string [1]
rna-transcription [1]
gigasecond [1]
run-length-encoding [1]
roman-numerals [1]
space-age [1]
nth-prime [1]
queen-attack [1]
all-your-base [1]
nucleotide-count [1]
raindrops [1]
react [8]
darts [1]
secret-handshake [1]
This issue is part of the migration to v3. You can read full details about the various changes here.
To get your track ready for Exercism v3, the following needs to be done:
This issue may be automatically added to over time. While track maintainers should check off completed items, please do not add/edit items in the list.
TL;DR: the problem specification for the Bob exercise has been updated. Consider updating the test suite for Bob to match. If you decide not to update the exercise, consider overriding description.md.
Details
The problem description for the Bob exercise lists four conditions:
There's an ambiguity, however, for shouted questions: should they receive the "asking" response or the "shouting" response?
In exercism/problem-specifications#1025 this ambiguity was resolved by adding an additional rule for shouted questions.
If this track uses exercise generators to update test suites based on the canonical-data.json file from problem-specifications, then now would be a good time to regenerate 'bob'. If not, then it will require a manual update to the test case with input "WHAT THE HELL WERE YOU THINKING?".
See the most recent canonical-data.json file for the exact changes.
Remember to regenerate the exercise README after updating the test suite:
configlet generate . --only=bob --spec-path=<path to your local copy of the problem-specifications repository>
You can download the most recent configlet at https://github.com/exercism/configlet/releases/latest if you don't have it.
If, as track maintainers, you decide that you don't want to change the exercise, then please consider copying problem-specifications/exercises/bob/description.md into this track, putting it in exercises/bob/.meta/description.md and updating the description to match the current implementation. This will let us run the configlet README generation without having to worry about the bob README drifting from the implementation.
This issue is part of the migration to v3. You can read full details about the various changes here.
Concept Exercises can have a status specified in their "status" field in their config.json entry, as specified in the spec. This status can be one of four values:
- "wip": A work-in-progress exercise not ready for public consumption. Exercises with this tag will not be shown to students on the UI or be used for unlocking logic. They may appear for maintainers.
- "beta": This signifies active exercises that are new and which we would like feedback on. We show a beta label on the site for these exercises, with a Call To Action of "Please give us feedback."
- "active": The normal state of active exercises.
- "deprecated": Exercises that are no longer shown to students who have not started them (not usable at this stage).
The "status" key can also be omitted, which is the equivalent of setting it to "active".
The "status" field of Concept Exercises in the config.json file should be updated to reflect the status of the Concept Exercises. See the spec for more information.
If your track doesn't have any Concept Exercises, this issue can be closed.
{
"exercises": {
"concept": [
{
"uuid": "93fbc7cf-3a7e-4450-ad22-e30129c36bb9",
"slug": "cars-assemble",
"name": "Cars, Assemble!",
"concepts": ["if-statements", "numbers"],
"prerequisites": ["basics"]
},
...
]
}
}
{
"exercises": {
"concept": [
{
"uuid": "93fbc7cf-3a7e-4450-ad22-e30129c36bb9",
"slug": "cars-assemble",
"name": "Cars, Assemble!",
"concepts": ["if-statements", "numbers"],
"prerequisites": ["basics"],
"status": "active"
},
...
]
}
}
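The "omitted means active" rule from the description above is easy to get wrong when tooling reads the config, so a small helper can centralize it. This is a sketch based only on the four values listed in this issue:

```python
# The four statuses listed above; omitting "status" is equivalent to "active".
ALLOWED_STATUSES = {"wip", "beta", "active", "deprecated"}

def effective_status(exercise: dict) -> str:
    """Return the exercise's status, applying the omitted-means-active default."""
    status = exercise.get("status", "active")
    if status not in ALLOWED_STATUSES:
        raise ValueError(f"unknown status: {status!r}")
    return status
```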
The current logo for Nim has the crown a touch too big:
https://assets.exercism.io/tracks/nim-bordered-green.png
It would be nice if someone could adjust it to fit inside the borders a bit better (all the other language logos are significantly smaller).
In line with our new org-wide policy, the master branch of this repo will be renamed to main. All open PRs will be automatically repointed.
GitHub will show you a notification about this when you look at this repo after renaming:
In case it doesn't, this is the command it suggests:
git branch -m master main
git fetch origin
git branch -u origin/main main
You may like to update the primary branch on your forks too, which you can do under Settings->Branches and clicking the pencil icon on the right-hand-side under Default Branch:
We will post a comment below when this is done. We expect it to happen within the next 12 hours.
Nimrod places a lot of emphasis on efficiency - so some tool to measure the efficiency of a solution would be nice.
Idea: create a macro-based tool that turns the unittest.test blocks into benchmarks.
Some exercise README templates contain links to pages which no longer exist in v2 Exercism.
For example, C++'s README template had a link to /languages/cpp for instructions on running tests. The correct URLs to use can be found in the 'Still stuck?' sidebar of exercise pages on the live site. You'll need to join the track and go to the first exercise to see them.
Please update any broken links in the 'config/exercise_readme.go.tmpl' file, and run 'configlet generate .' to generate new exercise READMEs with the fixes.
Instructions for generating READMEs with configlet can be found at:
https://github.com/exercism/docs/blob/master/language-tracks/exercises/anatomy/readmes.md#generating-a-readme
Instructions for installing configlet can be found at:
https://github.com/exercism/docs/blob/bc29a1884da6c401de6f3f211d03aabe53894318/language-tracks/launch/first-exercise.md#the-configlet-tool
Tracking exercism/exercism#4102
This issue is part of the migration to v3. You can read full details about the various changes here.
In Exercism v3, students can now choose to work on exercises directly from their browser, instead of having to download exercises to their local machine. The track-specific settings for the in-browser editor are defined in the top-level "online_editor" field in the track's config.json file. This field is defined as an object with two fields:
- "indent_style": the indent style, either "space" or "tab".
- "indent_size": the indent size, which is an integer (e.g. 4).
You can find a full description of these fields in the spec.
The "online_editor" field should be updated to correspond to the track's best practices regarding indentation.
"online_editor": {
"indent_style": "space",
"indent_size": 4
}
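Checking the object against the two documented fields takes one line; this sketch validates exactly what the issue describes and nothing more:

```python
# Validate the two documented "online_editor" fields; anything beyond
# them is out of scope for this sketch.
def valid_online_editor(settings: dict) -> bool:
    return (
        settings.get("indent_style") in ("space", "tab")
        and isinstance(settings.get("indent_size"), int)
    )
```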
The README for this problem needs to instruct the user to use the critbits library and define the TWordCount custom type using:
import critbits
type TWordCount* = CritBitTree[int]
Users can add more imports as they need, but the above needs to be the bare minimum in their submission.