
Exercism Erlang Track

Exercism exercises in Erlang

Contributing guide

For general information about how exercism works, please see the contributing guide.

If you create “claiming” PRs with obviously unfinished code, please provide an estimate in the PR description of when you will continue working on the PR or when you expect it to be finished.

Setting up your system for local development on the track

Please make sure you have installed erlang/OTP and rebar3 as described on Installing Erlang or docs/INSTALLATION.md in this repository. Also run bin/fetch-configlet to download the JSON-checker.

Please make sure you use one of the releases of erlang/OTP as specified in .github/workflows/main.yml (see the jobs.test_erlang.strategy.matrix.otp key), as these are the ones officially tested and supported by this track.

Feel free to use any feature available in the oldest version of that range, while avoiding anything that has been removed or deprecated in the newest one.

Implementing an exercise

When there is a mention of "slug-name", it refers to the slug as used in exercism URLs. In contrast, "erlangified_slug_name" is the slug-name with all dashes (-) replaced by underscores (_) to make the name compatible with Erlang syntax; for example, the slug rna-transcription becomes rna_transcription.

  1. Create a folder exercises/<slug-name>.
  2. Set up folder structure (src, and test).
  3. Copy rebar.config and src/*.app.src from another exercise:
     1. Leave rebar.config unchanged.
     2. Rename src/*.app.src to src/<erlangified_slug_name>.app.src.
     3. On the first line of this file, change the old erlangified_slug_name to the new one.
     4. On the second line, change the old slug-name to the new one.
  4. In the src-folder, create two files: example.erl and <erlangified_slug_name>.erl. The first is for your example solution, the second is the 'empty' solution to give students a place to start. You might take the files from another exercise as your starting point. Ensure their module names match their (new) file names.
  5. In the test-folder, create one file: <erlangified_slug_name>_tests.erl and insert the boilerplate code shown below. This file is for the test cases.
  6. Implement/correct your solution in src/example.erl.
  7. Add tests to <erlangified_slug_name>_tests.erl.
  8. Run tests using rebar3 eunit.

Repeat steps 6, 7, and 8 until all tests are implemented and your example solution passes them all.

If there is an exercises/<slug-name>/canonical-data.json in problem-specifications, make sure to implement your tests and examples in a way that integrates the canonical data and does not violate it.

You may add further tests, as long as they do not violate the canonical data and either add value to the exercise or are necessary for Erlang-specific behaviour.

Also, please make sure to add a HINTS.md with some hints for the students if the exercise is tricky or the approach might not be obvious.

-module(<erlangified_slug_name>_tests).

-include_lib("erl_exercism/include/exercism.hrl").
-include_lib("eunit/include/eunit.hrl").

You will need to add an entry for the exercise in the track's config.json file, which you will find in the repository's root directory (two levels up). For details see Exercise configuration.

Before pushing

Please make sure that all tests pass by running _test/check-exercises.escript. On Windows you might need to call escript _test/check-exercises.escript. Also, a run of bin/configlet lint should pass without error messages.

Both programs run on CI, and a merge is unlikely if tests fail.


Issues

Delete configlet binaries from history?

I made a really stupid choice a while back to commit the cross-compiled
binaries for configlet (the tool that sanity-checks the config.json
against the implemented problems) into the repository itself.

Those binaries are HUGE, and every time they change the entire 4 or 5 megs get
recommitted. This means that cloning the repository takes a ridiculously long
time.

I've added a script that can be run on travis to grab the latest release from
the configlet repository (bin/fetch-configlet), and travis is set up to run
this now instead of using the committed binary.

I would really like to thoroughly delete the binaries from the entire git
history, but this will break all the existing clones and forks.

The commands I would run are:

# ensure this happens on an up-to-date master
git checkout master && git fetch origin && git reset --hard origin/master

# delete from history
git filter-branch --index-filter 'git rm -r --cached --ignore-unmatch bin/configlet-*' --prune-empty

# clean up
rm -rf .git/refs/original/
git reflog expire --all
git gc --aggressive --prune

# push up the new master, force override existing master branch
git push -fu origin master

If we do this everyone who has a fork will need to make sure that their master
is reset to the new upstream master:

git checkout master
git fetch upstream master
git reset --hard upstream/master
git push -fu origin master

We can at-mention (@) all the contributors and everyone who has a fork here in this
issue if we decide to do it.

The important question though, is: Is it worth doing?

Do you have any other suggestions for how to make sure this doesn't confuse people or break their repositories if we do proceed with this change?

Verify contents and format of track documentation

Each language track has documentation in the docs/ directory, which gets included on the site
on each track-specific set of pages under /languages.

We've added some general guidelines about how we'd like the track to be documented in exercism/exercism#3315
which can be found at https://github.com/exercism/exercism.io/blob/master/docs/writing-track-documentation.md

Please take a moment to look through the documentation about documentation, and make sure that
the track is following these guidelines. Pay particularly close attention to how to use images
in the markdown files.

Lastly, if you find that the guidelines are confusing or missing important details, then a pull request
would be greatly appreciated.

`roman-numerals` tests have expected and actual swapped

If a test fails while running EUnit on the roman-numerals exercise, the error is reported with the expected value and the actual value swapped, which can be confusing when you are dealing with Roman numerals, where it isn't always easy to spot the correct one.

For example, if I change the test result for 48 to "OOPS" I get this:

3> eunit:test(roman_numerals_tests).
roman_numerals_tests: convert_48_test...*failed*
in function roman_numerals_tests:'-expect_roman/2-fun-0-'/2 (/meta/p/pmr/src/ext/xerlang/exercises/roman-numerals/_build/test/lib/roman_numerals/test/roman_numerals_tests.erl, line 9)
**error:{assertEqual,[{module,roman_numerals_tests},
              {line,9},
              {expression,"Expected"},
              {expected,"OOPS"},
              {value,"XLVIII"}]}
  output:<<"">>

=======================================================
  Failed: 1.  Skipped: 0.  Passed: 18.
error

(it says expected is OOPS).
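
The likely cause is the argument order of EUnit's ?assertEqual macro, which takes the expected value first. A minimal sketch of the fix, assuming the expect_roman/2 helper shown in the stack trace wraps a roman_numerals:convert/1 function:

%% ?assertEqual(Expected, Actual) -- the expected value goes first.
expect_roman(Number, Expected) ->
    ?assertEqual(Expected, roman_numerals:convert(Number)).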

rna-transcription: don't transcribe both ways

I can't remember the history of this, but we ended up with a weird non-biological thing in the RNA transcription exercise, where some test suites also have tests for transcribing from RNA back to DNA. This makes no sense.

If this track does have tests for the reverse transcription, we should remove them, and also simplify the reference solution to match.

If this track doesn't have any tests for RNA->DNA transcription, then this issue can be closed.

See exercism/problem-specifications#148

[bank-account] possible race condition

There was a failed test in bank-account on Travis during #133 which didn't fail again after a manual restart.

It's a pity that I failed to remember which test it was exactly.

It does need further inspection!

Launch Erlang track?

We have enough exercises to launch:

accumulate
allergies
anagram
beer-song
bob
etl
luhn
nucleotide-count
phone-number
point-mutations
rna-transcription
trinary
word-count

In order to get this launched, we need to:

  1. Figure out roughly how to order these by increasing difficulty
  2. Stick that ordered list into config.json under "problems"
  3. Flip the "active" property to true in config.json
  4. Make sure we have a couple people around who know enough erlang to help nitpick in the early days

Verify that nothing links to help.exercism.io

The old help site was deprecated in December 2015. We now have content that is displayed on the main exercism.io website, under each individual language on http://exercism.io/languages.

The content itself is maintained along with the language track itself, under the docs/ directory.

We decided on this approach since the maintainers of each individual language track are in the best position to review documentation about the language itself or the language track on Exercism.

Please verify that nothing in docs/ refers to the help.exercism.io site. It should instead point to http://exercism.io/languages/:track_id (at the moment the various tabs are not linkable, unfortunately, we may need to reorganize the pages in order to fix that).

Also, some language tracks reference help.exercism.io in the SETUP.md file, which gets included into the README of every single exercise in the track.

We may also have referenced non-track-specific content that lived on help.exercism.io. This content has probably been migrated to the Contributing Guide of the x-common repository. If it has not been migrated, it would be a great help if you opened an issue in x-common so that we can remedy the situation. If possible, please link to the old article in the deprecated help repository.

If nothing in this repository references help.exercism.io, then this can safely be closed.

How to set up a local dev environment

See issue exercism/exercism#2092 for an overview of Operation Welcome Contributors.


Provide instructions on how to contribute patches to the exercism test suites
and examples: dependencies, running the tests, what gets tested on Travis-CI,
etc.

The contributing document
in the x-api repository describes how all the language tracks are put
together, as well as details about the common metadata, and high-level
information about contributing to existing problems, or adding new problems.

The README here should be language-specific, and can point to the contributing
guide for more context.

From the OpenHatch guide:

Here are common elements of setting up a development environment you’ll want your guide to address:

Preparing their computer
  • Make sure they’re familiar with their operating system’s tools, such as the terminal/command prompt. You can do this by linking to a tutorial and asking contributors to make sure they understand it. There are usually great tutorials already out there - OpenHatch’s command line tutorial can be found here.
  • If contributors need to set up a virtual environment, access a virtual machine, or download a specific development kit, give them instructions on how to do so.
  • List any dependencies needed to run your project, and how to install them. If there are good installation guides for those dependencies, link to them.

Downloading the source
  • Give detailed instructions on how to download the source of the project, including common missteps or obstacles.

How to view/test changes
  • Give instructions on how to view and test the changes they’ve made. This may vary depending on what they’ve changed, but do your best to cover common changes. This can be as simple as viewing an html document in a browser, but may be more complicated.

Installation will often differ depending on the operating system of the contributor. You will probably need to create separate instructions in various parts of your guide for Windows, Mac and Linux users. If you only want to support development on a single operating system, make sure that is clear to users, ideally in the top-level documentation.

Pass explicit list of multiples in "Sum of Multiples" exercise rather than defaulting to 3 and 5

Hello, as part of exercism/problem-specifications#198 we'd like to make the sum of multiples exercise less confusing. Currently, the README specifies that if no multiples are given it should default to 3 and 5.

We'd like to remove this default, so that a list of multiples will always be specified by the caller. This makes the behavior explicit, avoiding surprising behavior and simplifying the problem.

Please make sure this track's tests for the sum-of-multiples problem do not expect such a default. Any tests that want to test behavior for multiples of [3, 5] should explicitly pass [3, 5] as the list of multiples.
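
In EUnit terms, such a test would then look roughly like this; the module and function names are assumptions, not this track's actual API:

sum_to_20_test() ->
    %% Multiples of 3 or 5 below 20: 3+5+6+9+10+12+15+18 = 78.
    ?assertEqual(78, sum_of_multiples:sum([3, 5], 20)).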

After all tracks have completed this change, then exercism/problem-specifications#209 can be merged to remove the defaults from the README.

The reason we'd like this change to happen before changing the README is that it was very confusing for students to figure out the default behavior. It wasn't clear from simply looking at the tests that the default should be 3 and 5 (as seen in exercism/exercism#2654), so some had to resort to looking at the example solutions, which aren't served by exercism fetch, so they had to find them on GitHub. It was added to the README to fix this confusion, but now we'd like to be explicit so we can remove the default line from the README.

You can find the common test data at https://github.com/exercism/x-common/blob/master/sum-of-multiples.json, in case that is helpful.

Make Hamming conform to official definition

From issue exercism/exercism#1867

Wikipedia says the Hamming distance is not defined for strings of different length.

I am not saying the problems cannot be different, but for such a well-defined concept it would make sense to stick to one definition, especially when the READMEs provide so little information about what is expected from the implementation.

Let's clean this up so that we're using the official definition.
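
For reference, a minimal sketch of the official definition in Erlang, with the module and function names assumed rather than taken from this track:

-module(hamming).
-export([distance/2]).

%% The Hamming distance is only defined for strands of equal length.
distance(StrandA, StrandB) when length(StrandA) =/= length(StrandB) ->
    {error, unequal_length};
distance(StrandA, StrandB) ->
    %% Count the positions at which the two strands differ.
    length([1 || {A, B} <- lists:zip(StrandA, StrandB), A =/= B]).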

Implement Erlang exercises

See exercism/exercism#874 and erlsyd/exercism.io#1


  • bob
  • word-count
  • anagram
  • beer-song
  • nucleotide-count
  • rna-transcription
  • point-mutations
  • phone-number
  • etl
  • grade-school
  • leap
  • meetup
  • space-age
  • grains
  • gigasecond
  • triangle
  • scrabble-score
  • roman-numerals
  • binary
  • prime-factors
  • raindrops
  • allergies
  • strain
  • atbash-cipher
  • accumulate
  • nth-prime
  • palindrome-products
  • sum-of-multiples
  • bank-account
  • minesweeper
  • parallel-letter-frequency
  • zipper
  • sieve
  • pythagorean-triplet
  • difference-of-squares
  • largest-series-product
  • queen-attack
  • saddle-points
  • ocr-numbers
  • pascals-triangle
  • say
  • crypto-square
  • trinary
  • simple-cipher
  • octal
  • luhn
  • pig-latin
  • series
  • secret-handshake
  • linked-list
  • wordy
  • hexadecimal
  • kindergarten-garden
  • binary-search-tree
  • matrix
  • robot

Build and test Erlang exercises with Rebar3

Now that Rebar 3 is the official build tool of Erlang, should we use it here?

It would be a little friendlier as the command would be

rebar3 eunit

instead of

erl -make
erl -noshell -eval "eunit:test(accumulate, [verbose])" -s init stop

This would require changing the exercises to use the Erlang OTP project directory structure, which I think would be a good thing.
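
For what it's worth, the per-exercise rebar.config this needs is tiny; a minimal sketch (these exact options are an assumption, not a finished config):

%% rebar.config (Erlang terms) -- just enough for `rebar3 eunit`
{erl_opts, [debug_info]}.
{deps, []}.
{eunit_opts, [verbose]}.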

Move exercises to subdirectory

The problems api (x-api) now supports having exercises collected in a subdirectory
named exercises.

That is to say that instead of having a mix of bin, docs, and individual exercises,
we can have bin, docs, and exercises in the root of the repository, and all
the exercises collected in a subdirectory.

In other words, instead of this:

x{TRACK_ID}/
├── LICENSE
├── README.md
├── bin
│   └── fetch-configlet
├── bowling
│   ├── bowling_test.ext
│   └── example.ext
├── clock
│   ├── clock_test.ext
│   └── example.ext
├── config.json
├── docs
│   ├── ABOUT.md
│   └── img
... etc

we can have something like this:

x{TRACK_ID}/
├── LICENSE
├── README.md
├── bin
│   └── fetch-configlet
├── config.json
├── docs
│   ├── ABOUT.md
│   └── img
├── exercises
│   ├── bowling
│   │   ├── bowling_test.ext
│   │   └── example.ext
│   └── clock
│       ├── clock_test.ext
│       └── example.ext
... etc

This has already been deployed to production, so it's safe to make this change whenever you have time.

First timer impressions

I just got started with Erlang via the FutureLearn MOOC and pulled up some exercism problems to solve.

I found some things confusing. I might be wrong in some cases here, but just thought of putting this here so we can make the process easier for other newcomers.

  • rebar?: No explanation of what rebar is - neither on the exercism site nor on rebar's homepage.
  • app.src: Some other tracks have boilerplate code given (at least for the first few exercises). My initial impression on seeing this file in /src was that it was some boilerplate code, but I later found out it was some kind of app description.
  • version: The test failure for version was cryptic:
======================== EUnit ========================
file "hello_world.app"
  application 'hello_world'
    module 'hello_world'
      module 'hello_world_tests'
        hello_world_tests: version_test...*failed*
in function hello_world:test_version/0
  called as test_version()
in call from hello_world_tests:'-version_test/0-fun-0-'/0 (/Users/tejas/exercism_submissions/erlang/hello-world/_build/test/lib/hello_world/include/exercism.hrl, line 11)
in call from hello_world_tests:version_test/0
**error:undef
  output:<<"">>

        hello_world_tests: say_hi_test...ok
        [done in 0.006 s]
      [done in 0.006 s]
    [done in 0.006 s]
  [done in 0.006 s]
=======================================================

Do we need the test for the version number? Can we make the output clearer? I was not aware of test_version/0 until I saw include/exercism.hrl.

  • make it easy: Is rebar overkill for single-module exercises, as opposed to using EUnit directly?

Introducing a version test case

In an exercise (http://exercism.io/submissions/05fb1f5acd00405aaa36844167c8d603) we assumed that an older test was used, and two of us thought it might be helpful for things like this to have something that tells which version of the test suite was used.

In the xruby track, there is one test in every suite which tests for the value of a constant of the module/class under test.

An example could be the following:

-module(example_tests).

%% eunit provides the ?assertMatch macro used below.
-include_lib("eunit/include/eunit.hrl").

-define(TEST_VERSION, 1).

version_test() -> ?assertMatch(?TEST_VERSION, example:version()).

Setup instructions for an erlang environment, ready to follow the exercises

In #113 @paulnice revealed that the instructions to set up an Erlang environment lack some necessary detail about requirements.

The setup instructions should be reviewed and verified to cover installation of a recent and complete Erlang on a freshly installed base system. At least the following base systems should be considered:

  • Ubuntu
  • latest Debian
  • latest CentOS
  • latest Fedora
  • Windows 10 using the installer

etl tests seem dubious

transform_multiple_keys_from_one_value_test violates the goals stated in the README, and generally the tests are confusing because they invert letters and scores (or abandon letters and scores entirely).

The problem with transform_multiple_keys_from_one_value_test: it's nonsensical to have the same Scrabble letter with two different scores (setting aside the language discussion from the end of the README; if you're going to collapse multiple languages into the same list of letters/scores, you need some way to differentiate them).
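
For context, the transform described in the README goes one way only, from score-to-letters to letter-to-score. A hedged sketch of a non-inverted test, where etl:transform/1 and the data shapes are assumptions rather than this track's actual API:

transform_one_score_test() ->
    Input = [{1, ["A", "E"]}],
    Expected = [{"a", 1}, {"e", 1}],
    ?assertEqual(Expected, etl:transform(Input)).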

triangle: incorrect test in some tracks

Please check if there's a test that states that a triangle with sides 2, 4, 2 is invalid. The triangle inequality states that for any triangle, the sum of the lengths of any two sides must be greater than or equal to the length of the remaining side. If this doesn't affect this track, go ahead and just close the issue.

Update config.json to match new specification

For the past three years, the ordering of exercises has been done based on gut feelings and wild guesses. As a result, the progression of the exercises has been somewhat haphazard.

In the past few months maintainers of several tracks have invested a great deal of time in analyzing what concepts various exercises require, and then reordering the tracks as a result of that analysis.

It would be useful to bake this data into the track configuration so that we can adjust it over time as we learn more about each exercise.

To this end, we've decided to add a new key exercises in the config.json file, and deprecate the problems key.

See exercism/discussions#60 for details about this decision.

Note that we will not be removing the problems key at this time, as this would break the website and a number of tools.

The process for deprecating the old problems array will be:

  • Update all of the track configs to contain the new exercises key, with whatever data we have.
  • Simultaneously change the website and tools to support both formats.
  • Once all of the tracks have added the exercises key, remove support for the old key in the site and tools.
  • Remove the old key from all of the track configs.

In the new format, each exercise is a JSON object with three properties:

  • slug: the identifier of the exercise
  • difficulty: a number from 1 to 10 where 1 is the easiest and 10 is the most difficult
  • topics: an array of strings describing topics relevant to the exercise. We maintain
    a list of common topics at https://github.com/exercism/x-common/blob/master/TOPICS.txt. Do not feel like you need to restrict yourself to this list;
    it's only there so that we don't end up with 20 variations on the same topic. Each
    language is different, and there will likely be topics specific to each language that will
    not make it onto the list.

The difficulty rating can be a very rough estimate.

The topics array can be empty if this analysis has not yet been done.

Example:

"exercises": [
  {
    "slug": "hello-world" ,
    "difficulty": 1,
    "topics": [
        "control-flow (if-statements)",
        "optional values",
        "text formatting"
    ]
  },
  {
    "difficulty": 3,
    "slug": "anagram",
    "topics": [
        "strings",
        "filtering"
    ]
  },
  {
    "difficulty": 10,
    "slug": "forth",
    "topics": [
        "parsing",
        "transforming",
        "stacks"
    ]
  }
]

It may be worth making the change in several passes:

  1. Add the exercises key with the array of objects, where difficulty is 1 and topics is empty.
  2. Update the difficulty settings to reflect a more accurate guess.
  3. Add topics (perhaps one-by-one, in separate pull requests, in order to have useful discussions about each exercise).

Verify "Largest Series Product" exercise implementation

There was some confusion in this exercise due to the ambiguous use of the term consecutive in the README. This could be taken to mean contiguous, as in consecutive by position, or as in consecutive numerically. The README has been fixed (exercism/problem-specifications#200).

Please verify that the exercise is implemented in this track correctly (that it finds series of contiguous numbers, not series of numbers that follow each other consecutively).
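
As a concrete check: for the digit string "63915" and a span of 3, the contiguous series are 639, 391, and 915, so the largest product is 6*3*9 = 162. A hedged EUnit sketch, with the function name assumed rather than taken from this track:

largest_product_is_contiguous_test() ->
    ?assertEqual(162, largest_series_product:largest_product("63915", 3)).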

If it helps, the canonical inputs/outputs for the exercise can be found here:
https://github.com/exercism/x-common/blob/master/largest-series-product.json

If everything is fine, go ahead and just close this issue. If there's something to be done, then please describe the steps needed in order to close the issue.

Add stub solutions for all exercises

There should be a stub solution for every exercise, roughly like the following:

-module($module_name).

-export([fun_a/0, fun_b/2, test_version/0]).

fun_a() -> undefined.

%% Underscore-prefixed parameters avoid "unused variable" warnings in stubs.
fun_b(_Param1, _Param2) -> undefined.

test_version() -> 1. % or whatever version is needed to pass the tests

Also we need a check in bin/journey-test that fails if it is missing.

if [[ ! -f "${exercism_exercises_dir}/erlang/${exercise}/src/${module}.erl" ]]; then exit 1; fi

That should be a sufficient test, best placed between current lines 201 (introduction of $module) and 202 (overwriting any stub that might exist with the example solution).

erl -make gives me "init terminating in do_boot"

Hello,

I'm trying to do the first task in the Erlang track. When I execute the 'erl -make' command, it gives me this error:

ps@linux:~/exercism/erlang/hello-world$ erl -make
{"init terminating in do_boot",{undef,[{make,all_or_nothing,[],[]},{init,start_em,1,[]},{init,do_boot,3,[]}]}}

Crash dump is being written to: erl_crash.dump...done
init terminating in do_boot ()
ps@linux:~/exercism/erlang/hello-world$ erl
Erlang/OTP 19 [erts-8.0] [source] [64-bit] [smp:4:4] [async-threads:10] [kernel-poll:false]

Eshell V8.0 (abort with ^G)
1>

scrabble-score: replace 'multibillionaire' with 'oxyphenbutazone'

The word multibillionaire is too long for the scrabble board. Oxyphenbutazone, on the other hand, is legal.

Please verify that there is no test for multibillionaire in the scrabble-score in this track. If the word is included in the test data, then it should be replaced with oxyphenbutazone. Remember to check the case (if the original is uppercase, then the replacement also should be).

If multibillionaire isn't used, then this issue can safely be closed.

See exercism/problem-specifications#86

binary: improve tests for invalid numbers

We should have separate tests for:

  • alphabetic characters at the beginning of a valid binary number
  • alphabetic characters at the end of a valid binary number
  • alphabetic characters in the middle of an otherwise valid binary number
  • invalid digits (e.g. 2)
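
If it helps, the requested cases could look roughly like this; to_decimal/1 and the return value 0 for invalid input are assumptions that may not match this track's actual error convention:

leading_alpha_test()  -> ?assertEqual(0, binary:to_decimal("a101")).
trailing_alpha_test() -> ?assertEqual(0, binary:to_decimal("101a")).
embedded_alpha_test() -> ?assertEqual(0, binary:to_decimal("10a1")).
invalid_digit_test()  -> ?assertEqual(0, binary:to_decimal("102")).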

If the test suite for binary has test cases that cover these edge cases, this issue can safely be closed.

See exercism/problem-specifications#95

re-evaluate the track

After the idea of using rebar3 came up in #106, I already mentioned a re-evaluation of the track in that thread.

For this re-evaluation I think we need to achieve quite a lot, and I'd also like to put the further inclusion of exercises on hold until the re-evaluation has been finished.

Currently I do see the following sub-tasks which should be part of this re-evaluation:

  • re-order exercises
    • classify exercises by difficulty
    • classify exercises by "kind" (lists, recursion, math, concurrency, etc)
    • make them suitable to run using OTP 17 through 19, each at its latest release
  • use generators for exercises when there are .jsons in x-common available.
  • use rebar3 to run tests/compile exercises
    • keep eunit as testing framework
    • perhaps add dialyzer config for optional type checking

I'd like to start with the re-evaluation of the exercises, but I am near my personal limit, as I am in the exam phase at my university right now and also started a new job two weeks ago.

So I leave this open for discussion at first, and I'd also be happy if @kytrinyx could ask for volunteers in the blog/newsletter who are eager to help and have different levels of experience with Erlang (from totally new to professional), to get some opinions about the difficulty of the exercises.

@exercism/erlang

Beer!

The test is incompatible with the README, which is very explicit about this.

Only verse(1) should use "Take it down"; any higher number should read "Take one down".
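
A hedged sketch of a test for that distinction, assuming a beer_song:verse/1 function (not necessarily this track's API):

verse_one_wording_test() ->
    Verse = beer_song:verse(1),
    %% string:find/2 returns nomatch when the substring is absent.
    ?assertNotEqual(nomatch, string:find(Verse, "Take it down")).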

atbash_cipher test suite is impossible(?) to satisfy

Unless I'm missing something, the requirements and the test are contradictory.

Specifically, you can't encode and then decode a sentence and expect the decoded sentence to match the original, since whitespace is not encoded, unless decoding involves guessing word boundaries, which is outside the scope.

I took a look at the Java tests to see how it evaluates this problem, and it provides expected encodings, not a full round trip.

Looking at the Nitpicks interface for Erlang, there are no submissions for this problem.

robot-simulator requires global state

I tried to do the robot-simulator exercise today and found it very difficult. It seems like one has to use global state for the example - functions to manipulate the robot don't have a return value, so the current state of the robot needs to be kept somewhere else. I found that very confusing.

As far as I could see, people doing this exercise have come up with 3 different solutions for this:

  • a) use a FSM with either message passing/receiving or the gen_fsm pattern
  • b) store state in an ets table
  • c) modify the tests to return a new robot each time there is a change in state.

a) is rather complicated and requires advanced knowledge of Erlang-specific patterns that I wouldn't assume most beginners have. Even if beginners manage to find out about these patterns, all these things distract from the actual exercise (which is difficult enough on its own).

b) is simpler to implement (once you know what you are looking for), but it actively discourages people from using the functional programming style you'd usually use as an Erlang beginner, and as the other exercises have shown.

c) Seems to be the most natural way to do this in Erlang - generate a robot, and each time you manipulate it, return a new robot object with a new, appropriate state and continue working with that object. Currently that'd be a hack.
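
For illustration, a sketch of option c) in plain functional style, with all names invented for this example: every operation takes a robot and returns a new one, so no global state is needed.

-record(robot, {direction = north, position = {0, 0}}).

%% Each manipulation returns a fresh robot; no process or ets table required.
advance(#robot{direction = north, position = {X, Y}} = R) -> R#robot{position = {X, Y + 1}};
advance(#robot{direction = south, position = {X, Y}} = R) -> R#robot{position = {X, Y - 1}};
advance(#robot{direction = east,  position = {X, Y}} = R) -> R#robot{position = {X + 1, Y}};
advance(#robot{direction = west,  position = {X, Y}} = R) -> R#robot{position = {X - 1, Y}}.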

Long story short, I'd suggest to change the exercise so one doesn't need to worry about global state and/or fsm patterns to solve it. Making the robot behave as expected is challenging enough on its own - worries about 'where do I store the state of my robot' should not be such a large part of the exercise.

Investigate track health and status of the track

I've used Sarah Sharp's FOSS Heartbeat project to generate stats for each of the language track repositories, as well as the x-common repository.

The Exercism heartbeat data is published here: https://exercism.github.io/heartbeat/

When looking at the data, please disregard any activity from me (kytrinyx), as I would like to get the language tracks to a point where they are entirely maintained by the community.

Please take a look at the heartbeat data for this track, and answer the following questions:

  • To what degree is the track maintained?
  • Who (if anyone) is merging pull requests?
  • Who (if anyone) is reviewing pull requests?
  • Is there someone who is not merging pull requests, but who comments on issues and pull requests, has thoughtful feedback, and is generally helpful? If so, maybe we can invite them to be a maintainer on the track.

I've made up the following scale:

  • ORPHANED - Nobody (other than me) has merged anything in the past year.
  • ENDANGERED - Somewhere between ORPHANED and AT RISK.
  • AT RISK - Two people (other than me) are actively discussing issues and reviewing and merging pull requests.
  • MAINTAINED - Three or more people (other than me) are actively discussing issues and reviewing and merging pull requests.

It would also be useful to know if there is a lot of activity on the track, or just the occasional issue or comment.

Please report the current status of the track, including your best guess on the above scale, back to the top-level issue in the discussions repository: exercism/discussions#97

Override probot/stale defaults, if necessary

Per the discussion in exercism/discussions#128 we
will be installing the probot/stale integration on the Exercism organization on
April 10th, 2017.

By default, probot will comment on issues that are older than 60 days, warning
that they are stale. If there is no movement in 7 days, the bot will close the issue.
By default, anything with the labels security or pinned will not be closed by
probot.

If you wish to override these settings, create a .github/stale.yml file as described
in https://github.com/probot/stale#usage, and make sure that it is merged
before April 10th.

If the defaults are fine for this repository, then there is nothing further to do.
You may close this issue.

Add helpful information to the SETUP.md

The contents of the SETUP.md file get included in
the README.md that gets delivered when a user runs the exercism fetch
command from their terminal.

At the very minimum, it should contain a link to the relevant
language-specific documentation on
help.exercism.io.

It would also be useful to explain in a generic way how to run the tests.
Remember that this file will be included with all the problems, so it gets
confusing if we refer to specific problems or files.

Some languages have very particular needs in terms of the solution: nested
directories, specific files, etc. If this is the case here, then it would be
useful to explain what is expected.


Thanks, @tejasbubane for suggesting that we add this documentation everywhere.
See exercism.io#2198.

clock: canonical test data has been improved

The JSON file containing canonical inputs/outputs for the Clock exercise has gotten new data.

There are two situations that the original data didn't account for:

  • Sometimes people perform computation/mutation in the display method instead of in add. This means that you might have two copies of clock that are identical, and if you add 1440 minutes to one and 2880 minutes to the other, they display the same value but are not equal.
  • Sometimes people only account for one adjustment in either direction, meaning that if you add 1,000,000 minutes, then the clock would not end up with a valid display time.
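
The first situation translates into a test shaped roughly like this, where clock:create/2, clock:add/2 and clock:to_string/1 are assumed names, not this track's actual API:

clocks_with_same_display_are_equal_test() ->
    C1 = clock:add(clock:create(10, 0), 1440),
    C2 = clock:add(clock:create(10, 0), 2880),
    ?assertEqual(clock:to_string(C1), clock:to_string(C2)),
    %% The clocks must also be equal as values, not just display the same.
    ?assertEqual(C1, C2).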

If this track has a generator for the Clock exercise, go ahead and regenerate it now. If it doesn't, then please verify the implementation of the test suite against the new data. If any cases are missing, they should be added.

See exercism/problem-specifications#166

Name nucleobases, not nucleosides

The primary nucleobases are cytosine (DNA and RNA), guanine (DNA and RNA), adenine (DNA and RNA), thymine (DNA) and uracil (RNA), abbreviated as C, G, A, T, and U, respectively. Because A, G, C, and T appear in the DNA, these molecules are called DNA-bases; A, G, C, and U are called RNA-bases. - Wikipedia

In other words, we should rename the values in the RNA transcription problem to reflect the following:

  • cytidine -> cytosine
  • guanosine -> guanine
  • adenosine -> adenine
  • thymidine -> thymine
  • uridine -> uracil

Copy track icon into language track repository

Right now all of the icons used for the language tracks (which can be seen at http://exercism.io/languages) are stored in the exercism/exercism.io repository in public/img/tracks/. It would make a lot more sense to keep these images along with all of the other language-specific stuff in each individual language track repository.

There's a pull request that is adding support for serving up the track icon from the x-api, which deals with language-specific stuff.

In order to support this change, each track will need to copy its icon into the repository.

In other words, at the end of it you should have the following file:

./img/icon.png

See exercism/exercism#2925 for more details.

"point-mutations" is deprecated in favor of hamming

This happened a while back, and it was for really weird legacy reasons.

I've since fixed the underlying issues that caused the problem, but for consistency
it would be nice to rename point-mutation to hamming, so that all the tracks are using
the same exercise name.

Once the problem has been renamed, I can run a script on the website to point people's
existing point-mutations solutions to the new hamming exercise so that they'll be able
to review solutions to hamming, and people who solve the new hamming exercise can see
all the old ones.
