
Exercism Elm Track

This is the Elm track, one of the many tracks on Exercism. It holds all the Elm Concepts, Concept Exercises and Practice Exercises that are currently implemented and available for students to complete. They are all listed in the config.json track config file. This readme file is mainly targeted at people wishing to contribute, but feel free to take a look around if you're interested in how Exercism language tracks are set up.

Track Organization

The track is organized with the following main directories and files:

bin/               # executables required to manage the track
config/            # configuration files for the track
docs/              # documentation files for automatically generated web pages on exercism.io
.github/workflows/ # CI config for automatic build and tests
exercises/         # contains one directory per exercise
template/          # template used when generating a new exercise
config.json        # main track configuration file for all exercises metadata
package.json       # Node package configuration required for running builds and tests

Each exercise within the exercises/ directory has the following structure:

elm.json               # elm json config file for the exercise
src/
  <PascalCaseSlug>.elm # exercise template, where <PascalCaseSlug> is the name of the exercise using PascalCase.
tests/
  Tests.elm            # tests for exercise, imports function(s) from src/<PascalCaseSlug>.elm
.meta/
  Exemplar.elm         # exemplary / example solution for this exercise
  config.json          # name of exercise, prerequisite concepts, concepts taught and similar
  design.md            # describe the learning goals of the exercise
.docs/
   introduction.md     # introduce the concept(s) that the exercise teaches to the student
   instructions.md     # describe the tasks to complete the exercise
   hints.md            # provide hints to a student to help them get themselves unstuck in an exercise
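
As a rough illustration of how these pieces fit together, here is a minimal sketch for a hypothetical exercise with the slug greeter; the module name, function, and test are invented for the example, and real exercises define these things in their .meta/config.json:

-- src/Greeter.elm: the stub the student completes (stubs typically use Debug.todo)
module Greeter exposing (greet)

greet : String -> String
greet name =
    Debug.todo "Please implement greet"

-- tests/Tests.elm: imports the function(s) from src/Greeter.elm
module Tests exposing (tests)

import Expect
import Greeter exposing (greet)
import Test exposing (Test, describe, test)

tests : Test
tests =
    describe "Greeter"
        [ test "greets by name" <|
            \() -> greet "Elm" |> Expect.equal "Hello, Elm!"
        ]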

Contributing

We welcome contributions of all sorts and sizes, from reporting issues to submitting patches or implementing missing exercises. At the moment we would particularly like help with implementing new concept exercises and with beta testing.

If you would like to help, the best thing at the moment is probably to create an issue in this repository, and then one of us will get in touch with you and discuss what to do.

If you are not familiar with git and GitHub, you can also have a look at GitHub's getting started documentation.

Setup

In order to contribute code to this track, you will probably want npm, elm, elm-test, and elm-format installed globally. The build and test script for this track lives at bin/build.sh and uses npx, so it can still work if the other tools are not installed globally.

Adding Missing Concept Exercises

TODO: link to the step-by-step guide here instead.

Version 3 of Exercism introduced Concepts and Concept Exercises, which are a completely new addition. There is a dependency diagram showing all the Elm concepts. You can see all concepts currently defined in concepts/, and you can see all concept exercises defined in exercises/concept/.

We would love some help creating more of these concepts and concept exercises. To do so, it is probably easiest to copy and paste an existing Concept and Concept Exercise. You will also need to add the metadata for these in config.json; again, the easiest way is to copy and edit an existing entry. The Concept and Concept Exercise documentation have further details.

Elm Packages

We have decided not to make any of the *.Extra packages available. They change quite regularly, so including them would add maintenance work. Where one of these packages would make an exercise easier, we point this out in the instructions so that students can copy and paste the relevant code from the package, or write it themselves if they prefer.
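
For example, rather than depending on a package for a single helper, the instructions might point students at something like List.Extra's find, which they can copy or write themselves. A minimal sketch of such a helper (the names are illustrative, not tied to a specific exercise):

find : (a -> Bool) -> List a -> Maybe a
find predicate list =
    case list of
        [] ->
            Nothing

        first :: rest ->
            if predicate first then
                Just first

            else
                find predicate rest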

Elm icon

We were unable to find copyright information about the Elm logo. It is a tangram in the square configuration, and when you hover over the "Playground" on the official webpage, it changes into a figure (a solution of the tangram puzzle). The origins of the tangram lie in the distant past and are not well known, so we assume that there is no copyright infringement in using it.


elm's Issues

Initial exercise order

Now that we have a decent starting set of exercises (pending PR merges), I'm wondering about a good starting order. I've generally thrown them in haphazardly, somewhat based on the order in the other tracks I've pulled them from.

The current order, as of #31 is this:

  1. hello-world
  2. leap
  3. pangram
  4. rna-transcription
  5. hamming
  6. word-count
  7. bob
  8. run-length-encoding
  9. difference-of-squares
  10. anagram
  11. raindrops
  12. triangle

My questions for @kytrinyx, @parkerl and whoever else:

  1. Is there a canonical/recommended order of exercises?
  2. Is it a correct assumption that once launched it would be a bad idea to add new exercises anywhere other than the end?
  3. Assuming 2 is correct, might it be worthwhile to sort the first batch of exercises by some subjective measure of difficulty to provide a nice onramp?

Upgrading to Elm 0.18

Now that Elm 0.18 is out, is upgrading the exercises desirable? This is probably not critical, since by default we ask users to:
$ npm install --global [email protected] [email protected]
thus 0.17 is always the version that runs as far as Exercism is concerned.
Regardless, I hope to outline the considerations about upgrading and start a discussion about how we handle it.

Issues

Version Compatibility

While the version bump in Elm itself is minor, the accompanying packages all have major bumps.
Thus, if we simply change the version line in elm-package.json to:
"elm-version": "0.18.0 <= v < 0.19.0"

The constraint itself is accepted, but the other packages then require major version bumps:

Error: I cannot find a set of packages that works with your constraints.

--> Your elm-package.json has the following dependency:
    
        "elm-lang/core": "4.0.0 <= v < 5.0.0"
    
    But none of the versions in that range work with Elm 0.18.0. I recommend
    removing that dependency by hand and adding it back with:
    
        elm-package install elm-lang/core 5.0.0

--> Your elm-package.json has the following dependency:
    
        "elm-community/elm-test": "2.0.0 <= v < 3.0.0"
    
    But none of the versions in that range work with Elm 0.18.0. I recommend
    removing that dependency by hand and adding it back with:
    
        elm-package install elm-community/elm-test 3.0.0

As far as I can tell, simply upgrading to 0.18 with the suggestions above would preclude backwards-compatibility with someone using 0.17, since the following change cannot be satisfied with 0.17:

        "elm-lang/core": "4.0.0 <= v < 5.0.0"
        "elm-lang/core": "5.0.0 <= v < 6.0.0"

Fair enough, to keep compatibility, we could do:

        "elm-lang/core": "4.0.0 <= v < 6.0.0",
[...]
        "elm-version": "0.17.0 <= v <  0.19.0

This will be satisfied with elm-lang/core 5.0.0 in 0.18 and elm-lang/core 4.0.5 in 0.17.

Test Compatibility

The above seems nice in keeping both 0.17 and 0.18 for people, but our *Test.elm files are still going to be written in one style or the other. While the syntax changes in Elm are small, the changes in the packages that receive major version bumps might be larger. This is already the case with function signature changes in elm-test. It seems that we would have to move to 0.18 as a whole.

Worked Example

The steps are pretty much described in https://github.com/elm-lang/elm-platform/blob/master/upgrade-docs/0.18.md

In addition to that:

  1. Bump elm-version, either to 0.18.0 as minimum or 0.17.0 as minimum
  2. Depending on our choice above, bump core, test and anything that complains about not being satisfiable with 0.18. If we're keeping 0.17, keep the lower bound as well.
  3. Test that everything works, essentially the test files.
  4. It might not be possible to have the test file run in both versions, due to package changes.

For instance, changing the pangram exercise to handle both:

{
    "version": "3.0.0",
    "summary": "Exercism problems in Elm.",
    "repository": "https://github.com/exercism/xelm.git",
    "license": "BSD3",
    "source-directories": [
        "."
    ],
    "exposed-modules": [],
    "dependencies": {
        "elm-lang/core": "4.0.0 <= v < 6.0.0",
        "elm-community/elm-test": "2.0.0 <= v < 4.0.0",
        "rtfeldman/node-test-runner": "2.0.0 <= v < 4.0.0"
    },
    "elm-version": "0.17.0 <= v < 0.19.0"
}

throws the error:

The definition of `main` does not match its type annotation.

52| main : Program Value
53| main =
54|>    run emit tests

The type annotation for `main` says it is a:

    Program Value

But the definition (shown above) is a:

    Test.Runner.Node.TestProgram

Sure enough, changing it makes it work in 0.18 and related packages, but breaks 0.17:

-- NAMING ERROR ----------------------------------------------- PangramTests.elm

Cannot find type `Test.Runner.Node.TestProgram`.

52| main : Test.Runner.Node.TestProgram
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The qualifier `Test.Runner.Node` is not in scope. 

I haven't run through all the exercises, but I expect to find errors like that in many of them.

Lingering Questions

  • You might argue that, for small changes, the update process is unnecessary. In that case, let's just have the discussion for the future.
  • Elm avoids major bumps to the language itself, but the core packages are quite liberal about major bumps. Do we always upgrade to the latest once a new Elm version is out?
  • How do we notify users to update their setup to 0.18, if we do decide to upgrade?
  • How do other exercism languages handle this?

This is recent, so I might have missed something; please be gentle if that is the case.
If we do decide to upgrade to 0.18, I'm willing to help!

Override probot/stale defaults, if necessary

Per the discussion in exercism/discussions#128 we will be installing the probot/stale integration on the Exercism organization on April 10th, 2017.

By default, probot will comment on issues that are older than 60 days, warning that they are stale. If there is no movement in 7 days, the bot will close the issue. By default, anything with the labels security or pinned will not be closed by probot.

If you wish to override these settings, create a .github/stale.yml file as described in https://github.com/probot/stale#usage, and make sure that it is merged before April 10th.

If the defaults are fine for this repository, then there is nothing further to do. You may close this issue.

Upgrade exercises to Elm 0.17

The only real code change that should be necessary is updating the module lines from the module Main (..) where form to the module Main exposing (..) form. We're not doing any signal stuff yet, so the rest of the upgrade doc shouldn't affect us: https://github.com/elm-lang/elm-platform/blob/master/upgrade-docs/0.17.md
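
Concretely, the per-file change is just the module header, shown here for a hypothetical module named Main:

-- Elm 0.16 form
module Main (..) where

-- Elm 0.17 form
module Main exposing (..)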

It also looks like we may need to switch to elm-community/elm-test to gain 0.17 support. I'm traveling with sketchy internet access at the moment, so I won't be able to sort this out in the next day or two.

Update to use the latest version of Elm Test

Hello!

Currently the Elm exercises are using an older version of Elm Test that is unmaintained and uses ugly hacks to report test results to the user.
I believe we should update to use the latest version.

There are a few problems with this:

  • The latest version has a new test syntax that we would need to migrate to.
  • Due to the nature of Elm there isn't an obvious way to do IO, so there are multiple solutions for returning test results to the user. We would need to decide how to run our tests in a fashion that is user friendly and does not pull too many dependencies into the exercises.

Add anagram

WIP tgecho@cee1332

I'm not crazy about the formatting in these tests with long lines. They should probably be more consistent, but it seems wasteful to make them all take up 4-5 lines.

[word-count] Tests fail because expectation and result are… equal!

So what did I do wrong here?

Output of elm-test (slightly reformatted to make the obvious even more obvious!):

  count one word: passed.
  count one of each word: passed.
  multiple occurrences of a word: FAILED. 
    Expected: Dict.fromList [("blue",1),("fish",4),("one",1),("red",1),("two",1)]; 
    got:      Dict.fromList [("blue",1),("fish",4),("one",1),("red",1),("two",1)]
  ignore punctuation: FAILED.
    Expected: Dict.fromList [("as",1),("car",1),("carpet",1),("java",1),("javascript",1)];
    got:      Dict.fromList [("as",1),("car",1),("carpet",1),("java",1),("javascript",1)]
  include numbers: passed
  normalize case: FAILED.
    Expected: Dict.fromList [("go",3),("stop",2)]; 
    got:      Dict.fromList [("go",3),("stop",2)]

Since the results seem to be okay, I have already submitted my solution.

Downloaded exercises do not contain instructions on how to run tests

I just fetched the anagram exercise, and it did not contain instructions on how to run the tests. Here's the contents of the README as I received it.

# Anagram

Write a program that, given a word and a list of possible anagrams, selects the correct sublist.

Given `"listen"` and a list of candidates like `"enlists" "google"
"inlets" "banana"` the program should return a list containing
`"inlets"`.

## Source

Inspired by the Extreme Startup game [https://github.com/rchatley/extreme_startup](https://github.com/rchatley/extreme_startup)

## Submitting Incomplete Problems
It's possible to submit an incomplete solution so you can see how others have completed the exercise.

TestRunner.elm

I can't actually figure out what this file is for. It doesn't seem to be used or referenced anywhere else. @parkerl ?

Verify that nothing links to help.exercism.io

The old help site was deprecated in December 2015. We now have content that is displayed on the main exercism.io website, under each individual language on http://exercism.io/languages.

The content itself is maintained along with the language track itself, under the docs/ directory.

We decided on this approach since the maintainers of each individual language track are in the best position to review documentation about the language itself or the language track on Exercism.

Please verify that nothing in docs/ refers to the help.exercism.io site. It should instead point to http://exercism.io/languages/:track_id (at the moment the various tabs are not linkable, unfortunately, we may need to reorganize the pages in order to fix that).

Also, some language tracks reference help.exercism.io in the SETUP.md file, which gets included into the README of every single exercise in the track.

We may also have referenced non-track-specific content that lived on help.exercism.io. This content has probably been migrated to the Contributing Guide of the x-common repository. If it has not been migrated, it would be a great help if you opened an issue in x-common so that we can remedy the situation. If possible, please link to the old article in the deprecated help repository.

If nothing in this repository references help.exercism.io, then this can safely be closed.

Investigate track health and status of the track

I've used Sarah Sharp's FOSS Heartbeat project to generate stats for each of the language track repositories, as well as the x-common repository.

The Exercism heartbeat data is published here: https://exercism.github.io/heartbeat/

When looking at the data, please disregard any activity from me (kytrinyx), as I would like to get the language tracks to a point where they are entirely maintained by the community.

Please take a look at the heartbeat data for this track, and answer the following questions:

  • To what degree is the track maintained?
  • Who (if anyone) is merging pull requests?
  • Who (if anyone) is reviewing pull requests?
  • Is there someone who is not merging pull requests, but who comments on issues and pull requests, has thoughtful feedback, and is generally helpful? If so, maybe we can invite them to be a maintainer on the track.

I've made up the following scale:

  • ORPHANED - Nobody (other than me) has merged anything in the past year.
  • ENDANGERED - Somewhere between ORPHANED and AT RISK.
  • AT RISK - Two people (other than me) are actively discussing issues and reviewing and merging pull requests.
  • MAINTAINED - Three or more people (other than me) are actively discussing issues and reviewing and merging pull requests.

It would also be useful to know if there is a lot of activity on the track, or just the occasional issue or comment.

Please report the current status of the track, including your best guess on the above scale, back to the top-level issue in the discussions repository: exercism/discussions#97

Wrong test in Triangle exercise

Hi,

I think I've noticed a mistake in the Triangle test suite:

test "triangles violating triangle inequality are illegal 2"
     (assertEqual (Err "Violates inequality") (triangleKind 2 4 2))

This one is a flat (degenerate) triangle and doesn't violate the inequality.
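
For contrast, a sketch of a case that does violate the inequality, written in the same style as the quoted test (the side lengths are illustrative; whether the description and error string fit the rest of the suite is for the maintainers to judge):

test "triangles violating triangle inequality are illegal"
     (assertEqual (Err "Violates inequality") (triangleKind 7 3 2))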

`elm-test` emits unexpected stack trace.

$ pwd
~/dev/exercism/elm/hello-world

$ npm install -g elm-test

$ elm-test HelloWorldTests.elm

Success! Compiled 1 module.
Successfully generated /var/folders/mv/dy0670cn3255t3fjlv5dwk1m0000gn/T/elm_test_116720-91005-5fq85q.m8bp22o6r.js
undefined:1929
            throw new Error(
            ^

Error: You are giving module `Main` an argument in JavaScript.
This module does not take arguments though! You probably need to change the
initialization code to something like `Elm.Main.fullscreen()`
    at init (eval at <anonymous> (/usr/local/lib/node_modules/elm-test/bin/elm-test:86:37), <anonymous>:1929:10)
    at Object.eval [as callback] (eval at <anonymous> (/usr/local/lib/node_modules/elm-test/bin/elm-test:86:37), <anonymous>:1973:17)
    at step (eval at <anonymous> (/usr/local/lib/node_modules/elm-test/bin/elm-test:86:37), <anonymous>:2613:39)
    at Timeout.work [as _onTimeout] (eval at <anonymous> (/usr/local/lib/node_modules/elm-test/bin/elm-test:86:37), <anonymous>:2671:15)
    at tryOnTimeout (timers.js:228:11)
    at Timer.listOnTimeout (timers.js:202:5)

I'm not sure where to go from here...
I recall that the exercises came with a .bat/.sh-script. Why have they been removed?

Also, elm-package.json defines "rtfeldman/node-test-runner" as a dependency. But I still need to $ npm install -g elm-test to have elm-test available as a command.
Is there a means to make locally installed binaries available within the project dir (i.e. elm-test)?

rna-transcription: don't transcribe both ways

I can't remember the history of this, but we ended up with a weird non-biological thing in the RNA transcription exercise, where some test suites also have tests for transcribing from RNA back to DNA. This makes no sense.

If this track does have tests for the reverse transcription, we should remove them, and also simplify the reference solution to match.

If this track doesn't have any tests for RNA->DNA transcription, then this issue can be closed.

See exercism/problem-specifications#148

[stdin]:12394

Since the new testing method was introduced I have not been able to get my tests to run on the accumulate exercise. I was receiving this error:

C:\Dev\exercism.io\elm\hello-world>elm-test HelloWorldTests.elm                        
Success! Compiled 45 modules.                                                          
Successfully generated C:\Users\Clark\AppData\Local\Temp\elm_test_116726-4760-c8vvvm.js

Successfully compiled HelloWorldTests.elm                                              
Running tests...                                                                       

[stdin]:12394
    if (typeof Elm === "undefined") { throw "elm-io config error: Elm is not defined. Make sure you call elm-io with a real Elm output file"}
                                      ^
 elm-io config error: Elm is not defined. Make sure you call elm-io with a real Elm output file

I rolled back, deleted my old hello-world, and fetched again. I got the same error, but this was fixed on the hello-world exercise after I updated my Elm installation.

I switched back to my accumulate exercise and now I get a new error:

C:\Dev\exercism.io\elm\accumulate>elm-test AccumulateTests.elm                         
Success! Compiled 1 module.                                                            
Successfully generated C:\Users\Clark\AppData\Local\Temp\elm_test_116726-11332-k9pm22.js                                                                                      
undefined:1929
                        throw new Error(
                        ^

Error: You are giving module `Main` an argument in JavaScript.
This module does not take arguments though! You probably need to change the
initialization code to something like `Elm.Main.fullscreen()`
    at init (eval at <anonymous> (C:\Users\Clark\AppData\Roaming\npm\node_modules\elm-test\bin\elm-test:86:37), <anonymous>:1929:10)
    at Object.eval [as callback] (eval at <anonymous> (C:\Users\Clark\AppData\Roaming\npm\node_modules\elm-test\bin\elm-test:86:37), <anonymous>:1973:17)
    at step (eval at <anonymous> (C:\Users\Clark\AppData\Roaming\npm\node_modules\elm-test\bin\elm-test:86:37), <anonymous>:2613:39)
    at work [as _onTimeout] (eval at <anonymous> (C:\Users\Clark\AppData\Roaming\npm\node_modules\elm-test\bin\elm-test:86:37), <anonymous>:2671:15)
    at Timer.listOnTimeout (timers.js:92:15)

runtests.sh is not downloaded with executable permissions

$ ./runtests.sh
-bash: ./runtests.sh: Permission denied

I didn't even think to check/ask, but it looks like this behavior was changed a few months ago: exercism/cli#276 :(

So... seeing as in my haste I've failed to create a clean/generic wrapper on the first try, I see a few obvious options:

  1. Remove the wrappers, switch the instructions back and wait for an updated elm-test to ship. elm-test MyTests.elm actually works right now, it just has a bunch of extraneous garbage at the end of the output.
  2. Update the instructions to bash runtests.sh.
  3. Write a node.js based wrapper (which could actually be cross platform) and instruct them to run node runtests.js.

Thoughts @parkerl @lukewestby?

request for help

I want to design a new exercise that calculates the number of seconds from the Sept 1 1970 moment until the present. This would require getting the Date.now functionality from JS through a port, and I have no idea how to design that. Anybody want to help?
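
A rough sketch of what the port wiring could look like; the module and port names here are invented, and note that core's Time.now task can also provide the current time without going through JavaScript:

port module CurrentTime exposing (receiveNow, requestNow)

-- Outgoing port: ask the JavaScript side for the current time.
-- On the JS side, something like:
--   app.ports.requestNow.subscribe(function () {
--       app.ports.receiveNow.send(Date.now());
--   });
port requestNow : () -> Cmd msg

-- Incoming port: receive milliseconds since the epoch back from JavaScript.
port receiveNow : (Float -> msg) -> Sub msg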

Launch Checklist

In order to launch we should have:

  • Elm as a submodule in x-api
  • At least 10 problems
  • Documentation (see below)
  • One to a handful of people willing to check exercism regularly (daily?) to review. This will ensure that the track gets off on the right foot.
  • Add track implementors and other designated code reviewers as mentors to the track. This gives you access to all the solutions in Elm whether or not you've submitted the problem to the site.
  • Toggle "active" to true in config.json
  • Mention it in the next "behind the scenes" email (noted)

Documentation

The documentation lives in the docs/ directory here in this repository, and gets served to the site via the x-api. It should contain at minimum:

  • INSTALLATION.md - about how to get the language set up locally.
  • TESTS.md - about how to run the tests for the exercises.

Some nice to haves:

  • ABOUT.md - a short, friendly blurb about the language. What types of problems does it solve really well? What is it typically used for?
  • LEARNING.md - a few notes about where people might want to go to learn the language from scratch.
  • RESOURCES.md - references and other useful resources.

Successful Launches

Some tracks have been more successful than others, and I believe the key features of the successful tracks are:

  • Each submission receives feedback quickly, preferably within the first 24 hours.
  • The nitpicks do not direct users to do specific things, but rather ask questions challenging people to think about different aspects of their solution, or explore new aspects of the language.

For more about contributing to language tracks on exercism, check out the Problem API Contributing guide: https://github.com/exercism/x-api/blob/master/CONTRIBUTING.md

elm-reactor

I'm not sure if it's a problem with how things are set up in this repo, my own installation or if it's related to the current ongoing issues with elm-reactor, but it doesn't seem to be working for me here at all.

(screenshot of the elm-reactor error omitted)

For reference, I have been able to use it with other projects, so it's not completely broken on my computer. Might it make sense to switch to elm-test as the recommended method for now?

@parkerl

Copy track icon into language track repository

Right now all of the icons used for the language tracks (which can be seen at http://exercism.io/languages) are stored in the exercism/exercism.io repository in public/img/tracks/. It would make a lot more sense to keep these images along with all of the other language-specific stuff in each individual language track repository.

There's a pull request that is adding support for serving up the track icon from the x-api, which deals with language-specific stuff.

In order to support this change, each track will need to add its icon to the repository.

In other words, at the end of it you should have the following file:

./img/icon.png

See exercism/exercism#2925 for more details.

Is it possible to write tests for html?

One of the features of Elm I really like is all its front end magic, but I don't know how to write tests for that. I don't know if they are even possible (without say Selenium)?
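
Since this issue was opened, the elm-explorations/test package has gained Test.Html.Query and Test.Html.Selector, which let you make assertions about rendered Html values without a browser. A minimal sketch (the module and view here are invented for the example):

module ViewTests exposing (viewTest)

import Html
import Test exposing (Test, test)
import Test.Html.Query as Query
import Test.Html.Selector exposing (text)

viewTest : Test
viewTest =
    test "the view shows a greeting" <|
        \() ->
            Html.div [] [ Html.text "Hello, Elm!" ]
                |> Query.fromHtml
                |> Query.has [ text "Hello, Elm!" ]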

elm-test doesn't work

I'm getting started with Elm exercises and am a complete newbie with Elm. Following the instructions, when I enter elm-test HelloWorldTests.elm I get Could not find Elm compiler "elm-make". Is it installed?

I can run elm make HelloWorldTests.elm successfully (it generates an index.html file), so I think that means elm-make is installed.

I'm on Windows if that makes a difference. I installed via npm install --global elm elm-test

Verify contents and format of track documentation

Each language track has documentation in the docs/ directory, which gets included on the site on each track-specific set of pages under /languages.

We've added some general guidelines about how we'd like the track to be documented in exercism/exercism#3315, which can be found at https://github.com/exercism/exercism.io/blob/master/docs/writing-track-documentation.md

Please take a moment to look through the documentation about documentation, and make sure that the track is following these guidelines. Pay particularly close attention to how to use images in the markdown files.

Lastly, if you find that the guidelines are confusing or missing important details, then a pull request would be greatly appreciated.

*.example files

Since things are still in an early state, does anyone have any objections to switching to ExerciseExample.elm as the example naming convention? I'm working on adding a few exercises and it's a bit tedious since editors don't really like the .example extension. Also, it would make it possible to dev on a test by simply changing the import line to import BobExample exposing (hey).

Any thoughts/objections? @parkerl @kytrinyx

Unclear how to get/run 'elm-test'

On the elm track, the 'Hello, World' exercise has a shell script that runs tests via elm-make. So far so good.

Then with 'Bob', you try the same thing to run the tests and you are referred to the 'Running the Tests' directions. Those say to run elm-test *Test.elm. I installed Elm in the way recommended on their website, but I don't have an elm-test anywhere. Looks like it is a third-party package that needs to be installed. Maybe we can indicate that and explain how to do it? Let me know if I can provide some text.

Thanks -- exercism is awesome!

Move exercises to subdirectory

The problems api (x-api) now supports having exercises collected in a subdirectory named exercises.

That is to say that instead of having a mix of bin, docs, and individual exercises, we can have bin, docs, and exercises in the root of the repository, and all the exercises collected in a subdirectory.

In other words, instead of this:

x{TRACK_ID}/
├── LICENSE
├── README.md
├── bin
│   └── fetch-configlet
├── bowling
│   ├── bowling_test.ext
│   └── example.ext
├── clock
│   ├── clock_test.ext
│   └── example.ext
├── config.json
├── docs
│   ├── ABOUT.md
│   └── img
... etc

we can have something like this:

x{TRACK_ID}/
├── LICENSE
├── README.md
├── bin
│   └── fetch-configlet
├── config.json
├── docs
│   ├── ABOUT.md
│   └── img
├── exercises
│   ├── bowling
│   │   ├── bowling_test.ext
│   │   └── example.ext
│   └── clock
│       ├── clock_test.ext
│       └── example.ext
... etc

This has already been deployed to production, so it's safe to make this change whenever you have time.

Run elm-format on CI

Because we are being strict about formatting we should fail the build if elm-format causes diffs.

Get the `docs` in order

We need the following:

  • INSTALLATION.md - should contain details about installing the language itself and any dependencies or configuration necessary to work on the exercises in that language.
  • TESTS.md - about how to run the tests for the exercises.
  • ABOUT.md - a short, friendly blurb about the language. What types of problems does it solve really well? What is it typically used for?
  • LEARNING.md - a few notes about where people might want to go to learn the language from scratch.
  • RESOURCES.md - references and other useful resources.

per https://github.com/exercism/todo/issues/1

Update config.json to match new specification

For the past three years, the ordering of exercises has been done based on gut feelings and wild guesses. As a result, the progression of the exercises has been somewhat haphazard.

In the past few months maintainers of several tracks have invested a great deal of time in analyzing what concepts various exercises require, and then reordering the tracks as a result of that analysis.

It would be useful to bake this data into the track configuration so that we can adjust it over time as we learn more about each exercise.

To this end, we've decided to add a new key exercises in the config.json file, and deprecate the problems key.

See exercism/discussions#60 for details about this decision.

Note that we will not be removing the problems key at this time, as this would break the website and a number of tools.

The process for deprecating the old problems array will be:

  • Update all of the track configs to contain the new exercises key, with whatever data we have.
  • Simultaneously change the website and tools to support both formats.
  • Once all of the tracks have added the exercises key, remove support for the old key in the site and tools.
  • Remove the old key from all of the track configs.

In the new format, each exercise is a JSON object with three properties:

  • slug: the identifier of the exercise
  • difficulty: a number from 1 to 10 where 1 is the easiest and 10 is the most difficult
  • topics: an array of strings describing topics relevant to the exercise. We maintain a list of common topics at https://github.com/exercism/x-common/blob/master/TOPICS.txt. Do not feel like you need to restrict yourself to this list; it's only there so that we don't end up with 20 variations on the same topic. Each language is different, and there will likely be topics specific to each language that will not make it onto the list.

The difficulty rating can be a very rough estimate.

The topics array can be empty if this analysis has not yet been done.

Example:

"exercises": [
  {
    "slug": "hello-world" ,
    "difficulty": 1,
    "topics": [
        "control-flow (if-statements)",
        "optional values",
        "text formatting"
    ]
  },
  {
    "difficulty": 3,
    "slug": "anagram",
    "topics": [
        "strings",
        "filtering"
    ]
  },
  {
    "difficulty": 10,
    "slug": "forth",
    "topics": [
        "parsing",
        "transforming",
        "stacks"
    ]
  }
]

It may be worth making the change in several passes:

  1. Add the exercises key with the array of objects, where difficulty is 1 and topics is empty.
  2. Update the difficulty settings to reflect a more accurate guess.
  3. Add topics (perhaps one-by-one, in separate pull requests, in order to have useful discussions about each exercise).

ListOps: Direction tests for foldl and foldr

Hi, I just did the ListOps exercise and I missed a test covering the difference between foldl and foldr. Something along the lines of

, test "direction" (assertEqual [4,3,2,1] (foldl (::) [] [1..4]))

and

, test "direction" (assertEqual [1..4] (foldr (::) [] [1..4]))

What do you think?
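
Those two expectations capture the direction difference: foldl consumes the list from the left, so folding (::) into [] reverses it, while foldr consumes it from the right and rebuilds it unchanged. A minimal hand-rolled sketch (not the track's reference solution) that would satisfy both tests:

foldl : (a -> b -> b) -> b -> List a -> b
foldl f acc list =
    case list of
        [] ->
            acc

        x :: xs ->
            foldl f (f x acc) xs

foldr : (a -> b -> b) -> b -> List a -> b
foldr f acc list =
    case list of
        [] ->
            acc

        x :: xs ->
            f x (foldr f acc xs)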

New tests for the Pangram problem

We have found that the Pangram tests miss edge cases allowing students to pass all of the current tests with an incorrect implementation.

To cover these cases we have added new tests to the Pangram test set. Those new tests were added in this commit

Since this track implements Pangram, please take a look at the new pangram.json file and see if your track should update its tests.

If you do need to update your tests, please refer to this issue in your PR. That helps us see which tracks still need to update their tests.

If your track is already up to date, go ahead and close this issue.

More details on this change are available in x-common issue 222.

Thank you for your help!
