Comments (22)
The Ruby test suites are set up to start with only one failing test and everything else marked as "skip". At the time these were ported to Elixir, ExUnit had no facility for doing this, so we tried to achieve the same thing by commenting out the later tests. Now that ExUnit supports filtering by tags, we should uncomment these, tag them with `@tag :pending`, and update the instructions to describe filtering out pending test cases.
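For illustration, the tagging approach could look roughly like this as a self-contained test script (the `Bob` stub and the test cases here are invented for the sketch, not taken from the actual exercise):

```elixir
# Exclude tests tagged :pending by default; run the script with
#   elixir bob_test.exs
# The user re-enables tests one at a time by deleting their @tag line.
ExUnit.start(exclude: [:pending])

# Minimal stand-in implementation so this sketch runs on its own.
defmodule Bob do
  def hey(input) do
    if String.upcase(input) == input and String.downcase(input) != input do
      "Whoa, chill out!"
    else
      "Whatever."
    end
  end
end

defmodule BobTest do
  use ExUnit.Case

  # The first test is left untagged, so the user starts with exactly
  # one active test to focus on.
  test "stating something" do
    assert Bob.hey("Tom-ay-to, tom-aaaah-to.") == "Whatever."
  end

  # Subsequent tests carry @tag :pending and are skipped until the
  # user removes the tag.
  @tag :pending
  test "shouting" do
    assert Bob.hey("WATCH OUT!") == "Whoa, chill out!"
  end
end
```

Running the script reports one test executed and one skipped; removing the `@tag :pending` line enables the next test.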
from elixir.
In this case I need some clarification: why are the tests skipped in Ruby by default?
Most users would look at the tests to see which are ignored/pending, but there would be some who test against only the one enabled case and, if it passes, submit the solution. Then there would be some angry nitpicks mentioning that the code did not pass all the tests...
That has happened. 😄 I believe the original intent was to not present a wall of errors to users who are new to programming and/or TDD, as that might be overwhelming. By focusing on a single test failure and then re-enabling just the next test when one is passing, the user has a path to follow.
OK :)
It is another approach to have the problem solvers look at the test cases too. But this should be done consistently across all the tracks, shouldn't it? I mean, in the Java or Python tracks, for example, you get all the test failures right at the start, so you can see what is wrong with the code -- or at least what cases you have to solve, so you don't miss anything and can shield yourself against angry nitpicks.
Yes, I would agree that it should be consistent. If the model is to have pending tests, enabled one at a time, then the suites should be updated to match (if each language / testing framework has a way to do that). Also, it should be clear in the instructions that the exercise is not complete until there are no more pending tests and no more failures.
I know of at least one track, the Go one, where there was an explicit choice to enable all tests, because hiding errors is not the Go way.
The idea behind one test at a time seems to be a TDD-like red/green/refactor. However, that doesn't work, because often what the user will do is write all of the code, see that the first test passes, and turn on all the other tests.
The only real way in which I could see "staged tests" working is if you collect them into groups of equivalent level and have the exercism tool check whether the tests pass and, if so, enable the next group. By "group" I mean something like, for Bob, one group that tests the basic logic and then a second group that tests edge cases and things like Unicode.
Barring this, I would recommend just enabling all tests all the time.
> But this should be done consistently across all the tracks, shouldn't it?
I don't think that there should be consistency across the tracks, since each language/culture is different both in terms of how they think about errors, and in terms of who typically submits code.
> The idea behind one test at a time seems to be a TDD-like red/green/refactor.
Initially the idea was to provide a sort of simulated TDD environment. I was hoping to provide the feeling that TDD often gives you, both in terms of having small "next steps" to follow, and in terms of the rhythm: fail, pass, fail, pass.
Getting a wall of errors tends to freak some people out.
I don't necessarily think that exercism can (or should) help people learn TDD. The test suite is really only there to help give people a stopping point (for the first pass) and a starting point (for the conversation).
Ideally, I would like to see a way of making it so that people do not have to edit the test suite. In languages where there is a fail fast mechanism, this is easy. In others, I don't know what the answer is.
I think that angry nitpicks are unwelcome no matter what the person submitting code does, and I wish that I could find a way to address this. If someone didn't complete the exercise the way we expected then our instructions weren't good enough. That's no reason to be unkind.
Yesterday I updated the Bob tests, marking them pending (as you can see) -- thanks for the merge, @rubysolo!
Now, however, I have a question about consistency within the track itself. I went ahead to update the next assignment (word-count), but there all tests are enabled right from the start -- or rather, they are all failing because of the missing implementation.
What is the way for Elixir? Use pending tags and leave only the first test case enabled at the start, or let every test run right from the start?
Or shouldn't we worry about it, @kytrinyx?
@ghajba I think consistency is useful, but I've not done any Elixir so I don't want to impose any solution that might not make sense for the language or track. I would lean towards using the pending tag.
I would say that the level of the exercise matters as well. In an advanced exercise I would expect the user to be able to deal with all tests enabled.
> In an advanced exercise I would expect the user to be able to deal with all tests enabled.
Agreed.
I do appreciate `@tag :pending` a lot, though. Personally, I think users advanced enough not to need it don't care much one way or the other. I think it's more important to help ensure struggling users are set up to be successful and to have an enjoyable experience.
Hinting where to start also emphasizes what you want them to learn. For example, I think it was very useful to start the `list-ops` exercise by solving the `reduce` function first, since once that's done, the other functions I needed to implement could use the `reduce` function I had just written. I've seen a number of solutions posted that implemented the same recursion logic for each individual function for counting, reducing, mapping, appending, and so on. That seems like a tedious and repetitive way to go about it.
Anyway, just my two cents.
Agreed--we've started adding the pending tag for some solutions. Let's keep going in that direction.
Disclaimer: I'm a total Elixir n00b.
Would it be worth making the exercises `mix` projects, so we can run `mix test` and `mix test --include pending`?
I had to google the whole `:pending` tag to understand how best to deal with it (comment the tests out, or what?), and this SO question was most helpful.
I'd be happy to look into doing this for a PR, if it would help. I think it would just mean including a `mix.exs` and a `test_helper.exs` file and updating the Help section, unless I'm missing something?
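For reference, a minimal sketch of what the `mix.exs` might contain (the module and app names here are guesses for illustration, not taken from the repo):

```elixir
# mix.exs -- defines the project so that `mix test` works
defmodule Bob.Mixfile do
  use Mix.Project

  # The bare minimum: an app name, a version, and where the source lives.
  def project do
    [app: :bob, version: "0.0.1"]
  end
end
```

The accompanying `test/test_helper.exs` would presumably just contain `ExUnit.start()` and `ExUnit.configure(exclude: :pending)`, so that `mix test` skips pending tests unless run with `--include pending`.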
I've changed my own Bob directory into a mix project as an example:
https://github.com/bordeltabernacle/exercism/tree/master/elixir/bob
If it is a mix project, can it still be run without the `mix` bit?
From my inexperienced forays into experimenting with this, it looks like it can't, at least not easily. But I don't know enough to know if I'm missing something.
From what I can gather, `mix` appears to be the standard way of approaching Elixir projects, and their tests, in a similar way that `lein` is used for Clojure. It would also mean that people would not have to edit the test suite. And as someone who has only completed one Elixir exercise, I've spent more time trying to understand the idiomatic way to deal with `@tag :pending` than completing the exercise! :)
I can understand your comment, "I don't necessarily think that exercism can (or should) help people learn TDD.", although I think exercism can present the language in the way it is used?
Again, I'm perhaps commenting wildly on things I don't know, so apologies if I'm talking rubbish!
I want Exercism to handle each language idiomatically, with as few dependencies as possible.
The idea behind the :pending tag is that you would be given one more test to deal with at a time. Using mix doesn't solve this, since if you run it with the flag to include pending tests, then it would run all of them, not just the next one.
The only language where this has been solved to my satisfaction so far is Go, where we create table tests and stop the execution after the first failure. That way if you get the first test passing, then it will automatically move on to the next one.
Perhaps some languages have a --fail-fast flag of some sort, but that doesn't really help in this case either, because tests are generally run in random order. That's the correct behavior, usually, but not in this context where we're simulating TDD.
@parkerl what's your take on this?
First, I think that making the exercises `mix` projects is overkill. It adds one more layer to learn for people just coming to Elixir. The reality is that test code is really just Elixir code and can always be run with `elixir`. I think that the `:pending` tag is an effective way to achieve the step-wise progress you are looking for, @kytrinyx. Having folks run `mix test --include pending` would defeat that purpose. I also think that the knowledge gained from having to research what "tags" are is of value to the person doing the exercise. I believe part of your goal is for people to learn not just the language but also how it is tested.
All that said, I think perhaps a note in the READMEs might help. Also, I would suggest that `trace: true` be added to the config, like this: `ExUnit.configure(exclude: :pending, trace: true)`. This produces more verbose output that indicates which tests are skipped.
It changes from this:
```
○ > elixir word_count_test.exs
word_count.exs:8: warning: variable sentence is unused
Excluding tags: [:pending]

  1) test count one word (WordsTest)
     word_count_test.exs:13
     Assertion with == failed
     code: Words.count("word") == %{"word" => 1}
     lhs:  nil
     rhs:  %{"word" => 1}
     stacktrace:
       word_count_test.exs:14

Finished in 0.08 seconds (0.08s on load, 0.00s on tests)
9 tests, 1 failure, 8 skipped
```
to this:
```
○ > elixir word_count_test.exs
word_count.exs:8: warning: variable sentence is unused
Excluding tags: [:pending]

WordsTest
  * hyphens (skipped)
  * count multiple occurrences (skipped)
  * ignore punctuation (skipped)
  * count one of each (skipped)
  * ignore underscores (skipped)
  * count one word (2.0ms)

  1) test count one word (WordsTest)
     word_count_test.exs:13
     Assertion with == failed
     code: Words.count("word") == %{"word" => 1}
     lhs:  nil
     rhs:  %{"word" => 1}
     stacktrace:
       word_count_test.exs:14

  * include numbers (skipped)
  * normalize case (skipped)
  * German (skipped)

Finished in 0.07 seconds (0.07s on load, 0.00s on tests)
9 tests, 1 failure, 8 skipped
```
Thanks, @parkerl. It sounds like `mix` would be the equivalent of adding Rake files to the Ruby tests, and it still wouldn't help solve the problem we're trying to solve. Let's leave it out.
I think you're right that we should add `trace: true`, and also a little something to the `SETUP.md` file in the root of this repo, which gets included into the READMEs.
@bordeltabernacle, if you have the urge, do you want to prepare a couple of pull requests with these changes?
This all makes a lot of sense. I completely understand that keeping `mix` out of the exercises keeps the barrier to entry low, leaving it as something to pick up later for those new to the language.
And something explaining how to deal with the `:pending` tags will curtail any difficulties with accidentally breaking tests, etc. Adding in `trace` also makes what's going on more explicit to easily confused newbies like myself. 😄
I'd be more than happy to submit these changes as PRs.
Thank you, @bordeltabernacle! I see that the first PR got merged (congrats and welcome to open source :)), and that the other is ready.