

Exercism Rust Track



Hi.  👋🏽  👋  We are happy you are here.  🎉 🌟


exercism/rust is one of many programming language tracks on exercism.org. This repo holds all the instructions, tests, code, & support files for Rust exercises currently under development or implemented & available for students.

Some Exercism language tracks have a syllabus which is meant to teach the language step by step. The Rust track's syllabus is a work in progress and is not activated yet. All exercises presented to students are practice exercises. Students are expected to learn the language themselves, for example with the official book, and practice with our exercises.



🌟🌟  Please take a moment to read our Code of Conduct  🌟🌟
It might also be helpful to look at Being a Good Community Member & The words that we use.
Some defined roles in our community: Contributors | Mentors | Maintainers | Admins


We 💛 💙   our community.
But our maintainers are not accepting community contributions at this time.
Please read this community blog post for details.


Here to suggest a new feature or new exercise? Hooray!  🎉  
We'd love it if you did that via our Exercism Community Forum.
Please read Suggesting Exercise Improvements & Chesterton's Fence.
Thoughtful suggestions will likely result in faster & more enthusiastic responses from volunteers.


✨ 🦄  Want to jump directly into Exercism specifications & detail?
     Structure | Tasks | Concepts | Concept Exercises | Practice Exercises | Presentation
     Writing Style Guide | Markdown Specification (✨ version in contributing on exercism.org)



Exercism Rust Track License

This repository uses the MIT License.


Issues

test cases for nucleotide count should include handling of invalid sequences

In some implementations of this exercise I've seen, passing in a sequence such as ATCGLL yields L: 2 in the results. There is no test indicating whether that behaviour is valid, and it doesn't seem right. Are we in favour of adding a test case to address invalid input, or does that feel like it's outside the scope of this exercise?
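A hedged sketch of what rejecting invalid input could look like (the function name and error type here are illustrative, not the track's current interface):

```rust
use std::collections::HashMap;

// Hypothetical sketch: counting returns an error on unknown nucleotides
// instead of silently including them. Names are illustrative.
fn nucleotide_counts(strand: &str) -> Result<HashMap<char, usize>, char> {
    let mut counts: HashMap<char, usize> =
        "ACGT".chars().map(|c| (c, 0)).collect();
    for c in strand.chars() {
        match counts.get_mut(&c) {
            Some(n) => *n += 1,
            None => return Err(c), // reject e.g. 'L'
        }
    }
    Ok(counts)
}

fn main() {
    assert!(nucleotide_counts("ATCG").is_ok());
    assert_eq!(nucleotide_counts("ATCGLL"), Err('L'));
}
```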

Logic behind an allergies test

@adolfosilva

What is the logic behind this test?

#[test]
fn test_ignore_non_allergen_score_parts() {
    assert_eq!(vec![Allergen::Eggs], Allergies(257).allergies());
}

I'd understand if this test expected an empty vector, since the score cannot be constructed as a sum of Allergen values, but I can't figure out why Allergen::Eggs is expected to get through.
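For context, the expected behaviour follows from treating the score as a bit mask: each allergen is a power of two, 257 is 256 + 1, and 256 does not correspond to any defined allergen, so only the Eggs bit (value 1) survives. A small sketch of that reasoning:

```rust
// 257 = 0b1_0000_0001: bit 0 (Eggs, value 1) is set, and bit 8 falls
// outside the eight defined allergens, so it is ignored.
fn main() {
    let score: u32 = 257;
    let eggs: u32 = 1; // Allergen::Eggs is conventionally value 1 (bit 0)
    assert_eq!(score & eggs, 1);  // the Eggs bit is set
    assert_eq!(score & 0xFF, 1);  // masking to the defined allergens leaves only Eggs
}
```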

Function interfaces for tests

I was having a lot of issues trying to guess what kind of object is required to be passed in the allergies exercise.
I figured this is what it looked like:

pub struct Allergies {
    score: i32,
}

impl Allergies {
    pub fn new(n: i32) -> Allergies {
        Allergies { score: n }
    }
}

So I was basically trying to figure out what the Allergies(0) function did and where it came from.
In the end I had to resort to looking into the solution in example.rs, which is kind of upsetting.

Turns out the struct has the form pub struct Allergies(pub usize); and the Allergies(0) is... I still don't understand what it is and where it comes from.

Do you think there is a way to expose the interface without giving out the solution?
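One possible answer is to ship a stub src/lib.rs that fixes the public interface but leaves the bodies unimplemented. A sketch (the tuple-struct shape comes from the tests; the rest is assumed): the public field is what makes the `Allergies(0)` constructor syntax in the tests work.

```rust
// Hypothetical stub: exposes the shape the tests expect without
// revealing the solution. `Allergies(0)` in the tests is the tuple
// struct's constructor, enabled by the public field.
pub struct Allergies(pub usize);

#[derive(Debug, PartialEq)]
pub enum Allergen {
    Eggs = 1,
    Peanuts = 2,
    // ...remaining allergens elided
}

impl Allergies {
    pub fn allergies(&self) -> Vec<Allergen> {
        unimplemented!()
    }
}
```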

Updated tests for the Custom Set problem

In order to reduce the amount of code required to pass incremental tests (assuming that users pass tests starting from the top), the order of the tests was modified slightly.

Since this track implements Custom Set, please take a look at the new custom-set.json file and see if your track should update its tests.

If you do need to update your tests, please refer to this issue in your PR. That helps us see which tracks still need to update their tests.

If your track is already up to date, go ahead and close this issue.

More details on this change are available in exercism/problem-specifications#257.

Move exercises to subdirectory

The problems api (x-api) now supports having exercises collected in a subdirectory
named exercises.

That is to say that instead of having a mix of bin, docs, and individual exercises,
we can have bin, docs, and exercises in the root of the repository, and all
the exercises collected in a subdirectory.

In other words, instead of this:

x{TRACK_ID}/
├── LICENSE
├── README.md
├── bin
│   └── fetch-configlet
├── bowling
│   ├── bowling_test.ext
│   └── example.ext
├── clock
│   ├── clock_test.ext
│   └── example.ext
├── config.json
├── docs
│   ├── ABOUT.md
│   └── img
... etc

we can have something like this:

x{TRACK_ID}/
├── LICENSE
├── README.md
├── bin
│   └── fetch-configlet
├── config.json
├── docs
│   ├── ABOUT.md
│   └── img
├── exercises
│   ├── bowling
│   │   ├── bowling_test.ext
│   │   └── example.ext
│   └── clock
│       ├── clock_test.ext
│       └── example.ext
... etc

This has already been deployed to production, so it's safe to make this change whenever you have time.

Rust Exercise Proposal

I've recently taken on the challenge of creating a rust query string builder, and I found it to be a rewarding exercise.

I've learned more about Strings, Vectors, Tuples, the ownership system, as well as iterator traits like map and fold, and I thought this would be a great exercise on exercism.

Exercise:
Write a program that, given a list of tuples, returns an HTTP query string.

For example:

Input: 
let params = vec![("key1", "value1"), ("key2", "value2")];

Output: 
"?key1=value1&key2=value2"

Explain appeal of rust in ABOUT.md

Our current ABOUT has the design principles of Rust and explains the crates ecosystem, two good points to have.

Next we'd like to explain what kinds of projects one might use Rust for/people are already using Rust for, and why it might be interesting to learn. This arises from discussion in #58

Explanation of tests in anagram

I'm working on the anagram problem and my code is failing on these three tests:

#[test]
fn test_does_not_detect_a_word_as_its_own_anagram() {
    let inputs = ["banana"];
    let outputs: Vec<&str> = vec![];
    assert_eq!(anagram::anagrams_for("banana", &inputs), outputs);
}

#[test]
fn test_does_not_detect_a_differently_cased_word_as_its_own_anagram() {
    let inputs = ["bAnana"];
    let outputs: Vec<&str> = vec![];
    assert_eq!(anagram::anagrams_for("banana", &inputs), outputs);
}

#[test]
fn test_does_not_detect_a_differently_cased_unicode_word_as_its_own_anagram() {
    let inputs = ["ΑΒγ"];
    let outputs: Vec<&str> = vec![];
    assert_eq!(anagram::anagrams_for("ΑΒΓ", &inputs), outputs);
}

I think these tests should actually panic: the assertions are not true for the latter two if the code is case-insensitive, and there's simply no output for the trivial case of a word being its own anagram.

Perhaps I'm missing something, but I was hoping these could be addressed somehow.

Update Allergies to use canonical tests

Our tests currently differ from the canonical tests

Update the tests to the standard, while also being aware of the issue raised in #130. Many of the new canonical tests will also need to handle ordering. Either use the fix introduced in #134 for those tests, or derive some new way of comparing results.

SETUP.md links to help.exercism.io

That site no longer exists. What should it link to instead?

Are students seeing this on exercism.io or is this visible on the repo only?

automated feedback for Rust track submissions

Go and Ruby have automated feedback on their submissions powered by https://github.com/exercism/rikki . I am wondering if we might do the same for Rust.

My specific motivation is that often I see anagram submissions of the form

let mut result = Vec::new();
for input in inputs {
    if input.is_anagram_of(word) {
        result.push(input)
    }
}
result

I would really like to push them to use iterator methods, but I have not had the time to give people feedback recently so I've been letting a ton of them go by. If there were something automated to make this happen, it would be super interesting to me.
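For comparison, an iterator-based version of that loop might look like this (the is_anagram_of helper is illustrative, standing in for whatever the student wrote, and not part of the exercise's interface):

```rust
// Stand-in helper: a word is an anagram of another if the sorted,
// lowercased characters match but the words themselves differ.
fn is_anagram_of(candidate: &str, word: &str) -> bool {
    let normalize = |s: &str| {
        let mut chars: Vec<char> = s.to_lowercase().chars().collect();
        chars.sort_unstable();
        chars
    };
    candidate.to_lowercase() != word.to_lowercase()
        && normalize(candidate) == normalize(word)
}

// The iterator-method shape to nudge students toward: filter + collect
// instead of a mutable accumulator and a for loop.
fn anagrams_for<'a>(word: &str, inputs: &[&'a str]) -> Vec<&'a str> {
    inputs
        .iter()
        .cloned()
        .filter(|candidate| is_anagram_of(candidate, word))
        .collect()
}

fn main() {
    assert_eq!(anagrams_for("listen", &["enlist", "google", "banana"]), vec!["enlist"]);
}
```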

Might be a bit of a pie in the sky idea (and I certainly don't have time to work on it soon), but good to keep it in the backlog, I feel.

Revisiting Roman Numerals API

So, @IanWhitney agreed that the API used for Roman Numerals is unfortunate. The test suite has this:

assert_eq!("I", Roman::from(1));

This looks like the From trait, which is supposed to convert some type into the type From is implemented for (i32 into Roman in this case). Instead, the result of from is used like a &str or String. It is still possible to use the From trait if we combine it with, say, PartialEq, but then the implementation looks verbose and unidiomatic.

The question is, what should a new API look like? Any suggestions?

Here is my proposal:

let roman = Roman::new(4);
assert_eq!("IV", roman.to_string());

This hints at implementing Roman::new(), as many Rust structs do, along with the Display trait. Thoughts?
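A sketch of that proposed API; the conversion table and greedy algorithm below are one possible implementation, not a prescribed one:

```rust
use std::fmt;

pub struct Roman(u32);

impl Roman {
    pub fn new(value: u32) -> Roman {
        Roman(value)
    }
}

impl fmt::Display for Roman {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        // Greedy conversion: emit the largest value/symbol pair that fits,
        // repeatedly, until the number is exhausted.
        let pairs = [
            (1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
            (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
            (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I"),
        ];
        let mut n = self.0;
        for &(value, symbol) in pairs.iter() {
            while n >= value {
                f.write_str(symbol)?;
                n -= value;
            }
        }
        Ok(())
    }
}

fn main() {
    let roman = Roman::new(4);
    assert_eq!("IV", roman.to_string());
}
```

Implementing Display also gives to_string() for free, which keeps the test assertion exactly as proposed.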

Sieve. Too much too soon? Gigasecond. Who struggles?

As a Rust beginner, I found the Sieve exercise to be quite tough at that point in the track.

My solution ended up being quite terse, but that doesn't convey the sense of struggle involved, and it feels like an unexpected holiday when you hit Gigasecond (aka HelloWorld 2: the revenge which is even less scary than the first time around).

Probably the biggest offender in my opinion is Gigasecond. The ordering of Sieve related to other subsequent exercises is quite hard to pinpoint, and perhaps I was having a bad brain day.

Changes to Custom Set tests

We recently rewrote the test suite for Custom Set. Since this track implements Custom Set, please take a look at the new custom_set.json file and see if your track should update its implementation or tests.

The new test suite reorders tests so that students can get to green quickly. It also reduces the number of tests so that students can focus on solving the interesting edge cases.

More details on this change are available in the pull request

rna-transcription: don't transcribe both ways

I can't remember the history of this, but we ended up with a weird non-biological thing in the RNA transcription exercise, where some test suites also have tests for transcribing from RNA back to DNA. This makes no sense.

If this track does have tests for the reverse transcription, we should remove them, and also simplify the reference solution to match.

If this track doesn't have any tests for RNA->DNA transcription, then this issue can be closed.

See exercism/problem-specifications#148

Make Hamming conform to official definition

From issue exercism/exercism#1867

Wikipedia says the Hamming distance is not defined for strings of different length.

I am not saying the problems cannot be different, but for such a well-defined concept it would make sense to stick to one definition, especially when the READMEs provide so little information about what is expected from the implementation.

Let's clean this up so that we're using the official definition.

Circular Buffer Question

I was wondering if there is a reason that the "circular buffer" exercise seems to be set up to discourage use of generics.

Problem Ordering

As brought up in #126 (and elsewhere), our problem ordering is weird. I've certainly put some problems in the wrong spots, for sure.

I ran through the example solutions for all of our problems and made a table of what skills each problem (probably) requires. Implementations can vary, obviously.

problem topics
hello-world Some/None. Really? We did that in Hello World?
gigasecond Crates, type stuff.
leap Math, booleans
anagram lifetimes, str vs string, loops, iter, vector
difference-of-squares fold & map
allergies struct, enum, bitwise (probably), vectors, filter
word-count hashmap, str vs string, chars, entry api
hamming result, chars, filter
rna-transcription match, struct, str vs string
nucleotide-count filter, entry api, mutability, match
nucleotide-codons struct, hash map, lifetimes, Result
scrabble-score chaining map/fold. Hashmap (maybe)
roman-numerals mutable, results, loops, struct, traits
robot-name struct, slices, randomness, lifetimes, self mut
etl btree?
raindrops case (or formatting). Mutable string
bob chars, string functions
grade-school struct, entry api, Vec, Option
phone-number option, format, unwrap_or, iters, match
hexadecimal Option, zip/fold/chars, map
queen-attack struct, trait (optional), Result
beer-song case, vector (?), loop
sieve vector, map, while let (optional)
minesweeper board state, Vec, heavy logic
dominoes I do not even know, man
parallel-letter-frequency multi threading? heavy
sublist enum, generic over type
custom-set generic over type, vector, equality, struct
tournament enum, file io, try!, result, hashmap, struct
rectangles traits and structs, enum
forth like, everything but lifetimes
circular-buffer same

So, some stuff is at the bottom that clearly should be (rectangles, forth, circular-buffer). But the start of our problem list is not well geared towards newcomers. Let's discuss better options.

handle different lengths error case in point-mutations

The current tests for the point-mutations exercise assume that both strings are of the same length. hamming_distance() should return an error when strings of different lengths are passed. I think the idiomatic way to do this in Rust would be to change the function signature to return a Result:

pub fn hamming_distance(a : &str, b: &str) -> Result<u32, &'static str> 

For successful test cases a call to unwrap() needs to be added; for error cases we can compare against the error.

#[test]
fn test_no_difference_between_empty_strands() {
    assert_eq!(dna::hamming_distance("", "").unwrap(), 0);
}

#[test]
fn test_second_string_is_longer() {
    assert_eq!(dna::hamming_distance("A", "AA"), Result::Err("inputs of different length"));
}
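For illustration, an implementation matching that signature might look like this (character-wise comparison and the error message from the test above are assumed):

```rust
// Sketch: zip the two strands character by character and count mismatches;
// strands of different length are an error.
pub fn hamming_distance(a: &str, b: &str) -> Result<u32, &'static str> {
    if a.chars().count() != b.chars().count() {
        return Err("inputs of different length");
    }
    Ok(a.chars()
        .zip(b.chars())
        .filter(|&(x, y)| x != y)
        .count() as u32)
}

fn main() {
    assert_eq!(hamming_distance("", ""), Ok(0));
    assert_eq!(hamming_distance("GGACG", "GGTCG"), Ok(1));
    assert_eq!(hamming_distance("A", "AA"), Err("inputs of different length"));
}
```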

I will send a pull request for this issue.

Discuss tournament exercise

I see the following issues with the tournament exercise:

  • At first I had no idea what the columns | MP | W | D | L | P were supposed to mean. OK, I'm kind of ignorant about sports, and I made an educated guess that W is for Win and L is for Loss, but it still baffled me at first.
  • There is no mention that a win is worth 3 points, a draw 1, and a loss 0. I had to reverse engineer this from the test outputs. Again, this could be due to my ignorance of sports; maybe this is a common scoring system, but I believe Exercism puzzles should be self-contained.
  • the file formats are not explained:
    • the input format can contain comments, lines starting with #
    • what is a valid line?
    • should I expect invalid lines?
    • if an input line is team1;team2;win, does it mean team1 or team2 won?
  • Why should the tally function return the number of parsed lines? What purpose does it serve? Why does it need a return value at all? I think the author intended the use of Result, but it is not expressed anywhere, and because the tests use unwrap the Option type can also be used.

And another issue, thinking about the "big picture": this exercise requires reading files, parsing them and writing data to files. I think smaller exercises with only reading and only writing files should precede this one.

"point-mutations" is deprecated in favor of hamming

This happened a while back, and it was for really weird legacy reasons.

I've since fixed the underlying issues that caused the problem, but for consistency
it would be nice to rename point-mutation to hamming, so that all the tracks are using
the same exercise name.

Once the problem has been renamed, I can run a script on the website to point people's
existing point-mutations solutions to the new hamming exercise so that they'll be able
to review solutions to hamming, and people who solve the new hamming exercise can see
all the old ones.

Exercises run into sharp corners and do not demonstrate rust's value

I decided I wanted to learn Rust due to its marketing as a memory-safe systems language with the concept of lifetimes baked in.

I've done the first 8 exercises and I feel like I've been led into some random sharp corners of Rust with no guidance (allergies in particular and the #[derive] bits), and I still haven't really been able to evaluate the interesting parts of the language.

I just started looking into the nucleotide-codons exercise, and there seems to be more work there in figuring out what the actual rules are than in actually implementing them.

Maybe I haven't fully understood the point of this site, but I don't feel like I'm learning much about rust here beyond the syntax.

Domino test case clarification of chain

What does a correct chain mean in dominoes?

Couldn't the test case below output the chain 41 12 23? Why is that not valid?

#[test]
fn invalid_input() {
    let input = vec!((1, 2), (4, 1), (2, 3));
    assert_eq!(dominoes::chain(&input), None);
}

Add helpful information to the SETUP.md

The contents of the SETUP.md file get included in the README.md that is delivered when a user runs the exercism fetch command from their terminal.

At the very minimum, it should contain a link to the relevant
language-specific documentation on
help.exercism.io.

It would also be useful to explain in a generic way how to run the tests.
Remember that this file will be included with all the problems, so it gets
confusing if we refer to specific problems or files.

Some languages have very particular needs in terms of the solution: nested
directories, specific files, etc. If this is the case here, then it would be
useful to explain what is expected.


Thanks, @tejasbubane for suggesting that we add this documentation everywhere.
See exercism.io#2198.

Provide function signatures

When solving Rust exercises, the first thing I do is figure out the name and the right type signature, and I'm wondering about the usefulness of this. This process is busywork; I doubt anybody is learning from it, and in some cases it requires reverse engineering the thought process of the exercise creator.
I think the signatures could be added to the readme, or maybe a skeleton src/lib.rs could be provided with the exercises.

Look at the Parallel letter frequency exercise, for example. The solution has the signature frequency(&[&str], u32) -> HashMap<char, u32>, but nowhere in the readme or the test cases is it mentioned what the second parameter is supposed to be. It can be assumed from the name of one test case that it is the number of workers.
I was confused by it at first. This confusion could have been avoided if the readme had a line similar to this:

Your task is to implement the following function: fn frequency(text: &[&str], workers: u32) -> HashMap<char, u32>

What is your opinion on this?

How to set up a local dev environment

See issue exercism/exercism#2092 for an overview of operation welcome contributors.


Provide instructions on how to contribute patches to the exercism test suites
and examples: dependencies, running the tests, what gets tested on Travis-CI,
etc.

The contributing document
in the x-api repository describes how all the language tracks are put
together, as well as details about the common metadata, and high-level
information about contributing to existing problems, or adding new problems.

The README here should be language-specific, and can point to the contributing
guide for more context.

From the OpenHatch guide:

Here are common elements of setting up a development environment you’ll want your guide to address:

Preparing their computer
Make sure they’re familiar with their operating system’s tools, such as the terminal/command prompt. You can do this by linking to a tutorial and asking contributors to make sure they understand it. There are usually great tutorials already out there - OpenHatch’s command line tutorial can be found here.
If contributors need to set up a virtual environment, access a virtual machine, or download a specific development kit, give them instructions on how to do so.
List any dependencies needed to run your project, and how to install them. If there are good installation guides for those dependencies, link to them.

Downloading the source
Give detailed instructions on how to download the source of the project, including common missteps or obstacles.

How to view/test changes
Give instructions on how to view and test the changes they’ve made. This may vary depending on what they’ve changed, but do your best to cover common changes. This can be as simple as viewing an html document in a browser, but may be more complicated.

Installation will often differ depending on the operating system of the contributor. You will probably need to create separate instructions in various parts of your guide for Windows, Mac and Linux users. If you only want to support development on a single operating system, make sure that is clear to users, ideally in the top-level documentation.

Remove the `as_ref` check in RNA?

This might be because I'm new to Rust, but I don't see the value of this test in the RNA transcription exercise. It doesn't seem germane to the problem, and the commit that introduced it doesn't really explain why it's there.

Its presence seems to require the solver to implement AsRef for their Struct (as shown in the example solution) just so that the test can be passed. Implementing AsRef doesn't add any other value, as far as I can see. All the other tests are passable without implementing AsRef.

But, again, new to Rust here so I'm probably missing something. My solution, which passes all tests but the as_ref one is here: https://gist.github.com/IanWhitney/b2ac83ae8195f5fc1a75

Need ABOUT.md

We need a short, friendly introduction to Rust for the website as specified here.

The description should at minimum reference Rust's focus on safety and the combination of low-level control with functional features such as pattern matching.

An ABOUT.md file should be made and put into docs/.

Exercise `Pangram` should define which alphabet is expected

While doing this exercise I noticed that the README.md does not mention which alphabet one should be checking against. I originally assumed the English alphabet, yet the last test, where a German sentence is considered a pangram, made me reconsider. In summary, the alphabet / language to be tested against should be defined in the exercise problem description; otherwise this is a rather unbounded problem.

Can `Anagram` be changed to either require or prevent Lifetimes?

In #126, Anagram was the main reason we restructured our problem track. In that restructuring I grouped all of the "Lifetime" problems together and moved them to be much later in the track.

But you can pass the Anagram test suite without lifetimes. Which, if I'd thought about it for more than 2 seconds, I may have realized.

I'm wondering if there's a way to change the Anagram test suite to either require or prevent a solution that uses lifetimes. And, if so, would we want to.

Pros & Cons of Forcing Lifetimes (or No Lifetimes)

Pros

  • Lets us position the problem more exactly
  • Lets us use the problem as a way to teach Lifetimes (assuming we force lifetimes)

Cons

  • Forces an implementation

If this is possible, then I'm also wondering if we could provide 2 test suites, one with Lifetimes & one without, so that students could see both in action.

Add test case to Hamming that shows the difference between `len()` and `chars().count()`

I see this solution a lot

let x = "Hi";
let y = "Yo";

if x.len() == y.len() {
//do something
} else {
//do something else
}

But that's tricky in Rust because all strings are UTF-8 and the number of characters != the length of the string slice.

An example: https://play.rust-lang.org/?gist=b0fb8157fd7903333d9441164e0fffaf&version=stable&backtrace=0

I just saw this pop up in Hamming, so maybe exposing this difference via a test in that problem is a good idea.
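The difference in a nutshell: len() counts UTF-8 bytes, while chars().count() counts Unicode scalar values, so the two only agree for pure-ASCII strings:

```rust
fn main() {
    let ascii = "Hi";
    let accented = "Hí"; // 'í' is 2 bytes in UTF-8

    assert_eq!(ascii.len(), 2);
    assert_eq!(accented.len(), 3);           // byte lengths differ...

    assert_eq!(ascii.chars().count(), 2);
    assert_eq!(accented.chars().count(), 2); // ...but the character counts match
}
```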

Name nucleobases, not nucleosides

The primary nucleobases are cytosine (DNA and RNA), guanine (DNA and RNA), adenine (DNA and RNA), thymine (DNA) and uracil (RNA), abbreviated as C, G, A, T, and U, respectively. Because A, G, C, and T appear in the DNA, these molecules are called DNA-bases; A, G, C, and U are called RNA-bases. - Wikipedia

In other words, we should rename the values in the RNA transcription problem to reflect the following:

  • cytidine -> cytosine
  • guanosine -> guanine
  • adenosine -> adenine
  • thymidine -> thymine
  • uridine -> uracil

Launch Checklist

In order to launch we should have:

  • rust as a submodule in x-api
  • at least 10 problems
  • a "how to get started" topic in the help repo repo (app/pages/languages/getting-started-with-rust.md)
  • one to a handful of people willing to check exercism regularly (daily?) for nitpicks to ensure that the track gets off on the right foot
  • toggle "active" to true in config.json

Some tracks have been more successful than others, and I believe the key features of the successful tracks are:

  • Each submission receives feedback quickly, preferably within the first 24 hours.
  • The nitpicks do not direct users to do specific things, but rather ask questions challenging people to think about different aspects of their solution, or explore new aspects of the language.

For more about contributing to language tracks on exercism, check out the Problem API Contributing guide: https://github.com/exercism/x-api/blob/master/CONTRIBUTING.md

[NOTE: edited original to reflect current state of the exercism ecosystem]

Modifying all exercises to work with cargo

It looks like cargo will be the preferred way to run tests going forward. Since it comes with Rust, we should probably set up the exercises so that cargo run and cargo test can be used. I can help by making the changes.

Allergies: don't test for the order of allergies

The order of allergies should be irrelevant, but since Allergies.allergies() returns a vector, and order is important when comparing vectors, fn test_allergic_to_everything() fails when it shouldn't.

Maybe expecting Allergies.allergies() to return a HashSet is a better choice here?
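A sketch of the HashSet-based comparison (the Allergen variants here are illustrative, and deriving Hash and Eq is assumed):

```rust
use std::collections::HashSet;

// Comparing as sets makes the assertion order-insensitive.
#[derive(Debug, PartialEq, Eq, Hash)]
enum Allergen {
    Eggs,
    Peanuts,
    Shellfish,
}

fn main() {
    let expected = vec![Allergen::Eggs, Allergen::Peanuts, Allergen::Shellfish];
    let actual = vec![Allergen::Shellfish, Allergen::Eggs, Allergen::Peanuts];

    let expected: HashSet<_> = expected.into_iter().collect();
    let actual: HashSet<_> = actual.into_iter().collect();
    assert_eq!(expected, actual); // passes regardless of the original order
}
```

Alternatively, the tests could sort both vectors before comparing, which avoids changing the return type.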

Delete configlet binaries from history?

I made a really stupid choice a while back to commit the cross-compiled
binaries for configlet (the tool that sanity-checks the config.json
against the implemented problems) into the repository itself.

Those binaries are HUGE, and every time they change the entire 4 or 5 megs get
recommitted. This means that cloning the repository takes a ridiculously long
time.

I've added a script that can be run on travis to grab the latest release from
the configlet repository (bin/fetch-configlet), and travis is set up to run
this now instead of using the committed binary.

I would really like to thoroughly delete the binaries from the entire git
history, but this will break all the existing clones and forks.

The commands I would run are:

# ensure this happens on an up-to-date master
git checkout master && git fetch origin && git reset --hard origin/master

# delete from history
git filter-branch --index-filter 'git rm -r --cached --ignore-unmatch bin/configlet-*' --prune-empty

# clean up
rm -rf .git/refs/original/
git reflog expire --all
git gc --aggressive --prune

# push up the new master, force override existing master branch
git push -fu origin master

If we do this everyone who has a fork will need to make sure that their master
is reset to the new upstream master:

git checkout master
git fetch upstream master
git reset --hard upstream/master
git push -fu origin master

We can at-mention (@) all the contributors and everyone who has a fork here in this
issue if we decide to do it.

The important question though, is: Is it worth doing?

Do you have any other suggestions of how to make sure this doesn't confuse people and break their
repository if we do proceed with this change?

Anagram exercise placement

Hi there!

First of all, I'd like to say "thank-you" for the effort that has gone into creating the Exercism exercises for Rust, so far they've been a great facilitator for me getting used to the language.

I recently started working on the "anagram" exercise (exercise 3) in the Rust Exercism suite and I've found the learning curve to be vastly steeper than I have encountered so far. The first two exercises took me around 45 minutes to an hour, combined, to complete and required reading some simple documentation. However, I've been working on the "anagram" exercise for approximately 3-4 hours now!

I must be clear: I, personally, am not complaining about the difficulty since I appreciate challenges and they motivate me to work harder. However, I think that exercise 3 may introduce too many new and difficult concepts specific to Rust that may cause people to become demotivated and unwilling to invest the time required to complete the exercise. Ultimately, this may result in them giving-up learning the language. Currently, the exercise seems to introduce the following concepts "all-at-once":

  1. Lifetimes (as far as I'm aware, a Rust idiom and from what I've heard from other people, difficult to understand)
  2. Ownership of variables
  3. String/&str and all other associated String "quirkiness" (number of chars != number of bytes etc.)
  4. Loops
  5. Vectors

Some anecdotal evidence of the learning curve: simply implementing the main function declaration took me around an hour due to the fact that lifetimes have to be declared and one has to declare variables as being borrowed! Following on from that, I've managed to hit a complete wall when it comes to implementing the anagram checking functionality since I'm used to using Strings as indexes where string[0] is the first character in the String.

Essentially, I think the exercise may produce cognitive overload in people and I would like to suggest that the individual concepts be given their own, simpler, exercises. The "anagram" exercise could then be a concatenation of the concepts learned in the previous exercises. I'd propose this since I believe it would help keep users motivated since, if anyone else has had the same experience I've had, comparing the relative amount of time taken to complete exercise 3 compared to the previous 2 may cause some people to "give-up" and that would be a shame!

If anyone has had a similar experience, I'd be interested in hearing your thoughts :)

Decide on an order for the first 10 exercises

These exercises have been implemented:

anagram
beer-song
bob
grade-school
leap
nucleotide-count
phone-number
point-mutations
rna-transcription
robot-name
word-count

What would be a best guess at ordering these by difficulty?

Once we have that, we can stick them in config.json (under "problems"), and the API will know in which order to serve them up to people.

Large jump in difficulty when tournament challenge is reached

Exercises previous to tournament only really include one problem, while this one includes quite a few and includes a test suite that doesn't break the challenge down into the component parts.

This results in a rather sudden and uncomfortable jump in difficulty. I feel that the tournament challenge should be perhaps moved later in the series or simplified.
