

Exercism F# Track


Exercism exercises in F#

Support and Discussion

We have an F# subcategory on the Exercism forum where you can get support for any issues you might be facing (build setup, failing tests, etc.) or brainstorm solutions with other people.

Contributing Guide

Please see the contributing guide.

Local Tools

PowerShell, Fantomas, and FSharpLint are available in this repo as local tools (this requires .NET Core >= 3.0). Example usage:

> dotnet tool restore
Tool 'dotnet-fsharplint' (version '0.12.3') was restored. Available commands: dotnet-fsharplint
Tool 'fantomas-tool' (version '2.9.2') was restored. Available commands: fantomas
Tool 'powershell' (version '6.2.3') was restored. Available commands: pwsh

Restore was successful.

> dotnet fsharplint -sf generators/Track.fs
========== Linting generators/Track.fs ==========
========== Finished: 0 warnings ==========
========== Summary: 0 warnings ==========

> dotnet fantomas generators/Track.fs
generators/Track.fs has been written.

> dotnet pwsh ./test.ps1
Linting config.json
-> An implementation for 'bracket-push' was found, but config.json does not reference this exercise.
-> The implementation for 'bracket-push' is missing a README.
-> The implementation for 'bracket-push' is missing an example solution.
-> The implementation for 'bracket-push' is missing a test suite.

F# icon

The F# logo is an asset of the F# Software Foundation. We have adapted it with permission.


Issues

Should we use libraries in our examples?

For some exercises, it makes sense to use an external library to do the heavy lifting. A prime example of this is when you need to do some more advanced text parsing, for example in the wordy exercise. For that specific exercise, it might make sense to use the FParsec library.

Should we allow ourselves to use libraries in our example exercises (only when it makes sense, of course)? The Haskell track does take this approach; they even put comments in an exercise's test file indicating that the user might consider using a specific library to implement it. We could thus point the user to FParsec. We should, of course, not force users to use these libraries.
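
For illustration, a wordy parser built on FParsec might start out like the sketch below. This is only an illustration under assumed names (`question`, `answer`) and only handles plus/minus; it is not part of the track:

// Sketch only: assumes the FParsec package is referenced; all names are ours.
open FParsec

let number = pint32 .>> spaces

let operation =
    ((pstring "plus" >>% (+)) <|> (pstring "minus" >>% (-))) .>> spaces

// "What is 1 plus 1?" -> 2
let question =
    pstring "What is " >>. number .>>. many (operation .>>. number) .>> pstring "?"
    |>> fun (first, rest) -> List.fold (fun acc (op, n) -> op acc n) first rest

let answer input =
    match run question input with
    | Success (result, _, _) -> Some result
    | Failure _ -> None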

What do you think?

Pass explicit list of multiples in "Sum of Multiples" exercise rather than defaulting to 3 and 5

Hello, as part of exercism/problem-specifications#198 we'd like to make the sum of multiples exercise less confusing. Currently, the README specifies that if no multiples are given it should default to 3 and 5.

We'd like to remove this default, so that a list of multiples will always be specified by the caller. This makes the behavior explicit, avoiding surprising behavior and simplifying the problem.

Please make sure this track's tests for the sum-of-multiples problem do not expect such a default. Any tests that want to test behavior for multiples of [3, 5] should explicitly pass [3, 5] as the list of multiples.
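
In this track that would look something like the following sketch (the `sumOfMultiples` name and argument order are assumptions for illustration, not the track's confirmed API):

[<Test>]
let ``Multiples of 3 and 5 up to 20`` () =
    // [3; 5] is passed explicitly instead of being a hidden default
    Assert.That(sumOfMultiples [3; 5] 20, Is.EqualTo(78))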

After all tracks have completed this change, then exercism/problem-specifications#209 can be merged to remove the defaults from the README.

The reason we'd like this change to happen before changing the README is that it was very confusing for students to figure out the default behavior. It wasn't clear from simply looking at the tests that the default should be 3 and 5, as seen in exercism/exercism#2654, so some had to resort to looking at the example solutions (which aren't served by exercism fetch, so they have to find it on GitHub). It was added to the README to fix this confusion, but now we'd like to be explicit so we can remove the default line from the README.

You can find the common test data at https://github.com/exercism/x-common/blob/master/sum-of-multiples.json, in case that is helpful.

gigasecond: use times (not dates) for inputs and outputs

A duration of a gigasecond should be measured in seconds, not
days.

The gigasecond problem has been implemented in a number of languages,
and this issue has been generated for each of these language tracks.
This may already be fixed in this track, if so, please make a note of it
and close the issue.

There has been some discussion about whether or not gigaseconds should
take daylight savings time into account, and the conclusion was "no", since
not all locations observe daylight savings time.
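
Concretely, an F# solution that measures in seconds directly is trivial (a sketch; the `add : DateTime -> DateTime` shape is the usual one for this exercise, but is an assumption here):

module Gigasecond

open System

// a gigasecond is 10^9 seconds; add it as seconds, never as days
let add (beginDate: DateTime) : DateTime = beginDate.AddSeconds 1e9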

Updated tests for the Custom Set problem

In order to reduce the amount of code required to pass incremental tests (assuming that users pass tests starting from the top), the order of the tests was modified slightly.

Since this track implements Custom Set, please take a look at the new custom-set.json file and see if your track should update its tests.

If you do need to update your tests, please refer to this issue in your PR. That helps us see which tracks still need to update their tests.

If your track is already up to date, go ahead and close this issue.

More details on this change are available in exercism/problem-specifications#257.

Tests for all-your-base look wrong

I just got the new all-your-base exercise for the first time. And as I was looking through the unit tests to see the spec I'm implementing against, I noticed that an empty list [] is considered to be invalid input, rather than 0. That's a debatable issue (and I would disagree), but that's a place where opinion can validly differ -- if you have a string with no digits in it at all, it's fair enough to consider that to be "not a number" rather than representing 0, so a list with no digits could also represent 0. It will make my code slightly less elegant than it could have been, but fair enough.

But then I saw the next test, where the input is [0] and the output is None. That one's not debatable; it's just wrong. An input of a single 0 digit should produce the number 0 in the output base, and should not be an error.

And the next two tests after that, where the input is either multiple zeroes ([0; 0; 0]) or contains a leading zero ([0; 6; 0]), also expect an output of None. And here also, I believe the tests are wrong, and the correct answer should be 0, and "60" in whatever the input base is (in that particular test it's 7, so 60 is 42 in decimal).

Looking at https://github.com/exercism/x-common/blob/master/all-your-base.json for a bit, I figured out that the tests were automatically converted to F#, and the null results in that JSON file were turned into None. But the comment at the top of that JSON file says that those are tests where the right behavior could vary by language, and it's up to each language track to determine what the right behavior should be. I.e., they say that the correct representation of zero might be an empty list, or a list with a single zero, or a list with multiple zeroes... and that's why all those tests have a defined result of null. But at least one of them should be possible; as it stands, the F# version of the all-your-base exercise does not allow zero to be represented at all.

Which, in a test that was named after Zero Wing, just seems... well, wrong somehow.

The tests should be fixed... for great justice.
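
Concretely, the reporter is arguing for behavior like this (a sketch; the `rebase inputBase digits outputBase` signature is an assumption for illustration):

[<Test>]
let ``Single zero digit is valid`` () =
    // [0] in any base should represent the number 0, not an error
    Assert.That(rebase 10 [0] 2, Is.EqualTo(Some [0]))

[<Test>]
let ``Leading zeroes are accepted`` () =
    // [0; 6; 0] in base 7 is 42 decimal, i.e. "60" in base 7
    Assert.That(rebase 7 [0; 6; 0] 7, Is.EqualTo(Some [6; 0]))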

NUnit failures on list-ops test

While working on the list-ops exercise, I was getting some failures on a foldr implementation that I was certain was correct. The failures looked like this:

1) SetUp Error : ListOpsTest.foldr as append
   SetUp : System.NullReferenceException : Object reference not set to an instance of an object
  at NUnit.Core.NUnitFramework.GetResultState (System.Exception ex) <0x40d86ce0 + 0x0004c> in <filename unknown>:0 
  at NUnit.Core.TestMethod.RecordException (System.Exception exception, NUnit.Core.TestResult testResult, FailureSite failureSite) <0x40d86c10 + 0x00073> in <filename unknown>:0 
  at NUnit.Core.TestMethod.RunTestCase (NUnit.Core.TestResult testResult) <0x40d54130 + 0x00110> in <filename unknown>:0 
  at NUnit.Core.TestMethod.RunTest () <0x40d53640 + 0x0012f> in <filename unknown>:0 

After some troubleshooting, I discovered that my implementation was correct, and the NullReferenceException was happening inside NUnit 2.6.3 when it tried to compare two very large F# lists. I was able to prove this by creating a minimal example that fails:

module ListOpsTest
open NUnit.Framework
let big = 100000
[<Test>]
let ``torture nunit`` () =
    let l1 = [1 .. big]
    let l2 = [1 .. big]
    Assert.That(l1, Is.EqualTo(l2))

This also produces a NullReferenceException:

1) SetUp Error : ListOpsTest.torture nunit
   SetUp : System.NullReferenceException : Object reference not set to an instance of an object
  at NUnit.Core.NUnitFramework.GetResultState (System.Exception ex) <0x40fea4f0 + 0x0004c> in <filename unknown>:0 
  at NUnit.Core.TestMethod.RecordException (System.Exception exception, NUnit.Core.TestResult testResult, FailureSite failureSite) <0x40fea420 + 0x00073> in <filename unknown>:0 
  at NUnit.Core.TestMethod.RunTestCase (NUnit.Core.TestResult testResult) <0x40fd0110 + 0x00110> in <filename unknown>:0 
  at NUnit.Core.TestMethod.RunTest () <0x40fcf620 + 0x0012f> in <filename unknown>:0 

By changing the assertion to Assert.That((l1 = l2), Is.True), I was able to make my minimal test case pass rather than throw NullReferenceException during the comparison.

So I replaced the foldr tests in the list-ops exercise with the following two tests, and NUnit stopped throwing NullReferenceExceptions and started passing instead:

[<Test>]
let ``foldr as id`` () =
    let result = foldr (fun item acc -> item :: acc) [] [1 .. big]
    let expected = [1 .. big]
    Assert.That((result = expected), Is.True)

[<Test>]
let ``foldr as append`` () =
    let result = foldr (fun item acc -> item :: acc) [100 .. big] [1 .. 99]
    let expected = [1 .. big]
    Assert.That((result = expected), Is.True)

This loses NUnit's nice list-comparison features where it tells you what items were different and at what index they were found... but this is the only way I've found so far to get the tests to run correctly.

Another test suggested for Parallel Letter Frequency exercise

It might be a good idea to add a test to the Parallel Letter Frequency exercise that runs the submitted function against 3000 texts: 1000 copies of each of the 3 texts already supplied in the unit tests (e.g., by doing let lotsOfTexts = List.replicate 1000 [textA; textB; textC] |> List.concat). I mentioned this idea in the comments on my solution, but I'm only now getting around to submitting a GitHub issue to track that suggestion.
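
A sketch of such a test, assuming `frequency`, `textA`, `textB`, and `textC` from the exercise's existing test suite:

[<Test>]
let ``One thousand copies of each text`` () =
    let lotsOfTexts = List.replicate 1000 [textA; textB; textC] |> List.concat
    // counts scale linearly, so the expected result is 1000x the counts
    // for a single copy of each text
    let expected = frequency [textA; textB; textC] |> Map.map (fun _ count -> count * 1000)
    Assert.That(frequency lotsOfTexts, Is.EqualTo(expected))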

clock: canonical test data has been improved

The JSON file containing canonical inputs/outputs for the Clock exercise has gotten new data.

There are two situations that the original data didn't account for:

  • Sometimes people perform computation/mutation in the display method instead of in add. This means that you might have two identical clocks where, if you add 1440 minutes to one and 2880 minutes to the other, they display the same value but are not equal.
  • Sometimes people only account for one adjustment in either direction, meaning that if you add 1,000,000 minutes, then the clock would not end up with a valid display time.

If this track has a generator for the Clock exercise, go ahead and regenerate it now. If it doesn't, then please verify the implementation of the test suite against the new data. If any cases are missing, they should be added.
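
The first situation translates to a test along these lines (a sketch; `create` and `add` are assumed names, not this track's confirmed API):

[<Test>]
let ``Clocks a whole number of days apart are equal`` () =
    let clockA = create 10 0 |> add 1440        // one day later
    let clockB = create 10 0 |> add (2 * 1440)  // two days later
    // both display "10:00" AND must compare equal
    Assert.That(clockA, Is.EqualTo(clockB))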

See exercism/problem-specifications#166

Using FsUnit for more expressive tests

At the moment, the exercises use NUnit's Assert class methods to verify behavior. While this works, it's not a very functional API. Perhaps it would make sense to use FsUnit, which provides a functional wrapper around NUnit assertions.

As an example, this is what an assertion looks like today:

Assert.That(1, Is.EqualTo(1))

And this is how it would look with FsUnit:

1 |> should equal 1

I much prefer the second option. It would require people to install an additional package before the tests can be run, but that won't really be a problem as I'm working on creating a pre-defined project for each exercise, which would already contain the required libraries.
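
For comparison, a complete FsUnit-based test file might look like the following sketch (assumes the FsUnit.NUnit package and a `Bob.response` function, neither of which is confirmed by this issue):

module BobTest

open NUnit.Framework
open FsUnit
open Bob

[<Test>]
let ``Shouting`` () =
    response "WATCH OUT!" |> should equal "Whoa, chill out!"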

cc @jwood803 @rmunn @robkeim @rebelwarrior @pminten

Multi-line string exercises

There are quite a few exercises in which the input consists of a string comprised of several lines (separated by a newline character). This virtually forces the user to always do a Split() call before coding the actual algorithm. We could prevent this by passing in the data as a list of lines. What do you think?
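
For illustration, here is the boilerplate in question next to the proposed shape (the `solve` names are hypothetical):

// Today: every solution first splits the raw input string itself.
let solve (input: string) =
    let lines = input.Split('\n') |> Array.toList
    // ... the actual algorithm only starts here
    lines

// Proposed: the test suite passes the lines in directly.
let solve' (lines: string list) =
    // ... the algorithm starts immediately
    lines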

@rmunn @jwood803 @jovaneyck

Use functional-first approach

Some exercises still use an OO approach, such as the Clock exercise. These exercises should be converted to functional versions; a sketch of the target shape follows the list below. Also see: #144

Exercises to be converted:

  • [bank-account]
  • [binary-search-tree]
  • [circular-buffer]
  • [clock]
  • [deque]
  • [robot-name]
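
As a sketch of the target shape, using Clock as an example (names and representation are illustrative, not a final API):

module Clock

// an immutable record instead of a mutable class
type Clock = { Hours: int; Minutes: int }

let create hours minutes =
    // normalize into [0, 1440) minutes per day, handling negatives
    let total = ((hours * 60 + minutes) % 1440 + 1440) % 1440
    { Hours = total / 60; Minutes = total % 60 }

let add minutes clock = create clock.Hours (clock.Minutes + minutes)

let display clock = sprintf "%02d:%02d" clock.Hours clock.Minutes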

Update config.json to match new specification

For the past three years, the ordering of exercises has been based on gut feelings and wild guesses. As a result, the progression of the exercises has been somewhat haphazard.

In the past few months maintainers of several tracks have invested a great deal of time in analyzing what concepts various exercises require, and then reordering the tracks as a result of that analysis.

It would be useful to bake this data into the track configuration so that we can adjust it over time as we learn more about each exercise.

To this end, we've decided to add a new key exercises in the config.json file, and deprecate the problems key.

See exercism/discussions#60 for details about this decision.

Note that we will not be removing the problems key at this time, as this would break the website and a number of tools.

The process for deprecating the old problems array will be:

  • Update all of the track configs to contain the new exercises key, with whatever data we have.
  • Simultaneously change the website and tools to support both formats.
  • Once all of the tracks have added the exercises key, remove support for the old key in the site and tools.
  • Remove the old key from all of the track configs.

In the new format, each exercise is a JSON object with three properties:

  • slug: the identifier of the exercise
  • difficulty: a number from 1 to 10 where 1 is the easiest and 10 is the most difficult
  • topics: an array of strings describing topics relevant to the exercise. We maintain
    a list of common topics at https://github.com/exercism/x-common/blob/master/TOPICS.txt. Do not feel like you need to restrict yourself to this list;
    it's only there so that we don't end up with 20 variations on the same topic. Each
    language is different, and there will likely be topics specific to each language that will
    not make it onto the list.

The difficulty rating can be a very rough estimate.

The topics array can be empty if this analysis has not yet been done.

Example:

"exercises": [
  {
    "slug": "hello-world" ,
    "difficulty": 1,
    "topics": [
        "control-flow (if-statements)",
        "optional values",
        "text formatting"
    ]
  },
  {
    "difficulty": 3,
    "slug": "anagram",
    "topics": [
        "strings",
        "filtering"
    ]
  },
  {
    "difficulty": 10,
    "slug": "forth",
    "topics": [
        "parsing",
        "transforming",
        "stacks"
    ]
  }
]

It may be worth making the change in several passes:

  1. Add the exercises key with the array of objects, where difficulty is 1 and topics is empty.
  2. Update the difficulty settings to reflect a more accurate guess.
  3. Add topics (perhaps one-by-one, in separate pull requests, in order to have useful discussions about each exercise).

Use idiomatic F#

Many of the example implementations use classes to implement the desired functionality. However, I'm not sure this is idiomatic F#. Wouldn't just having some functions be more idiomatic? Especially when the class only has one method. In general, should our examples use object-oriented code or functional code? I think we should do the latter.

As an example, consider the binary exercise's example implementation:

module Binary

type Binary(input: string) =
    let rec toDecimalLoop acc input = 
        if input = "" then acc
        else 
            let head = input.Chars 0
            let tail = input.Substring 1
            match head with
                | '0' -> toDecimalLoop (acc * 2) tail
                | '1' -> toDecimalLoop (acc * 2 + 1) tail
                | _   -> 0   

    member this.toDecimal() = toDecimalLoop 0 input

Testing the functionality is done as follows:

Binary(input).toDecimal()

Wouldn't this be a more idiomatic, more functional F# implementation?

module Binary

// helper defined first: F# requires definitions to appear before use
let rec private toDecimalLoop acc (input: string) =
    if input = "" then acc
    else
        let head = input.Chars 0
        let tail = input.Substring 1
        match head with
        | '0' -> toDecimalLoop (acc * 2) tail
        | '1' -> toDecimalLoop (acc * 2 + 1) tail
        | _   -> 0

let toDecimal (input: string) = toDecimalLoop 0 input

The test is also a bit more functional:

toDecimal input

What do you think?

Move exercises to subdirectory

The problems api (x-api) now supports having exercises collected in a subdirectory
named exercises.

That is to say that instead of having a mix of bin, docs, and individual exercises,
we can have bin, docs, and exercises in the root of the repository, and all
the exercises collected in a subdirectory.

In other words, instead of this:

x{TRACK_ID}/
├── LICENSE
├── README.md
├── bin
│   └── fetch-configlet
├── bowling
│   ├── bowling_test.ext
│   └── example.ext
├── clock
│   ├── clock_test.ext
│   └── example.ext
├── config.json
├── docs
│   ├── ABOUT.md
│   └── img
... etc

we can have something like this:

x{TRACK_ID}/
├── LICENSE
├── README.md
├── bin
│   └── fetch-configlet
├── config.json
├── docs
│   ├── ABOUT.md
│   └── img
├── exercises
│   ├── bowling
│   │   ├── bowling_test.ext
│   │   └── example.ext
│   └── clock
│       ├── clock_test.ext
│       └── example.ext
... etc

This has already been deployed to production, so it's safe to make this change whenever you have time.

scrabble-score: replace 'multibillionaire' with 'oxyphenbutazone'

The word multibillionaire is too long for the scrabble board. Oxyphenbutazone, on the other hand, is legal.

Please verify that there is no test for multibillionaire in the scrabble-score in this track. If the word is included in the test data, then it should be replaced with oxyphenbutazone. Remember to check the case (if the original is uppercase, then the replacement also should be).

If multibillionaire isn't used, then this issue can safely be closed.
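
For reference, the replacement case would look something like this sketch (`score` is an assumed function name; 41 is the canonical expected value):

[<Test>]
let ``Long word scores`` () =
    // oxyphenbutazone: 1+8+4+3+4+1+1+3+1+1+1+10+1+1+1 = 41
    Assert.That(score "OXYPHENBUTAZONE", Is.EqualTo(41))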

See exercism/problem-specifications#86

Implement F# Exercises

@kytrinyx I hope you don't mind me adding this.

I figured that since this hasn't been started yet, I'd take the initiative. I've been looking at doing some F# for a while, anyway. :]

Is there a requirement to how many or which exercises it should have to be considered for launch?

Linked list exercise naming

Is there a reason that the Linked List exercise module is named Deque and the test file DequeTest? I'm wondering if the name of the exercise was changed at some point, because roughly half of the languages that implemented this problem use the name Deque and the other half use Linked List.

New test for the Isogram problem

We have found that the Isogram tests miss an edge case allowing students to pass all of the current tests with an incorrect implementation.

To cover these cases we have added a new test to the Isogram test set. This new test was added in pull request #265, which also describes the reason for the new test.

Since this track implements Isogram, please take a look at the new isogram.json file and see if your track should update its tests.

If you do need to update your tests, please refer to this issue in your PR. That helps us see which tracks still need to update their tests.

If your track is already up to date, go ahead and close this issue.

More details on this change are available in x-common issue 272.

Thank you for your help!

Delete configlet binaries from history?

I made a really stupid choice a while back to commit the cross-compiled
binaries for configlet (the tool that sanity-checks the config.json
against the implemented problems) into the repository itself.

Those binaries are HUGE, and every time they change the entire 4 or 5 megs get
recommitted. This means that cloning the repository takes a ridiculously long
time.

I've added a script that can be run on travis to grab the latest release from
the configlet repository (bin/fetch-configlet), and travis is set up to run
this now instead of using the committed binary.

I would really like to thoroughly delete the binaries from the entire git
history, but this will break all the existing clones and forks.

The commands I would run are:

# ensure this happens on an up-to-date master
git checkout master && git fetch origin && git reset --hard origin/master

# delete from history
git filter-branch --index-filter 'git rm -r --cached --ignore-unmatch bin/configlet-*' --prune-empty

# clean up
rm -rf .git/refs/original/
git reflog expire --all
git gc --aggressive --prune

# push up the new master, force override existing master branch
git push -fu origin master

If we do this everyone who has a fork will need to make sure that their master
is reset to the new upstream master:

git checkout master
git fetch upstream master
git reset --hard upstream/master
git push -fu origin master

We can at-mention (@) all the contributors and everyone who has a fork here in this
issue if we decide to do it.

The important question though, is: Is it worth doing?

Do you have any other suggestions of how to make sure this doesn't confuse people and break their
repository if we do proceed with this change?

Implement F# exercises

Minimum Exercises to Implement

  • anagram
  • bob
  • etl
  • grade-school
  • hamming
  • leap
  • nucleotide-count
  • phone-number (JW)
  • robot-name
  • word-count

Additional Exercises to Implement

After the initial set of exercises the following should be implemented to have a more complete test suite.

  • accumulate
  • allergies
  • atbash-cipher
  • beer-song
  • binary
  • binary-search-tree
  • crypto-square
  • difference-of-squares
  • gigasecond
  • hexadecimal
  • kindergarten-garden
  • largest-series-product
  • linked-list
  • luhn
  • matrix
  • meetup
  • nth-prime
  • ocr-numbers
  • octal
  • palindrome-products
  • pascals-triangle
  • pig-latin
  • prime-factors
  • pythagorean-triplet
  • queen-attack
  • raindrops
  • rna-transcription
  • robot-simulator
  • roman-numerals
  • saddle-points
  • scrabble-score
  • secret-handshake
  • series
  • sieve
  • simple-cipher
  • simple-linked-list
  • space-age
  • strain
  • sum-of-multiples
  • triangle
  • trinary
  • twelve-days
  • wordy

Order exercises by increasing difficulty

At the moment, the exercises are somewhat haphazardly ordered. Ideally, we'd list the exercises in order of increasing difficulty, as that provides the best experience for the user.

In the Rust track, they went about this problem by looking at the concepts that each exercise requires. This might be a bit infeasible for this track, as there are so many exercises.

I suggest we first start by categorizing the exercises in three categories:

  1. Easy
  2. Moderate
  3. Difficult

We can then look at the individual categories to determine the ordering within that category. What do you think about this approach?

binary: improve tests for invalid numbers

We should have separate tests for:

  • alphabetic characters at the beginning of a valid binary number
  • alphabetic characters at the end of a valid binary number
  • alphabetic characters in the middle of an otherwise valid binary number
  • invalid digits (e.g. 2)
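
Hypothetical sketches of those four cases, assuming this track's convention (seen in the example implementation elsewhere in this repo) that invalid input converts to 0:

[<TestCase("a101")>] // alphabetic character at the beginning
[<TestCase("101a")>] // alphabetic character at the end
[<TestCase("10a1")>] // alphabetic character in the middle
[<TestCase("121")>]  // invalid digit
let ``Invalid binary is rejected`` (input: string) =
    Assert.That(toDecimal input, Is.EqualTo(0))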

If the test suite for binary has test cases that cover these edge cases, this issue can safely be closed.

See exercism/problem-specifications#95

rna-transcription: don't transcribe both ways

I can't remember the history of this, but we ended up with a weird non-biological thing in the RNA transcription exercise, where some test suites also have tests for transcribing from RNA back to DNA. This makes no sense.

If this track does have tests for the reverse transcription, we should remove them, and also simplify the reference solution to match.

If this track doesn't have any tests for RNA->DNA transcription, then this issue can be closed.

See exercism/problem-specifications#148

How to set up a local dev environment

See issue exercism/exercism#2092 for an overview of Operation Welcome Contributors.


Provide instructions on how to contribute patches to the exercism test suites
and examples: dependencies, running the tests, what gets tested on Travis-CI,
etc.

The contributing document
in the x-api repository describes how all the language tracks are put
together, as well as details about the common metadata, and high-level
information about contributing to existing problems, or adding new problems.

The README here should be language-specific, and can point to the contributing
guide for more context.

From the OpenHatch guide:

Here are common elements of setting up a development environment you’ll want your guide to address:

Preparing their computer

  • Make sure they’re familiar with their operating system’s tools, such as the terminal/command prompt. You can do this by linking to a tutorial and asking contributors to make sure they understand it. There are usually great tutorials already out there - OpenHatch’s command line tutorial can be found here.
  • If contributors need to set up a virtual environment, access a virtual machine, or download a specific development kit, give them instructions on how to do so.
  • List any dependencies needed to run your project, and how to install them. If there are good installation guides for those dependencies, link to them.

Downloading the source

  • Give detailed instructions on how to download the source of the project, including common missteps or obstacles.

How to view/test changes

  • Give instructions on how to view and test the changes they’ve made. This may vary depending on what they’ve changed, but do your best to cover common changes. This can be as simple as viewing an html document in a browser, but may be more complicated.

Installation will often differ depending on the operating system of the contributor. You will probably need to create separate instructions in various parts of your guide for Windows, Mac and Linux users. If you only want to support development on a single operating system, make sure that is clear to users, ideally in the top-level documentation.

AppVeyor build status not automatically being shown

Yesterday, I looked into the issue where PRs automatically show the Travis status but not the AppVeyor status. I logged an issue about this on the AppVeyor site:

http://help.appveyor.com/discussions/problems/5147-appveyor-build-status-does-not-automatically-show-on-github-pr

Apparently, there is some permissions problem, but as I don't own the AppVeyor account, I cannot fix it. @kytrinyx Would you be willing to check the permissions for AppVeyor, just as I did in the logged issue above?

Copy track icon into language track repository

Right now all of the icons used for the language tracks (which can be seen at http://exercism.io/languages) are stored in the exercism/exercism.io repository in public/img/tracks/. It would make a lot more sense to keep these images along with all of the other language-specific stuff in each individual language track repository.

There's a pull request that is adding support for serving up the track icon from the x-api, which deals with language-specific stuff.

In order to support this change, each track will need to copy its icon into its own repository.

In other words, at the end of it you should have the following file:

./img/icon.png

See exercism/exercism#2925 for more details.

Add helpful information to the SETUP.md

The contents of the SETUP.md file get included in
the README.md that gets delivered when a user runs the exercism fetch
command from their terminal.

At the very minimum, it should contain a link to the relevant
language-specific documentation on
help.exercism.io.

It would also be useful to explain in a generic way how to run the tests.
Remember that this file will be included with all the problems, so it gets
confusing if we refer to specific problems or files.

Some languages have very particular needs in terms of the solution: nested
directories, specific files, etc. If this is the case here, then it would be
useful to explain what is expected.


Thanks, @tejasbubane for suggesting that we add this documentation everywhere.
See exercism.io#2198.

Implement Travis CI Build

After looking at a few other projects here on GitHub I've noticed that they were able to find a way to use Travis CI with .NET projects. After a bit of searching it seems there is a way to implement this.

I can definitely take a look at implementing this with F# and C#, but I'll admit I've never messed with these kinds of builds before.

Additional test suggested for Parallel Letter Frequency

The README for the Parallel Letter Frequency exercise is quite terse; TOO terse, in fact, because I couldn't tell whether letters with diacritics (ö) were supposed to be counted along with the same letter without a diacritic (o). Since one of the supplied texts in the unit tests is in German, this mattered to the results. I eventually ran the three supplied texts through a letter-frequency counter. Since Python has a built-in for this (the collections.Counter class, also available in Python 3), I used Python to check. I got the expected results when I did not match ö with o, so I knew how to write the code.

So that the next person who goes through this exercise won't have to go through the same verification, I suggest that the following test be added to the exercise's test suite:

[<Test>]
let ``Letters with and without diacritics aren't the same letter`` () =
    Assert.That(frequency ["aä"], Is.EqualTo(Map.ofList [('a', 1); ('ä', 1)]))

Tests for transpose exercise need updating

In exercism/problem-specifications@778a548, the transpose test was updated to remove two trailing spaces in the final line of the expected output, so that the test would conform to what the README says (pad to the left with spaces, but don't pad to the right). But the test for F# still has the two extra, erroneous trailing spaces.

There are also a few other tests that have been added, like the "two characters in a column" test. Refreshing the F# transpose exercise from x-common would pick those up too.

Verify that nothing links to help.exercism.io

The old help site was deprecated in December 2015. We now have content that is displayed on the main exercism.io website, under each individual language on http://exercism.io/languages.

The content itself is maintained along with the language track itself, under the docs/ directory.

We decided on this approach since the maintainers of each individual language track are in the best position to review documentation about the language itself or the language track on Exercism.

Please verify that nothing in docs/ refers to the help.exercism.io site. It should instead point to http://exercism.io/languages/:track_id (at the moment the various tabs are not linkable, unfortunately, we may need to reorganize the pages in order to fix that).

Also, some language tracks reference help.exercism.io in the SETUP.md file, which gets included into the README of every single exercise in the track.

We may also have referenced non-track-specific content that lived on help.exercism.io. This content has probably been migrated to the Contributing Guide of the x-common repository. If it has not been migrated, it would be a great help if you opened an issue in x-common so that we can remedy the situation. If possible, please link to the old article in the deprecated help repository.

If nothing in this repository references help.exercism.io, then this can safely be closed.

New tests for the Pangram problem

We have found that the Pangram tests miss edge cases allowing students to pass all of the current tests with an incorrect implementation.

To cover these cases we have added new tests to the Pangram test set. Those new tests were added in this commit

Since this track implements Pangram, please take a look at the new pangram.json file and see if your track should update its tests.

If you do need to update your tests, please refer to this issue in your PR. That helps us see which tracks still need to update their tests.

If your track is already up to date, go ahead and close this issue.

More details on this change are available in x-common issue 222.

Thank you for your help!

Missing question mark in one test

The Wordy test suite is missing a ? in the "What is 4 minus -12" question. Since I had written an "is the question well-formed?" check into my solution that (among other things) tested for a question mark at the end, my code failed on that one unit test. My guess is that the difference wasn't intended, and that the "What is 4 minus -12" test should have been "What is 4 minus -12?" instead.

Implement AppVeyor build

Having an AppVeyor build means we get to test the code on Windows. Now that our build script is almost finished, it seems like a no-brainer.

@jwood803 I have done this before, shall I assign myself to it?

triangle: incorrect test in some tracks

Please check if there's a test that states that a triangle with sides 2, 4, 2 is invalid. The triangle inequality states that for any triangle, the sum of the lengths of any two sides must be greater than or equal to the length of the remaining side. If this doesn't affect this track, go ahead and just close the issue.

Update to NUnit 3

To keep parity with the C# track, the tests should be updated to NUnit 3.

Idea for new exercise

There are already three exercises with the basic idea of "Here is some text with a repeating pattern; write a function to print out that text", so a fourth one may not be useful. If it would be, however, this Code Golf challenge could be turned into such an exercise:

http://codegolf.stackexchange.com/questions/85746/polar-bear-polar-bear-what-do-you-hear

The one thing this exercise would contain that the others don't is that the "a / an" distinction in English becomes relevant, and you can't easily "cheat" by baking it into the stored data. So you need a helper function to turn "Polar Bear" into "a polar bear", but "Elephant" into "an elephant".
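
That helper is small; a naive sketch (names ours):

// naive "a"/"an" chooser based on the first letter; good enough for
// a fixed animal list, not for general English
let indefinite (animal: string) =
    let lower = animal.ToLowerInvariant()
    let article = if "aeiou".Contains(string lower.[0]) then "an" else "a"
    sprintf "%s %s" article lower

// indefinite "Polar Bear" -> "a polar bear"
// indefinite "Elephant"   -> "an elephant"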

There's a Python answer to that Code Golf challenge where the original, non-golfed version is presented, so converting that to an F# exercise should be quite quick. I could also see this as an exercise for many other languages; is there a "master" exercism repo where new exercises are suggested? Or do different language tracks just copy exercises from each other?
