
dart's Introduction

Exercism Dart Track


Exercism exercises in Dart.

Requirements

  • Dart 2.0.0 or higher

Setup

The simplest way to install Dart can be found here.

Clone the repo and run dart pub get to download all the dependencies for this project.

To run all the tests, execute dart test. To run the tests for a single exercise, execute EXERCISE=slug dart test.

To create a new exercise, use the executable in the tool directory: dart bin/create_exercise.dart

Exercise Tests

At the most basic level, Exercism is all about the tests. They drive the user's implementation forward and tell them when the exercise is complete.

The utmost care and attention should be used when adding or making changes to the tests for an exercise. When implementing an exercise test suite, we want to provide a good user experience for the people writing a solution to the exercise. People should not be confused or overwhelmed.

We simulate Test-Driven Development (TDD) by implementing the tests in order of increasing complexity. We try to ensure that each test either

  • helps triangulate a solution to be more generic, or
  • requires new functionality incrementally.

Test files should use the following format:

  test("says hello world with no name", () {
    final String result = helloWorld.hello();
    expect(result, equals("Hello, World!"));
  }, skip: false);
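For reference, a complete minimal test file in this format might look like the sketch below. The import path, the helloWorld prefix, and the second test are illustrative assumptions; only the first test starts out active.

  import 'package:test/test.dart';
  // Assumed import path: each exercise's test imports its own library under a prefix.
  import 'package:hello_world/hello_world.dart' as helloWorld;

  void main() {
    test("says hello world with no name", () {
      final String result = helloWorld.hello();
      expect(result, equals("Hello, World!"));
    }, skip: false);

    // Later tests start out skipped; the student flips skip to false one at a time.
    test("says hello world with a name", () {
      final String result = helloWorld.hello("Alice");
      expect(result, equals("Hello, Alice!"));
    }, skip: true);
  }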

Contributing

Thank you so much for contributing! 🎉

We welcome pull requests of all kinds. No contribution is too small.

We encourage contributions that provide fixes and improvements to existing exercises. Please note that this track's exercises must conform to the standards determined in the exercism/problem-specifications repo. Changes to the tests or documentation of a common exercise will often warrant a PR in that repo before they can be incorporated into this track's exercises. If you're unsure, then go ahead and open a GitHub issue, and we'll discuss the change.

Please read our Contribution guidelines on how to help this track!

dart's People

Contributors

alexeybukin, amscotti, andrewfgarrison, bnandras, dependabot[bot], devkabiir, ee7, eeppuj, erikschierboom, exercism-bot, ferhatelmas, fwip, glennj, jackhughesweb, jerold, johnngugi, jvarness, kotp, kytrinyx, lakshya8066, msoup, nywilken, saschamann, sgoettschkes, sisminnmaw, stargator, superpaintman, wiredancer, ymadd, zureka

dart's Issues

Convert bash script fetch-configlet to Dart

While the script can currently be used on all platforms, since it seems to be written well enough, this is the Dart track, so I think it may be a good idea to have it written in Dart.

This may mean turning it into a utility class that is stored in lib/src/ and imported by create_exercise and other scripts that would need to ensure the user has configlet or the latest version of configlet.

Or the script could stay in bin/ but be written in Dart.

Create scripts to reduce commands used to validate Pull Requests

Stemming from this comment, the following commands can be run from a script:

  1. pub run test - All the tests for Dart exercises can be run from the top level of the repo with this command
  2. bin/configlet lint . - Checks the config.json for formatting issues
  3. bin/configlet fmt . - Formats the config.json
  4. bin/check_formatting - Checks all the Dart files for formatting issues
  5. pub run dart_style:format -l 120 -w . - Formats all the Dart files

I recommend that 2 and 4 go in a script that just checks the formatting of the project files.

Then 1, 3, and 5 can be put in a script that validates the tests and automatically formats the config.json and the Dart files.

Then there are two files that need to be updated with the new scripts:

  • .travis.yml
  • README.md

The scripts should be able to run on Mac, Linux, and Windows with as little duplication as possible. Since we are using Dart, the scripts could be written in Dart, but it's up to the author.
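As a minimal sketch of the formatting-check half (items 2 and 4 above), written in Dart so it runs on all three platforms: the file name bin/check.dart is hypothetical, and it assumes check_formatting boils down to a dart_style dry run.

  // Hypothetical bin/check.dart: fails if config.json or any Dart file is badly formatted.
  import 'dart:io';

  Future<int> run(String executable, List<String> args) async {
    final result = await Process.run(executable, args, runInShell: true);
    stdout.write(result.stdout);
    stderr.write(result.stderr);
    return result.exitCode;
  }

  Future<void> main() async {
    // Item 2: configlet lint checks config.json for formatting issues.
    final lint = await run('bin/configlet', ['lint', '.']);
    // Item 4: dart_style in dry-run mode (-n) reports Dart files needing formatting.
    final fmt = await run('pub', ['run', 'dart_style:format', '-l', '120', '-n', '.']);
    exit(lint == 0 && fmt == 0 ? 0 : 1);
  }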

Implement continuous integration

Implement a track test suite that can run both locally and on Travis CI. The track test suite should verify that each exercise makes sense, by running the exercise tests against the example solution.

Definition of terms

  • exercise test suite: the test suite that is delivered to Exercism users as part of an Exercism exercise
  • track test suite: the test suite that helps ensure that all of the exercise test suites in a language track are solvable

Background

When implementing an exercise test suite, we want to provide a good user experience for the people writing a solution to the exercise. People should not be confused or overwhelmed.

In most Exercism language tracks, we simulate Test-Driven Development (TDD) by implementing the tests in order of increasing complexity. We try to ensure that each test either

  • helps triangulate a solution to be more generic, or
  • requires new functionality incrementally.

Many test frameworks will randomize the order of the tests when running them. This is an excellent practice, which helps ensure that subsequent tests are not dependent on side effects from earlier tests. However, in order to simulate TDD we want tests to run in the order that they are defined, and we want them to fail fast, that is to say, as soon as the test suite encounters a failure, we want the execution to stop. This ensures that the person implementing the solution sees only one error or failure message at a time, unless they make a change which causes prior tests to fail.

This is the same experience that they would get if they were implementing each new test themselves.

Most testing frameworks do not have the necessary configuration options to get this behavior directly, but they often do have a way of marking tests as skipped or pending. The mechanism for this will vary from language to language and from test framework to test framework.

Whatever the mechanism—functions, methods, annotations, directives, commenting out tests, or some other approach—these are changes made directly to the test file. The person solving the exercise will need to edit the test file in order to "activate" each subsequent test.

Any tests that are marked as skipped will not be verified by the track test suite unless special care is taken.

Additionally, in some programming languages, the name of the file containing the solution is hard-coded in the test suite, and the example solution is not named in the way that we expect people to name their files.

We will need to temporarily (and programmatically) edit the exercise test suites to ensure that all of their tests are active. We may also need to rename the example solution file(s) in order for the exercise test suite to run against it.

Avoiding accidental git check-ins

It's important that if we rewrite files in any way during a test run, these changes do not accidentally get checked in to the git repository.

Therefore, many language tracks write the track test suite in such a way that it copies the exercise to a temporary location outside of the git repository before editing or rewriting the exercise files during a test run.
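As a rough illustration (not this track's actual tooling), a copy-and-activate step in Dart could look like this, assuming tests are skipped via the skip: true convention shown above:

  // Hypothetical helper: copy one exercise to a temp directory with every test activated.
  import 'dart:io';

  Directory copyAndActivate(String exerciseDir) {
    final tmp = Directory.systemTemp.createTempSync('exercise_');
    for (final entity in Directory(exerciseDir).listSync(recursive: true)) {
      if (entity is! File) continue;
      final relative = entity.path.substring(exerciseDir.length + 1);
      final copy = File('${tmp.path}/$relative')..createSync(recursive: true);
      var contents = entity.readAsStringSync();
      if (relative.endsWith('_test.dart')) {
        // Flip skipped tests on so the track test suite verifies all of them.
        contents = contents.replaceAll('skip: true', 'skip: false');
      }
      copy.writeAsStringSync(contents);
    }
    return tmp;
  }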

Working around long-running track test suites

Usually as people are developing the track, they're focused on a single exercise. If running the entire track test suite against all of the exercises takes a long time, it is often worth making it possible to verify just one exercise at a time.

Example build file

The PHP track has created a Makefile. The Ruby track uses Rake, which is a tool written in Ruby, allowing the track maintainers to write custom code in the language of the track to customize the build with a Rakefile.

Make travis-ci tests faster by only testing changed files.

The following git command prints names of changed files in the latest commit.

git diff --name-only HEAD~1

The latest commit corresponds to the latest commit of the PR.
This command outputs the list of files to stdout.
This works even for squashed merges.
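A sketch of how a build script could consume that output from Dart (mapping changed paths to exercise slugs is an assumption about the exercises/ layout):

  // Hypothetical helper: slugs of exercises whose Dart files changed in the latest commit.
  import 'dart:io';

  Future<Set<String>> changedExercises() async {
    final result = await Process.run('git', ['diff', '--name-only', 'HEAD~1']);
    return (result.stdout as String)
        .split('\n')
        .where((path) => path.startsWith('exercises/') && path.endsWith('.dart'))
        // e.g. "exercises/bob/test/bob_test.dart" -> "bob"
        .map((path) => path.split('/')[1])
        .toSet();
  }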
Here are my proposed changes:

  • Check if any *.dart files have been changed
    • Run pub get, pub run test & pub run dart_style:format -l 120 -n . only for that exercise.
  • If there are no *.dart files changed (could be *.md or *.json)
    • perform configlet lint . if *.json file has been changed (only for that exercise directory)
    • do nothing and return if only *.md files have been changed.

I'll start working on this as soon as I get some time. Until then, if anyone is interested in this, create an empty PR and grab this issue.

Create a script to be used by Travis for testing exercises against reference solution

Stemming out of the conversation at #3.

We need a script that Travis can execute, along with the current scripts/commands, that will do the following:

  • Copy the exercises directory to a temp directory.
  • Go into each exercise and modify the test suite's import path.
    • The modification will replace the normal import of, for example, "bob.dart" with "example.dart". The example.dart file in each exercise acts as our reference solution: the answer key to the problems in the exercise. (See the sketch after this list.)
  • Run pub run test on each exercise.
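A sketch of that per-exercise step in Dart; the exact import string being replaced is an assumption about the track's test file conventions:

  // Hypothetical CI helper: point one exercise's tests at example.dart and run them.
  import 'dart:io';

  Future<int> testAgainstExample(String exerciseDir, String slug) async {
    final testFile = File('$exerciseDir/test/${slug}_test.dart');
    // Swap the student-facing import (e.g. "bob.dart") for the reference solution.
    testFile.writeAsStringSync(
        testFile.readAsStringSync().replaceAll('$slug.dart', 'example.dart'));
    final result = await Process.run('pub', ['run', 'test'],
        workingDirectory: exerciseDir, runInShell: true);
    stdout.write(result.stdout);
    stderr.write(result.stderr);
    return result.exitCode;
  }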

Update ABOUT.md to mention our desire to gradually introduce useful concepts

From #1, @kytrinyx stated:

I think it would be helpful to explain in the intro of the issue that we want to gradually introduce useful concepts in the test framework and tools, and that this issue is about introducing Setup/Teardown as a part of that.

I believe this change would go into the ABOUT.md file, and I think there may be other concepts that we could include. For example, we talked about gradually introducing groups as well. In that case, we need to know what order the first 10 exercises will be in, so we can ensure the introductions build off one another.

This issue will be updated as that is fleshed out.

For the Gigasecond exercise, introduce the concept of "setup" and "teardown" for the test framework

For the Dart exercises, we want to gradually introduce concepts related to the Dart language, test framework, and the built-in tools.

A lot of this is already done, by exposing users to problems written in Dart and asking for a solution. By working to understand the syntax of the Dart language, users will gradually learn more about Dart's features.

But if we keep with the simple setup for test suites from the first exercise, we risk not exposing other capabilities within the test framework, like groups and the setup/teardown pattern.

So this issue will be just one step in that goal.

Judging by how the test suite for Gigasecond is laid out, I would recommend adding a "setup" function to the test suite with comments explaining its purpose and use. Also add a "teardown".

There should be comments above the setup and teardown describing their purpose in the simplest form.

For information about the test framework, check out the link.
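A minimal sketch of what that could look like in the Gigasecond test file, using setUp and tearDown from package:test; the date pair is the well-known gigasecond example, but the surrounding names are illustrative:

  import 'package:test/test.dart';

  void main() {
    DateTime start;

    // setUp runs before every test in this file, giving each test a fresh value.
    setUp(() {
      start = DateTime.utc(2011, 4, 25);
    });

    // tearDown runs after every test; handy for releasing resources or resetting state.
    tearDown(() {
      start = null;
    });

    test("date only specification of time", () {
      expect(start.add(Duration(seconds: 1000000000)),
          equals(DateTime.utc(2043, 1, 1, 1, 46, 40)));
    }, skip: false);
  }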

rna-transcription: Update based on changes to specification by removing some test cases

exercism/problem-specifications#1027 removes tests from the specification. We can now choose to keep them; we don't have to be one-to-one with the specification. We can ignore some cases, or use them all and then build off from there.

And exercism/problem-specifications#1028 is just a cleanup of the former. It doesn't impact the specification's implementation.

Personally, I would opt to keep the cases, but I want to throw it out to the @exercism/dart maintainers.

Update Test Implementation of Exercises to use types (Dart 2.0 preparedness)

Some exercises need to be updated to prepare for Dart 2.0.

As outlined by the Dart team, types will be mandatory in Dart 2.0. So replace uses of var and dynamic with the intended type, and when a type is not declared, add it.
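For illustration only (not taken from an actual exercise), the intended style looks like this:

  // Hypothetical example: every declaration spells out its type instead of using var.
  String greeting(String name) => "Hello, $name!";

  void main() {
    // Previously this might have been `var message = greeting("World");`.
    final String message = greeting("World");
    print(message);
  }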

These can be tackled together or one exercise at a time if there are multiple people interested in helping.

  • Hello World (Just the test suite)

  • Leap (Just the test suite)

  • Bob (Just the test suite)

  • Hamming (example file and test suite)

  • Gigasecond (Just the test suite)

  • RNA Transcription (Just the test suite)

  • Anagram (Just the example file)

Consider consistency for the exercises

  • Is there a style guide for Dart?
  • Are these styles encouraged or enforced?
  • Are there any conventions that we should adopt on this track for the sake of consistency?
  • Can we enforce these?
  • Is there a linter? Are there many? Should we use one?
  • When you add a linter, edit the pull request template [link]
  • Update the pull request template with checks that are appropriate for this track
  • Is there a common convention for filenames? If not, what should our convention be?

Note that this is about the exercises (the test suites and code examples), not people's solutions.

Revise tool create-exercise to handle new policy

In exercism/problem-specifications#996, it was decided that the canonical data should have an explicit input key whose value is an object containing the input(s) to the code under test.

For example:

{
  "description": "year not divisible by 4: common year",
  "property": "leapYear",
  "input": {
    "year": 2015
  },
  "expected": false
}

Task:

Update create-exercise to properly handle the occurrence of this in an exercise's canonical data set.

Though, given that not all specifications meet this new schema, we will need to handle cases where it is missing, and probably treat all other properties as input variable names except when the property named "input" is present and is a Map object.
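A rough sketch of how create_exercise might handle both schemas; the metadata key list is an assumption based on the usual canonical-data fields:

  // Hypothetical helper: gather the input values for one canonical test case.
  Map<String, dynamic> extractInputs(Map<String, dynamic> testCase) {
    final input = testCase['input'];
    // New-style schema: an explicit "input" object holds all the inputs.
    if (input is Map) return Map<String, dynamic>.from(input);
    // Old-style schema: everything except known metadata keys is an input variable.
    const metadata = ['description', 'property', 'expected', 'comments'];
    return Map.fromEntries(
        testCase.entries.where((entry) => !metadata.contains(entry.key)));
  }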

@exercism/dart

Tests are not consistent and some test files are not skipping tests

I noticed this while going through the rna exercise. It seems that none of the tests are skipped. Should this be changed? I wasn't sure if there may be an additional parser that would make the change before giving it to a user.

My thinking was that perhaps they are not set to skip: true in order to run the tests in this project.

Additionally, the test files are organized differently from each other. I think we want consistency in the format and to skip all but the first test by default.

Where are the Dart communities and enthusiasts?

As we move towards the launch of the new version of Exercism we are going to be ramping up on actively recruiting people to help provide feedback.

Our goal is to get to 100%: everyone who submits a solution and wants feedback should get feedback. Good feedback. You can read more about this aspect of the new site here: http://mentoring.exercism.io/

To do this, we're going to need a lot more information about where we can find language enthusiasts.

  • Is Dart supported by one or more large organizations?
  • Does Dart have an official community manager?
  • Do you know of specific communities (online or offline) that are enthusiastic about Dart? (Chat communities, forums, meetups, student clubs, etc)
  • Are there popular conferences for Dart? (If so, what are some examples?)
  • Are there any organizations who are targeted specifically at getting certain subgroups or demographics interested in Dart? (e.g. kids, teenagers, career changers, people belonging to various groups that are typically underrepresented in tech?)
  • Are there specific groups or programs dedicated to mentoring people in Dart?
  • Are there popular newsletters for Dart?
  • Is Dart taught at programming bootcamps? (If so, what are some examples?)
  • Is Dart taught at universities? (If so, what are some examples?)

In other words: where do people care a lot and/or know a lot about Dart?

This is part of the project being tracked in exercism/meta#103

Revisit starter implementation policy for nextercism

@exercism/dart

The next version of Exercism breaks exercises into core exercises and branch exercises. It will therefore be possible for users to complete exercises in many different orders.

In light of this, we should revisit our current policy regarding starter implementations, which "assumes" a fixed order of exercise completion, and decide whether/how to update it. Discussion should occur in this issue thread.

This discussion is stemming out of exercism/discussions

Current policy for reference:

We currently just create the implementation file with an empty class most of the time. Should we stub other methods and classes? I think a few exercises also stub a method.

I know our intent was to have the user generate or implement each required piece that was missing until all that was left was for them to figure out the logic.

peterseng summed it up nicely with this:

In general, it seems stubs may be provided for at least two reasons:

  1. If the implementation file needs to be placed in a certain directory due to the language or build tool's project structure.
    • Having a stub file (empty or not) saves the student the busy-work of having to create the directory structure then create the file.
    • Note that this busy-work has been described as supremely annoying.
    • I think this is always a good reason to have a stub file.
  2. The stub file could possibly provide the expected signatures (see the sketch after this list).
    • There is healthy debate about whether it is a useful learning experience for the student to figure out the expected signatures versus just busy-work.
    • As a middle ground, some tracks may decide to place the expected signatures in a comment in the test file, or in HINTS.md such that it becomes included in the README.
    • Some tracks choose to only do this for the first few exercises, to get the student started on the track, but then wean off.
    • Of course, tracks that do not wish to do this can just include an empty stub file, which at least will cause the directory structure to be preserved.
    • For statically-typed languages, typically the entire test suite must type-check before it can be run. Therefore, to make the test suite runnable, tracks of statically-typed languages tend to choose one of the following options:
      • nothing (as a consequence, students must figure out the signatures from reading the tests and write them all before any work can be done on the first test)
      • provide stubs with the signatures: saves student the above work, but also removes that part of the learning process
      • use conditional compilation to make the compiler not perform type-checking for later tests. This is not possible in all languages.
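As a concrete illustration of reason 2, a Dart stub for a hypothetical exercise could provide just the signature and leave the body to the student:

  // Hypothetical stub (e.g. lib/leap.dart): the signature is given, the logic is not.
  class Leap {
    bool leapYear(int year) {
      throw UnimplementedError('implement leapYear');
    }
  }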

Rework the Hello World exercise test suite to be a template for other exercises

Stemming out of #24, based on the discussion there, the plan is to rework the Hello World exercise's test file to use explicit typing (since typing is optional in Dart 1.0) so our test files are prepared for Dart 2.0.

Additionally, some test suites are structured differently. For now, I think the first 5 exercises should not have any groups. Then we can slowly add and explain uses of the group function in the second set of exercises.

Once this is done for Hello World, other issues will be created to update the other exercises and follow suit.

Ensure Dart track is ready for v2 launch

There are a number of things we're going to want to check before the v2 site goes live. There are notes below that flesh out all the checklist items.

  • The track has a page on the v2 site: https://v2.exercism.io/tracks/dart
  • The track page has a short description under the name (not starting with TODO)
  • The "About" section is a friendly, colloquial, compelling introduction
  • The "About" section follows the formatting guidelines
  • The code example gives a good taste of the language and fits within the boundaries of the background image
  • There are exercises marked as core
  • Exercises have rough estimates of difficulty
  • Exercises have topics associated with them
  • The first exercise is auto_approve: true

Track landing page

The v2 site has a landing page for each track, which should make people want to join it. If the track page is missing, ping @kytrinyx to get it added.

Blurb

If the header of the page starts with TODO, then submit a pull request to https://github.com/exercism/dart/blob/master/config.json with a blurb key. Remember to get configlet and run configlet fmt . from the root of the track before submitting.

About section

If the "About" section feels a bit dry, then submit a pull request to https://github.com/exercism/dart/blob/master/docs/ABOUT.md with suggested tweaks.

Formatting guidelines

In order to work well with the design of the new site, we're restricting the formatting of the ABOUT.md. It can use:

  • Bold
  • Italics
  • Links
  • Bullet lists
  • Number lists

Additionally:

  • Each sentence should be on its own line
  • Paragraphs should be separated by an empty line
  • Explicit <br/> can be used to split a paragraph into lines without spacing between them; however, this is discouraged.

Code example

If the code example is too short or too wide or too long or too uninteresting, submit a pull request to https://github.com/exercism/dart/blob/master/docs/SNIPPET.txt with a suggested replacement.

Exercise metadata

Where the v1 site has a long, linear list of exercises, the v2 site has organized exercises into a small set of required exercises ("core").

If you update the track config, remember to get configlet and run configlet fmt . from the root of the track before submitting.

Topic and difficulty

Core exercises unlock optional additional exercises, which can be filtered by topic and difficulty; however, that will only work if we add topics and difficulties to the exercises in the track config, which is in https://github.com/exercism/dart/blob/master/config.json

Auto-approval

We've currently made any hello-world exercises auto-approved in the backend of v2. This means that you don't need mentor approval in order to move forward when you've completed that exercise.

Not all tracks have a hello-world, and some tracks might want to auto approve other (or additional) exercises.

Track mentors

There are no bullet points for this one :)

As we move towards the launch of the new version of Exercism we are going to be ramping up on actively recruiting people to help provide feedback. Our goal is to get to 100%: everyone who submits a solution and wants feedback should get feedback. Good feedback.

If you're interested in helping mentor the track, check out http://mentoring.exercism.io/

When all of the boxes are ticked off, please close the issue.

Tracking progress in exercism/meta#104

Research why builds with a dev version of Dart (2.0.0-dev.14.0) fail

The latest builds for #71 have failed.

There are two builds per PR build trigger: one for a dev version of Dart (a pre-release version) and one for a stable version of Dart (the latest stable release, 1.24.3).

The stable version passes, but the build with 2.0.0-dev.14.0 fails. It should be noted that a previous build that used dev version 2.0.0-dev.10.0 had a successful build result.

I recommend we research what is breaking the dev build for versions 2.0.0-dev.11.0 to 2.0.0-dev.14.0+ and determine if we can introduce a fix or if we will need to disable the dev build process until we transition to Dart 2.0.

This ticket should resolve running the tests in the dev version and restore --dev to the travis config.

bob: Update to clarify ambiguity regarding shouted questions

TL;DR: the problem specification for the Bob exercise has been updated. Consider updating the test suite for Bob to match. If you decide not to update the exercise, consider overriding description.md.


Details

The problem description for the Bob exercise lists four conditions:

  • asking a question
  • shouting
  • remaining silent
  • anything else

There's an ambiguity, however, for shouted questions: should they receive the "asking" response or the "shouting" response?

In exercism/problem-specifications#1025 this ambiguity was resolved by adding an additional rule for shouted questions.

If this track uses exercise generators to update test suites based on the canonical-data.json file from problem-specifications, then now would be a good time to regenerate 'bob'. If not, then it will require a manual update to the test case with input "WHAT THE HELL WERE YOU THINKING?".

See the most recent canonical-data.json file for the exact changes.
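If updating manually, the revised case would look roughly like the sketch below; the bob.response entry point and the expected string should be double-checked against the track's test file and the current canonical data:

  test("forceful question", () {
    final String result = bob.response("WHAT THE HELL WERE YOU THINKING?");
    // Shouted questions now get their own response rather than the plain shouting one.
    expect(result, equals("Calm down, I know what I'm doing!"));
  }, skip: true);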

Remember to regenerate the exercise README after updating the test suite:

configlet generate . --only=bob --spec-path=<path to your local copy of the problem-specifications repository>

You can download the most recent configlet at https://github.com/exercism/configlet/releases/latest if you don't have it.

If, as track maintainers, you decide that you don't want to change the exercise, then please consider copying problem-specifications/exercises/bob/description.md into this track, putting it in exercises/bob/.meta/description.md and updating the description to match the current implementation. This will let us run the configlet README generation without having to worry about the bob README drifting from the implementation.

Maintainer bios: normalize to first person

Currently, there's only one maintainer with a bio and it's fairly generic. Kytrinyx had an issue opened to suggest changing the bios to first person because:

it feels more friendly/human and less corporate/official

Since that has been back burnered on her end, I figured I'd ask @exercism/dart whether any of you would be willing to update your bios to first person or add a bio that is written in first person.

Track configuration contains improperly locked exercises

In the upcoming release of Configlet v3.8.0, the lint command will now verify that locked exercises meet the unlocked_by criteria, as defined by the track configuration spec:

  • Core exercises can not be unlocked by other exercises.
  • Non-core exercises can only be unlocked by core exercises.

Before cutting a release of Configlet I am opening issues on all tracks found to contain one or more unlocked_by violations so that maintainers of the track can validate and remedy the violations.


-> The exercise 'leap' is marked as core and unlocked by another exercise. A core exercise should not be unlocked by another. 
-> The exercise 'difference-of-squares' is marked as core and unlocked by another exercise. A core exercise should not be unlocked by another. 
-> The exercise 'word-count' is marked as core and unlocked by another exercise. A core exercise should not be unlocked by another. 
-> The exercise 'bob' is marked as core and unlocked by another exercise. A core exercise should not be unlocked by another. 
-> The exercise 'hamming' is marked as core and unlocked by another exercise. A core exercise should not be unlocked by another. 
-> The exercise 'gigasecond' is marked as core and unlocked by another exercise. A core exercise should not be unlocked by another. 
-> The exercise 'raindrops' is marked as core and unlocked by another exercise. A core exercise should not be unlocked by another. 
-> The exercise 'rna-transcription' is marked as core and unlocked by another exercise. A core exercise should not be unlocked by another. 
-> The exercise 'anagram' is marked as core and unlocked by another exercise. A core exercise should not be unlocked by another.

travis.yml: Remove presubmit from config and ensure builds fail when tests fail.

During the Scrabble Score Pull Request, it was revealed that the Travis builds were passing even when files needed to be formatted. Additionally, the presubmit script was being executed, but that only needs to happen before submitting a PR, not when a PR is being built.

Additionally, when tests either fail or time out, the Travis build still passes when it should fail.

Dart 2.0 will remove optional typing from language

When the Dart team announced Dart 2.0, they did so by speaking about making Dart a strongly typed language and removing the optional typing in phases.

This month, they encouraged folks to get their code ready for Dart 2.0 by enabling strong mode and utilizing the dartdevc compiler.

For Exercism, this means code written in Dart 1.0 to pass our tests may not work in Dart 2.0.

On the plus side, we have not yet officially supported the Dart track. So we can opt to make the code use the dartdevc compiler and enable strong mode by updating the pubspec.yaml file and adding an analysis_options.yaml file, respectively.

However, users may still try to write the code with optional typing. So maybe each exercise's README can be updated to highlight these requirements.

We could also include it somewhere in the docs.

Thoughts? @kytrinyx @iHiD @SuperPaintman

Need More Exercises implemented for Core Path

See #38 for what a PR that adds a new exercise covers.

General Overview:

  • Choose a problem from problem specifications repo
  • See this list sorted by exercises implemented by most other Exercism tracks
  • Recommend looking for a problem that would fit someone new to the language and that complements the other existing problems (i.e. tackles different topics)
  • Create a directory in exercises/ for the new exercise
  • Update the config.json by including the new problem, outlining the topics it covers, labeling it as "core", assigning an initial difficulty score (see this discussion for how the current scores were considered), and placing it after the last exercise.
  • To run the tests, execute pub run test from the root directory of exercism/dart

A way to validate create-exercise tool

Our create-exercise tool is written in Dart, but set up like an executable.

Even though it has been getting some love recently, there are no tests for it or ways to validate that it works properly.

So I don't know if the best course of action would be to convert the file to a plain Dart file so it would be run as dart create_exercise.dart, since whoever wants to create a Dart-implemented exercise should have Dart installed. Then tests could be easily created.

Or if anyone has other ideas or thoughts.

Hamming exercise needs to be updated due to changes in specification

The specification for the Hamming exercise can be found here.

There are a few test cases that were either added or changed.

The changes should occur in these files:

  • exercises/hamming/test/hamming_test.dart
    • This file will reflect the content of canonical-data.json
  • exercises/hamming/lib/example.dart
    • This file will have to be updated in case the changes in hamming_test.dart render the solution outdated with the new/changed tests.

To run the tests, execute pub run test from the root directory of exercism/dart

anagram: Revise test cases

The specification was updated with this pull request, which revolves around removing duplicate test cases. I recommend we implement the changes where appropriate in our anagram test suite.

Discussion: What order should the exercises be?

Currently, the order of the exercises is defined in the config.json. Further documentation about that file can be found here.

There are only 7 exercises, and I think it's important to stop and think about where we are now before blindly implementing three more exercises to reach the minimum of 10.

None of the exercises block another. And they are all rated at the same difficulty.

So we need to discuss the following topics:

  • Which exercises are part of the core set of exercises necessary to complete the track (for example, I think Hello World is a given, but it's not listed that way in the config)
  • If we should have any exercise dependent on another in order to be "unlocked"
    • I think the core exercises should definitely have an order to how they are unlocked
  • Provide a rough estimate of the difficulty level of an exercise on a scale of 1 to 10

The current exercises are as follows (listed as ordered in the config.json):

  • Hello World
  • Leap
  • Hamming
  • RNA Transcription
  • Bob
  • Gigasecond
  • Difference of Squares

Preparing for Dart 2

@exercism/dart So Dart 2.0 is expected to release sometime this summer. With it comes a lot of changes to the API, which impacts a few exercises and some of our scripts (as noted in the dart-2 branch).

The exercises affected are:

  • anagram
  • difference of squares
  • gigasecond (due to a change in the API)

In their current state, this means these exercises can only support Dart 1 or Dart 2, whereas the remaining exercises are compatible with both Dart 1 and Dart 2.

With some changes, I think anagram and difference of squares could be considered compatible with both 1 & 2.

Gigasecond's use of the DateTime API means we will have to replace the Enums used with actual int values. But otherwise, our current exercises should be good.

The repository, on the other hand, requires updates to create_exercise and exercise_test in order for them to run in Dart 2.0. Changing the repo to use the Dart 2 SDK shouldn't impact users' implementations of the tests.

Having said all of that, these are my proposed changes:

  • Apply minor fixes to anagram, difference of squares, and gigasecond to make them Dart 2.0 compatible as well as Dart 1.0 compatible.
  • Set the SDK constraints for all the exercises to be between Dart 1.24.0 and Dart 2.0.1.
  • Set the SDK constraint for the repo's pubspec.yaml to be at or above Dart 2.0.0 and below Dart 2.1.0 to avoid any API changes after 2.0's release.
  • Update the create-exercise script to include sdk constraints for new exercises
  • Use dart2_constant as a polyfill to handle the transition between Dart 1 and Dart 2. The names of many constants were changed to fit a new naming convention; this library will resolve that.

Simplify Hello World test suite

The tests for the Hello World exercise in Dart test a helloWorld.hello() function that can be called with a name parameter. The description of the exercise doesn't mention any function parameters, and looking at the same exercise in other languages, none of them test for function parameters. I suggest sticking to the exercise description and removing the tests that expect a name. After all, Hello World should be the simplest, most basic exercise to introduce the language to new coders.
