fortran's Introduction

Exercism Fortran Track

Exercism exercises in Fortran.

Setup

Assuming you have CMake and a Fortran compiler installed, you should be able to run the following commands in a shell:

(If you are using Intel Fortran, source ifortvars first.)

Linux and MacOS

mkdir build
cd build
cmake ..
make
ctest -V

Windows

mkdir build
cd build
cmake -G"NMake Makefiles" ..
nmake
ctest -V

For more information, see the Installation instructions.

Contributing

Thank you so much for contributing! 🎉

Please read about how to get involved in a track. Be sure to read the Exercism Code of Conduct.

We welcome pull requests of all kinds. No contribution is too small.

We encourage contributions that provide fixes and improvements to existing exercises. Please note that this track's exercises must conform to the standards determined in the exercism/x-common repo. Changes to the tests or documentation of a common exercise will often warrant a PR in that repo before they can be incorporated into this track's exercises. If you're unsure, then go ahead and open a GitHub issue, and we'll discuss the change.

Exercise Tests

At the most basic level, Exercism is all about the tests. They drive the user's implementation forward and tell them when the exercise is complete.

The utmost care and attention should be used when adding or making changes to the tests for an exercise. When implementing an exercise test suite, we want to provide a good user experience for the people writing a solution to the exercise. People should not be confused or overwhelmed.

We simulate Test-Driven Development (TDD) by implementing the tests in order of increasing complexity. We try to ensure that each test either

  • helps triangulate a solution to be more generic, or
  • requires new functionality incrementally.
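
As a rough illustration (a hypothetical sketch, not the track's actual test harness or exercise code), an ordered, fail-fast test program for a leap-year exercise might look like this:

! Hypothetical sketch: assumes a module "leap" exporting is_leap_year(year).
program leap_test
  use leap, only: is_leap_year
  implicit none

  ! Tests appear in order of increasing complexity, so each new case
  ! either triangulates the solution or adds exactly one new rule.
  call assert(.not. is_leap_year(2015), "year not divisible by 4")
  call assert(is_leap_year(2016), "year divisible by 4 but not by 100")
  call assert(.not. is_leap_year(1900), "year divisible by 100 but not by 400")
  call assert(is_leap_year(2000), "year divisible by 400")

  print *, "all tests passed"

contains

  subroutine assert(condition, description)
    logical, intent(in) :: condition
    character(*), intent(in) :: description
    if (.not. condition) then
      print *, "FAILED: ", description
      stop 1   ! fail fast: stop at the first failing test
    end if
  end subroutine assert

end program leap_test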

Fortran Track

Test files for the Fortran track should be created with the Python3 script bin/create_fortran_test.py which is documented here.

Submitting a Pull Request

Please keep the following in mind:

  • Pull requests should be focused on a single exercise, issue, or change.

  • We welcome changes to code style and wording. Please open a separate PR for these changes if possible.

  • Please open an issue before creating a PR that makes significant (breaking) changes to an existing exercise or makes changes across many exercises. It is best to discuss these changes before doing the work. Discussions related to exercises that are not track specific can be found in exercism/discussions.

  • Follow the coding standards for Fortran. (If there is a formatter for the track's language, add instructions for using it here.)

  • Watch out for trailing spaces, extra blank lines, and spaces in blank lines.

  • All the tests for Fortran exercises can be run from the top level of the repo with ... Please run this command before submitting your PR.

Contributing a New Exercise

  • All Exercism exercises must be defined in x-common before they are implemented for a specific track. Please submit a PR there if your exercise is new to Exercism.

  • Please make sure the new exercise conforms to specifications in the exercism/x-common repo.

  • Each exercise must stand on its own. Do not reference files outside the exercise directory. They will not be included when the user fetches the exercise.

  • Exercises should use only the Fortran core libraries.

  • Please do not add a README or README.md file to the exercise directory. The READMEs are constructed using shared metadata, which lives in the exercism/x-common repository. Further explanation can be found in fixing-exercise-readmes

  • Each exercise should have a test suite, an example solution, a template file for the real implementation and ... (anything else that needs to go with each exercise for this track). The CI build expects files to be named using the following convention: (describe the Fortran convention for naming the various files that make up an exercise).

  • Please do not commit any configuration files or directories inside the exercise other than ...

  • Be sure to add it to the appropriate place in the config.json file. Also, please run bin/fetch-configlet && bin/configlet to ensure the exercise is configured correctly.

fortran's People

Contributors

adrien-ludwig, avysk, booniepepper, cmccandless, d3usxmachina, dependabot[bot], ee7, enmiligi, erikschierboom, exercism-bot, glennj, houhoulis, isaacg, jackhughesweb, kotp, kytrinyx, onassif, pclausen, saschamann, sentientmonkey, simadovakin, simisc, sjwarner, trellixvulnteam, zmoon

fortran's Issues

Issues with matrix exercise

There are some issues with the stub:

  • m declaration doesn't have the dimension that it needs (:)
  • the declared dimensions of the result (r or c) are wrong for both row and column (they should be switched as it stands)
  • the A declaration is not needed

Also, I feel that the dims should be reversed in the argument/tests. Although Fortran uses column-major storage order, I think most people still think of the first dimension as the "rows" dimension if you have a two-dimensional array (matrix). Wikipedia agrees:

in Fortran, arrays are stored in column-major order, while the array indexes are still written row-first (colexicographical access order)
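
For what it's worth, here is a tiny illustrative snippet (hypothetical, not taken from the exercise) showing both facts at once: reshape fills the array in column-major element order, while indexes are still written row-first.

program order_demo
  implicit none
  integer :: m(2, 3)   ! 2 rows, 3 columns

  ! Fill so that m(i, j) = 10*i + j; reshape consumes the source in
  ! column-major element order: (1,1), (2,1), (1,2), (2,2), (1,3), (2,3).
  m = reshape([11, 21, 12, 22, 13, 23], [2, 3])

  print *, m(1, 2)   ! prints 12: row 1, column 2 (row index written first)
  print *, m(2, 1)   ! prints 21: row 2, column 1
end program order_demo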

testlib missing

Hello,

I thought I'd have a look at the Fortran track, and started the hello world program according to the instructions on the website:
exercism download --exercise=hello-world --track=fortran

Following the instructions in hello_world_test.f90, I get the following error:

❯ cmake ..
CMake Error at CMakeLists.txt:31 (file):
  file COPY cannot find
  "/Users/funnellt/Exercism/fortran/hello-world/../../testlib".


CMake Error at CMakeLists.txt:34 (add_subdirectory):
  add_subdirectory given source "testlib" which is not an existing directory.


-- Configuring incomplete, errors occurred!
See also "/Users/funnellt/Exercism/fortran/hello-world/Debug/CMakeFiles/CMakeOutput.log".

And testlib is of course nowhere to be found. I could try to fix this if someone points me in the right direction.

[v3] Configure online editor

This issue is part of the migration to v3. You can read full details about the various changes here.

In Exercism v3, students can now choose to work on exercises directly from their browser, instead of having to download exercises to their local machine. The track-specific settings for the in-browser editor are defined in the top-level "online_editor" field in the track's config.json file. This field is defined as an object with two fields:

  • "indent_style": the indent style, either "space" or "tab".
  • "indent_size": the indent size, which is an integer (e.g. 4).

You can find a full description of these fields in the spec.

Goal

The "online_editor" field should be updated to correspond to the track's best practices regarding indentation.

Example

"online_editor": {
  "indent_style": "space",
  "indent_size": 4
}

Tracking

exercism/v3-launch#2

Building a training set of tags for fortran

Hello lovely maintainers 👋

We've recently added "tags" to students' solutions. These express the constructs, paradigms and techniques that a solution uses. We are going to be using these tags for lots of things including filtering, pointing a student to alternative approaches, and much more.

In order to do this, we've built out a full AST-based tagger in C#, which has allowed us to do things like detect recursion or bit shifting. We've set things up so other tracks can do the same for their languages, but it's a lot of work, and we've determined that it may actually be unnecessary. Instead we think that we can use machine learning to achieve tagging with good enough results. We've fine-tuned a model that can determine the correct tags for C# from the examples with a high success rate. It's also doing reasonably well in an untrained state for other languages. We think that with only a few examples per language, we can potentially get some quite good results, and that we can then refine things further as we go.

I released a new video on the Insiders page that talks through this in more detail.

We're going to be adding a fully-fledged UI in the coming weeks that allows maintainers and mentors to tag solutions and create training sets for the neural networks, but to start with, we're hoping you would be willing to manually tag 20 solutions for this track. In this post we'll add 20 comments, each with a student's solution and the tags our model has generated. Your mission (should you choose to accept it) is to edit the tags on each issue, removing any incorrect ones and adding any that are missing. In order to build one model that performs well across languages, it's best if you stick as closely as possible to the C# tags. Those are listed here. If you want to add extra tags, that's totally fine, but please don't arbitrarily reword existing tags, even if you don't like what Erik's chosen, as it'll just make it less likely that your language gets the correct tags assigned by the neural network.


To summarise - there are two paths forward for this issue:

  1. You're up for helping: Add a comment saying you're up for helping. Update the tags some time in the next few days. Add a comment when you're done. We'll then add them to our training set and move forward.
  2. You're not up for helping: No problem! Just please add a comment letting us know :)

If you tell us you're not able or willing to help, or no comment is added, we'll automatically crowd-source this in a week or so.

Finally, if you have questions or want to discuss things, it would be best done on the forum, so the knowledge can be shared across all maintainers in all tracks.

Thanks for your help! 💙


Note: Meta discussion on the forum

Create stub files for all exercises

We have decided to require all file-based tracks to provide stubs for their exercises.

The lack of a stub file creates an unnecessary pain point within Exercism, contributing a significant proportion of support requests, making things more complex for our students, and hindering our ability to automatically run test suites and provide automated analysis of solutions.

We believe that it's essential to understand error messages, know how to use an IDE, and create files. However, getting this right as you're just getting used to a language can be a frustrating distraction, as it can often require a lot of knowledge that tends to seep in over time. At the start, it can be challenging to google for all of these details: what file extension to use, what needs to be included, etc. Getting people up to speed with these things is not Exercism's focus, and we've decided that we are better served by removing this source of confusion, letting people get on with actually solving the exercises.

The original discussion for this is at exercism/discussions#238.

Therefore, we’d like this track to provide a stub file for each exercise.

  • If this track already provides stub files for all exercises, please close this issue.
  • If this track already has an open issue for creating stubs, then my apologies. Please close one as a duplicate.
  • Otherwise, please respond to this issue with useful details about what needs to be done to complete this task in this track so that people who are not familiar with the track may easily contribute.
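
For reference, a stub for this track could look something like the following. This is only a hypothetical sketch for a leap-year exercise; the actual module, function, and file names should follow the track's conventions.

! leap.f90: stub handed to the student; the body is left for them to fill in.
module leap
  implicit none
contains

  pure logical function is_leap_year(year)
    integer, intent(in) :: year

    ! TODO: implement this function
    is_leap_year = .false.
  end function is_leap_year

end module leap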

The master branch will be renamed to main

In line with our new org-wide policy, the master branch of this repo will be renamed to main. All open PRs will be automatically repointed.

GitHub will show you a notification about this when you look at this repo after renaming:

(Screenshot: GitHub's branch rename notification, 2021-01-27.)

In case it doesn't, this is the command it suggests:

git branch -m master main
git fetch origin
git branch -u origin/main main

You may like to update the primary branch on your forks too, which you can do under Settings -> Branches by clicking the pencil icon on the right-hand side under Default Branch:

(Screenshot: the Default Branch setting, 2021-01-27.)

We will post a comment below when this is done. We expect it to happen within the next 12 hours.

Errors in exercises CMakeLists.txt

if(CMAKE_Fortran_COMPILER_ID MATCHES "Intel") # Intel fortran
  if(WIN32)
   set (CCMAKE_Fortran_FLAG ${CCMAKE_Fortran_FLAGS} "/warn:all")
  else()
  set (CMAKE_Fortran_FLAGS ${CCMAKE_Fortran_FLAGS} "-warn all")
endif()
  1. CCMAKE should be CMAKE, in three places.
  2. FLAG should be FLAGS.
  3. But: the code should really use FFLAGS instead of internal cmake CMAKE_Fortran_FLAGS. Setting the latter effectively blocks FFLAGS from being used at all.

Lower in the file there is a comment which reads GFrotran (yes, with a typo). Should be fixed as well.

Travis build broken

https://travis-ci.org/pclausen/fortran/builds/658030086?utm_medium=notification&utm_source=email

Not quite sure what is going wrong here... Seems to be something more central to the Travis script. The Fortran files build and execute successfully.

gzip: stdin: not in gzip format

tar: Child returned status 1

tar: Error is not recoverable: exiting now

The command "bin/fetch-configlet" exited with 2.

0.00s$ bin/configlet lint .

/home/travis/.travis/functions: line 109: bin/configlet: No such file or directory

The command "bin/configlet lint ." exited with 127.

Build Test Runner

This issue is part of the migration to v3. You can read full details about the various changes here.

In Exercism v3, one of the biggest changes is that we'll automatically check if a submitted solution passes all the tests.

We'll check this via a new, track-specific tool: the Test Runner. Each test runner is track-specific. When a new solution is submitted, we run the track's test runner, which outputs a JSON file that describes the test results.

The test runner must be able to run the tests suites of both Concept Exercises and Practice Exercises. Depending on the test runner implementation, this could mean having to update the Practice Exercises to the format expected by the test runner.

Goal

Build a test runner for your track according to the spec.

If you are building a test runner from scratch, we have a starting guide and a generic test runner that can be used as the base for the new test runner.

If a test runner has already been built for this track, please check if it works on both Concept Exercises and Practice Exercises.

It can be very useful to check how other tracks have implemented their test runner.

Tracking

exercism/v3-launch#4

Ensure Fortran track is ready for v2 launch

There are a number of things we're going to want to check before the v2 site goes live. There are notes below that flesh out all the checklist items.

  • The track has a page on the v2 site: https://v2.exercism.io/tracks/fortran
  • The track page has a short description under the name (not starting with TODO)
  • The "About" section is a friendly, colloquial, compelling introduction
  • The "About" section follows the formatting guidelines
  • The code example gives a good taste of the language and fits within the boundaries of the background image
  • There are exercises marked as core
  • Exercises have rough estimates of difficulty
  • Exercises have topics associated with them
  • The first exercise is auto_approve: true

Track landing page

The v2 site has a landing page for each track, which should make people want to join it. If the track page is missing, ping @kytrinyx to get it added.

Blurb

If the header of the page starts with TODO, then submit a pull request to https://github.com/exercism/fortran/blob/master/config.json with a blurb key. Remember to get configlet and run configlet fmt . from the root of the track before submitting.

About section

If the "About" section feels a bit dry, then submit a pull request to https://github.com/exercism/fortran/blob/master/docs/ABOUT.md with suggested tweaks.

Formatting guidelines

In order to work well with the design of the new site, we're restricting the formatting of the ABOUT.md. It can use:

  • Bold
  • Italics
  • Links
  • Bullet lists
  • Number lists

Additionally:

  • Each sentence should be on its own line
  • Paragraphs should be separated by an empty line
  • Explicit <br/> can be used to split a paragraph into lines without spacing between them, however this is discouraged.

Code example

If the code example is too short, too wide, too long, or too uninteresting, submit a pull request to https://github.com/exercism/fortran/blob/master/docs/SNIPPET.txt with a suggested replacement.

Exercise metadata

Where the v1 site has a long, linear list of exercises, the v2 site has organized exercises into a small set of required exercises ("core").

If you update the track config, remember to get configlet and run configlet fmt . from the root of the track before submitting.

Topic and difficulty

Core exercises unlock optional additional exercises, which can be filtered by topic and difficulty; however, that will only work if we add topics and difficulties to the exercises in the track config, which is in https://github.com/exercism/fortran/blob/master/config.json

Auto-approval

We've currently made any hello-world exercises auto-approved in the backend of v2. This means that you don't need mentor approval in order to move forward when you've completed that exercise.

Not all tracks have a hello-world, and some tracks might want to auto approve other (or additional) exercises.

Track mentors

There are no bullet points for this one :)

As we move towards the launch of the new version of Exercism we are going to be ramping up on actively recruiting people to help provide feedback. Our goal is to get to 100%: everyone who submits a solution and wants feedback should get feedback. Good feedback.

If you're interested in helping mentor the track, check out http://mentoring.exercism.io/

When all of the boxes are ticked off, please close the issue.

Tracking progress in exercism/meta#104

Fix lint issue

Run configlet lint
The lint command is under development.
Please re-run this command regularly to see if your track passes the latest linting rules.

Missing file:
/home/runner/work/fortran/fortran/docs/RESOURCES.md

Missing file:
/home/runner/work/fortran/fortran/docs/TESTS.md

Configlet detected at least one problem.
For more information on resolving the problems, please see the documentation:
https://github.com/exercism/docs/blob/main/building/configlet/lint.md
Error: Process completed with exit code 1.

Check docs are up to date

Please check if your documentation files are still up-to-date.

The key documentation files to check are:

  • docs/ABOUT.md
  • docs/INSTALLATION.md
  • docs/LEARNING.md
  • docs/RESOURCES.md
  • docs/TESTS.md
  • exercises/shared/.docs/help.md
  • exercises/shared/.docs/tests.md

There might be more.

Link check report

To help identify invalid links, we've automatically checked the links of all *.md files in this repo.
This is the report of that check:

📝 Summary
---------------------
🔍 Total...........49
✅ Successful......49
⏳ Timeouts.........0
🔀 Redirected.......0
👻 Excluded.........0
🚫 Errors...........0

Tracking

exercism/v3-launch#54

Update status of Concept Exercises

This issue is part of the migration to v3. You can read full details about the various changes here.

Concept Exercises can have a status specified in their "status" field in their config.json entry, as specified in the spec. This status can be one of four values:

  • "wip": A work-in-progress exercise not ready for public consumption. Exercises with this tag will not be shown to students on the UI or be used for unlocking logic. They may appear for maintainers.
  • "beta": This signifies active exercises that are new and which we would like feedback on. We show a beta label on the site for these exercises, with a Call To Action of "Please give us feedback."
  • "active": The normal state of active exercises
  • "deprecated": Exercises that are no longer shown to students who have not started them (not usable at this stage).

The "status" key can also be omitted, which is the equivalent of setting it to "active".

Goal

The "status" field of Concept Exercises in the config.json file should be updated to reflect the status of the Concept Exercises. See the spec for more information.

If your track doesn't have any Concept Exercises, this issue can be closed.

Example: removed wip status

{
  "exercises": {
    "concept": [
      {
        "uuid": "93fbc7cf-3a7e-4450-ad22-e30129c36bb9",
        "slug": "cars-assemble",
        "name": "Cars, Assemble!",
        "concepts": ["if-statements", "numbers"],
        "prerequisites": ["basics"]
      },
      ...
    ]
  }
}

Example: replaced wip status with active

{
  "exercises": {
    "concept": [
      {
        "uuid": "93fbc7cf-3a7e-4450-ad22-e30129c36bb9",
        "slug": "cars-assemble",
        "name": "Cars, Assemble!",
        "concepts": ["if-statements", "numbers"],
        "prerequisites": ["basics"],
        "status": "active"
      },
      ...
    ]
  }
}

Tracking

exercism/v3-launch#14

Add 6 more exercises to have 20 exercises

exercises to add

Currently 14; the target is 20. The following look fairly easy to do in Fortran (they need only a few string operations):

  • Saddle Points: returns a variable-length list, which is kind of painful...
  • Tree Building: missing tests in problem-specifications/exercises/tree-building
  • Triangle
  • Queen Attack
  • Change: harder than I thought...
  • Rational Numbers
  • High Scores

Suggestions from @SaschaMann and Angelika Tyborska

  • pascal's triangle,
  • accumulate,
  • all-your-base,
  • grains,
  • isbn-verifier (maybe),
  • leap,
  • luhn (maybe),
  • pythagorean-triplet: tricky because it returns variable-length lists
  • prime-factors: tricky because it returns variable-length lists
  • perfect-numbers,
  • sieve
  • project euler
  • space age,
  • bowling
  • sum of multiples
  • clock
  • dnd character

Add prerequisites to Practice Exercises

This issue is part of the migration to v3. You can read full details about the various changes here.

Exercism v3 introduces a new type of exercise: Concept Exercises. All existing (V2) exercises will become Practice Exercises.

Concept Exercises and Practice Exercises are linked to each other via Concepts. Concepts are taught by Concept Exercises and practiced in Practice Exercises. Each Exercise (Concept or Practice) has prerequisites, which must be met to unlock an Exercise - once all the prerequisite Concepts have been "taught" by a Concept Exercise, the exercise itself becomes unlocked.

For example, in some languages completing the Concept Exercises that teach the "String Interpolation" and "Optional Parameters" concepts might then unlock the two-fer Practice Exercise.

Each Practice Exercise has two fields containing concepts: a practices field and a prerequisites field.

Practices

The practices key should list the slugs of Concepts that this Practice Exercise actively allows a student to practice.

  • These show up in the UI as "Practice this Concept in: TwoFer, Leap, etc"
  • Try and choose 3 - 8 Exercises that practice each Concept.
  • Try and choose at least two Exercises that allow someone to practice the basics of a Concept.
  • Some Concepts are very common (for example strings). In those cases we recommend choosing a few good exercises that make people think about those Concepts in interesting ways. For example, exercises that require UTF-8, string concatenation, char enumeration, etc, would all be good examples.
  • There should be one or more Concepts to practice per exercise.

Prerequisites

The prerequisites key lists the Concept Exercises that a student must have completed in order to access this Practice Exercise.

  • These show up in the UI as "Learn Strings to unlock TwoFer"
  • It should include all Concepts that a student needs to have covered to be able to complete the exercise in at least one idiomatic way. For example, for the TwoFer exercise in Ruby, prerequisites might include strings, optional-params, implicit-return.
  • For Exercises that can be completed using alternative Concepts (e.g. an Exercise solvable by loops or recursion), the maintainer should choose the one approach that they would like to unlock the Exercise, considering the student's journey through the track. In the loops/recursion example, they might think the exercise is good early practice of loops, or they might prefer to leave it for later to teach recursion. They can also make use of an analyzer to prompt the student to try an alternative approach: "Nice work on solving this via loops. You might also like to try solving this using Recursion."
  • There should be one or more prerequisites Concepts per exercise.

Although ideally all Concepts should be taught by Concept Exercises, we recognise that it will take time for tracks to achieve that. Any Practice Exercises that have prerequisites which are not taught by Concept Exercises, will become unlocked once the final Concept Exercise has been completed.

Goal

Practices

The "practices" field of each element in the "exercises.practice" field in the config.json file should be updated to contain the practice concepts. See the spec.

To help with identifying the practice concepts, the "topics" field can be used (if it has any contents). Once prerequisites have been defined for a Practice Exercise, the "topics" field should be removed.

Each practice concept should have its own entry in the top-level "concepts" array. See the spec.

Prerequisites

The "prerequisites" field of each element in the "exercises.practice" field in the config.json file should be updated to contain the prerequisite concepts. See the spec.

To help with identifying the prerequisites, the "topics" field can be used (if it has any contents). Once prerequisites have been defined for a Practice Exercise, the "topics" field should be removed.

Each prerequisite concept should have its own entry in the top-level "concepts" array. See the spec.

Example

{
  "exercises": {
    "practice": [
      {
        "uuid": "8ba15933-29a2-49b1-a9ce-70474bad3007",
        "slug": "leap",
        "name": "Leap",
        "practices": ["if-statements", "numbers", "operator-precedence"],
        "prerequisites": ["if-statements", "numbers"],
        "difficulty": 1
      }
    ]
  }
}

Tracking

exercism/v3-launch#6

ABOUT Fortran page

On the tracks page, when you open the Fortran track it doesn't have any description of the language. It's just an empty space.

CMake and CTest for fortran track

So I have now set up a branch fortran-cmake in my fork:
https://github.com/pclausen/fortran/tree/fortran-cmake

The main CMakeLists.txt is a modified version of the one from https://github.com/exercism/cpp

The first build only includes hello-world and more or less completes OK:
https://travis-ci.org/pclausen/fortran/jobs/418407246

But I get an error: The command "bin/configlet lint ." exited with 1. Do we need this configlet lint?

The next step is to convert leap or bob to use ctest and some asserts, which I am still thinking about a bit. When it gets a bit further I will create a pull request(?)

I also updated installation doc https://github.com/pclausen/fortran/blob/fortran-cmake/docs/INSTALLATION.md

Suggestions for improvements are very welcome. I don't know if you can see or commit to my branch. Let me know if you need access, and please also tell me how I can give you access.

What was it like to learn Fortran?

We’ve recently started a project to find the best way to design our tracks, in order to optimize the learning experience of students.

As a first step, we’ll be examining the ways in which languages are unique and the ways in which they are similar. For this, we’d really like to use the knowledge of everyone involved in the Exercism community (students, mentors, maintainers) to answer the following questions:

  1. How was your experience learning Fortran? What was helpful while learning Fortran? What did you struggle with? How did you tackle problems?
  2. In what ways did Fortran differ from other languages you knew at the time? What was hard to learn? What did you have to unlearn? What syntax did you have to remap? What concepts carried over nicely?

Could you spare 5 minutes to help us by answering these questions? It would greatly help us improve the experience students have learning Fortran :)

Note: this issue is not meant as a discussion, just as a place for people to post their own, personal experiences.

Want to keep your thoughts private but still help? Feel free to email me at [email protected]

Thank you!

Update status of track

This issue is part of the migration to v3. You can read full details about the various changes here.

There are several new features in Exercism v3 for tracks to build. To selectively enable these features on the Exercism v3 website, each track must keep track of the status of the following features:

The status of these features is specified in the top-level "status" field in the track's config.json, as specified in the spec.

Goal

The "status" field in the config.json file should be updated to indicate the status of the features for this track. The list of features is defined in the spec.

Example

{
  "status": {
    "concept_exercises": true,
    "test_runner": true,
    "representer": false,
    "analyzer": false
  }
}

Tracking

exercism/v3-launch#12

Extract track-specific help instructions from `config/exercise_readme.go.tmpl`

Each track needs a file that contains track-specific instructions on how to get help. The contents of this document are only presented to the student when using the CLI. This file lives at exercises/shared/.docs/help.md. You almost certainly already have this information, but need to move it to the correct place.

For v2 tracks, this information was (usually) included in the readme template found at config/exercise_readme.go.tmpl. As such, tracks can extract the help instructions from the config/exercise_readme.go.tmpl file to the exercises/shared/.docs/help.md file.

See https://github.com/exercism/csharp/pull/1557/files for an example PR.

Tracking

exercism/v3-launch#50

Extract track-specific test instructions from `config/exercise_readme.go.tmpl`

Each track needs a file that contains track-specific instructions on how to manually run the tests. The contents of this document are only presented to the student when using the CLI. This file lives at exercises/shared/.docs/tests.md. You almost certainly already have this information, but need to move it to the correct place.

For v2 tracks, this information was (usually) included in the readme template found at config/exercise_readme.go.tmpl. As such, tracks can extract the test instructions from the config/exercise_readme.go.tmpl file to the exercises/shared/.docs/tests.md file.

See https://github.com/exercism/csharp/pull/1557/files for an example PR.

Tracking

exercism/v3-launch#51

[v3] Add key features

This issue is part of the migration to v3. You can read full details about the various changes here.

In Exercism v3, each track must specify exactly six "key features". Exercism uses these features to highlight the most interesting, unique or "best" features of a language to a student.

Key features are specified in the top-level "key_features" field in the track's config.json file and are defined as an array of objects, as specified in the spec.

Goal

The "key_features" field in the config.json file should be updated to describe the six "key features" of this track. See the spec.

Example

{
  "key_features": [
    {
      "icon": "features-oop",
      "title": "Modern",
      "content": "C# is a modern, fast-evolving language."
    },
    {
      "icon": "features-strongly-typed",
      "title": "Cross-platform",
      "content": "C# runs on almost any platform and chipset."
    },
    {
      "icon": "features-functional",
      "title": "Multi-paradigm",
      "content": "C# is primarily an object-oriented language, but also has lots of functional features."
    },
    {
      "icon": "features-lazy",
      "title": "General purpose",
      "content": "C# can be used for a wide variety of workloads, like websites, console applications, and even games."
    },
    {
      "icon": "features-declarative",
      "title": "Tooling",
      "content": "C# has excellent tooling, with linting and advanced refactoring options built-in."
    },
    {
      "icon": "features-generic",
      "title": "Documentation",
      "content": "Documentation is excellent and exhaustive, making it easy to get started with C#."
    }
  ]
}

Tracking

exercism/v3-launch#5

Moving from Travis to GitHub Actions

Hello 🙂

Over the last few months we've been transferring all our CI from Travis to GitHub Actions (GHA). We've found that GHA are easier to work with, more reliable, and much much faster.

Based on our success with GHA and increasing intermittent failures on Travis, we have now decided to try and remove Travis from Exercism's org altogether and shift everything to GHA. This issue acts as a call to action if your track is still using Travis.

For most CI checks this should just be a matter of translating Travis syntax to GHA syntax, and hopefully quite straightforward (see this PR for an example). However, if you do encounter any issues doing this, please ask on Slack where lots of us now have experience with GHA, or post a comment here and I'll tag relevant people. This would also make a good Hacktoberfest issue for anyone interested in making their first contribution 🙂

If you've already switched this track to GHA, please feel free to close this issue and ignore it.

Thanks!

[Important] The current website is about to enter maintenance mode to aid with v3 launch

TL;DR: At the end of Jan 2021, all tracks will enter v3 staging mode. Updates will no longer sync with the current live website, but instead sync with the staging website. The Fortran section of the v3 repo will be extracted and PR'd into this track (if appropriate). Further issues and information will follow over the coming weeks to prepare Fortran for the launch of v3.

Over the last 12 months, we've all been hard at work developing Exercism v3. Up until this point, all v3 tracks have been under development in a single repository - the v3 repository. As we get close to launch, it is time for us to explode that monorepo back into the normal track repos. Therefore, at the end of this month (January 2021), we will copy the v3 tracks contents from the v3 repository back to the corresponding track repositories.

As v3 tracks are structured differently than v2 tracks, the current (v2) website cannot work with v3 tracks. To prevent the v2 website from breaking, we'll disable syncing between track repositories and the website. This will effectively put v2 in maintenance mode, where any changes in the track repos won't show up on the website. This will then allow tracks to work on preparing for the Exercism v3 launch.

Where possible, we will script the changes needed to prepare tracks for v3. For any manual changes that need to happen, we will create issues on the corresponding track repositories. We will be providing lots of extra information about this in the coming weeks.

We're really excited to enter the next phase of building Exercism v3, and to finally get it launched! 🙂

Implement continuous integration

Implement a track test suite that can run both locally and on Travis CI. The track test suite should verify that each exercise makes sense, by running the exercise tests against the example solution.

Definition of terms

  • exercise test suite: the test suite that is delivered to Exercism users as part of an Exercism exercise
  • track test suite: the test suite that helps ensure that all of the exercise test suites in a language track are solvable

Background

When implementing an exercise test suite, we want to provide a good user experience for the people writing a solution to the exercise. People should not be confused or overwhelmed.

In most Exercism language tracks, we simulate Test-Driven Development (TDD) by implementing the tests in order of increasing complexity. We try to ensure that each test either

  • helps triangulate a solution to be more generic, or
  • requires new functionality incrementally.

Many test frameworks will randomize the order of the tests when running them. This is an excellent practice, which helps ensure that subsequent tests are not dependent on side effects from earlier tests. However, in order to simulate TDD we want tests to run in the order that they are defined, and we want them to fail fast, that is to say, as soon as the test suite encounters a failure, we want the execution to stop. This ensures that the person implementing the solution sees only one error or failure message at a time, unless they make a change which causes prior tests to fail.

This is the same experience that they would get if they were implementing each new test themselves.

Most testing frameworks do not have the necessary configuration options to get this behavior directly, but they often do have a way of marking tests as skipped or pending. The mechanism for this will vary from language to language and from test framework to test framework.

Whatever the mechanism—functions, methods, annotations, directives, commenting out tests, or some other approach—these are changes made directly to the test file. The person solving the exercise will need to edit the test file in order to "activate" each subsequent test.
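
As a purely hypothetical Fortran-flavoured example of that last point, later tests could simply ship commented out, and the student activates each one by deleting the comment marker once the previous test passes:

! Hypothetical illustration only; the real mechanism varies by track.
program pending_demo
  implicit none

  call test_handles_empty_input()
  ! Remove the "!" on the next line to activate the second test:
  ! call test_handles_longer_input()

contains

  subroutine test_handles_empty_input()
    print *, "ok: handles empty input"
  end subroutine test_handles_empty_input

  subroutine test_handles_longer_input()
    print *, "ok: handles longer input"
  end subroutine test_handles_longer_input

end program pending_demo

A track test suite that runs the example solutions would then need to strip those comment markers programmatically, which is exactly the kind of temporary rewriting described below.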

Any tests that are marked as skipped will not be verified by the track test suite unless special care is taken.

Additionally, in some programming languages, the name of the file containing the solution is hard-coded in the test suite, and the example solution is not named in the way that we expect people to name their files.

We will need to temporarily (and programmatically) edit the exercise test suites to ensure that all of their tests are active. We may also need to rename the example solution file(s) in order for the exercise test suite to run against it.

Avoiding accidental git check-ins

It's important that if we rewrite files in any way during a test run, these changes do not accidentally get checked in to the git repository.

Therefore, many language tracks write the track test suite in such a way that it copies the exercise to a temporary location outside of the git repository before editing or rewriting the exercise files during a test run.

Working around long-running track test suites

Usually as people are developing the track, they're focused on a single exercise. If running the entire track test suite against all of the exercises takes a long time, it is often worth making it possible to verify just one exercise at a time.

Example build file

The PHP track has created a Makefile. The Ruby track uses Rake, which is a tool written in Ruby, allowing the track maintainers to write custom code in the language of the track to customize the build with a Rakefile.

Pass linting checks

This issue is part of the migration to v3. You can read full details about the various changes here.

The configlet tool has a lint command that checks if a track's configuration files are properly structured - both syntactically and semantically. Misconfigured tracks may not sync correctly, may look wrong on the website, or may present a suboptimal user experience, so configlet's guards play an important part in maintaining the integrity of Exercism.

We're updating configlet to work with v3 tracks, which have a different set of requirements than v2 tracks.

The full list of rules that will be checked by the linter can be found in this spec.

⚠ Note that only a subset of the linting rules has been implemented at this moment. This means that while your track may be passing the checks now, it might fail later. We thus strongly suggest you keep this issue open until we let you know otherwise.

Goal

Ensure that the track passes all the (v3 track) checks defined in configlet lint.

To help verify that the track passes all the linting rules, the v3 preparation PR has added a GitHub Actions workflow that automatically runs configlet lint.

It is also possible to run configlet lint locally by running the ./bin/fetch-configlet (or ./bin/fetch-configlet.ps1) script to download a local copy of the configlet binary. Once downloaded, you can then do ./bin/configlet lint to run the linting on your own machine.

Tracking

exercism/v3-launch#3

Build analyzer

In Exercism v3, we are making increased use of our v2 analyzers. Analyzers automatically assess students' submissions and provide mentor-style commentary. They can be used to catch common mistakes and/or do complex solution analysis that can't easily be done directly in a test suite.

Each analyzer is track-specific. When a new solution is submitted, we run the track's analyzer, which outputs a JSON file that contains the analysis results.

In v2, analyzer comments were given to a mentor to pass to a student. In v3, the analyzers will normally output directly to students, although we have added an extra key to output suggestions to mentors. If your track already has an analyzer, the only requisite change is updating the outputted copy to be student-facing.

The analyzer is an optional tool though, which means that if a track does not have an analyzer, it will still function normally.

Goal

Build an analyzer for your track according to the spec. Check this page to help you get started with building an analyzer.

It can be very useful to check how other tracks have implemented their analyzer.

If your track already has a working analyzer, please close this issue and ensure that the .status.analyzer key in the track config.json file is set to true.

Choosing between representer and analyzer

There is some overlap between the goals of the representer and the analyzer. If you want to build both, we recommend starting by building the representer for the following reasons:

  • Representers are usually (far) easier to implement
  • Representers can have a far bigger impact on the mentoring load than analyzers by empowering mentors
  • Representers apply to all exercises, whereas analyzers usually target specific exercises or a subset

Tracking

exercism/v3-launch#53

[v3] Add tags

This issue is part of the migration to v3. You can read full details about the various changes here.

In Exercism v3, tracks can be annotated with tags. This allows searching for tracks with a certain tag combination, making it easy for students to find an interesting track to join.

Tags are specified in the top-level "tags" field in the track's config.json file and are defined as an array of strings, as specified in the spec.

Goal

The "tags" field in the config.json file should be updated to contain the tags that are relevant to this track. The list of tags that can be used is listed in the spec.

Example

{
  "tags": [
    "runtime/jvm",
    "platform/windows",
    "platform/linux",
    "paradigm/declarative",
    "paradigm/functional",
    "paradigm/object_oriented"
  ]
}

Tracking

exercism/v3-launch#1

Fix getting started instructions for fortran

Some exercise README templates contain links to pages which no longer exist in v2 Exercism.

For example, C++'s README template had a link to /languages/cpp for instructions on running tests. The correct URLs to use can be found in the 'Still stuck?' sidebar of exercise pages on the live site. You'll need to join the track and go to the first exercise to see them.

Please update any broken links in the 'config/exercise_readme.go.tmpl' file, and run 'configlet generate .' to generate new exercise READMEs with the fixes.

Instructions for generating READMEs with configlet can be found at:
https://github.com/exercism/docs/blob/master/language-tracks/exercises/anatomy/readmes.md#generating-a-readme

Instructions for installing configlet can be found at:
https://github.com/exercism/docs/blob/bc29a1884da6c401de6f3f211d03aabe53894318/language-tracks/launch/first-exercise.md#the-configlet-tool

Tracking exercism/exercism#4102

Consider consistency for the exercises

  • Is there a style guide for Fortran?
  • Are these styles encouraged or enforced?
  • Are there any conventions that we should adopt on this track for the sake of consistency?
  • Can we enforce these?
  • Is there a linter? Are there many? Should we use one?
  • When you add a linter, edit the pull request template [link]
  • Update the pull request template with checks that are appropriate for this track
  • Is there a common convention for filenames? If not, what should our convention be?

Note that this is about the exercises (the test suites and code examples), not people's solutions.

Where are the Fortran communities and enthusiasts?

As we move towards the launch of the new version of Exercism we are going to be ramping up on actively recruiting people to help provide feedback.

Our goal is to get to 100%: everyone who submits a solution and wants feedback should get feedback. Good feedback. You can read more about this aspect of the new site here: http://mentoring.exercism.io/

To do this, we're going to need a lot more information about where we can find language enthusiasts.

  • Is Fortran supported by one or more large organizations?
  • Does Fortran have an official community manager?
  • Do you know of specific communities (online or offline) that are enthusiastic about Fortran? (Chat communities, forums, meetups, student clubs, etc)
  • Are there popular conferences for Fortran? (If so, what are some examples?)
  • Are there any organizations who are targeted specifically at getting certain subgroups or demographics interested in Fortran? (e.g. kids, teenagers, career changers, people belonging to various groups that are typically underrepresented in tech?)
  • Are there specific groups or programs dedicated to mentoring people in Fortran?
  • Are there popular newsletters for Fortran?
  • Is Fortran taught at programming bootcamps? (If so, what are some examples?)
  • Is Fortran taught at universities? (If so, what are some examples?)

In other words: where do people care a lot and/or know a lot about Fortran?

This is part of the project being tracked in exercism/meta#103

Remove obsolete version tracking assertions in exercises

Some tracks have added assertions to the exercise test suites that ensure that the solution has a hard-coded version in it.
In the old version of the site, this was useful, as it let commenters see what version of the test suite the code had been written against, and they wouldn't accidentally tell people that their code was wrong, when really the world had just moved on since it was submitted.

If this track does not have any assertions that track versions in the exercise tests, please close this issue.

If this track does have this bookkeeping code, then please remove it from all the exercises.

See exercism/exercism#4266 for the full explanation of this change.

Use latest version of test module in editor

This was previously a comment on #227, but I'm moving it to a new issue because I think it's specific to how the test module is made available when testing in the editor.

Changes made to the test module in #226 and #227 are not applied when testing in the editor. When testing an incorrect solution to Sieve, there's still the "An error occurred while running your tests. This might mean..." message — this is the issue that #227 was intended to solve. For Saddle Points (with correct and incorrect solutions), the message is:

We received the following error when we ran your code:

/tmp/saddle-points/saddle_points_test.f:::
    |     character(MAX_RESULT_STRING_LEN) :: s
Error: GNU Extension: Symbol max_result_string_len is used before it is typed at ()

/tmp/saddle-points/saddle_points_test.f:::
    |   function pa_to_s(p) result(s)
Error: Function result s at () has no IMPLICIT type

This suggests that the editor is using the updated version of saddle_points_test.f90, but not the latest TesterMain.f90. I can reproduce the same error when working locally by using the old TesterMain.f90 with the new saddle_points_test.f90.

When testing an incorrect solution to High Scores in the editor, failed tests are double-counted, so the same issue applies to #226 (i.e., the solution is being tested by the pre-#226 version of the test module).

Originally posted by @simisc in #227 (comment)

Test runner needs correctly formatted output from TesterMain

see https://github.com/exercism/docs/blob/main/building/tooling/test-runners/interface.md

{
  "version": 2,
  "status": "fail",
  "message": null,
  "tests": [
    {
      "name": "Test that the thing works",
      "status": "fail",
      "message": "Expected 42 but got 123123",
      "output": "Debugging information output by the user",
      "test_code": "assert_equal 42, answerToTheUltimateQuestion()"
    }
  ]
}

The current output is missing the "message" and "test_code" items:

     { "name"  : "Test 23: non-question ending with whitespace",
       "status": "fail" }
   ],
   "version": 2,
   "status": "fail"
 }

Logo needed

Fortran really doesn't have a logo as far as I can tell. I'd love to see something really cool that looks like a punchcard, but I'm not exactly super good at illustration. Would love help here, but if not we'll end up with a simple pink/black fixed width "f".

Build Representer and Analyzer

This issue is part of the migration to v3. You can read full details about the various changes here.

Representer

In Exercism v3, we're introducing a new (optional) tool: the representer. The goal of the representer is to take a solution and return a representation, which is an extraction of a solution to its essence, with normalized names, comments, spacing, etc., but still uniquely identifying the approach taken. Two different ways of solving the same exercise must not have the same representation.
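
As a rough, hypothetical illustration of what "normalized" means here, a representer might map both of the Fortran solutions sketched in the comments below to the same representation:

! Solution A, as submitted by one student:
!   pure function square(x) result(y)
!     integer, intent(in) :: x
!     integer :: y
!     y = x * x   ! multiply the number by itself
!   end function square
!
! Solution B, the same approach with different names and no comments:
!   pure function sq(n) result(res)
!     integer, intent(in) :: n
!     integer :: res
!     res = n * n
!   end function sq
!
! Both could normalize to something like:
pure function PLACEHOLDER_1(PLACEHOLDER_2) result(PLACEHOLDER_3)
  integer, intent(in) :: PLACEHOLDER_2
  integer :: PLACEHOLDER_3
  PLACEHOLDER_3 = PLACEHOLDER_2 * PLACEHOLDER_2
end function PLACEHOLDER_1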

Each representer is track-specific. When a new solution is submitted, we run the track's representer, which outputs two JSON files that describe the representation.

Once we have a normalized representation for a solution, a team of vetted mentors will look at the solution and comment on it (if needed). These comments will then automatically be submitted to each new solution with the same representation. A notification will be sent for old solutions with a matching representation.

Each track should build a representer according to the spec. For tracks building a representer from scratch, we have a starting guide.

The representer is an optional tool though, which means that if a track does not have a representer, it will still function normally.

Analyzer

In Exercism v3, we are making increased use of our v2 analyzers. Analyzers automatically assess students' submissions and provide mentor-style commentary. They can be used to catch common mistakes and/or do complex solution analysis that can't easily be done directly in a test suite.

Each analyzer is track-specific. When a new solution is submitted, we run the track's analyzer, which outputs a JSON file that contains the analysis results.

In v2, analyzer comments were given to a mentor to pass to a student. In v3, the analyzers will normally output directly to students, although we have added an extra key to output suggestions to mentors. If your track already has an analyzer, the only requisite change is updating the outputted copy to be student-facing.

Each track should build an analyzer according to the spec. For tracks building an analyzer from scratch, we have a starting guide.

The analyzer is an optional tool though, which means that if a track does not have an analyzer, it will still function normally.

Goal 1

Build a representer for your track according to the spec. Check this page to help you get started with building a representer.

Note that the simplest representer is one that merely returns the solution's source code.

It can be very useful to check how other tracks have implemented their representer.

Goal 2

Build an analyzer for your track according to the spec. Check this page to help you get started with building an analyzer.

It can be very useful to check how other tracks have implemented their analyzer.

Choosing between representer and analyzer

If you want to build both, we recommend starting by building the representer for the following reasons:

  • Representers are usually (far) easier to implement
  • Representers can have a far bigger impact on the mentoring load than analyzers by empowering mentors
  • Representers apply to all exercises, whereas analyzers usually target specific exercises or a subset

Tracking

exercism/v3-launch#8

Launch Tracker 🔴

This issue is part of the migration to v3. You can read full details about the various changes here.

To get your track ready for Exercism v3, the following needs to be done:

This issue may be automatically added to over time. While track maintainers should check off completed items, please do not add/edit items in the list.

Tracking

exercism/v3-launch#7

Investigate/write alternative unit test framework

Right now, I've started with funit.

Pros:

  • Fortran-like syntax
  • From what I've seen, cleanest/easiest to write tests in
  • Codebase is Ruby, easy to follow

Cons:

  • Not actually Fortran
  • Requires Ruby and RubyGems to be installed
  • Weird bugs (can't use strings with commas in assertions?)
  • Invalid Fortran test ends up with
  • Has to run for a folder vs file

Given the list of other options, I'm not sure I'm seeing anything right now that meets what I would like:
http://fortranwiki.org/fortran/show/Unit+testing+frameworks

Ideally, I want something that allows you to write valid Fortran (so it can be syntax-checked first), and requires as little setup for the user as possible (i.e. no new languages).

So I want to write this...

module hello_test
  use hello
  character(20) :: expected_greeting

  function setup
    expected_greeting = 'Hello, World!'
  end function setup
  
  function test_hello
    assert_equals( expected_greeting, greet() )
  end function test_hello

end module hello_test

and be able run it like...

  $ xfunit hello_test.f90
  .
  1 test passed, 1 assertion

This may require parsing the test Fortran module to scan for methods like setup/teardown & test_*.

Simplify CI/CD

There have been multiple unrelated PRs making small tweaks to CI/CD.

We should take some time (after getting more exercises, etc.) to revisit it and see if there's a way to simplify it or make it less fragile.

bob: Update to clarify ambiguity regarding shouted questions

TL;DR: the problem specification for the Bob exercise has been updated. Consider updating the test suite for Bob to match. If you decide not to update the exercise, consider overriding description.md.


Details

The problem description for the Bob exercise lists four conditions:

  • asking a question
  • shouting
  • remaining silent
  • anything else

There's an ambiguity, however, for shouted questions: should they receive the "asking" response or the "shouting" response?

In exercism/problem-specifications#1025 this ambiguity was resolved by adding an additional rule for shouted questions.

If this track uses exercise generators to update test suites based on the canonical-data.json file from problem-specifications, then now would be a good time to regenerate 'bob'. If not, then it will require a manual update to the test case with input "WHAT THE HELL WERE YOU THINKING?".

See the most recent canonical-data.json file for the exact changes.

Remember to regenerate the exercise README after updating the test suite:

configlet generate . --only=bob --spec-path=<path to your local copy of the problem-specifications repository>

You can download the most recent configlet at https://github.com/exercism/configlet/releases/latest if you don't have it.

If, as track maintainers, you decide that you don't want to change the exercise, then please consider copying problem-specifications/exercises/bob/description.md into this track, putting it in exercises/bob/.meta/description.md and updating the description to match the current implementation. This will let us run the configlet README generation without having to worry about the bob README drifting from the implementation.
