
Exercism Common Lisp Track


Exercism exercises in Common Lisp.

Contributing to the Common Lisp Track

There are several ways to contribute to the Common Lisp track including (but not limited to):

  • Reporting problems with the track.
  • Working on the test runner.
  • Working on the representer.
  • Working on the analyzer.
  • Working on concept exercises.
  • Working on practice exercises.
  • Working on track documents.

There are two guides to the structure of the track and its tooling that are worth becoming familiar with:

  • The language track guide. This describes how all the language tracks are put together, as well as details about the common metadata.

  • The track tooling guide. This describes the interface to the various tooling (test runner, representer and analyzer) as well as how they are used and invoked.

Issues

Feel free to file an issue on the track repository for problems of any size, including typographical errors or poor wording. You can greatly help improve the quality of the exercises by reporting invalid solutions that pass the tests or valid solutions that fail them.

For issues specific to the analyzer, the representer, or the test runner, please file them in the appropriate repository.

Pull Requests

Feel free to submit pull requests to correct any issues or to add new functionality.

For pull requests specific to the analyzer, the representer, or the test runner, please open them in the appropriate repository.

Pull Requests should be focused on a single change. They must pass the CI system before they will be merged.

Creating or Modifying Exercises

There are two types of exercises: concept and practice.

Concept exercises are intended to teach the student a particular concept of the language. They should be simple and short. Refer to the document on the anatomy of a concept exercise for details of the parts that are needed. The work needed for a concept exercise can be substantial, so feel free to create an issue or pull request to discuss ideas for a concept exercise so it can be worked on collaboratively.

Practice exercises are intended to allow a student to further practice and extend their knowledge of a concept. They can be longer and/or more 'clever'. Refer to the document on the anatomy of a practice exercise for details of the parts that are needed for a practice exercise.

Practice Exercise Generation

Many practice exercises are part of a canonical set of exercises shared across tracks (information on this can be found in the problem specifications repository). There is a generator in the ./bin folder that you can use to generate all of the requisite files from the problem specifications. (Note: you will need to have cloned the problem specifications repository for the generator to work.) The generator is written in Python, so you will need Python 3.8 or later installed. You can run the script directly and follow the prompts, or you can run it from the command line. To run the generator from the command line, first navigate to your common-lisp repository. From there, there are two ways to run the generator. The first is to enter the following:

python ./bin/lisp_exercise_generator.py

and from there, follow the prompts. The second way is to type in:

python ./bin/lisp_exercise_generator.py [-f] [path exercise author]

where:

  • path is the relative or absolute path to your problem-specifications repository
  • exercise is the name of the exercise to be generated
  • author is your GitHub handle
  • -f is a flag to force overwrite an already existing exercise

Either method will generate and fill in all the necessary files, with the exception of the .meta/example.lisp file, which you will need to complete yourself. The common-lisp/config.json file will remain unaltered; you will have to update it manually.

A Common Lisp replacement for this generator will be coming "soon".

Development Setup

This track uses SBCL for its development. Since Common Lisp is a standardized language and (at present) the exercises only use features and behavior specified by the standard, any other conforming implementation could be used to develop features for the track. However, any tooling created for this track (such as parts of its build system) must work in SBCL. Describing how to install a Common Lisp implementation is outside the scope of this document; please refer to the documentation for your chosen implementation for details.

The track also uses QuickLisp for system management. Please refer to its documentation for instructions on how to install it.

A note about QuickLisp & ASDF registries

The track contains some tools useful during development, such as CI tasks. These are provided as ASDF systems. To ensure they are found by QuickLisp and ASDF, either symbolically link them into your quicklisp/local-projects directory or configure your ASDF registry appropriately.

A note about markdown files

Some exercises have an introduction.md.tpl file. This means that the exercise's introduction.md file is not meant to be edited by hand; instead, it is generated by combining other documents. To update the introduction.md files, run ./bin/configlet generate.

Track Build System

This track uses GitHub Actions as a build system.

It contains several workflows:

Building & Testing

To run the build "manually", execute the following from the root directory of the track:

  • In the shell: ./bin/fetch-configlet && ./configlet lint
  • In the REPL: (progn (asdf:load-system "config-checker") (config-checker:check-config))
  • In the REPL: (asdf:test-system "test-exercises")
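The same steps can also be driven non-interactively from the shell. This is a sketch only; it assumes SBCL is installed and that QuickLisp/ASDF can locate the track's systems (see the registry note below):

```shell
# Lint the track configuration with configlet
./bin/fetch-configlet && ./configlet lint

# Run the config checker and the exercise test suite in one SBCL session
sbcl --non-interactive \
     --eval '(asdf:load-system "config-checker")' \
     --eval '(config-checker:check-config)' \
     --eval '(asdf:test-system "test-exercises")'
```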

common-lisp's People

Contributors

alvaro121, asahnoln, azrazalea, benreyn, canweriotnow, colinbarry, defunkydrummer, dependabot[bot], ee7, erikschierboom, exercism-bot, glennj, hansonchar, ihid, kephas, ketigid, kotp, kytrinyx, mtreis86, neshamon, objarni, pault89, serialhex, sjwarner, thelostlambda, timotheosh, verdammelt, wobh, wsgac, yurrriq


common-lisp's Issues

"point-mutations" is deprecated in favor of hamming

This happened a while back, and it was for really weird legacy reasons.

I've since fixed the underlying issues that caused the problem, but for consistency
it would be nice to rename point-mutation to hamming, so that all the tracks are using
the same exercise name.

Once the problem has been renamed, I can run a script on the website to point people's
existing point-mutations solutions to the new hamming exercise so that they'll be able
to review solutions to hamming, and people who solve the new hamming exercise can see
all the old ones.

Verify that nothing links to help.exercism.io

The old help site was deprecated in December 2015. We now have content that is displayed on the main exercism.io website, under each individual language on http://exercism.io/languages.

The content itself is maintained along with the language track itself, under the docs/ directory.

We decided on this approach since the maintainers of each individual language track are in the best position to review documentation about the language itself or the language track on Exercism.

Please verify that nothing in docs/ refers to the help.exercism.io site. It should instead point to http://exercism.io/languages/:track_id (at the moment the various tabs are not linkable, unfortunately; we may need to reorganize the pages in order to fix that).

Also, some language tracks reference help.exercism.io in the SETUP.md file, which gets included into the README of every single exercise in the track.

We may also have referenced non-track-specific content that lived on help.exercism.io. This content has probably been migrated to the Contributing Guide of the x-common repository. If it has not been migrated, it would be a great help if you opened an issue in x-common so that we can remedy the situation. If possible, please link to the old article in the deprecated help repository.

If nothing in this repository references help.exercism.io, then this can safely be closed.

Build failing after #98

#98 is a much needed update to the README, so this is kind of a WTF.

Failing test Error in Robot, https://travis-ci.org/exercism/xlisp/jobs/109074111#L907

INFO: Running tests for #<PACKAGE "ROBOT">
; in: LAMBDA ()
;     (LISP-UNIT:ASSERT-TRUE
;      (AND (= (LENGTH ROBOT-NAME-TEST::NAME) 5)
;           (EVERY #'ROBOT-NAME-TEST::IS-UPPER-ALPHA-P
;                  (SUBSEQ ROBOT-NAME-TEST::NAME 0 2))
;           (EVERY #'ROBOT-NAME-TEST::IS-DIGIT-P
;                  (SUBSEQ ROBOT-NAME-TEST::NAME 2 5))))
; --> LISP-UNIT::EXPAND-T-OR-F LET 
; ==>
;   #'AND
; 
; caught ERROR:
;   The macro name AND was found as the argument to FUNCTION.
; 
; compilation unit finished
;   caught 1 ERROR condition
 | Execution error:
 | Execution of a form compiled with errors.
Form:
  #'AND
Compile-time error:
  The macro name AND was found as the argument to FUNCTION.
 |
NAME-MATCHES-EXPECTED-PATTERN: 0 assertions passed, 0 failed, and an execution error.

EDIT: And this is weird

https://travis-ci.org/exercism/xlisp/jobs/109074111#L1489

EDIT: removed SBCL warnings about undefined functions in the atbash-cipher example. This is normal since SBCL warns about this if the reference comes before the definition in the file. We may want to specify an example style guideline about this but it's not a problem. From https://travis-ci.org/exercism/xlisp/jobs/109074111#L1554 on down the atbash tests run fine.
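The compile-time error in the log above comes from a Common Lisp rule: FUNCTION (the #' reader macro) accepts only function names, and AND is a macro. A minimal sketch of the problem and the usual fix (the function name here is illustrative):

```lisp
;; AND is a macro, so #'AND is invalid and signals the error shown above.
;; Where a function is required, wrap the macro in a LAMBDA instead:
(defun all-true-p (flags)
  ;; (every #'and flags) ; would signal "The macro name AND was found
  ;;                     ;  as the argument to FUNCTION."
  (every (lambda (flag) (and flag t)) flags))
```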

Copy track icon into language track repository

Right now all of the icons used for the language tracks (which can be seen at http://exercism.io/languages) are stored in the exercism/exercism.io repository in public/img/tracks/. It would make a lot more sense to keep these images along with all of the other language-specific stuff in each individual language track repository.

There's a pull request that is adding support for serving up the track icon from the x-api, which deals with language-specific stuff.

In order to support this change, each track will need to

In other words, at the end of it you should have the following file:

./img/icon.png

See exercism/exercism#2925 for more details.

Name nucleobases, not nucleosides

The primary nucleobases are cytosine (DNA and RNA), guanine (DNA and RNA), adenine (DNA and RNA), thymine (DNA) and uracil (RNA), abbreviated as C, G, A, T, and U, respectively. Because A, G, C, and T appear in the DNA, these molecules are called DNA-bases; A, G, C, and U are called RNA-bases. - Wikipedia

In other words, we should rename the values in the RNA transcription problem to reflect the following:

  • cytidine -> cytosine
  • guanosine -> guanine
  • adenosine -> adenine
  • thymidine -> thymine
  • uridine -> uracil
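With the renamed bases, the DNA-to-RNA mapping can be sketched as follows (illustrative only; the exact symbols and interface used by the exercise may differ):

```lisp
;; DNA -> RNA transcription keyed by nucleobase, per the rename above:
;;   guanine -> cytosine, cytosine -> guanine,
;;   thymine -> adenine,  adenine  -> uracil
(defun transcribe-base (dna-base)
  (ecase dna-base
    (#\G #\C)
    (#\C #\G)
    (#\T #\A)
    (#\A #\U)))

(defun to-rna (dna)
  (map 'string #'transcribe-base dna))
```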

Update config.json to match new specification

For the past three years, the ordering of exercises has been done based on gut feelings and wild guesses. As a result, the progression of the exercises has been somewhat haphazard.

In the past few months maintainers of several tracks have invested a great deal of time in analyzing what concepts various exercises require, and then reordering the tracks as a result of that analysis.

It would be useful to bake this data into the track configuration so that we can adjust it over time as we learn more about each exercise.

To this end, we've decided to add a new key exercises in the config.json file, and deprecate the problems key.

See exercism/discussions#60 for details about this decision.

Note that we will not be removing the problems key at this time, as this would break the website and a number of tools.

The process for deprecating the old problems array will be:

  • Update all of the track configs to contain the new exercises key, with whatever data we have.
  • Simultaneously change the website and tools to support both formats.
  • Once all of the tracks have added the exercises key, remove support for the old key in the site and tools.
  • Remove the old key from all of the track configs.

In the new format, each exercise is a JSON object with three properties:

  • slug: the identifier of the exercise
  • difficulty: a number from 1 to 10 where 1 is the easiest and 10 is the most difficult
  • topics: an array of strings describing topics relevant to the exercise. We maintain
    a list of common topics at https://github.com/exercism/x-common/blob/master/TOPICS.txt. Do not feel like you need to restrict yourself to this list;
    it's only there so that we don't end up with 20 variations on the same topic. Each
    language is different, and there will likely be topics specific to each language that will
    not make it onto the list.

The difficulty rating can be a very rough estimate.

The topics array can be empty if this analysis has not yet been done.

Example:

"exercises": [
  {
    "slug": "hello-world",
    "difficulty": 1,
    "topics": [
        "control-flow (if-statements)",
        "optional values",
        "text formatting"
    ]
  },
  {
    "difficulty": 3,
    "slug": "anagram",
    "topics": [
        "strings",
        "filtering"
    ]
  },
  {
    "difficulty": 10,
    "slug": "forth",
    "topics": [
        "parsing",
        "transforming",
        "stacks"
    ]
  }
]

It may be worth making the change in several passes:

  1. Add the exercises key with the array of objects, where difficulty is 1 and topics is empty.
  2. Update the difficulty settings to reflect a more accurate guess.
  3. Add topics (perhaps one-by-one, in separate pull requests, in order to have useful discussions about each exercise).

ABCL returns status 0 (success) when there is an error and it hasn't run any of the tests.

What might be happening is that, after loading one of the "dna" example packages ("point-mutations", "nucleotide-count", or "rna-transcription"), it treats subsequent defpackage calls as redefining the package, and removes any previously defined symbols.

This should probably be considered a bug in "xlisp-test", which should treat the examples as fixtures and load and unload them around test runs. (It's still probably reasonable to load all the test packages in advance, as they all have different names.)

In the meantime, it's worrisome that an error like this does not fail the test suite. I don't know if there's some way to tell ABCL or Java to do this, or if we'll have to write an error catcher into the abcl command-line switch -e.

In the meantime, maybe we should set the ABCL builds to "allow failures" (even though the problem is that they're not failing when expected).

binary: improve tests for invalid numbers

We should have separate tests for:

  • alphabetic characters at the beginning of a valid binary number
  • alphabetic characters at the end of a valid binary number
  • alphabetic characters in the middle of an otherwise valid binary number
  • invalid digits (e.g. 2)

If the test suite for binary has test cases that cover these edge cases, this issue can safely be closed.

See exercism/problem-specifications#95
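A parser that rejects all of the invalid inputs listed above can be sketched as follows (returning NIL rather than signalling; the exercise may specify different error behavior):

```lisp
;; Parse a binary string, returning NIL for any invalid input:
;; alphabetic characters anywhere, invalid digits, or an empty string.
(defun parse-binary (string)
  (if (and (plusp (length string))
           (every (lambda (ch) (find ch "01")) string))
      (reduce (lambda (acc ch) (+ (* 2 acc) (digit-char-p ch)))
              string :initial-value 0)
      nil))
```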

Meetup - 5th Monday

There is an interesting edge case in the meetup problem:
some months have five Mondays.

March of 2015 has five Mondays (the fifth being March 30th), whereas
February of 2015 does not, and so should produce an error.


Thanks, @JKesMc9tqIQe9M for pointing out the edge case.
See exercism.io#2142.
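Counting the Mondays in a month needs nothing beyond the standard time functions (in Common Lisp's DECODE-UNIVERSAL-TIME convention, Monday is day-of-week 0). A sketch; to keep it short, the caller supplies the month length:

```lisp
;; Day-of-week per DECODE-UNIVERSAL-TIME: 0 = Monday ... 6 = Sunday.
;; Encoding and decoding at time zone 0 keeps the result DST-free.
(defun day-of-week (year month day)
  (nth-value 6 (decode-universal-time
                (encode-universal-time 0 0 12 day month year 0)
                0)))

(defun mondays-in-month (year month days-in-month)
  (loop for day from 1 to days-in-month
        count (zerop (day-of-week year month day))))
```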

Move exercises to subdirectory

The problems api (x-api) now supports having exercises collected in a subdirectory
named exercises.

That is to say that instead of having a mix of bin, docs, and individual exercises,
we can have bin, docs, and exercises in the root of the repository, and all
the exercises collected in a subdirectory.

In other words, instead of this:

x{TRACK_ID}/
├── LICENSE
├── README.md
├── bin
│   └── fetch-configlet
├── bowling
│   ├── bowling_test.ext
│   └── example.ext
├── clock
│   ├── clock_test.ext
│   └── example.ext
├── config.json
└── docs
│   ├── ABOUT.md
│   └── img
... etc

we can have something like this:

x{TRACK_ID}/
├── LICENSE
├── README.md
├── bin
│   └── fetch-configlet
├── config.json
├── docs
│   ├── ABOUT.md
│   └── img
├── exercises
│   ├── bowling
│   │   ├── bowling_test.ext
│   │   └── example.ext
│   └── clock
│       ├── clock_test.ext
│       └── example.ext
... etc

This has already been deployed to production, so it's safe to make this change whenever you have time.

Investigate track health and status of the track

I've used Sarah Sharp's FOSS Heartbeat project to generate stats for each of the language track repositories, as well as the x-common repository.

The Exercism heartbeat data is published here: https://exercism.github.io/heartbeat/

When looking at the data, please disregard any activity from me (kytrinyx), as I would like to get the language tracks to a point where they are entirely maintained by the community.

Please take a look at the heartbeat data for this track, and answer the following questions:

  • To what degree is the track maintained?
  • Who (if anyone) is merging pull requests?
  • Who (if anyone) is reviewing pull requests?
  • Is there someone who is not merging pull requests, but who comments on issues and pull requests, has thoughtful feedback, and is generally helpful? If so, maybe we can invite them to be a maintainer on the track.

I've made up the following scale:

  • ORPHANED - Nobody (other than me) has merged anything in the past year.
  • ENDANGERED - Somewhere between ORPHANED and AT RISK.
  • AT RISK - Two people (other than me) are actively discussing issues and reviewing and merging pull requests.
  • MAINTAINED - Three or more people (other than me) are actively discussing issues and reviewing and merging pull requests.

It would also be useful to know if there is a lot of activity on the track, or just the occasional issue or comment.

Please report the current status of the track, including your best guess on the above scale, back to the top-level issue in the discussions repository: exercism/discussions#97

Travis Integration

Thought it would be good to get a real travis integration. Maybe that is possible?

  • Travis Badge for README
  • Travis job to run tests (with sbcl for example)
    • install sbcl
    • install quicklisp
    • install lisp-unit
  • solve problem of the tests loading files that do not exist (example files are named example.lisp)
  • ???
  • profit.

(Note: IMNSHO it would be cool to have the build script, once sbcl &c. are installed, be written in Lisp.)

robot-name-test: Use case indifferent tests

I don't think it makes sense that robot "ZZ999" should be considered a different robot from "zz999" given the description of the problem. We should be able to easily write tests that don't require a particular normalization.

(Bonus: if they're still needed afterwards, replace is-upper-alpha-p with upper-case-p and is-digit-p with digit-char-p.)
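With the standard predicates, a case-indifferent name check can be written directly. A sketch (the helpers in the actual test file may differ):

```lisp
;; "ZZ999" and "zz999" both satisfy this check: two letters (either case)
;; followed by three digits, using only standard character predicates.
(defun valid-robot-name-p (name)
  (and (= (length name) 5)
       (every #'alpha-char-p (subseq name 0 2))
       (every #'digit-char-p (subseq name 2 5))))
```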

gigasecond: use times (not dates) for inputs and outputs

A duration of a gigasecond should be measured in seconds, not
days.

The gigasecond problem has been implemented in a number of languages,
and this issue has been generated for each of these language tracks.
This may already be fixed in this track, if so, please make a note of it
and close the issue.

There has been some discussion about whether or not gigaseconds should
take daylight savings time into account, and the conclusion was "no", since
not all locations observe daylight savings time.

Unexpected gigasecond results, some tests failing.

Reported here: exercism.io/submissions/ff41c29d66b648b49d2cdcffa608293a

Output of tests on example solution as of a few minutes ago in SBCL 1.2.2 for OSX:

To load "lisp-unit":
  Load 1 ASDF system:
    lisp-unit
; Loading "lisp-unit"

FROM-LISP-EPOCH: 1 assertions passed, 0 failed.

FROM-UNIX-EPOCH: 1 assertions passed, 0 failed.

FROM-20110425T120000Z: 1 assertions passed, 0 failed.

FROM-19770613T235959Z: 1 assertions passed, 0 failed.

 | Failed Form: (GIGASECOND:FROM 1959 7 19 12 30 30)
 | Expected (1991 3 27 14 17 10) but saw (1991 3 27 13 17 10)
 |
FROM-19590719T123030Z: 0 assertions passed, 1 failed.

Unit Test Summary
 | 5 assertions total
 | 4 passed
 | 1 failed
 | 0 execution errors
 | 0 missing tests

T

Might as well get this out of the way: it's probably a DST or TZ thing.
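Working entirely in GMT sidesteps DST: encode the inputs at time zone 0, add 10^9 seconds, and decode at time zone 0. A sketch of such an implementation (the exercise's actual interface may differ):

```lisp
;; Compute the moment a gigasecond after a GMT time, as
;; (YEAR MONTH DAY HOUR MINUTE SECOND), avoiding local DST handling.
(defun gigasecond-from (year month day hour minute second)
  (multiple-value-bind (sec min hr dy mo yr)
      (decode-universal-time
       (+ (encode-universal-time second minute hour day month year 0)
          (expt 10 9))
       0)
    (list yr mo dy hr min sec)))
```

With this, the failing case above yields the expected 14:17:10 rather than the DST-shifted 13:17:10.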

xlisp-test: load example files for tests, not in advance

See #63.

One solution might be adding a readtime conditional above defpackage for the dna package example files.

But it makes sense to take this opportunity to change how "xlisp-test" loads test data from example files. It should probably load the examples like test data, before the test is run, and afterwards use delete-package on the example package.

Add helpful information to the SETUP.md

The contents of the SETUP.md file get included in the README.md that gets delivered when a user runs the exercism fetch command from their terminal.

At the very minimum, it should contain a link to the relevant
language-specific documentation on
help.exercism.io.

It would also be useful to explain in a generic way how to run the tests.
Remember that this file will be included with all the problems, so it gets
confusing if we refer to specific problems or files.

Some languages have very particular needs in terms of the solution: nested
directories, specific files, etc. If this is the case here, then it would be
useful to explain what is expected.


Thanks, @tejasbubane for suggesting that we add this documentation everywhere.
See exercism.io#2198.

CI:ECL: `load` with `:verbose` and `:print` options writes text of file unsafely

For the new prime-factors example I implemented a factorization wheel with a circular list. This became an issue with the ECL tests because ECL's load with :verbose and :print set to true (or possibly just one of them), as we use by default, prints the file to the screen, but does so with *print-circle* left at nil. If the file contains a circular list, load never finishes.

This could be seen as a bug in ECL but I would like to come up with a work-around. It might be enough to set *print-circle* to t in xlisp-test. Before I download ECL and start tinkering with it, what do you think about this?

Failing job: https://travis-ci.org/wobh/xlisp/jobs/82872525

scrabble-score: replace 'multibillionaire' with 'oxyphenbutazone'

The word multibillionaire is too long for the scrabble board. Oxyphenbutazone, on the other hand, is legal.

Please verify that there is no test for multibillionaire in the scrabble-score in this track. If the word is included in the test data, then it should be replaced with oxyphenbutazone. Remember to check the case (if the original is uppercase, then the replacement also should be).

If multibillionaire isn't used, then this issue can safely be closed.

See exercism/problem-specifications#86
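For reference, the standard letter values give oxyphenbutazone a score of 41. A sketch of case-insensitive scoring (the exercise's actual interface may differ):

```lisp
;; Standard Scrabble letter values, case-insensitive via CHAR-UPCASE.
(defun letter-score (ch)
  (let ((c (char-upcase ch)))
    (cond ((find c "AEIOULNRST") 1)
          ((find c "DG") 2)
          ((find c "BCMP") 3)
          ((find c "FHVWY") 4)
          ((find c "K") 5)
          ((find c "JX") 8)
          ((find c "QZ") 10))))

(defun score (word)
  (reduce #'+ word :key #'letter-score))
```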

Grade school exercise tests too implementation specific

Requires a school that's a CLOS object, and a grade roster that's a hash.

We should be able to rewrite with fewer assumptions about implementation.

  • assume a factory, school:make-school
  • treat school:grade-roster and school:grade as iterators, or coerce.

beer-song-test: Use FORMAT to generate test control string

Per http://exercism.io/submissions/2b5d71f2d6234c498eb1a482c1071f05, the file formatting of line endings in "beer-song-test.lisp" causes false negatives. For each line in the verses, replace it with "~&~A~%" and format.

For example https://github.com/exercism/xlisp/blob/master/beer-song/beer-song-test.lisp#L10:

(defparameter +verse-8+
  "8 bottles of beer on the wall, 8 bottles of beer.
Take one down and pass it around, 7 bottles of beer on the wall.
")

change to:

(defparameter +verse-8+
  (format nil "~&8 bottles of beer on the wall, 8 bottles of beer.~%~
               ~&Take one down and pass it around, 7 bottles of beer on the wall.~%"))

This likely affects any other exercise where we have, or will have, multi-line strings.

(Bonus: the aligned format is not strictly necessary, but it is a nice benefit of the ~#\Newline directive; see http://l1sp.org/cl/22.3.9.3.)

(Bonus: rename parameters from +parameter-name+ to *parameter-name*.)

RFC: Mitigating DST troubles

For some reason, I've lately been thinking a lot about the troubles we had earlier this year with the gigasecond exercise, and I've come up with some ideas about detecting and preventing future issues with that or perhaps with future exercises (todo: make list of other datetime exercises).

Now that we have exercise/example testing set up, the first idea is simply to schedule a TravisCI build for the DST switch dates in the US. If the tests fail for some reason, the scheduled build should let us know almost as soon as can be known. I poked around the TravisCI settings and didn't see anything like this, so I thought about setting up an IFTTT solution (I'm pretty sure I have an account, but I don't remember the password, as I don't think I've ever used it). Any other suggestions welcome.

The second is that it seems possible that only a few CL implementations could be affected by the DST switch due to a bug in time handling. It would be nice to figure out how to conditionally allow failures on a per-test-case basis, so that even if there's a problem with the DST-affected exercise tests on that implementation, we can still get feedback on the other exercises until the issue is resolved (or until the switch back).

If more serious problems turn up, we could also look for a date-time library for CL that's well maintained and, if that works out, recommend it to users.

Lastly, I think we should also consider a, well, maybe "policy" is too strong a word, but something between a recommendation and a policy, where we prefer time-dependent test implementations to work for standard time if, for some reason, we can't make them work for both DST and ST.

I don't suggest that any of these be permanent or immediately acted upon, I mainly wanted to get my thoughts on the topic out there and collect some ideas going forward.

http://www.nist.gov/pml/div688/dst.cfm

Keep the faith, someday, sometime, the tyranny will end.

triangle: incorrect test in some tracks

Please check if there's a test that states that a triangle with sides 2, 4, 2 is invalid. The triangle inequality states that for any triangle, the sum of the lengths of any two sides must be greater than or equal to the length of the remaining side. If this doesn't affect this track, go ahead and just close the issue.
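The inequality quoted above (which allows degenerate triangles) can be sketched as a validity predicate, under which 2, 4, 2 is valid:

```lisp
;; Each side must be at most the sum of the other two; degenerate
;; triangles (e.g. 2, 4, 2) pass under this reading of the inequality.
(defun valid-triangle-p (a b c)
  (and (plusp a) (plusp b) (plusp c)
       (>= (+ a b) c)
       (>= (+ b c) a)
       (>= (+ a c) b)))
```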

Implement Lisp Exercises

This is a placeholder issue for keeping track of porting assignments to Lisp, or noting why they may not be applicable. (This list shamelessly stolen from exercism/clojure#1)

  • bob
  • word-count
  • anagram
  • beer-song
  • nucleotide-count
  • rna-transcription
  • point-mutations
  • phone-number
  • grade-school
  • robot-name
  • leap
  • etl
  • meetup
  • space-age
  • grains
  • gigasecond
  • triangle
  • scrabble-score
  • roman-numerals
  • binary
  • prime-factors
  • raindrops
  • allergies
  • strain
  • atbash-cipher
  • accumulate
  • bank-account
  • crypto-square
  • trinary @verdammelt
  • sieve
  • simple-cipher
  • octal
  • luhn
  • pig-latin
  • pythagorean-triplet @wobh
  • series
  • difference-of-squares
  • secret-handshake
  • linked-list
  • wordy
  • hexadecimal
  • largest-series-product
  • kindergarten-garden
  • binary-search-tree
  • matrix
  • robot-simulator
  • nth-prime @wobh
  • palindrome-products
  • pascals-triangle @wobh
  • say
  • sum-of-multiples
  • queen-attack
  • saddle-points
  • ocr-numbers

Delete configlet binaries from history?

I made a really stupid choice a while back to commit the cross-compiled
binaries for configlet (the tool that sanity-checks the config.json
against the implemented problems) into the repository itself.

Those binaries are HUGE, and every time they change the entire 4 or 5 megs get
recommitted. This means that cloning the repository takes a ridiculously long
time.

I've added a script that can be run on travis to grab the latest release from
the configlet repository (bin/fetch-configlet), and travis is set up to run
this now instead of using the committed binary.

I would really like to thoroughly delete the binaries from the entire git
history, but this will break all the existing clones and forks.

The commands I would run are:

# ensure this happens on an up-to-date master
git checkout master && git fetch origin && git reset --hard origin/master

# delete from history
git filter-branch --index-filter 'git rm -r --cached --ignore-unmatch bin/configlet-*' --prune-empty

# clean up
rm -rf .git/refs/original/
git reflog expire --all
git gc --aggressive --prune

# push up the new master, force override existing master branch
git push -fu origin master

If we do this everyone who has a fork will need to make sure that their master
is reset to the new upstream master:

git checkout master
git fetch upstream master
git reset --hard upstream/master
git push -fu origin master

We can at-mention (@) all the contributors and everyone who has a fork here in this
issue if we decide to do it.

The important question though, is: Is it worth doing?

Do you have any other suggestions of how to make sure this doesn't confuse people and break their
repository if we do proceed with this change?

org-ify the docs

Continuing in the grand tradition of xelisp, xclojure, and xscheme, any objection to moving the markdown docs to org-mode? I'd be happy to take it on.

Should we provide skeleton exercise packages?

The Elixir exercisms provide a skeleton module and interface, which is nice since, as an Elixir newbie, I would have had no idea how to set up an Elixir module that would make the tests work. The ceremony required to set up a Common Lisp package is more, uh, ceremonious than that of Elixir and pretty heavyweight for the early Exercisms.

Exercism Elixir's leap.exs:

https://github.com/exercism/xelixir/blob/master/leap/leap.exs

Here's one way a corresponding leap.lisp could look:

(cl:in-package #:cl-user)

(cl:defpackage #:leap
  (:use #:cl)
  (:export #:leap-year-p)
  (:documentation "Provides `leap-year-p'
A leap year occurs:
on every year that is evenly divisible by 4
  except every year that is evenly divisible by 100
    except every year that is evenly divisible by 400."))

(cl:in-package #:leap)

(defun leap-year-p (year)
  "Returns whether `year' is a leap year."
  )

Since we only use packages to isolate the tests and implementations from the CL-USER namespace, and all we need the above for (minus documentation) is so that the tests work, it seems like it would be a good idea to provide something like this for those interested in the CL track but not familiar with the arcana of CL packages.
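For reference, one way a student might fill in the skeleton's body (illustrative only):

```lisp
;; Leap year: divisible by 4, except centuries, except every 400 years.
(defun leap-year-p (year)
  "Returns whether YEAR is a leap year."
  (and (zerop (mod year 4))
       (or (not (zerop (mod year 100)))
           (zerop (mod year 400)))))
```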

How to set up a local dev environment

See issue exercism/exercism#2092 for an overview of Operation Welcome Contributors.


Provide instructions on how to contribute patches to the exercism test suites
and examples: dependencies, running the tests, what gets tested on Travis-CI,
etc.

The contributing document
in the x-api repository describes how all the language tracks are put
together, as well as details about the common metadata, and high-level
information about contributing to existing problems, or adding new problems.

The README here should be language-specific, and can point to the contributing
guide for more context.

From the OpenHatch guide:

Here are common elements of setting up a development environment you’ll want your guide to address:

**Preparing their computer**

- Make sure they're familiar with their operating system's tools, such as the terminal/command prompt. You can do this by linking to a tutorial and asking contributors to make sure they understand it. There are usually great tutorials already out there - OpenHatch's command line tutorial can be found here.
- If contributors need to set up a virtual environment, access a virtual machine, or download a specific development kit, give them instructions on how to do so.
- List any dependencies needed to run your project, and how to install them. If there are good installation guides for those dependencies, link to them.

**Downloading the source**

- Give detailed instructions on how to download the source of the project, including common missteps or obstacles.

**How to view/test changes**

- Give instructions on how to view and test the changes they've made. This may vary depending on what they've changed, but do your best to cover common changes. This can be as simple as viewing an html document in a browser, but may be more complicated.

Installation will often differ depending on the operating system of the contributor. You will probably need to create separate instructions in various parts of your guide for Windows, Mac and Linux users. If you only want to support development on a single operating system, make sure that is clear to users, ideally in the top-level documentation.

rna-transcription: don't transcribe both ways

I can't remember the history of this, but we ended up with a weird non-biological thing in the RNA transcription exercise, where some test suites also have tests for transcribing from RNA back to DNA. This makes no sense.

If this track does have tests for the reverse transcription, we should remove them, and also simplify the reference solution to match.

If this track doesn't have any tests for RNA->DNA transcription, then this issue can be closed.

See exercism/problem-specifications#148

Need more exercises!

Here are exercises from the xclojure project which are not yet implemented in xlisp:

  • scrabble-score
  • roman-numerals
  • binary
  • prime-factors
  • raindrops
  • allergies
  • atbash-cipher
  • bank-account
  • crypto-square
  • kindergarten-garden
  • robot-simulator
  • queen-attack
  • accumulate
  • binary-search-tree
  • difference-of-squares
  • hexadecimal
  • largest-series-product

(xclojure chosen rather arbitrarily as a place to get a list)

Launch the lisp track?

We have enough exercises to launch:

  1. anagram
  2. beer-song
  3. bob
  4. etl
  5. gigasecond
  6. grade-school
  7. grains
  8. leap
  9. meetup
  10. nucleotide-count
  11. phone-number
  12. point-mutations
  13. rna-transcription
  14. robot-name
  15. space-age
  16. triangle
  17. word-count

These need to be ordered by (roughly) increasing difficulty in the problems section of config.json.

We already have the help/setup page, exercism knows how to recognize lisp submissions.

We will need someone who knows lisp well enough to hang out and nitpick at the start. Would that be you, @verdammelt?

Robot name is considered reset when just set to `nil`

I've been thinking about this implementation of the Robot Name exercise: http://exercism.io/submissions/dc01bbe303724a958e509d819ab49853 and it's occurred to me that it reveals a bug in the tests for this exercise.

When reset-name is called on a robot here, it causes that robot's robot-name to return nil, which, of course, is not equal to the original-name in the test, and thus the assertion passes. It's okay for a robot not to have a name (in fact, the README requires it as the initial state of new robots; we don't test that expectation, so everyone implements new robots with names, which is likely another bug in the exercise), but reset-name should at least provide the robot with a new name.

To fix: break out the code for name-matches-expected-pattern into robot-name-valid-p to be used in that test and in a new assertion on the robot's reset name in name-can-be-reset.
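A sketch of the shared helper that fix describes; the helper name comes from the proposal above, but the two-uppercase-letters-plus-three-digits format is an assumption about this exercise's naming scheme:

```lisp
;; Hypothetical shared helper: validates the assumed
;; two-uppercase-letters-plus-three-digits robot name format.
;; Crucially, it rejects nil, so a "reset" that merely clears
;; the name would now fail the test.
(defun robot-name-valid-p (name)
  (and (stringp name)
       (= 5 (length name))
       (every #'upper-case-p (subseq name 0 2))
       (every #'digit-char-p (subseq name 2))))
```

Both name-matches-expected-pattern and name-can-be-reset could then assert (robot-name-valid-p (robot-name robot)), which a nil name fails.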

DOC: emacs setup

We get enough misformatted submissions that I think we should take the hint and add some super basic documentation about setting up Emacs for exercism exercises. This might also help with the Clojure, Scheme, Elisp, and nascent LFE tracks. We might even want to go so far as a minor mode.

@verdammelt, @canweriotnow what do you think?

I don't think I know any maintainers of the Clojure or LFE tracks, so if you do and think they'd be interested in this too, we should send them a shout-out.

cc @kytrinyx

Make Hamming conform to official definition

From issue exercism/exercism#1867

Wikipedia says the Hamming distance is not defined for strings of different length.

I am not saying the problems cannot be different, but for such a well-defined concept it would make sense to stick to one definition, especially when the READMEs provide so little information about what is expected from the implementation.

Let's clean this up so that we're using the official definition.
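Under the official definition, a conforming reference solution might look like the following sketch. The function name and the choice to signal an error (rather than, say, return nil) for unequal lengths are assumptions; the track's actual API may differ.

```lisp
;; Sketch: Hamming distance per the official definition.
;; Signals an error for strings of unequal length (one reading
;; of "not defined"; returning nil would be another option).
(defun distance (strand1 strand2)
  (unless (= (length strand1) (length strand2))
    (error "Hamming distance is undefined for strings of unequal length."))
  ;; Count the positions at which the two strands differ.
  (count nil (map 'list #'char= strand1 strand2)))
```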
