exercism / common-lisp
Exercism exercises in Common Lisp.
Home Page: https://exercism.org/tracks/common-lisp
License: MIT License
The metadata for this problem has been simplified, and it might be worth changing the test suite and example so that there's no mention of RNA or uracil.
I've also changed the API of the problem in Ruby so that there is no error checking in the object itself, only in the factory that creates it: https://github.com/exercism/xruby/blob/master/nucleotide-count/example.rb#L2-L8
I don't know if this is relevant or idiomatic in Lisp.
We have enough exercises to launch:
These need to be ordered by (roughly) increasing difficulty in the problems section of config.json.
We already have the help/setup page, and exercism knows how to recognize Lisp submissions.
We will need someone who knows lisp well enough to hang out and nitpick at the start. Would that be you, @verdammelt?
I don't think it makes sense that robot "ZZ999" should be considered a different robot from "zz999" given the description of the problem. We should be able to easily write tests that don't require a particular normalization.
(Bonus: if they're still needed afterwards, replace is-upper-alpha-p with upper-case-p and is-digit-p with digit-char-p.)
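If we drop the normalization requirement, the tests could compare names with the standard string-equal, which ignores case (a minimal sketch of the relevant behavior):

```lisp
;; STRING-EQUAL compares strings case-insensitively, while STRING=
;; is case-sensitive -- so a test using STRING-EQUAL would treat
;; "ZZ999" and "zz999" as the same robot name.
(string-equal "ZZ999" "zz999") ; => T
(string= "ZZ999" "zz999")      ; => NIL
```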
I made a really stupid choice a while back to commit the cross-compiled
binaries for configlet (the tool that sanity-checks the config.json
against the implemented problems) into the repository itself.
Those binaries are HUGE, and every time they change the entire 4 or 5 megs get
recommitted. This means that cloning the repository takes a ridiculously long
time.
I've added a script that can be run on travis to grab the latest release from
the configlet repository (bin/fetch-configlet), and travis is set up to run
this now instead of using the committed binary.
I would really like to thoroughly delete the binaries from the entire git
history, but this will break all the existing clones and forks.
The commands I would run are:
# ensure this happens on an up-to-date master
git checkout master && git fetch origin && git reset --hard origin/master
# delete from history
git filter-branch --index-filter 'git rm -r --cached --ignore-unmatch bin/configlet-*' --prune-empty
# clean up
rm -rf .git/refs/original/
git reflog expire --all
git gc --aggressive --prune
# push up the new master, force override existing master branch
git push -fu origin master
If we do this everyone who has a fork will need to make sure that their master
is reset to the new upstream master:
git checkout master
git fetch upstream master
git reset --hard upstream/master
git push -fu origin master
We can at-mention (@) all the contributors and everyone who has a fork here in this
issue if we decide to do it.
The important question though, is: Is it worth doing?
Do you have any other suggestions of how to make sure this doesn't confuse people and break their
repository if we do proceed with this change?
#98 is a much needed update to the README, so this is kind of a WTF.
Failing test Error in Robot, https://travis-ci.org/exercism/xlisp/jobs/109074111#L907
INFO: Running tests for #<PACKAGE "ROBOT">
; in: LAMBDA ()
; (LISP-UNIT:ASSERT-TRUE
; (AND (= (LENGTH ROBOT-NAME-TEST::NAME) 5)
; (EVERY #'ROBOT-NAME-TEST::IS-UPPER-ALPHA-P
; (SUBSEQ ROBOT-NAME-TEST::NAME 0 2))
; (EVERY #'ROBOT-NAME-TEST::IS-DIGIT-P
; (SUBSEQ ROBOT-NAME-TEST::NAME 2 5))))
; --> LISP-UNIT::EXPAND-T-OR-F LET
; ==>
; #'AND
;
; caught ERROR:
; The macro name AND was found as the argument to FUNCTION.
;
; compilation unit finished
; caught 1 ERROR condition
| Execution error:
| Execution of a form compiled with errors.
Form:
#'AND
Compile-time error:
The macro name AND was found as the argument to FUNCTION.
|
NAME-MATCHES-EXPECTED-PATTERN: 0 assertions passed, 0 failed, and an execution error.
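For context, the compile error comes from a Common Lisp rule: FUNCTION (the #' reader macro) only accepts function names, and AND names a macro. A sketch of the usual workaround is to wrap the form in a lambda so EVERY receives a real function:

```lisp
;; #'and is illegal because AND is a macro, not a function;
;; a lambda gives EVERY an actual function to call instead.
(every (lambda (c) (and (alpha-char-p c) (upper-case-p c)))
       "AB") ; => T
```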
EDIT: And this is weird
https://travis-ci.org/exercism/xlisp/jobs/109074111#L1489
EDIT: removed SBCL warnings about undefined functions in the atbash-cipher example. This is normal since SBCL warns about this if the reference comes before the definition in the file. We may want to specify an example style guideline about this but it's not a problem. From https://travis-ci.org/exercism/xlisp/jobs/109074111#L1554 on down the atbash tests run fine.
The word multibillionaire is too long for the scrabble board. Oxyphenbutazone, on the other hand, is legal.
Please verify that there is no test for multibillionaire in the scrabble-score in this track. If the word is included in the test data, then it should be replaced with oxyphenbutazone. Remember to check the case (if the original is uppercase, then the replacement also should be).
If multibillionaire isn't used, then this issue can safely be closed.
The old help site was deprecated in December 2015. We now have content that is displayed on the main exercism.io website, under each individual language on http://exercism.io/languages.
The content itself is maintained along with the language track itself, under the docs/ directory.
We decided on this approach since the maintainers of each individual language track are in the best position to review documentation about the language itself or the language track on Exercism.
Please verify that nothing in docs/ refers to the help.exercism.io site. It should instead point to http://exercism.io/languages/:track_id (at the moment the various tabs are not linkable, unfortunately; we may need to reorganize the pages in order to fix that).
Also, some language tracks reference help.exercism.io in the SETUP.md file, which gets included into the README of every single exercise in the track.
We may also have referenced non-track-specific content that lived on help.exercism.io. This content has probably been migrated to the Contributing Guide of the x-common repository. If it has not been migrated, it would be a great help if you opened an issue in x-common so that we can remedy the situation. If possible, please link to the old article in the deprecated help repository.
If nothing in this repository references help.exercism.io, then this can safely be closed.
For the past three years, the ordering of exercises has been done based on gut feelings and wild guesses. As a result, the progression of the exercises has been somewhat haphazard.
In the past few months maintainers of several tracks have invested a great deal of time in analyzing what concepts various exercises require, and then reordering the tracks as a result of that analysis.
It would be useful to bake this data into the track configuration so that we can adjust it over time as we learn more about each exercise.
To this end, we've decided to add a new key exercises in the config.json file, and deprecate the problems key.
See exercism/discussions#60 for details about this decision.
Note that we will not be removing the problems key at this time, as this would break the website and a number of tools.
The process for deprecating the old problems array will be:
In the new format, each exercise is a JSON object with three properties:
The difficulty rating can be a very rough estimate.
The topics array can be empty if this analysis has not yet been done.
Example:
"exercises": [
  {
    "slug": "hello-world",
    "difficulty": 1,
    "topics": [
      "control-flow (if-statements)",
      "optional values",
      "text formatting"
    ]
  },
  {
    "difficulty": 3,
    "slug": "anagram",
    "topics": [
      "strings",
      "filtering"
    ]
  },
  {
    "difficulty": 10,
    "slug": "forth",
    "topics": [
      "parsing",
      "transforming",
      "stacks"
    ]
  }
]
It may be worth making the change in several passes:
Continuing in the grand tradition of xelisp, xclojure, and xscheme, any objection to moving the markdown docs to org-mode? I'd be happy to take it on.
For the new prime-factors example I implemented a factorization wheel with a circular list. This became an issue with the ECL tests because ECL's load with :verbose and :print set to true (or possibly just one of those), as we use by default, prints the file to the screen, but does so with *print-circle* left at nil. If the file contains a circular list, load never finishes.
This could be seen as a bug in ECL, but I would like to come up with a workaround. It might be enough to set *print-circle* to t in xlisp-test. Before I download ECL and start tinkering with it, what do you think?
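To illustrate the *print-circle* behavior (a minimal sketch, not the actual prime-factors example): with *print-circle* bound to t the printer emits #n= / #n# labels instead of looping forever on a circular structure.

```lisp
;; Build a small circular list and print it safely.
(let ((wheel (list 2 3 5 7)))
  (setf (cdr (last wheel)) wheel)  ; close the loop
  (let ((*print-circle* t))
    (prin1-to-string wheel)))      ; => "#1=(2 3 5 7 . #1#)"
```

With *print-circle* left at nil, the same print call would never terminate, which is exactly the failure mode described above.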
Failing job: https://travis-ci.org/wobh/xlisp/jobs/82872525
See exercism/exercism#1310 for original discussion.
The Elixir exercisms provide a skeleton module and interface, which is nice since, as an Elixir newbie, I would have had no idea how to set up an Elixir module that would make the tests work. The ceremony required to set up a Common Lisp package is more, uh, ceremonious than that of Elixir, and pretty heavyweight for the early Exercisms.
Exercism Elixir's leap.exs:
https://github.com/exercism/xelixir/blob/master/leap/leap.exs
Here's one way a corresponding leap.lisp could look:
(cl:in-package #:cl-user)
(cl:defpackage #:leap
(:use #:cl)
(:export #:leap-year-p)
(:documentation "Provides `leap-year-p'
A leap year occurs:
on every year that is evenly divisible by 4
except every year that is evenly divisible by 100
except every year that is evenly divisible by 400."))
(cl:in-package #:leap)
(defun leap-year-p (year)
"Returns whether `year' is a leap year."
)
Since we only use packages to isolate tests and implementations from the CL-USER namespace, and all we need the above for (minus documentation) is to make the tests work, it seems like a good idea to provide something like this for those interested in the CL track but not familiar with the arcana of CL packages.
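For reference, one possible body for that skeleton (a sketch; any equivalent logic works):

```lisp
(defun leap-year-p (year)
  "Returns whether YEAR is a leap year."
  (and (zerop (mod year 4))
       (or (not (zerop (mod year 100)))
           (zerop (mod year 400)))))

;; (leap-year-p 1996) => T
;; (leap-year-p 1900) => NIL
;; (leap-year-p 2000) => T
```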
We should have separate tests for:
If the test suite for binary has test cases that cover these edge cases, this issue can safely be closed.
ABCL returns status 0 (success) when there is an error and it hasn't run any of the tests.
What might be happening is that, after loading one of the "dna" example packages ("point-mutations", "nucleotide-count", or "rna-transcription"), it treats subsequent defpackage calls as redefining the package, and removes any previously defined symbols.
This should probably be considered a bug in "xlisp-test", which should treat the examples as fixtures and load and unload them around test runs. (It's still probably reasonable to load all the test packages in advance, as they all have different names.)
In the meantime it's worrisome that an error like this does not fail the test suite. I don't know if there's some way to tell ABCL or Java to do this, or if we'll have to write an error catcher into the ABCL command-line switch -e.
Meanwhile, maybe we should set ABCL builds to "allow failures" (even though the problem is that they're not failing when expected).
Please check if there's a test that states that a triangle with sides 2, 4, 2 is invalid. The triangle inequality states that for any triangle, the sum of the lengths of any two sides must be greater than or equal to the length of the remaining side. If this doesn't affect this track, go ahead and just close the issue.
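A predicate using the strict form of the inequality, which rejects the degenerate 2, 4, 2 case the issue asks the tests to check, might look like this (a sketch; the name is hypothetical):

```lisp
(defun valid-triangle-p (a b c)
  "True when side lengths A, B, C can form a non-degenerate triangle:
each pair of sides must sum to strictly more than the remaining side."
  (and (> (+ a b) c)
       (> (+ b c) a)
       (> (+ a c) b)))

;; (valid-triangle-p 2 4 2) => NIL, since 2 + 2 is not greater than 4
;; (valid-triangle-p 3 4 5) => T
```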
Requires a school that's a CLOS object, and a grade roster that's a hash.
We should be able to rewrite with fewer assumptions about implementation: school:make-school, school:grade-roster, and school:grade as iterators, or coerce.

Per http://exercism.io/submissions/2b5d71f2d6234c498eb1a482c1071f05, the formatting of line endings in "beer-song-test.lisp" causes false negatives. For each line in the verses, replace with "~&~A~%" and format.
For example https://github.com/exercism/xlisp/blob/master/beer-song/beer-song-test.lisp#L10:
(defparameter +verse-8+
"8 bottles of beer on the wall, 8 bottles of beer.
Take one down and pass it around, 7 bottles of beer on the wall.
")
change to:
(defparameter +verse-8+
  (format nil "~&8 bottles of beer on the wall, 8 bottles of beer.~%~
               ~&Take one down and pass it around, 7 bottles of beer on the wall.~%"))
Likely this affects any other exercise we have or will have multi-line strings.
(Bonus: aligned format is not strictly necessary, but a nice benefit of ~#\Newline; see http://l1sp.org/cl/22.3.9.3.)
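For reference, the suggested directives behave like this (a minimal sketch):

```lisp
;; ~& is "fresh-line": it emits a newline only if output is not already
;; at the start of a line; ~% unconditionally emits a newline. Together
;; they normalize line endings regardless of what came before.
(format nil "~&~A~%" "8 bottles of beer on the wall, 8 bottles of beer.")
;; => the line followed by exactly one newline
```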
(Bonus: rename parameters from +parameter-name+ to *parameter-name*.)
Remove ecl from allow_failures in .travis.yml.
See #54 (comment) and #55
Ref: exercism/DEPRECATED.x-api#137
Ref: #121
Due to an oversight on my part in #30, "school.lisp" is still exporting the original list of symbols. Easy fix: change #:school to #:make-school.
#93 Part 2: Integration Boogaloo.
I think the culprit is in the install_i386_arch
function. Making a branch to test this out in.
From issue exercism/exercism#1867
Wikipedia says the Hamming distance is not defined for strings of different length.
I am not saying the problems cannot be different, but for such a well-defined concept it would make sense to stick to one definition, especially when the READMEs provide so little information about what is expected from the implementation.
Let's clean this up so that we're using the official definition.
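Under the official definition, the example could signal an error for unequal-length strands (a sketch; the function name here is hypothetical and may differ from the track's):

```lisp
(defun hamming-distance (a b)
  "Number of positions at which strings A and B differ.
Signals an error when the strands differ in length, since the
Hamming distance is undefined in that case."
  (unless (= (length a) (length b))
    (error "Hamming distance is only defined for equal-length strands."))
  (loop for x across a
        for y across b
        count (char/= x y)))

;; (hamming-distance "GGACG" "GGTCG") => 1
```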
I can't remember the history of this, but we ended up with a weird non-biological thing in the RNA transcription exercise, where some test suites also have tests for transcribing from RNA back to DNA. This makes no sense.
If this track does have tests for the reverse transcription, we should remove them, and also simplify the reference solution to match.
If this track doesn't have any tests for RNA->DNA transcription, then this issue can be closed.
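For comparison, the forward DNA-to-RNA direction is all the exercise needs (a sketch; the track's actual function name may differ):

```lisp
(defun to-rna (dna)
  "Transcribe a DNA strand into its RNA complement:
G->C, C->G, T->A, A->U."
  (map 'string
       (lambda (base)
         (ecase base
           (#\G #\C) (#\C #\G) (#\T #\A) (#\A #\U)))
       dna))

;; (to-rna "GCTA") => "CGAU"
```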
See issue exercism/exercism#2092 for an overview of operation welcome contributors.
Provide instructions on how to contribute patches to the exercism test suites
and examples: dependencies, running the tests, what gets tested on Travis-CI,
etc.
The contributing document
in the x-api repository describes how all the language tracks are put
together, as well as details about the common metadata, and high-level
information about contributing to existing problems, or adding new problems.
The README here should be language-specific, and can point to the contributing
guide for more context.
From the OpenHatch guide:
Here are common elements of setting up a development environment you’ll want your guide to address:
Preparing their computer
Make sure they’re familiar with their operating system’s tools, such as the terminal/command prompt. You can do this by linking to a tutorial and asking contributors to make sure they understand it. There are usually great tutorials already out there - OpenHatch’s command line tutorial can be found here.
If contributors need to set up a virtual environment, access a virtual machine, or download a specific development kit, give them instructions on how to do so.
List any dependencies needed to run your project, and how to install them. If there are good installation guides for those dependencies, link to them.
Downloading the source
Give detailed instructions on how to download the source of the project, including common missteps or obstacles.
How to view/test changes
Give instructions on how to view and test the changes they’ve made. This may vary depending on what they’ve changed, but do your best to cover common changes. This can be as simple as viewing an html document in a browser, but may be more complicated.
Installation will often differ depending on the operating system of the contributor. You will probably need to create separate instructions in various parts of your guide for Windows, Mac and Linux users. If you only want to support development on a single operating system, make sure that is clear to users, ideally in the top-level documentation.
Reported here: exercism.io/submissions/ff41c29d66b648b49d2cdcffa608293a
Output of tests on example solution as of a few minutes ago in SBCL 1.2.2 for OSX:
To load "lisp-unit":
Load 1 ASDF system:
lisp-unit
; Loading "lisp-unit"
FROM-LISP-EPOCH: 1 assertions passed, 0 failed.
FROM-UNIX-EPOCH: 1 assertions passed, 0 failed.
FROM-20110425T120000Z: 1 assertions passed, 0 failed.
FROM-19770613T235959Z: 1 assertions passed, 0 failed.
| Failed Form: (GIGASECOND:FROM 1959 7 19 12 30 30)
| Expected (1991 3 27 14 17 10) but saw (1991 3 27 13 17 10)
|
FROM-19590719T123030Z: 0 assertions passed, 1 failed.
Unit Test Summary
| 5 assertions total
| 4 passed
| 1 failed
| 0 execution errors
| 0 missing tests
T
Might as well get this out of the way: it's probably a DST or TZ thing.
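If it is a TZ/DST issue, computing in GMT avoids it entirely. A sketch using the standard universal-time functions with an explicit zero time zone (the FROM name matches the test output above; the rest is an assumption about how the example is structured):

```lisp
(defun from (year month day hour minute second)
  "Return (YEAR MONTH DAY HOUR MINUTE SECOND) one gigasecond later.
Encoding and decoding with an explicit time zone of 0 (GMT) means
daylight saving time never enters into the calculation."
  (multiple-value-bind (sec min hr d m y)
      (decode-universal-time
       (+ (encode-universal-time second minute hour day month year 0)
          (expt 10 9))
       0)
    (declare (ignore))
    (list y m d hr min sec)))

;; (from 1959 7 19 12 30 30) => (1991 3 27 14 17 10)
```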
This may or may not be appropriate in this language. See exercism/docs#54 for details.
Basically,
This happened a while back, and it was for really weird legacy reasons.
I've since fixed the underlying issues that caused the problem, but for consistency
it would be nice to rename point-mutation to hamming, so that all the tracks are using
the same exercise name.
Once the problem has been renamed, I can run a script on the website to point people's
existing point-mutations solutions to the new hamming exercise so that they'll be able
to review solutions to hamming, and people who solve the new hamming exercise can see
all the old ones.
The primary nucleobases are cytosine (DNA and RNA), guanine (DNA and RNA), adenine (DNA and RNA), thymine (DNA) and uracil (RNA), abbreviated as C, G, A, T, and U, respectively. Because A, G, C, and T appear in the DNA, these molecules are called DNA-bases; A, G, C, and U are called RNA-bases. - Wikipedia
In other words, we should rename the values in the RNA transcription problem to reflect the following:
cytidine -> cytosine
guanosine -> guanine
adenosine -> adenine
thymidine -> thymine
uridine -> uracil
The problems api (x-api) now supports having exercises collected in a subdirectory named exercises.
That is to say, instead of having a mix of bin, docs, and individual exercises, we can have bin, docs, and exercises in the root of the repository, and all the exercises collected in a subdirectory.
In other words, instead of this:
x{TRACK_ID}/
├── LICENSE
├── README.md
├── bin
│   └── fetch-configlet
├── bowling
│   ├── bowling_test.ext
│   └── example.ext
├── clock
│   ├── clock_test.ext
│   └── example.ext
├── config.json
└── docs
    ├── ABOUT.md
    └── img
... etc
we can have something like this:
x{TRACK_ID}/
├── LICENSE
├── README.md
├── bin
│   └── fetch-configlet
├── config.json
├── docs
│   ├── ABOUT.md
│   └── img
├── exercises
│   ├── bowling
│   │   ├── bowling_test.ext
│   │   └── example.ext
│   └── clock
│       ├── clock_test.ext
│       └── example.ext
... etc
This has already been deployed to production, so it's safe to make this change whenever you have time.
See #63.
One solution might be adding a read-time conditional above defpackage for the dna package example files.
But it makes sense to take this opportunity to change how "xlisp-test" loads test data from example files. It should probably load the examples like test data before the tests run, then afterwards use delete-package on the example package.
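A sketch of what that could look like in "xlisp-test" (the helper name and calling convention here are hypothetical):

```lisp
(defun call-with-example (example-file package-name thunk)
  "Load EXAMPLE-FILE as a fixture, run THUNK, then remove the example's
package so a later example can safely redefine it."
  (load example-file)
  (unwind-protect
       (funcall thunk)
    (let ((package (find-package package-name)))
      (when package
        (delete-package package)))))

;; usage sketch:
;; (call-with-example "dna/example.lisp" '#:dna
;;                    (lambda () (run-the-dna-tests)))
```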
A duration of a gigasecond should be measured in seconds, not days.
The gigasecond problem has been implemented in a number of languages, and this issue has been generated for each of these language tracks. This may already be fixed in this track; if so, please make a note of it and close the issue.
There has been some discussion about whether or not gigaseconds should
take daylight savings time into account, and the conclusion was "no", since
not all locations observe daylight savings time.
This is a placeholder issue for keeping track of porting assignments to Lisp, or noting why they may not be applicable. (This list shamelessly stolen from exercism/clojure#1)
The contents of the SETUP.md file get included in the README.md that gets delivered when a user runs the exercism fetch command from their terminal.
At the very minimum, it should contain a link to the relevant language-specific documentation on help.exercism.io.
It would also be useful to explain in a generic way how to run the tests.
Remember that this file will be included with all the problems, so it gets
confusing if we refer to specific problems or files.
Some languages have very particular needs in terms of the solution: nested
directories, specific files, etc. If this is the case here, then it would be
useful to explain what is expected.
Thanks, @tejasbubane for suggesting that we add this documentation everywhere.
See exercism.io#2198.
For some reason, I've lately been thinking a lot about the troubles we had earlier this year with the gigasecond exercise, and I've come up with some ideas about detecting and preventing future issues with that or perhaps with future exercises (todo: make list of other datetime exercises).
Now that we have exercise/example testing set up, the first idea is simply to schedule a TravisCI build for the DST switch dates in the US. If the tests fail for some reason, the scheduled build should let us know almost as soon as can be known. I poked around the TravisCI settings and didn't see anything like this, so I thought about setting up an IFTTT solution (I'm pretty sure I have an account, but I don't remember the password, as I don't think I've ever used it). Any other suggestions welcome.
The second is that it seems possible that only a few CL implementations could be affected by the DST switch, due to a bug in time handling. It would be nice to figure out how to conditionally allow failures on a per-test-case basis, so that even if there's a problem with the DST-affected exercise tests on one implementation, we can still get feedback on the other exercises until the issue is resolved (or until the switch back).
If more serious problems turn up, we could also look for a date-time library for CL that's well maintained and, if that works out, recommend it to users.
Lastly, I think we should also consider something ("policy" is maybe too strong a word) between a recommendation and a policy, where we prefer time-dependent test implementations to work for standard time, if for some reason we can't make them work for both DST and ST.
I don't suggest that any of these be permanent or immediately acted upon, I mainly wanted to get my thoughts on the topic out there and collect some ideas going forward.
http://www.nist.gov/pml/div688/dst.cfm
Keep the faith, someday, sometime, the tyranny will end.
Here are exercises from the xclojure project which are not yet implemented in xlisp:
(xclojure chosen rather arbitrarily as a place to get a list)
Right now all of the icons used for the language tracks (which can be seen at http://exercism.io/languages) are stored in the exercism/exercism.io repository in public/img/tracks/. It would make a lot more sense to keep these images along with all of the other language-specific stuff in each individual language track repository.
There's a pull request that is adding support for serving up the track icon from the x-api, which deals with language-specific stuff.
In order to support this change, each track will need to create an img/ directory at the root of this repository if it doesn't already exist, then move the track icon into the img/ directory, and importantly name it icon.png.
In other words, at the end of it you should have the following file:
./img/icon.png
See exercism/exercism#2925 for more details.
I've been thinking about this implementation of the Robot Name exercise: http://exercism.io/submissions/dc01bbe303724a958e509d819ab49853 and it's occurred to me that it reveals a bug in the tests for this exercise.
When reset-name is called on a robot, here, it causes that robot's robot-name to return nil, which, of course, is not equal to the original-name in the test, and thus the assertion passes. It's okay for a robot not to have a name (in fact, the README requires it as the initial state of new robots; we don't test that expectation, so everyone implements new robots with names, which is likely another bug in the exercise), but reset-name should at least provide the robot with a new name.
To fix: break out the code for name-matches-expected-pattern into robot-name-valid-p, to be used both in that test and in a new assertion on the robot's reset name in name-can-be-reset.
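The extracted predicate could look like this (a sketch of the proposed robot-name-valid-p):

```lisp
(defun robot-name-valid-p (name)
  "True when NAME is two uppercase letters followed by three digits."
  (and (stringp name)
       (= (length name) 5)
       (every #'upper-case-p (subseq name 0 2))
       (every #'digit-char-p (subseq name 2 5))))

;; (robot-name-valid-p "AB123") => T
;; (robot-name-valid-p nil)     => NIL, so the reset-name bug is caught
```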
We get enough misformatted submissions that I think we should take the hint and add some super basic documentation about setting up Emacs for exercism exercises. This might also help with the Clojure, Scheme, Elisp, and nascent LFE tracks. We might even want to go so far as a minor mode.
@verdammelt, @canweriotnow what do you think?
I don't think I know any maintainers of the Clojure or LFE tracks, so if you do and think they'd be interested in this too, we should send them a shout-out.
cc @kytrinyx
There is an interesting edge case in the meetup problem:
some months have five Mondays.
March of 2015 has five Mondays (the fifth being March 30th), whereas
February of 2015 does not, and so should produce an error.
Thanks, @JKesMc9tqIQe9M for pointing out the edge case.
See exercism.io#2142.
If you visit http://exercism.io/languages/lisp and then click on 'About the Common Lisp Track' it says "We're missing a short introduction about the language." and asks that the text be put into docs/ABOUT.md.
So I guess we need one of those.
Remove clisp and clisp32 from the allow_failures matrix in .travis.yml.
See #54 (comment)
I guess I missed this when I reviewed & merged the pull request.
Per discussion in http://exercism.io/submissions/c9f240b76eb248d9aeec1fd416ea51be, we think adding a case-sensitivity test is a good idea.
The test should fail if allergic-to-p returns nil for a score and corresponding allergens that are lowercase, mixed-case (including title-cased), or uppercase. Examples that should be true:
(allergic-to-p 0 "eggs")
(allergic-to-p 0 "eGgS")
(allergic-to-p 0 "Eggs")
(allergic-to-p 0 "EGGS")
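One way to make the example pass such a test is to look the allergen up case-insensitively (a sketch; the allergen list and bit values follow the usual form of the exercise, where eggs is the lowest bit, so these illustrative calls use score 1):

```lisp
(defparameter *allergens*
  '("eggs" "peanuts" "shellfish" "strawberries"
    "tomatoes" "chocolate" "pollen" "cats"))

(defun allergic-to-p (score allergen)
  "True when SCORE's bit for ALLERGEN is set.
STRING-EQUAL makes the lookup case-insensitive."
  (let ((pos (position allergen *allergens* :test #'string-equal)))
    (and pos (logbitp pos score))))

;; (allergic-to-p 1 "eGgS") => T   ; eggs is bit 0
;; (allergic-to-p 1 "cats") => NIL
```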
I've used Sarah Sharp's FOSS Heartbeat project to generate stats for each of the language track repositories, as well as the x-common repository.
The Exercism heartbeat data is published here: https://exercism.github.io/heartbeat/
When looking at the data, please disregard any activity from me (kytrinyx), as I would like to get the language tracks to a point where they are entirely maintained by the community.
Please take a look at the heartbeat data for this track, and answer the following questions:
I've made up the following scale:
It would also be useful to know if there is a lot of activity on the track, or just the occasional issue or comment.
Please report the current status of the track, including your best guess on the above scale, back to the top-level issue in the discussions repository: exercism/discussions#97
@pminten brought up a very good point about the API of meetup being
pretty terrible (See exercism/exercism#950).
If the meetup problem has not already been fixed in this track, we should
improve the test suite and example code to give the API a better design.
Some things that might be worth touching on:
Thought it would be good to get a real Travis integration. Maybe that is possible?
example.lisp)
(Note: IMNSHO it would be cool to have the build script, once sbcl &c. are installed, be written in Lisp.)
Remove ccl and ccl32 from the allow_failures matrix in .travis.yml.
See #54 (comment)