
vlang's Introduction

Exercism V Track

Exercism exercises in V.

Testing

To test the exercises, run ./bin/test. This command iterates over all exercises and checks whether each one's example implementation passes all of its tests.

Track linting

configlet is an Exercism-wide tool for working with tracks. You can download it by running:

$ ./bin/fetch-configlet

Run its lint command to verify that every exercise has the necessary files and that the config files are correct:

$ ./bin/configlet lint

The lint command is under development.
Please re-run this command regularly to see if your track passes the latest linting rules.

Basic linting finished successfully:
- config.json exists and is valid JSON
- config.json has these valid fields:
    language, slug, active, blurb, version, status, online_editor, key_features, tags
- Every concept has the required .md files
- Every concept has a valid links.json file
- Every concept has a valid .meta/config.json file
- Every concept exercise has the required .md files
- Every concept exercise has a valid .meta/config.json file
- Every practice exercise has the required .md files
- Every practice exercise has a valid .meta/config.json file
- Required track docs are present
- Required shared exercise docs are present

Contributing

General Information (useful for all contributions)

Setting up V

See INSTALLATION.md

Style Guide:

Before committing, please run v fmt -w [FILE_NAME] on whatever file you're committing to ensure it is formatted properly. More info on V formatting can be found in the docs.
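For reference, v fmt enforces tab indentation and spaces around operators such as := and <<. A minimal, hypothetical example of what formatted V code looks like (the function is invented for illustration):

module main

// written the way v fmt formats it: tabs for indentation,
// spaces around `:=` and `<<`
fn double_all(values []int) []int {
	mut result := []int{}
	for v in values {
		result << v * 2
	}
	return result
}

fn main() {
	println(double_all([1, 2, 3])) // prints [2, 4, 6]
}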

In any comments section

Use conventional comments

Issues

Opening new issues is highly encouraged! To make the process as smooth as possible, please include as much information as possible about the issue. Better to have more information than not enough.

Pull requests

How to get started contributing

A good place to start in the docs is to understand how the Hello World exercise is created. Make sure you've fetched configlet!

How to implement a new exercise from start to finish

There are two ways to implement a practice exercise. You can follow all 13 steps listed below from start to finish, or you can run ./bin/bootstrap_practice_exercise.sh [SLUG] to create all the files and folders you'll need. The script lets you skip a few steps and jump right into writing your example solution, but you'll need bash and jq to run it.

  1. Pick an exercise from the problem-specifications repo.
  2. Create a new entry for the exercise in config.json. Include:
  • a new UUID for the exercise (generated with bin/configlet uuid)
  • the slug of the exercise (should be the same as in problem-specifications)
  • the name of the exercise (should be the same as in problem-specifications)
  • any concepts it practices and prereqs it has (usually safe to leave blank!)
  • your best estimate of how difficult the task is
  3. Create the directory for the exercise and download shared files using these two commands:
  • bin/configlet sync --update --yes --docs --metadata --exercise [SLUG]
  • bin/configlet sync --update --tests include --exercise [SLUG]
  4. Create 3 files in the new directory (located at exercises/practice/[SLUG]):
  • run_test.v
  • [SLUG].v
  • .meta/example.v
  5. Write an example implementation in [SLUG].v.
  6. Write a test suite in run_test.v based on the canonical data in problem-specifications. Here's an example of canonical data and here's the corresponding test suite; a short sketch also follows after this list.
  7. Run the test suite with v -stats test run_test.v.
  8. Once all tests pass, make sure the code is formatted properly by running v fmt -w [V_FILE] on all the V files (example and test files).
  9. Copy everything in [SLUG].v into .meta/example.v.
  10. Remove everything from [SLUG].v except module main at the top and stubs of the required function(s) (plus a struct or two if the exercise needs them).
  11. Add the needed info to the ...[SLUG]/.meta/config.json file:
  • the author's GitHub username
  • the solution file name (should be [SLUG].v)
  • the test file name (should be run_test.v)
  • the example solution file name (should be .meta/example.v)
  12. Commit your changes with conventional commits.
  13. Make your PR and do a little happy dance.
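To make steps 6 and 10 concrete, here is a hedged sketch for a hypothetical "square" exercise (the slug, function name and test cases are invented; a real exercise takes them from its canonical data). The stub left in [SLUG].v after step 10 might look like this:

module main

// stub handed to the student: just the module declaration and the signature
fn square(n int) int {
	return 0
}

And a matching run_test.v (run with v -stats test run_test.v, as in step 7), which calls the function defined in [SLUG].v in the same directory:

module main

// every test function's name must start with test_
fn test_square_of_three() {
	assert square(3) == 9
}

fn test_square_of_negative_four() {
	assert square(-4) == 16
}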

Decision Records

(putting these here for now until a better place turns up)

How to handle module naming

See solution #3 in this excellent repo

Name of sample solutions

example.v

Name of testing script for each solution

run_test.v

"All test functions have to be inside a test file whose name ends in _test.v."

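As a self-contained illustration of that rule, the following hypothetical file could be saved as shout_test.v and run with v test shout_test.v:

module main

// helper under test; in a real exercise this would live in [SLUG].v instead
fn shout(phrase string) string {
	return phrase.to_upper()
}

// only functions named test_* inside a *_test.v file are run as tests
fn test_shout() {
	assert shout('hello') == 'HELLO'
}
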
vlang's People

Contributors

1ethanhansen, andrerfcsantos, asbaumgarten, bnandras, erikschierboom, exercism-bot, gautampanchal94, hraftery, justanothergithubber, kahgoh, keiravillekode, kytrinyx, m-charlton, natanaelsirqueira, siragi

vlang's Issues

Thanks!

Thanks, good work! ❤️

Are you present on the official V Discord? You can add yourself there and post about things like this. I found this gem, and the community is happy too.
@1ethanhansen

[Screenshots of the Discord discussion]

misleading comment on exercises/practice/accumulate

First remark:
The exercise requires the student to essentially re-implement array.map. Why are the functions called accumulate (a name that, in every programming language I know, refers to an aggregation, usually called reduce or fold) when the requirement is to do something like map (or transform in some languages)?

Second remark:

The justification "Because V functions cannot be overloaded[1]" comes across as misleading, given what the possible answers look like.

// Because V functions cannot be overloaded[1], make another function
//  called `accumulate_strs` that does the same thing for strings
// instead of ints

solutions:

module main

// sol 1: with builtin map
fn accumulate_ints0(values []int, operation fn (int) int) []int {
	return values.map(operation)
}

fn accumulate_strs0(values []string, operation fn (string) string) []string {
	return values.map(operation)
}

// sol 2: reimplementation of builtin
fn accumulate_ints1(values []int, operation fn (int) int) []int {
	mut arr := []int{}
	for k in values {
		arr << operation(k)
	}
	return arr
}

fn accumulate_strs1(values []string, operation fn (string) string) []string {
	mut arr := []string{}
	for k in values {
		arr << operation(k)
	}
	return arr
}

// sol 3: generic reimplementation of builtin
fn accumulate[T](values []T, operation fn (T) T) []T {
	mut arr := []T{}
	for k in values {
		arr << operation(k)
	}
	return arr
}

fn accumulate_ints(values []int, operation fn (int) int) []int {
	return accumulate[int](values, operation)
}

fn accumulate_strs(values []string, operation fn (string) string) []string {
	return accumulate[string](values, operation)
}

// Because V functions cannot be overloaded[1], make another function
//  called `accumulate_strs` that does the same thing for strings
// instead of ints

What if it were possible to overload accumulate_ints? One could overload on the argument types, but not on the return type 😛, and that doesn't apply here anyway, because accumulate_strs returns a []string, which has little in common with []int in this context.

Why not word it like this:

// Make another function
//  called `accumulate_strs` that does the same thing for strings
// instead of ints

Make the testing output more verbose

    Non-blocking question: Can we make the output more verbose? 

The CI logs contain only the output from line 33:

Checking hello_world exercise...
Checking leap exercise...
Checking reverse_string exercise...
Checking space_age exercise...

So it's not fully reassuring that every test case actually runs. It would be nice not to rely solely on the exit code.

Originally posted by @ee7 in #10 (comment)

Create an Analyzer

From the docs:

Exercism's analyzers automatically assess student's submissions and provide mentor-style commentary.

`simple-linked-list` tests rely on undocumented interface (`.len`)

I don't think a len field is specified in the problem statement or stub code. But it is required to run the tests. Moreover, being a field, it has to be updated by every mutating method.

Should we either add it to the specification, or remove it from the tests?

For reference, I can see (arbitrary sampling):

  • Haskell doesn't require it.
  • Go has Size() in the interface.
  • Javascript has length() in the interface.
  • Rust has len() in the interface.

I also note that simple-linked-list doesn't have a canonical-data.json to guide us: https://github.com/exercism/problem-specifications/tree/main/exercises/simple-linked-list

My opinion, held very lightly: I'm split between removing the requirement, so the exercise is as easy as the problem description suggests, and adding it as a method (not a field) to the interface, since it's a good exercise to do with a few different implementations.
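
For what it's worth, here is a hedged V sketch of the method-based option; the array-backed layout and names are assumptions for illustration, not the track's actual stub:

module main

// hypothetical array-backed list; the real exercise presumably uses nodes
struct SimpleLinkedList {
mut:
	elements []int
}

// exposing the length as a method keeps the internal representation private
fn (l SimpleLinkedList) len() int {
	return l.elements.len
}

fn (mut l SimpleLinkedList) push(element int) {
	l.elements << element
}

fn (mut l SimpleLinkedList) pop() !int {
	if l.elements.len == 0 {
		return error('list is empty')
	}
	return l.elements.pop()
}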

Launch tracking

This issue helps keep track of the tasks you're working on towards launching this track.

The next steps are:

Once you've finished a task, you can check it off in this list.

Questions

Please ask if you have any questions or if anything is confusing!

Add all-your-base practice exercise

I would like to add a port to V of the practice exercise all-your-base.

If the review doesn't create too much work, I could also port a few more tests (suggestions welcome).
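
For reference, a hedged sketch of what the V port could look like; the rebase name, signature and error messages are assumptions, and the canonical data in problem-specifications defines the actual test cases:

module main

// convert a sequence of digits from input_base to output_base
fn rebase(input_base int, digits []int, output_base int) ![]int {
	if input_base < 2 {
		return error('input base must be >= 2')
	}
	if output_base < 2 {
		return error('output base must be >= 2')
	}
	// fold the digits into a single integer value
	mut value := 0
	for d in digits {
		if d < 0 || d >= input_base {
			return error('all digits must satisfy 0 <= d < input base')
		}
		value = value * input_base + d
	}
	if value == 0 {
		return [0]
	}
	// peel digits off in the output base, least significant first, then reverse
	mut out := []int{}
	for value > 0 {
		out << value % output_base
		value /= output_base
	}
	return out.reverse()
}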

Create a Representer

From the docs:

A Representer is a bit of code that has the single responsibility of taking a solution and returning a normalized representation of it.
