
elm-review's Introduction

elm-review

elm-review analyzes Elm projects, to help find mistakes before your users find them.

(Screenshot: elm-review reporter output)

What does elm-review do?

elm-review analyzes your source code, trying to recognize code that is known to cause problems. All the rules describing problematic code are written in Elm, and elm-review does not come with built-in rules; instead users are encouraged to write rules themselves and publish them as Elm packages, for everyone to benefit. Search the package registry to find what's out there!

Encouraging users to write rules also makes it easy to add custom rules that only apply to your project, such as rules that:

  • enforce that e.g. image paths only live in an Images module, which other modules can reference.
  • make everyone use a common Button component, instead of creating their own.
  • help users of a library you made avoid mistakes that your API could not prevent.

Beware of how and why you introduce rules in your project though. Often a good API that guides users towards correct solutions is the best way to go, so instead of writing a rule, maybe there is an API that can be improved. But if a rule seems like the best solution, remember to discuss it with your team. It's easy to mix up patterns that are objectively bad with patterns that you personally find problematic, and forbidding patterns that other people find useful can be very disruptive.

Try it out

The easiest way to run elm-review, if you have Node.js and npm installed, is to use the elm-review CLI tool.

# Save it to your package.json, if you use npm in your project.
# This is the recommended way.
npm install elm-review --save-dev

# Install globally. This is not recommended.
npm install -g elm-review

You can also try it out without installing or configuring it, as long as you have Node.js installed. All you need is to find a configuration on GitHub (elm-review packages are encouraged to provide an example one). Once you have found a configuration, run elm-review by specifying the name of the GitHub repository and the path to the configuration:

npx elm-review --template jfmengels/elm-review-unused/example

Do you want to find and remove all the dead code in your project? Then run the following command (this might take a while if your project has a lot of dead code though!), and think about whether you want this goodness in your project!

npx elm-review --template jfmengels/elm-review-unused/example --fix-all

More information on that in the CLI documentation.

Configuration

elm-review is configured through a review/ folder in your project. It is a self-contained Elm project where you can specify your dependencies, and write, import, and configure review rules.

Rules are configured in the review/src/ReviewConfig.elm file:

module ReviewConfig exposing (config)

import Review.Rule exposing (Rule)
import Third.Party.Rule
import My.Own.Custom
import Another.Rule


config : List Rule
config =
    [ Third.Party.Rule.rule
    , My.Own.Custom.rule
    , Another.Rule.rule { ruleOptions = [] }
    ]

Get started

You can get started with a fresh configuration by running the elm-review init command with the command line tool installed, which will add a review folder to your project.

You can also start from an existing configuration using elm-review init --template <some configuration>. I created some configurations that I believe can be good starting points.

# Start with an empty configuration
elm-review init

# Starter configuration for an Elm application
elm-review init --template jfmengels/elm-review-config/application

# Starter configuration for an Elm package
elm-review init --template jfmengels/elm-review-config/package

Once you have set up an initial configuration, you can add new rules. As elm-review does not come with built-in rules, you can look for packages containing rules on the Elm package registry by searching for packages whose name starts with elm-review-.

Once you've found a package that you like, you can install it with the elm install command, just like any other Elm project dependency.

cd review/ # Go inside your review configuration directory
elm install authorName/packageName
# then update your `review/src/ReviewConfig.elm` to add the rule
# as explained in the package's documentation

Before you start adding rules or an unfamiliar existing configuration, I suggest reading the rest of this document, especially the section on when to enable a rule.

Write your own rule

You can write your own rule using this package's API and elm-syntax. Check out the Review.Rule documentation for how to get started.

NOTE: If you want to create a package containing elm-review rules, I highly recommend using the CLI's elm-review new-package subcommand. This will create a new package that will help you use the best practices and give you helpful tools like easy auto-publishing. More information is available in the maintenance file generated along with it.

If you want to add/create a rule for the package or for your local configuration, then I recommend using elm-review new-rule, which will create a source and test file which you can use as a starting point. For packages, it will add the rule everywhere it should be present (exposed-modules, README, ...).

Here's an example of a rule that prevents a typo in a string that was made too often at your company.

module NoStringWithMisspelledCompanyName exposing (rule)

import Elm.Syntax.Expression as Expression exposing (Expression)
import Elm.Syntax.Node as Node exposing (Node)
import Review.Rule as Rule exposing (Error, Rule)

-- Create a new rule
rule : Rule
rule =
    -- Define the rule with the same name as the module it is defined in
    Rule.newModuleRuleSchema "NoStringWithMisspelledCompanyName" ()
        -- Make it look at expressions
        |> Rule.withSimpleExpressionVisitor expressionVisitor
        |> Rule.fromModuleRuleSchema

-- This function will visit all the expressions (like `1`, `"string"`, `foo bar`, `a + b`, ...)
-- and report problems that it finds
expressionVisitor : Node Expression -> List (Error {})
expressionVisitor node =
    case Node.value node of
        -- It will look at string literals (like "a", """a""")
        Expression.Literal str ->
            if String.contains "frits.com" str then
                -- Return a single error, describing the problem
                [ Rule.error
                    { message = "Replace `frits.com` by `fruits.com`"
                    , details = [ "This typo has been made and noticed by users too many times. Our company is `fruits.com`, not `frits.com`." ]
                    }
                    -- This is the location of the problem in the source code
                    (Node.range node)
                ]

            else
                []

        _ ->
            []

Then add the rule in your configuration:

module ReviewConfig exposing (config)

import NoStringWithMisspelledCompanyName
import Review.Rule exposing (Rule)


config : List Rule
config =
    [ NoStringWithMisspelledCompanyName.rule
    -- other rules...
    ]
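
Rules written with this package are typically unit-tested with elm-test. Below is a minimal sketch for the rule above, assuming the Review.Test module from this package and the elm-explorations/test package (the module and test names are placeholders):

module NoStringWithMisspelledCompanyNameTest exposing (all)

import NoStringWithMisspelledCompanyName exposing (rule)
import Review.Test
import Test exposing (Test, describe, test)


all : Test
all =
    describe "NoStringWithMisspelledCompanyName"
        [ test "should report an error when the misspelled name is used" <|
            \() ->
                """module A exposing (..)
a = "frits.com"
"""
                    |> Review.Test.run rule
                    |> Review.Test.expectErrors
                        [ Review.Test.error
                            { message = "Replace `frits.com` by `fruits.com`"
                            , details = [ "This typo has been made and noticed by users too many times. Our company is `fruits.com`, not `frits.com`." ]
                            , under = "\"frits.com\""
                            }
                        ]
        , test "should not report anything when the name is spelled correctly" <|
            \() ->
                """module A exposing (..)
a = "fruits.com"
"""
                    |> Review.Test.run rule
                    |> Review.Test.expectNoErrors
        ]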

If you want to write a rule but might not have an idea of where to start, have a look on the elm-review-rule-ideas repository. You may also want to look for rules in the Elm packages registry and in the GitHub elm-review topic.

When to write or enable a rule

The bar to write or enable a rule should be pretty high. A new rule can often turn out to be a nuisance to someone, sometimes in ways you didn't predict, so making sure the rule solves a real problem, and that your team is on board with it, is important. If a developer disagrees with a rule, they may try to circumvent it, resulting in code that is even more error prone than the pattern that was originally forbidden. So the value provided by the rule should be much greater than the trouble it causes, and if you find that a rule doesn't live up to this, consider disabling it.

Review rules are most useful when some pattern must never appear in the code. It gets less useful when a pattern is allowed to appear in certain cases, as there is no good solution for handling exceptions to rules. If you really need to make exceptions, they must be written in the rule itself, or the rule should be configurable.

For rules that enforce a certain coding style, or suggest simplifications to your code, I would ask you to raise the bar for inclusion even higher. A few examples:

  • I much prefer using |> over <|, and I think using the latter to pipe functions over several lines is harder to read. Even if using |> were indeed better in most situations and my teammates agreed, such a rule would, for instance, prevent me from writing tests the suggested way.
  • If a record contains only one field, then I could suggest not using a record and use the field directly, which would make things simpler. But using a record can have the advantage of being more explicit: findFiles [] folder is harder to understand than findFiles { exceptions = [] } folder.

Some rules might suggest using advanced techniques to avoid pitfalls, which can make it harder for newcomers to get something done. When enabling this kind of rule, make sure that the message it gives is helpful enough to unblock users.

When wondering whether to enable a rule, I suggest using this checklist:

  • I have had problems with the pattern I want to forbid.
  • I could not find a way to solve the problem by changing the API of the problematic code or introducing a new API.
  • If the rule exists, I have read its documentation and the section about when not to enable the rule, and it doesn't apply to my situation.
  • I have thought very hard about what the corner cases could be and what kind of patterns this would forbid that are actually okay, and they are acceptable.
  • I think the rule explains well enough how to solve the issue, to make sure beginners are not blocked by it.
  • I have communicated with my teammates and they all agree to enforce the rule.
  • I am ready to disable the rule if it turns out to be more disturbing than helpful.

Is there a way to ignore errors?

elm-review does not provide a way to disable errors on a case-by-case basis — by line or sections of code — like a lot of static analysis tools do. I've written about this in How disable comments make static analysis tools worse.

Because you can't ignore errors easily, elm-review puts more of the burden on the rules, requiring them to be of higher quality (fewer false positives) and better designed (avoiding rules that will inherently have lots of exceptions or false positives).

It does provide two systems that I think are better alternatives for the health of your project.

Configuring exceptions

You can configure exceptions, which consists of marking specific directories or files as not relevant to a rule or set of rules, preventing errors from being reported for them.

It is a good fit if you want elm-review not to report errors in vendored or generated code, or in files and directories that, by the nature of the rule, should be exempt.
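
As a minimal sketch of what that looks like in a configuration (Some.Rule and the paths are placeholders; Rule.ignoreErrorsForDirectories and Rule.ignoreErrorsForFiles come from Review.Rule):

module ReviewConfig exposing (config)

import Review.Rule as Rule exposing (Rule)
import Some.Rule


config : List Rule
config =
    [ Some.Rule.rule
        -- don't report errors from this rule in vendored or generated code
        |> Rule.ignoreErrorsForDirectories [ "src/Generated/", "vendor/" ]
        -- nor in this specific file
        |> Rule.ignoreErrorsForFiles [ "src/VendoredHelper.elm" ]
    ]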

Temporarily suppressing errors

elm-review has a system to temporarily suppress errors, which aims to help you gradually adopt rules that report many errors in your project without having to fix all the issues beforehand.

Running elm-review suppress will generate one JSON file in review/suppressed/ (in your review configuration) for every rule that currently reports errors, recording the number of suppressed errors per file in your project. These files should be checked into your version control system.

As long as suppression files exist for your project, running elm-review will behave as usual but with these additional behaviors:

  • Suppressed errors won't be reported.
  • If new errors appear for the suppressed rules and files, the related suppressed errors will be reported until you reduce the number of errors back to the number in the JSON file. This is a good opportunity to fix more!
  • If no errors are being reported and there are fewer suppressed errors than before, the suppression files will be updated automatically, in order to make sure no new errors get reintroduced unknowingly.

While you can run the suppress command to ignore newly reported errors, please do so with moderation. The aim is to allow enabling rules while there are errors remaining and to have these fixed incrementally, not to make it easier to ignore errors.

Use elm-review suppress --help to start using this, and read more about the design choices that went into the feature.

Note that to avoid uncommitted suppression files in your project's main branch, it is recommended to use elm-review suppress --check-after-tests at the end of your test suite.

Extract information

elm-review has quite a nice way of traversing the files of a project and collecting data, especially when things aren't as simple as grepping or applying a regex on the code.

While the tool is mainly designed around reporting issues, you can also use it to extract information from the codebase. You can use this to gain insight into your codebase, or provide information to other tools to enable powerful integrations.

To make use of this feature, run elm-review --extract --report=json with a configuration containing a rule that uses Rule.withDataExtractor.
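
As a sketch of the rule side of this (assuming Rule.withDataExtractor accepts a function from the final project context to a Json.Encode.Value, and a ProjectContext that collects module names; the schema wiring with Rule.newProjectRuleSchema and Rule.fromProjectRuleSchema is omitted):

import Json.Encode as Encode


type alias ProjectContext =
    { moduleNames : List String }


-- The extracted value ends up under `extracts.<YourRuleName>` in the JSON report.
dataExtractor : ProjectContext -> Encode.Value
dataExtractor projectContext =
    Encode.list Encode.string projectContext.moduleNames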

The result for a rule will be stored under <json>.extracts.<YourRuleName>. To access it, you can then pipe the result into either a Node.js script, a tool that expects JSON, or jq as in the example below.

elm-review --extract --report=json |
    jq -r '.extracts["YourRuleName"]'

# or to be slightly faster
elm-review --extract --report=json --rules YourRuleName |
    jq -r '.extracts["YourRuleName"]'

Combine the above with --watch for fast feedback when editing your code! Run

elm-review --report=json --extract --watch

and read the value at <output>.extracts["YourRuleName"] in each report.

elm-review's People

Contributors

aszenz, bebbs, ben-eb, dependabot[bot], jfmengels, jiegillet, kasma1990, leojpod, lishaduck, lue-bird, martinsstewart, siriusstarr


elm-review's Issues

Expressions returning only `Just _ : Maybe a` -> convert to return `a` instead

What the rule should do:
Check all returned expressions inside if...else and case...of expressions of type Maybe _ for whether they're all Justs.
If they are, it should suggest returning the values inside the Justs instead and (perhaps?) calling the Just constructor outside (or refactoring the usages of this expression/function into something nicer / more idiomatic).

Example of things the rule would report:

case stuff of
  X -> Just "x"
  Y -> Just "y"
  Z -> 
    if foo then
      Just <| blabla "zzz" 999
    else
      Just <| somethingMoreComplicated "z" 42

⬇️

case stuff of
  X -> "x"
  Y -> "y"
  Z -> 
    if foo then
      blabla "zzz" 999
    else
      somethingMoreComplicated "z" 42

or

if foo then
  Just "x"
else
  Just "y"

⬇️

if foo then
  "x"
else
  "y"

Example of things the rule would not report:

if foo then
  Just "x"
else
  Nothing

When (not) to enable this rule:

Libraries might be bound by their existing API design and so it might not be desirable to refactor such functions to stop exposing Maybe.

I am looking for:

  • Feedback
  • Someone to implement it with/for me

Disable CustomTypeConstructorArgs for empty tuples (unit)

Describe the bug

NoUnused.CustomTypeConstructorArgs: Argument is never extracted and therefore
never used.

226|     | LoggedInError (Result Decode.Error ErrorData)
227|     | DeleteChapter (Result Firestore.Error ())
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^
228|     | ReadChapter (Result Firestore.Error (Firestore.Document Reading))

This argument is never used. You should either use it somewhere, or remove it at
the location I pointed at.

Expected behavior
I don't think I should get a failing build when the actual type of the delete REST API Firestore message is delete : Path (DocumentPath a) -> Task Error (). Maybe there is a way to disable this rule in my project? 🤔

Thanks in advance!

Incremental adoption of rules

Abstract

I am looking for ways to make it easier to adopt static analysis/linting rules in a project when there are many errors, and fixing all of them in one go is not feasible or too daunting.

Context

At the moment, when you add a new rule to the elm-review configuration (or simply run elm-review for the first time), you're likely to end up with a lot of new errors, which you may not want to deal with right now.

If the rule happens to provide automatic fixes, that will help out a lot. But if there were a lot of errors, even that might result in a lot of changes that your colleagues (and yourself!) will not fancy reviewing.

Currently the best option is probably to enable the rule, but to ignore the errors for every file containing an error using ignoreErrorsForFiles, and then get your team to fix the errors gradually.

config : List Rule
config =
    -- Opt-in list of rules to enable
    [ Some.Rule.rule
        |> Rule.ignoreErrorsForFiles
            [ "src/A.elm"
            , "src/B.elm"
            , "..."
            ]
    ]

You can also fix the errors before enabling the rule, but there is then a risk of new errors appearing in the meantime.

I propose a solution for this problem, but I first want to go through the "obvious" alternative solutions that other tools seem to have chosen.

Alternative solutions

Severity levels

I believe that one way this is usually done in other static analysis tools (ESLint for instance), is to associate rules with a severity level, like "warning" and "error" (which we don't support). I believe this to be a non-helpful solution in the long run, as I describe here. I personally don't enable any rule as a "warning" when I work with ESLint.

I therefore really don't want to support severity levels, and would prefer this problem to remain as it is rather than solve it this way (unless severity levels also solve other kinds of problems).

Add suppression comments

Again, this is something that you could see in ESLint-land, though if there are too many errors, the rule will most certainly end up as a warning. It would be quite possible to automate adding a // eslint-disable-line <rule-name> on every reported line, or even a /* eslint-disable <rule-name> */ at the top of each reported file.

elm-review does not support suppression comments. I have described over here why I think this is not a good idea, even though that decision can make a lot of things harder.

I would add to this that people don't go hunting for disable comments to fix the errors. The reasoning is that it's often hard to know whether a rule was disabled for good reason (bug in the rule, ...) or not (lazy developer, ...), plus there is no centralized location that tells you how many disabled errors there are or where to start.

Proposed solution

My proposal is this: Add new functions that temporarily suppress a rule for a list of files. It would look very much like ignoreErrorsForFiles. I will for the sake of this proposal name the function temporarilySuppressErrors, but the bikeshed is open.

config : List Rule
config =
    [ Some.Rule.rule
        |> Rule.temporarilySuppressErrors
            [ "src/A.elm"
            , "src/B.elm"
            , "..."
            ]
    ]

It looks very much like ignoreErrorsForFiles, but there would be some differences.

When we notice that no error is showing up in the given file anymore (even if by coincidence), elm-review would report an error and fail, telling the user to remove the file from the list of ignored files in their review configuration. This is a small improvement but would help clean up step by step. Maybe the user would get a dopamine rush and feel like fixing the rest of the issues!

If a file turns out not to exist, then the CLI would report that. Otherwise we'd never be able to clean up the list of files if a file was deleted.

At the moment, and when possible, ignored files are not analyzed by rules, when we know that skipping them will not impact the analysis of the rest of the project. Suppressed files, however, would need to be analyzed, because otherwise we can't report when they no longer contain errors.

The CLI would provide a flag (and editors would provide some toggle?) allowing the user to disable the suppressions (i.e. re-activating the rules), allowing you to easily go hunt for these errors. The user could also manually remove a file from the suppression list if they prefer. Having everything in the configuration makes it easier to see the amount of errors left, even though we don't see how many errors are in each file.

An idea from a colleague of mine was to run elm-review in the CI twice: Once the "usual" way, and if there are no errors, run it once again with the previously mentioned flag (but ignoring the exit code), and feed that to our monitoring service (we are a monitoring company after all), so we can follow the quantity of suppressed errors and prioritize their hunt. An alternative solution would be to make elm-review NOT exit with an error code if only suppressed errors were reported, so that we don't have to run the tool twice. I imagine that if some people see a good reason for the tool to fail even in that case, we could add a separate flag or make it take an argument.

For this last thing to work well, we do need the output of the error to be marked as "suppressed", so I think the output will most likely look something like this:

-- ELM-REVIEW ERROR ----------------------------------------- src/Main.elm:10:11

(fix) (suppressed) NoUnused.Variables: Imported variable `span` is not used

...

and the JSON output would have an additional field "suppressed": true added to it.

We could have done the same thing by having the rules be enabled in a different review configuration, but I don't believe that to be practical for several reasons (that I could go into if asked, I'm being lazy now).

After reviewing, the CLI would report the number of suppressed errors (without showing them individually). The wording is temporary, but it shows the idea.

# Currently
I found no errors!
# With the proposal, if there are suppressed errors
I found no errors, but there are 500 suppressed errors lying around.
# If there are some fixable errors in there, the message could be more precise
I found no errors, but there are 500 suppressed errors lying around, including 12 that provide an automatic fix. To fix them, run

    elm-review --fix --no-suppressions

I don't think we should mention suppressed errors when there are non-suppressed errors being reported. It's likely just distracting.

Granularity

As you may have noticed, you can't disable a rule for only a section of a file as the smallest item you can customize here is a file. The issue with this is that you can start with a file with N suppressed errors, and when you want to fix the issues, notice that they have grown to a number bigger than N since the time of suppression.

We could ignore errors at given locations (Rule.temporarilySuppressErrors [ ( "src/A.elm", [ { start = { row = 1, column = 1 }, end = { row = 2, column = 1 } } ] ) ]), but since files change and elm-review can't compare with previous versions of the source code (I imagine that including Git in the analysis would make several things very complex), if a reported line gets moved because another line was added 100 lines prior, you'd get a false positive. It's technically possible, but I think it would make things very annoying.

We could provide an API that allows for both (example below), but that does not solve the flakiness of the suppressions.

config : List Rule
config =
    [ Some.Rule.rule
        |> Rule.temporarilySuppressErrors
            [ forWholeFile "src/A.elm"
            , onlyInSomeLocations "src/B.elm"
                  [ {start={row=1, column=1},end={row=2, column=1}}
                  ]
            ]
    ]

An alternative solution, which I think would be the way to go, is to count the number of errors in each file; if the number of errors changes, we would require it to be updated (and we'd forbid specifying 0).

config : List Rule
config =
    [ Some.Rule.rule
        |> Rule.temporarilySuppressErrors
            [ { path = "src/A.elm", nbErrors = 1 }
            , { path = "src/B.elm", nbErrors = 5 }
            ]
    ]

This would not help prevent exchanging errors (one gets fixed while another gets introduced), but it would at least help detect when one gets introduced and the codebase gets worse.

I imagine that if the count increases, we'd have to display all the suppressed errors for that file so that the user could figure out what was introduced. Not great, but better than not showing anything.

Making this easy to use

There would not be an equivalent to ignoreErrorsForDirectories. While it makes sense to have ignoreErrorsForDirectories to disable elm-review on generated and vendored code, temporarily suppressing an entire folder would make it much harder for users to gradually fix the errors.

I have not personally felt the pain of adding the list of ignored files into the configuration, because I am good enough with editors to extract the file names from the errors and format them in the required way. But I imagine that not to be the case for most people.

I think it would be nice if the CLI helped with that. I am imagining a flag that would generate the suppression code for you. Or even modify the configuration, but that's a way more daunting task.

An example (again, just temporary names and texts)

> elm-review --generate-suppressions
  [ Some.Rule.rule
      |> Rule.temporarilySuppressErrors
          [ "src/A.elm"
          , "src/B.elm"
          ]
  , Some.Other.Rule.rule
      |> Rule.temporarilySuppressErrors
          [ "src/A.elm"
          , "src/C.elm"
          ]
  ]

The tricky part would be to provide a way that makes it as easy as possible to just copy-paste this into the file. At Humio, parts of our configuration look like

config : List Rule
config =
    [ Some.Rule.rule
        |> Rule.ignoreErrorsForFiles ignoredForSomeRule
    ]

ignoredForSomeRule : List String
ignoredForSomeRule =
    [ "src/A.elm"
    , "src/B.elm"
    ]

Which doesn't lend itself to easy copy-pasting of what I suggested above.

The CLI could suggest using this flag when the number of shown errors reaches a certain threshold (20?).

Additional alternative solution: Integrate with Betterer or do the same thing

I haven't yet mentioned @phenomnomnominal's Betterer as a solution. It is a tool that aims to solve the same issue in a more generic way. I only talk about it now because I believe it tries to solve the issue in a similar manner, and it's worth comparing. That said, I have not yet been able to use it, so my understanding is likely a bit shallow...

From what I understand, it either snapshots the results from ESLint (for the ESLint integration), stores that file in the project for it to be committed, and then checks that nothing got worse than before. I have no clue how the location gets matched with the previous run.

That or it simply counts the number of errors, but I could not figure out whether it's the total number of errors or the number of errors per file 🤷‍♂️

Summary

The proposed solution

  • 👍 allows you to enable a rule gradually
  • 👍 rules that are not being suppressed in a file will be enforced in that file (not the case with warnings)
  • 👍 keeps all the errors that need to be fixed in a single place
  • 👍 would make it easier to hunt and monitor suppressed errors
  • 👍 would not introduce features with nasty side-effects, as far as I'm aware
  • 👍 helps differentiate between files that have been cleared as okay for infringing the rules and those that still need to be fixed
  • 👍 would make sure no additional errors show up
  • 👎 would not help against exchanging errors

What I'm looking for

I'm looking for general feedback on this feature. Do you think this is a good idea, a bad idea? Do you think you'd use it?

What do you think about the granularity issue? Can you think of a better solution?

It's not my main concern now, but I could use some bikeshedding on the names here (for which I have not yet put any effort into).

Do you think we should always show problems related to suppression errors (new errors showed up, some errors got fixed), or only if there are no more non-suppressed errors?

RFC: Context creator

I have been playing (and doing a lot of refactoring to make it work) with creating rules where the context can be initialized with some pre-computed data. This is what it would look like:

rule : Rule
rule =
    Rule.newModuleRuleSchemaWithContextCreator "NoDebug.TodoOrToString" contextCreator
        |> Rule.withImportVisitor importVisitor
        |> Rule.withExpressionEnterVisitor expressionVisitor
        |> Rule.fromModuleRuleSchema

contextCreator : Rule.ContextCreator () Context
contextCreator =
    Rule.initContextCreator
        (\metadata () ->
            { moduleName = Rule.moduleNameFromMetadata metadata
            , moduleNameNode = Rule.moduleNameNodeFromMetadata metadata
            , isInSourceDirectories = Rule.isInSourceDirectories metadata
            }
        )
        |> Rule.withModuleMetadata

The API for the "context creator" is similar to the JSON decode pipeline. Every with adds an argument to the function.

Right now, the available information is:

  • Metadata about the current module, containing
    • The node to the module name (just like we do in fromProjectToModule)
    • The module name (in case you don't care about the node)
    • isInSourceDirectories, a boolean that tells you whether the module is in the source-directories. This will allow giving different behavior for tests/review code.
  • The moduleKey, like the one in fromProjectToModule (I still need to forbid this when you are in the context of a module visitor)

Some other pre-computed data that I am thinking of:

  • A lookup table to know the "real" module name of a type/value, based on the range of the Node. Ideally, this would replace elm-review-scope.
  • The modules' signatures (this might actually be passed through a visitor, like the dependencies visitor, not sure yet).
  • Type inference, in a similar way to the lookup table. Not for today though 😅

The benefit of this system is also that we can compute most of this once, before rules are run, instead of once for every rule that wants that information (and needs to collect it on their own).
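
For illustration, here is a sketch (not part of the actual proposal's code) of an expression visitor that uses the context built by the contextCreator above, assuming the usual imports (Elm.Syntax.Expression, Elm.Syntax.ModuleName, Elm.Syntax.Node, Review.Rule as Rule); the Debug.todo check is just a stand-in for whatever the rule actually reports:

type alias Context =
    { moduleName : ModuleName
    , moduleNameNode : Node ModuleName
    , isInSourceDirectories : Bool
    }


expressionVisitor : Node Expression -> Context -> ( List (Error {}), Context )
expressionVisitor node context =
    case Node.value node of
        Expression.FunctionOrValue [ "Debug" ] "todo" ->
            if context.isInSourceDirectories then
                -- only report in modules from the source-directories, not in tests or review code
                ( [ Rule.error
                        { message = "Remove this use of Debug.todo"
                        , details = [ "Debug.todo crashes at runtime, so it should not end up in production code." ]
                        }
                        (Node.range node)
                  ]
                , context
                )

            else
                ( [], context )

        _ ->
            ( [], context )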

Why not give all the information at once?

  • That would be a lot of parameters/fields, most of which you'll ignore. And ignoring parameters at the very least is annoying
  • I expect we'll add more as time goes, and adding a parameter would mean a breaking change
  • Some of these might be expensive to compute (type inference, https://github.com/jfmengels/elm-review/issues/15, ...), and we could prevent computing them if we notice that none of the rules require that information (which we can with this system).

A few points:

  • For project rules, you can also choose to have fromProjectToModule and fromModuleToProject use this context creation construct.
  • I am looking to get rid of that () in the context creator. That is meant to be the projectContext, but for module rules that is simply (). It's annoying, but we'll see if I can manage to get rid of it 🤷
  • The names I chose are probably not great, I'd love to get input on them.
  • I wish I could break some of these functions (context creation, metadata, ...) into different modules, but unfortunately, that would mean a breaking change because of an issue in the compiler... 😞 So I'll leave that for later.
  • Under the hood, all module rules are transformed to project rules. I now only have one implementation of rules to maintain.
  • It could be interesting to have some of this information be dynamic (in other words, you provide a hook), and that hook would only be computed once for every rule that uses the same one. I'm not sure that will turn out great though, so it's more likely than not that this won't get in.

What I am looking for

Feedback

I am looking for feedback on this feature. Do you think this is useful, great, cumbersome, overkill?

Bikeshedding

I am looking for better names for the functions I've shown here. Note that I might need a separate context creator for module visitors and for project visitors.

---- Review.Rule - MINOR ----

    Added:
        type ContextCreator from to
        type Metadata 
        initContextCreator : (from -> to) -> Review.Rule.ContextCreator from to
        isInSourceDirectories : Review.Rule.Metadata -> Basics.Bool
        moduleNameFromMetadata :
            Review.Rule.Metadata -> Elm.Syntax.ModuleName.ModuleName
        moduleNameNodeFromMetadata :
            Review.Rule.Metadata
            -> Elm.Syntax.Node.Node Elm.Syntax.ModuleName.ModuleName
        newModuleRuleSchemaUsingContextCreator :
            String.String
            -> Review.Rule.ContextCreator () moduleContext
            -> Review.Rule.ModuleRuleSchema {} moduleContext
        withMetadata :
            Review.Rule.ContextCreator Review.Rule.Metadata (from -> to)
            -> Review.Rule.ContextCreator from to
        withModuleContextUsingContextCreator :
            { fromProjectToModule :
                  Review.Rule.ContextCreator projectContext moduleContext
            , fromModuleToProject :
                  Review.Rule.ContextCreator moduleContext projectContext
            , foldProjectContexts :
                  projectContext -> projectContext -> projectContext
            }
            -> Review.Rule.ProjectRuleSchema
                   { schemaState
                       | canAddModuleVisitor : ()
                       , withModuleContext : Review.Rule.Required
                   }
                   projectContext
                   moduleContext
            -> Review.Rule.ProjectRuleSchema
                   { schemaState
                       | hasAtLeastOneVisitor : ()
                       , withModuleContext : Review.Rule.Forbidden
                   }
                   projectContext
                   moduleContext
        withModuleKey :
            Review.Rule.ContextCreator Review.Rule.ModuleKey (from -> to)
            -> Review.Rule.ContextCreator from to

Does it even make sense to have a Metadata field, when we could do withModuleName, withModuleNameNode, withIsInSourceDirectories? I am thinking this might be easier and I am not sure what other fields I might add later to the metadata. Also, I think it's better for re-runs that rules don't store the Metadata itself inside the context (but not sure that will impact anything yet). I am leaning towards that at the moment.

Should ContextCreator be named ContextBuilder, or something else entirely?

People trying it out

Try it out on your rules, and let me know what you think. Let me know if you find nice patterns that we can suggest using in the documentation. I think the easiest way to try it out for now is to check out this repo and create a rule in the tests.

You can find the documentation here: https://elm-doc-preview.netlify.app/?repo=jfmengels/elm-review

Check code examples in documentation

What the rule should do:
Find code examples in function and module documentation and verify that they're syntactically correct (with a few exceptions) and that the functions and types they use exist.

This is similar to https://github.com/stoeffel/elm-verify-examples, but where that tool checks that the code computes a specific value, this rule focuses on whether the example will (probably) compile.

What problems does it solve:
I've spent a lot of time double-checking that the code examples in packages I've written actually work. Usually I end up missing something after I've made API changes, making the examples a hindrance to newcomers.

Example of things the rule would report:

Example 1:

{-| My function. 
    
    numberTwicer 4 -- 8

-}
numberDoubler a = a * 2

Here we changed the function name so numberTwicer no longer exists. We know that this is a line of code and not a comment because it's indented 4 spaces.

Example 2:

{-| My function. 
    
    numberTwicer MyFunction.four -- 8

-}
numberDoubler a = a * 2

Here we are using MyFunction.four without importing it. This rule would offer this fix:

{-| My function. 

    import MyFunction    

    numberTwicer MyFunction.four -- 8

-}
numberDoubler a = a * 2

Example of things the rule would not report:

Example 1:

{-| This isn't a code example so it shouldn't get checked. -}
foo = 0

Example 2:

{-| 

    numberDoubler 0 -- 0
    numberDoubler 1 -- 2
    numberDoubler 2 -- 4

 -}
numberDoubler a = a * 2

Multiple expressions separated by newlines are valid.

Example 3:

{-| 

    import MyModule

    numberDoubler 0 -- 0
    numberDoubler 1 -- 2
    numberDoubler 2 -- 4

 -}
numberDoubler a = a * 2

Imports followed by multiple expressions separated by newlines are valid.

Example 4:

{-| ...

## Failure

    port module Main exposing (main)

    import Html
    import Json.Encode as Encode


    -- Port `action` is never used.
    port action : (Encode.Value -> msg) -> Sub msg

    port alarm : Encode.Value -> msg

    -- Port `alarm` is never used, because `play` is never used.
    play : Cmd msg
    play =
        alarm (Encode.string "play")

    main =
        Html.text "Hello"

-}

Normal module code is also allowed.

When (not) to enable this rule:

This rule is probably going to end up being pretty complex in order to handle different use cases. There are probably some that will be too difficult to support or that conflict with other use cases. In those situations you wouldn't want to use this rule.

I am looking for:

  • Code examples where it's not clear whether the rule would consider them invalid or not. There are probably a lot of edge cases something like this would need to handle.
  • Any thoughts in general.

Show in the documentation when a rule is autofixable

I see that a few static analysis tools indicate whether a rule is fixable or not.

I don't think a rule should always provide fixes, but when it does, I think it's nice to highlight it, as that would make the user quite happy.

Some Prior art

ESLint

On the rule listing:
(screenshot of the ESLint rule listing, showing a wrench icon next to fixable rules)

On the rule page:

The --fix option on the command line can automatically fix some of the problems reported by this rule.

(again with the wrench symbol)

The docs do not indicate what the fix will do or in which situations it will (not) be enabled.

Textlint

(I had never heard of textlint before today by the way, but it's probably worth looking into).

Rules seem to indicate that they are fixable, and there is a check mark next to the rule name in the rule list

HLint

HLint's rule list indicates when rules are NOT fixable. It looks like there is a lot of work on making most rules fixable, so that makes sense. Even the unfixable rules indicate a "suggestion" of what the code should look like instead of what was detected. I guess it's more of a "we can't autofix" situation here.

Rome frontend

Rome generates documentation and includes a real report by running the rules on example files (this is pretty cool by the way, though it's worth noting that there are no custom rules, which simplifies the task).

The generated report shows the suggested fix by default.

Exploration

I'd like to explore nice ways to show that a fix is available, and if possible to enforce this convention.

I think we could detect whether a rule proposes a fix by checking whether its file imports Review.Fix. If it does, then we can enforce/autofix adding that information in the README and in the rule's documentation.

The question is then: how should we show it?

I imagine that for some rules it's easiest and best to show the fixed results in the examples directly. Something like

List.map (\v -> v.foo)
-- would be fixed to
List.map .foo

For others, fixes might be more difficult to show, and therefore a simple mention somewhere that the rule provides automatic fixes is enough. On my end, I think I'll try to add such examples to my repos and see what comes out of that.

Dead link

The link to the Review.Rule file in the "Write your own rule" section is dead.

Integrate with the compiler in order to show compiler errors then review errors

I have been reading Lessons from Building Static Analysis Tools at Google recently, which I found to be a really great resource on static analysis.

One of the things they talk about and had great success with is a tool for Java that they made and called Error Prone, which extends the Java compiler (javac) with additional checks. As they say it:

Error Prone …

  • hooks into your standard build, so all developers run it without thinking
  • tells you about mistakes immediately after they’re made
  • produces suggested fixes, allowing you to build tooling on it

I think that the Java compiler allows being configured with additional checks, but Error Prone might also be a separate tool that calls the compiler and hooks into its internals. I'm unclear about the details here.

Anyway, a lot of people don't want to run additional tooling (some of my colleagues for instance, which is also confirmed by the Google article), and only follow what the compiler and maybe their editor tell them.

Idea

What I am thinking of is that we could have a CLI that mostly mimics the behavior of elm make. I don't yet know whether this CLI should be provided by elm-review itself under a new flag, a new subcommand or a different binary, or whether it should be a new tool.

This is roughly what I expect it to do:

  1. Run elm make with the elm make arguments from the user.
     • If the compilation fails, we display the errors just like elm make does, and we exit the process.
     • If the compilation succeeds, we withhold the generated HTML/JS file in memory instead of writing it to where the user would expect.
  2. Run elm-review with the elm-review arguments from the user.
     • If the review returns errors, we display the errors just like elm-review does, and we exit the process.
  3. Create the file generated by elm make.

This would work well in a watch mode like elm-review --watch and elm-live already provide, as far as I understand from elm-live. Instead of exiting the process, we would then only interrupt this process, and start it again when a file gets changed. I also think that the current behavior would work when combined with --fix and --fix-all.

This makes inherent sense to me, since one of elm-review's assumptions is that the analyzed project compiles anyway, so that we (rule authors) can avoid doing a lot of unnecessary checks that the compiler already does. Put differently, you should not follow elm-review reports if the compilation fails, because you may get false positives.

For editors, this makes a lot of sense too. We discussed this at some point with @klazuka with regards to integrating elm-review in IntelliJ, and we figured it would make sense to only display the elm-review errors if there were no compilation errors.

I don't know if this would make it easier, but editors might then integrate elm-review by re-using their current integration with elm make, but instead of running elm make, they run elm-review in the proposed way (only if the user checked some box saying they wanted to use elm-review this way?). The JSON output of elm-review already resembles the output from elm make anyway, and we can adapt it if necessary. @klazuka @razzeee I'd love to know if you think this makes sense.

An issue with this is that even though some of the rules in a configuration make a lot of sense to enforce at compilation time, like detecting invalid regexes or noticing problematic code structures, not all rules do, like the rules that detect unused code. Some workarounds I can think of for that are:

  • Adding a flag that does not prevent the generation of the compiled file when there are review errors. The downside is that it may look like there are a lot of errors that you will at least ignore temporarily, and cognitively this might be hard on the user.
  • Using a different configuration for when you are developing and when you are cleaning up (which would be the regular one). This is not great because that makes the ergonomics for editors quite complicated, and because the people who don't want to do that extra tooling step won't do this anyway.
  • Categorizing each rule, and allowing elm-review to only/not run some categories. I see this happening in quite a few linters, but I imagine that they did that for other reasons (I would love research/insight into this). I would prefer avoiding this kind of option though, especially as that would mean needing to figure out useful categorizations, which I doubt we'd get right from the start.

Alternative solution: forking the compiler

When I came up with this idea, I originally thought of forking the compiler and adding the elm-review capabilities to it. This would have the benefit of giving us more information like type inference and not having to re-parse the source code. But that would be a huge endeavor, especially since I don't know any Haskell and am unfamiliar with the compiler codebase.

This would be pretty cool but would also have some downsides such as

  • Tests would change a lot: Since we would need to spawn a command to analyze, we could not write tests using elm-test. Likely, the result would be something like creating one test file per test like what Scalafix does. Or since elm-review works with projects: one folder per test. The ergonomics would be pretty terrible IMO compared to what we would have.
  • We couldn't keep --fix-all really fast (it finds and applies all fixes in a single function call), because we'd have to interrupt the process and spawn the compiler to get the information we'd want.
  • Web demos would not be possible or as easy as currently.
  • elm-review rules would become their own ecosystem, only working with this tool. You would not be able to integrate elm-review checks in other tools that can't spawn commands (I know of an attempt to integrate it into elm-ui-explorer for instance).
  • Keeping up-to-date with new versions of the compiler would also become a huge chore. Seeing that Lamdera is still using a version of 0.19.0, I don't want to imagine the work we'd need to do.

The type inference information would be cool though 😅

Feedback

Please tell me what you think of all this!

elm-review cannot parse {}

Describe the bug

If the Elm code contains an empty record, elm-review fails to parse the file and blames the user.

SSCCE (Short Self-Contained Correct Example)

x = {}

Steps to reproduce the behavior:

  1. Add the SSCCE code to any elm module
  2. Run elm-review on the module

Expected behavior
elm-review should correctly parse valid Elm code, or at least not blame the user.

Tried get started, got strange error

c:\data6\cs\sss6\sss6elm>
c:\data6\cs\sss6\sss6elm>npx elm-review init
npx: installed 1 in 1.619s
Path must be a string. Received undefined
C:\Users\matti\AppData\Roaming\npm-cache_npx\26080\node_modules\elm-review\bin\elm-review
Unexpected token {

Find correct way to reference function or type

Sometimes I want to generate some code containing a call to a function or type that's defined in another module. Determining how to qualify that function or type is tricky though. Is it exposed directly (no qualification is needed) or is there an import alias I should use? Maybe it's a function from an implicitly imported module like Platform.Cmd? Maybe it's not imported at all and I can't reference this function or type.

Instead, elm-review could provide something like this:
getQualifiedReference : ModuleNameLookupTable -> ModuleName -> FunctionOrValue -> Maybe ModuleName
This function takes the fully qualified module name for the function or type and then returns the version that accounts for import aliases and exposings. Nothing is returned if the function or type does not exist or there is no import for it.

Disallow referencing functions and types that are not yet defined

What the rule should do:
Disallow referencing functions and types that haven't been defined yet when reading the document top to bottom. This is inspired by how F# works. One difference though is that F# has a rec keyword to allow for defining functions and types that have mutual references. Since there is no such keyword in Elm, any functions or types with mutual references would be exempt from needing to be placed before/after each other.

Example of things the rule would report:

Example 1:

type alias A = { b : B }


type alias B = Int 

A has a reference to B which is defined later in this module. Move the type declaration for B to somewhere before A.
Automatic fix:

type alias B = Int 


type alias A = { b : B }

Example 2:

a = b + 1


b = 1

a has a reference to b which is defined later in this module. Move the function declaration for b to somewhere before a.
Automatic fix:

b = 1


a = b + 1

Example of things the rule would not report:

a value = b value + 2


b value = a value + 1

These functions both reference each other, so their relative order doesn't matter.

When (not) to enable this rule:

People who find this restriction annoying, or who don't find that it makes it easier to navigate code, shouldn't enable it. I'm personally unsure whether I like it or not, but I think it would be a good experiment to see how well this works in Elm.

I am looking for:

  • What other people think about this restriction. Has anyone worked with F# and found this restriction useful/annoying there?
  • Is this feasible to do in elm-review?

No functions in specified types

What the rule should do:
The rule will be provided a list of types and check if any of those types contains a function. Any type referenced by those types must not contain functions either. This is a stopgap solution until Elm supports an equatable typeclass which will automatically enforce this behavior.

Example of things the rule would report:

module Main exposing (..)

type alias Model = {
    a : Int -> Int
}

Model will be marked as an error (assuming we've configured the rule to check for Main.Model).

module Main exposing (..)

import Image exposing (Image)

type Msg
    = NoOp
    | LoadedImage Image

Msg will be marked as an error (assuming we've configured the rule to check for Main.Msg) because Image (from justgook/elm-image) contains a function. This is largely what this rule is intended for, sparing the user from having to go through the internally defined types in packages checking if there's a function in some unexpected place.
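
A hypothetical configuration shape for this proposal (NoFunctionsInTypes and its list-of-type-names argument are made up for illustration, not an existing package):

module ReviewConfig exposing (config)

import NoFunctionsInTypes
import Review.Rule exposing (Rule)


config : List Rule
config =
    [ -- check that these types (and everything they reference) contain no functions
      NoFunctionsInTypes.rule [ "Main.Model", "Main.Msg" ]
    ]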

I am looking for:

  • Mostly just cataloguing a rule that requires access to the source code in packages in order to work

Disallow packages with outdated minor or patch versions

What the rule should do:
Check elm.json to see if any packages are out of date. In this context, "out of date" means only that there is a released version of the package with the same major version but a larger minor or patch version.

When (not) to enable this rule:

In the rare cases where a patch or minor change introduces a breaking change to the logic of an existing function

I am looking for:

  • Is this rule useful? I personally like that it consolidates a feature of elm-json into elm-review. The fewer elm tools I need to install the better!
  • What should the rule be called?

I don't think this rule is possible to implement right now since elm-review doesn't give us information about packages. This issue is more of a data point for what's possible with more elm-review features.

Add a way to specify a limit to how many fixes are batched together when running --fix-all

A growing frustration I have with automatic fixes is that --fix is kind of too slow/tedious to apply, while --fix-all can take too much time when there are a lot of issues to fix.

In my current case, I'm writing a new rule and testing the automatic fixes on a codebase with over 2000 errors to make sure that the fix won't introduce compiler errors. --fix-all takes a long, long time to run and would show me too many instances where the fix introduced errors. --fix works well, but when errors become rarer it takes me quite a while to get to the next error.

I propose a way to set a batch size when running --fix-all, which I imagine would look like elm-review --fix-all=20 or elm-review --fix-all 20. I would prefer that only the former work, especially with regard to backwards compatibility (the latter could swallow arguments that would previously have been accepted), but I am not sure whether the argument parser we use allows making a distinction between the two. If we can't figure out a way to do that, then we might need to add this only in the next major release or by using a different flag. What do you think?

When run with such an argument, elm-review would apply N fixes (20 in the previous example), give the same report as what you'd get with --fix-all at the moment, then when accepted, start over and apply fixes up to the same limit.

I am not entirely sure what we would need to do if the user refuses the applied fix. We can't put these fixes in a list of refused fixes like we do with --fix because only the first fix is sure to have the same position in the source code as before (the first fix might change the position of the errors reported after it).
My current thinking is that refusing the suggested fix will cause elm-review to continue as if it didn't run in watch mode (until a change is being made if --watch is used), just like the current behavior of --fix-all.

I think this would help with introducing a new rule to a codebase too. When there are too many errors, running --fix-all takes a long time and produces a big diff. Limiting the size of the batch makes for a more reviewable diff, which you can look at, accept, commit before continuing in the same manner. You can also stop whenever you want, so you could potentially do this over multiple days.

Namespace maintenance

Looking at elm-review rules, it would be handy to establish a namespace for rules, something like Review.Rule.Xyz.
Then we can easily search for rules by namespace prefix.
And second, rules get automatically categorised: if we spot "Review.Rule.NoAlways", we know that it is a review rule.


What do you think?

Analyze code from dependencies

The problem

By only analyzing the project's own source code, elm-review encounters a few knowledge gaps when we need to know implementation details of code that comes from dependencies:

  • https://github.com/jfmengels/elm-review-rule-ideas/issues/7: Detect functions in a type, to prevent crashes during Model comparisons and to keep the Elm debugger working.
  • jfmengels/elm-review-rule-ideas#8: Detect/evaluate whether the code can enter a dangerous case.
  • NoUnused.CustomTypeConstructors also has a use-case for detecting the phantom types in the types from dependencies, as that information is lost when we encounter opaque types in the docs.json.
  • If we want to detect that Html.lazy is used correctly, we sometimes need to know whether a function creates a new reference to a value or not. To know that, we need to look at the implementation of functions, including those from dependencies.

What would be needed

  • Need to analyse the dependencies in the elm-review API
    • A simple solution would be to have a visitor that just takes all the dependencies information, and let the user use that.
    • A much better solution would be re-using the visitor pattern that we have.
      • We could use project visitors that can't return errors, but just fetch information.
      • We might need another layer of ModuleContext/ProjectContext, since you would need to analyze a dependency, store the resulting context (WorkspaceContext? ProjectContext would have fit nicely here...), then kickstart the analysis of a dependency using that WorkspaceContext with a system like withModuleContext but on a higher-level.
  • We would need to handle naming collisions, since there could be several files with the same name. Make sure that project code can't import modules from indirect dependencies.
  • Provide a way to give dependencies information in a Project
  • Provide a way to create dependencies for tests (in a non-painful way...)
  • Provide a way to test the encoding/decoding of the dependencies analysis
    • We could re-run a rule that uses the dependencies information after encoding/decoding the analysis, and check that all results are equal (and the context too?)
  • Decide what to do if the decoding of the cache fails. Abort elm-review and display an error?
    • Maybe we could try to decode it before saving it to the file-system. If that fails, then... we throw an error? We could ignore the problem but then performance will be quite impacted.
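
To make the visitor-based option above more concrete, here is a rough and purely hypothetical sketch of what the API could look like, mirroring the style of the existing project-rule builders. None of these functions exist today: withDependencyModuleVisitor and withWorkspaceContext are made-up names.

rule : Rule
rule =
    Rule.newProjectRuleSchema "SomeRuleNeedingDependencyCode" initialWorkspaceContext
        -- Hypothetical: visit every module of every dependency. These visitors
        -- would not be able to report errors, only to collect information.
        |> Rule.withDependencyModuleVisitor dependencyModuleVisitor
        -- Hypothetical: fold the collected information into the context used to
        -- analyze the project's own modules, like withModuleContext but one level higher.
        |> Rule.withWorkspaceContext toProjectContext
        |> ...
        |> Rule.fromProjectRuleSchema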

Performance improvements

  • If no rule in the configuration requires access to this information, don't load/fetch/access it. This would require some introspection.
  • Cache the result
    • Ask the rule author for a way to return a Json.Encode.Value. (Bytes would be nice too?)
    • The cache would be stored under the hash of the configuration, because it is configuration dependent. We could create a hash of the combined set of dependencies and versions + rule name, and store it in FS using that information.
    • Each dependency would probably have its own cache entry, so that we don't have to recompute every dependency if a single one is changed. We could then combine them just like we do for project contexts. It would also help prevent recomputing packages like elm/core multiple times.
  • If the cache for this set of dependencies and rule exists, load it, but don't try to load/download the dependencies data.
  • Discard the dependencies data once we don't need it anymore (in watch mode), so that it can be garbage-collected.
  • Clean up outdated cache files after some time? I guess it depends on how much information is usually stored in the resulting context. I am guessing not much, but I can't be sure.
  • As I mentioned above, the cache would be stored under the hash of the configuration, meaning that if you change a single thing in your configuration (ReviewConfig, elm.json or a custom rule), then we need to re-run this computation. That is because we have few ways of knowing whether that rule's behaviour has changed (its implementation, the implementation of one of its imports, ...). It would be nice to fingerprint this cache better, so that we recompute it less often, especially when the rule is non-configurable and comes from a package.

Fetching dependencies

Dependencies may not be available. That should be relatively rare though, so the happy path that we should try first is to fetch the data from the file system in the ELM_HOME.

If some dependencies are not available, we could do either of two things:

  1. Warn the user that they need to compile first
  2. Run elm make ourselves

Running elm make ourselves

We can simply run elm make; we don't need a target file. If the project is a package, that will be sufficient to run the compiler. If the project is an application, that is not sufficient for the compiler to run, but thankfully the Elm compiler downloads the packages beforehand (if that behaviour ever changes, we should fall back to displaying an error message). We then ignore the resulting error.

If there is no input (for applications), then the downloads were successful and we get an error with the title NO INPUT. If there is no network access, we get PROBLEM LOADING PACKAGE LIST. If the user messed up the dependencies by hand, we get INCOMPATIBLE DEPENDENCIES.

Notes

This is definitely not light work :D
I think it is a lot of work for a small benefit. That said, once we have it, elm-review would potentially become much more powerful. I kind of like elm-review becoming more "omniscient".

I am worried about the performance impact this would have, as this would add a lot of additional work, especially in the initial runs and when the configuration or dependencies change.

Note that even with this, we still have a few knowledge gaps:

  • What happens in non-Elm files (JS, CSS, HTML, list of images in the project, REST/GraphQL schemas, ...)
  • What happens in native modules from core packages.

I don't know about the second gap, but I am interested in exploring the non-Elm files realm. Please open an issue if you want to discuss those two points; I just wanted to point out that even with this, there would still be some knowledge gaps (though a lot fewer!).

Report misuses of Html.Lazy

What the rule should do:

Detect cases where Html.Lazy functions will miss the cache.

What problems does it solve:

Html laziness is a very hard thing to get right, and a very easy thing to break afterwards. Since it is just an optimization, having it work or fail won't make a visual difference and won't be noticeable unless the page is very "heavy". The best way I found to discover whether I used Html.Lazy right is to put Debug.log calls in the lazy-ed function, and check whether I see them printed out a lot.

After I am done making it work, I remove the Debug.log calls because they can't stay in my production code, and then I don't have any safety net making sure my setup won't break again, which it can easily do since laziness is fragile.

Example of things the rule would report:

Note that this applies to all variants of lazy: lazy, lazy2, ... lazy9. We could also report for elm-ui's and elm-css's lazy modules, either out of the box or through configuration (there are other alternatives, and we probably don't want to hardcode them all, plus maybe it will be expensive computation-wise).

The lazy-ed function's reference must be the same
-- BAD
view value = 
  Html.Lazy.lazy (viewAddition 100) value -- Computing a new reference for every call of `view`

viewAddition a b =
  text <| String.fromInt (a + b)

-- GOOD
view value = 
  Html.Lazy.lazy2 viewAddition 100 value

viewAddition a b =
  text <| String.fromInt (a + b)
The references to the arguments must be the same
-- BAD
view list =
  let
    -- Computing a new reference for every call of `view`
    transformedList = List.map identity list
  in
  Html.Lazy.lazy viewHelper transformedList

viewHelper list =
  -- show something

-- GOOD
view list =
  Html.Lazy.lazy viewHelper list

viewHelper list =
  let
    transformedList = List.map identity list
  in
  -- show something

-- GOOD
view : List Int -> Html msg
view list =
  -- Identity doesn't create a new reference, so we need to detect
  -- which functions create new references and which don't
  Html.Lazy.lazy viewHelper (identity list)

-- GOOD
view : Int -> Html msg
view int =
  -- Primitives are compared by value, not by reference, so this is okay
  Html.Lazy.lazy viewHelper (int + 5)

-- GOOD
type Enum = EnumValue
constant = []
view : Model -> Html msg
view model =
  -- Constants are okay
  Html.Lazy.lazy3 viewHelper model.value constant EnumValue

-- BAD
view _ =
  let
    -- Creates new reference
    newRecord = {}
  in
  Html.Lazy.lazy viewHelper newRecord

Ideally, we should look further up the call chain (until we reach main) to see if a new reference is created

-- BAD
view list =
  let
    transformedList = List.map identity list
  in
  viewThatWillUseLazy transformedList

viewThatWillUseLazy list =
  Html.Lazy.lazy viewHelper list
Values must not have been destructured

This was very surprising to me, but @Arkham talked about it in https://juliu.is/performant-elm-html-lazy/ and from trying it out, it seems to be true (though I have no clue why).

-- BAD
view {value} =
  Html.Lazy.lazy viewHelper value

-- GOOD
view params =
  Html.Lazy.lazy viewHelper params.value

When (not) to enable this rule:

This rule won't be useful if you don't use Html.Lazy anywhere.

I am looking for:

  • Feedback
  • Fact-checks on what I said about when laziness would be defeated, and find other things we should report
  • Someone to implement it (with or without me. If this works out well, I guarantee you glory!)

I am sure this is a very complex problem to solve compared to most rules out there. I think a good beginning would be to detect when the lazy-ed function is being recomputed every time, and to support more complex cases iteratively. In this case, I think that false negatives are okay (maybe inevitable here?), but we really don't want false positives in elm-review.
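
As a rough illustration of that first step, here is a minimal sketch (not the full rule) that only reports when the first argument of a fully qualified Html.Lazy.lazy call is not a plain reference. A real rule would also need to resolve imports and aliases instead of matching [ "Html", "Lazy" ] literally, cover lazy2...lazy9, and look at the data arguments.

import Elm.Syntax.Expression as Expression exposing (Expression)
import Elm.Syntax.Node as Node exposing (Node)
import Review.Rule as Rule exposing (Error)

expressionVisitor : Node Expression -> List (Error {})
expressionVisitor node =
    case Node.value node of
        Expression.Application (function :: lazyViewFunction :: _) ->
            case ( Node.value function, Node.value lazyViewFunction ) of
                ( Expression.FunctionOrValue [ "Html", "Lazy" ] "lazy", Expression.FunctionOrValue _ _ ) ->
                    -- A plain reference: it stays the same on every call, so the cache can work.
                    []

                ( Expression.FunctionOrValue [ "Html", "Lazy" ] "lazy", _ ) ->
                    [ Rule.error
                        { message = "This call to Html.Lazy.lazy will never hit the cache"
                        , details = [ "The first argument creates a new reference every time the enclosing view function is called." ]
                        }
                        (Node.range lazyViewFunction)
                    ]

                _ ->
                    []

        _ ->
            []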

I would also like to figure out in which package(s) this would fit best. I originally thought it could go in something like elm-review-performance (along with rules that check that tail-call optimization is working alright), but it might make more sense in something like elm-review-html, though I wouldn't know with which other rules.

Side-note: Detecting this problem has been one of my long-standing goals when working on elm-review. I would love this one to be implemented ❤️

Forbid using wildcard in pattern matching when it only replaces a single case

What the rule should do:

The rule should report an error when a case expression uses a wildcard _ pattern while there is only a single case left to handle.

What problems does it solve:

Having a wildcard makes it tricky to add new cases to custom type declarations. When the wildcard only replaces a single case, it adds unnecessary complexity.

Example of things the rule would report:

a =
  case foo of
    True -> 1
    _ -> 2

type A = B | C | D

b =
  case foo of
    B -> 1
    C -> 2
    _ -> 3

Example of things the rule would not report:

a =
  case foo of
    True -> 1
    False -> 2

type A = B | C | D

b =
  case foo of
    B -> 1
    C -> 2
    D -> 3

{-| _ covers multiple cases, which is ok -}
b2 =
  case foo of
    B -> 1
    _ -> 2

c =
  case foo of
    1 -> 1
    _ -> 2
d =
  case foo of
    [] -> 1
    _ -> 2

** "Edge-case" **

One situation where this rule can be a bit annoying is the following:

type Something = A | B
maybeSomething : Maybe Something
maybeSomething = Nothing
a = 
  case maybeSomething of
    Just A -> 1
    -- `_` covers both `Just B` and `Nothing`
    _ -> 2

We should technically have all the necessary knowledge to know how many cases are covered by _, but it will be hard work, especially when there are a lot of nested patterns.

Either we do the hard work of computing all of these, or the rule requires that you have a case for every custom type constructor, unless the _ covers two or more of them.
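
As a starting point, here is a minimal sketch that only handles the Bool special case, where the full set of constructors is known without any type information. The rule name and messages are placeholders; it uses the existing module-rule API.

import Elm.Syntax.Expression as Expression exposing (Expression)
import Elm.Syntax.Node as Node exposing (Node)
import Elm.Syntax.Pattern as Pattern
import Review.Rule as Rule exposing (Error, Rule)

rule : Rule
rule =
    Rule.newModuleRuleSchema "NoSingleCaseWildcard" ()
        |> Rule.withSimpleExpressionVisitor expressionVisitor
        |> Rule.fromModuleRuleSchema

expressionVisitor : Node Expression -> List (Error {})
expressionVisitor node =
    case Node.value node of
        Expression.CaseExpression { cases } ->
            case cases of
                [ ( firstPattern, _ ), ( wildcardPattern, _ ) ] ->
                    case ( Node.value firstPattern, Node.value wildcardPattern ) of
                        ( Pattern.NamedPattern { moduleName, name } [], Pattern.AllPattern ) ->
                            if moduleName == [] && List.member name [ "True", "False" ] then
                                [ Rule.error
                                    { message = "This wildcard only replaces a single case"
                                    , details = [ "Replace `_` with the remaining Bool constructor, so this case expression has to be revisited when the code changes." ]
                                    }
                                    (Node.range wildcardPattern)
                                ]

                            else
                                []

                        _ ->
                            []

                _ ->
                    []

        _ ->
            []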

When (not) to enable this rule:

Except for the edge-case I mention above, I think it would always be useful, so always enable? 🤷

I am looking for:

  • A good name
  • To know in which package(s) this would fit best. I am thinking elm-review-common.
  • General feedback: Is this a useful rule, or do you think it will be too often annoying, especially with regards to the edge case
  • What do you think should happen for tuple pattern matching, like when you pattern match ( Msg, Model )?

Report custom types with 1 constructor and without any data

What the rule should do:

Report when a custom type has a single constructor that does not hold any data.

What problems does it solve:

I found code that started like

type A = B | C Data

but got trimmed down by the elm-review-unused rules to type A = B, and that's where those rules stop reporting anything.
The model still has a field of type A, and there is code that uses it:

type alias Model = { a : A, ... }

view model =
  div []
    [ case model.a of
        B -> text "something"
    , if model.a /= B then
        text "not B"
      else
        text "B"
    ]

By reporting A and telling the user to remove it, we will force them to remove the a field and to simplify the needless branching that happens in the code.

Example of things the rule would report:

type A = B

Example of things the rule would not report:

When the type is used to fill in a phantom type variable.

type Thing = Thing
type OtherThing = OtherThing
type A a = A String

function1 : A Thing -> String
function2 : A OtherThing -> String

Such custom types are typical use-cases for phantom types.

I am wondering whether we should report these types when they're exposed as part of the module. I would like to think that yes, unless they are used to fill in a phantom type variable somewhere in this file, but I wonder whether there are edge cases where that would be ok.
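
A minimal sketch of the detection itself could look like the following. It deliberately does not handle the phantom-type exception above, the messages are placeholders, and it would be plugged in with Rule.withSimpleDeclarationVisitor.

import Elm.Syntax.Declaration as Declaration exposing (Declaration)
import Elm.Syntax.Node as Node exposing (Node)
import Review.Rule as Rule exposing (Error)

declarationVisitor : Node Declaration -> List (Error {})
declarationVisitor node =
    case Node.value node of
        Declaration.CustomTypeDeclaration { name, constructors } ->
            case constructors of
                [ constructor ] ->
                    if List.isEmpty (Node.value constructor).arguments then
                        [ Rule.error
                            { message = "Custom type with a single constructor that holds no data"
                            , details = [ "This type can only ever have one value, so the code that branches on it can be simplified and the type removed." ]
                            }
                            (Node.range name)
                        ]

                    else
                        []

                _ ->
                    []

        _ ->
            []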

When (not) to enable this rule:

I am looking for:

  • A good rule name
  • To know in which package(s) this would fit best. I feel like it would fit in elm-review-unused, as it's kind of a useless thing. What do you think?

Report functions in let expressions that do not depend on local values

What the rule should do:

Report functions in let expressions that do not depend on local values

Prior work: elm-analyse has a rule to forbid functions in let expressions, regardless of how they are used. https://stil4m.github.io/elm-analyse/#/messages/FunctionInLet
I don't like this rule and disabled it when I was using elm-analyse, because a function may use values from the enclosing scope (a closure) in order to keep the code that uses it simpler.

What problems does it solve:

It makes the enclosing functions much smaller and, in my opinion, much more readable.

I don't know if defining functions inside let expressions adds a performance cost because the functions would be redefined at every call.

Example of things the rule would report:

kebabCase str =
  let
    toLowerCase s = String.toLowerCase s
  in
  magicAlgorithm str |> toLowerCase

Example of things the rule would not report:

toLowerCase s = String.toLowerCase s

kebabCase str =
  magicAlgorithm str |> toLowerCase

func n =
  let
    -- depends on `n`
    func m = m + n
  in
  func 1

When (not) to enable this rule:

Only enable it if you care about this kind of cleanup? 🤷‍♂️

Questions I have

  • Should this be applied only on functions, or also on constants?

If so, in the following example secondsInMinute should be moved to the top-level.

toSeconds nbMinutes =
  let
    secondsInMinute = 60
  in
  nbMinutes * secondsInMinute

I'd say why not, but it might be too much? Note that point-free functions look like constants, especially without type annotations

  • In what package would you see this being available?

Preserve record fields order

What the rule should do:
Keep the order of the fields in a record constructor or record update the same as the order in the record definition.

Example of things the rule would report:

Example 1:

type alias Point = { x : Float, y : Float }

a = { y = 0, x = 0 }

a = { y = 0, x = 0 } would be automatically replaced with a = { x = 0, y = 0 }

Example 2:

type alias Point = { x : Float, y : Float, z : Float }

a = { x = 0, y = 0, z = 0 }

a2 = { a | z = 1, x = 1 }

a2 = { a | z = 1, x = 1 } would be automatically replaced with a2 = { a | x = 1, z = 1 }

I am looking for:

  • Is this useful?
  • Are there cases when you wouldn't want this?

Use `elm-review` as an information extraction tool

Idea

I find the elm-review API quite fun to use, and much more powerful than using grep or regexes. When a rule analyzes a project, it may collect contextual data that it will use to know whether it should report an error or not.

We could use the same API to collect data to be used for other purposes. For instance if you use CSS files and classes, you could extract the list of CSS classes currently being used in your application, and pass that to your CSS linter which would tell you to remove the unused classes. We could also have a rule that extracts the Markdown documentation of the whole project so that a script can create those files. You could then use that to publish the documentation in a custom way on your product website, or you can use a Markdown linter to report grammatical issues.

For the Markdown linting, we could also build those rules ourselves and they would be more helpful, but I’m sure that this would be far faster to build. And then that might inspire some to write the rules inside elm-review 🙂

You could also collect data to find statistics about your project and display them in a dashboard such as module coupling. Or for getting a feel for where to start cleaning an Elm application by detecting code smells: how many times is a primitive type being aliased, how often is Maybe.withDefault used, etc.

The proposal

elm-review will provide a new builder for Rule named withDataExtractor (open to naming suggestions) for project rules.

withDataExtractor :
    (projectContext -> Extract)
    -> ProjectRuleSchema schemaState projectContext moduleContext
    -> ProjectRuleSchema schemaState projectContext moduleContext

rule =
  Rule.newProjectRuleSchema "NoMissingSubscriptionsCall" initialProjectContext
      |> ...
      |> Rule.withDataExtractor extract
      |> Rule.fromProjectRuleSchema

You give it a function that takes your project context and returns an extract. I imagine that it will oftentimes be a Json.Encode.Value, but we'll go over that later.

extract : ProjectContext -> Review.Extract.Extract
extract projectContext =
    Review.Extract.json
        (Json.Encode.object
            [ ( "usedCssClasses", Json.Encode.list Json.Encode.string projectContext.usedClasses )
            ]
        )

We will not call this function in a normal elm-review run. The CLI will get a --extract boolean flag (open to naming suggestions) which will make the tool run as follows:

  • If there are review errors, it reports only the errors
  • If there are no errors, then we output the extracted data for each rule that extracts data.

When reporting as JSON, we can probably output both the errors and the extracted data.

If you built a review rule that only aims to extract data, it may not make sense for it to be able to report errors, and therefore it may not necessarily make sense for elm-review to do this. But there will be times where the rule runs into code that it will not accept or not be able to make sense of. If we take the example of extracting CSS classes out of the code, we can imagine a call to Html.Attributes.class with a dynamic value that is too complex for the rule to compute. In that case, the rule could report an error if you so wished.

It would be compatible with --fix and --fix-all where it would fix the errors, and when no fixes are found, do what I mentioned before.

For module rules

For project rules, once we have finished reviewing the project, we have a projectContext that we can pass to the data extractor. But for module rules we don't have a single result; we only have one module context per module, and those are internally discarded at the moment.

The easiest and most performant solution I think is to only support having a data extractor for project rules.

Extracted data output

I think that this might be where I have the biggest design issue.
My original thought was that we'd output a JSON object that would look like the following

> elm-review --extract
{
  "type": "review-errors",
  "errors": [],
  -- The fields above are what we normally display.
  "extract": {
    "Name.Of.The.Rule": {
      "usedCssClasses": [ "classname1", "classname2" ]
    },
    "Name.Of.Another.Rule": "..."
  }
}

This sounds like the easiest way to deal with multiple rules. An issue that could come up is that the name of the rule is not necessarily unique. You can enable a rule twice, and one valid use-case for that is if they have different configurations.

So if we key this by the rule name, we potentially override and lose data. I see a few ways around this:

  • The author of the rule makes the name change based on the given configuration. This will not be great for error reports, and there are mechanisms that depend on a rule having a static name that is the same as the module name it is in, so I'd like that to stay as is.
  • Instead of an object, we output a list and then let the user handle the conflicts. That will be quite hard for the user though...
[
  { "name": "Name.Of.The.Rule", "payload": {"usedCssClasses": [ "classname1", "classname2" ]}},
  { "name": "Name.Of.The.Rule", "payload": "other data" }
]
  • Not wrap the extracted data as JSON, but output the entries one after the other:
{ "name": "Name.Of.The.Rule", "payload": {"usedCssClasses": [ "classname1", "classname2" ]}}
{ "name": "Name.Of.The.Rule", "payload": "other data" }

Extracted data output, as plain text?

I think --watch mode could be quite useful if we simply displayed the extracted data whenever there are no errors, instead of reporting that there are no errors. That way, you could have some kind of "monitor" of the codebase that updates every time you save. To make this work, we need to be able to display plain text, since JSON would make such a monitor unreadable.

An example of a monitor: we are on a quest to add type annotations everywhere and would like to do the work in batches. We create a rule that counts the number of missing type annotations and returns the top 5 modules.

extract : ProjectContext -> Review.Extract.Extract
extract projectContext =
    Review.Extract.text
        (projectContext.modules
            |> List.sortBy .count
            |> List.reverse
            |> List.take 5
            |> List.indexedMap (\index item -> String.fromInt (index + 1) ++ ". " ++ item.name)
            |> String.join "\n"
        )
> elm-review --watch --config ./code-cleanup
Top 5 files with the most missing type annotations
1. Some.Module.A
2. Some.Module.B
3. Some.Module.C
4. Some.Other.Module
5. Some.Module.D

Now I know it's best to target Some.Module.A first, then Some.Module.B, etc.

Or imagine you are working on a complex update function, and you wish to visualize the flow of messages

> elm-review --watch --rules UnderstandUpdateFlow src/ComplexFile.elm
(init)
  ↳ GotLoadData
  ↳ <port "analytics">

(click) PayButtonClicked
  ↳ GotUserData
    ↳ GotBasket
      ↳ GotServerValidation
        ↳ GotPaymentValidation
          ↳ <url change>
          ↳ <port "analytics">

As you would work on that file, the update flow would be updated to reflect how the page works.

This would I think open up quite a lot of interesting applications, but it would not work very well if we reported JSON, as that would not make the monitors readable.

Integration with external tools

It can be useful to pipe the extracted data to another tool, just like in the Unix philosophy. While this would work when calling elm-review normally, it would not work when running in watch mode, as the program never terminates. I don't think we need that for the first iteration at least, but we could have a --command argument that we would pipe the extracted content to: elm-review --watch --extract --command "sed s/Thing/Other/ > main.css" (fake bash, but you get the gist).

I do feel like this would be limited, since we could only reasonably/usefully pass the output of one single rule to something else, or use only a single rule as a monitor, leading to multiple elm-review calls with different configurations/arguments. Or we wrap it all in a giant JSON and let the user handle it with a tool like jq or a custom Node.js script 🤷‍♂️

Request for comments

  • What do you think of this proposal as a whole?
  • Does it make sense to you to have elm-review do this, or should that be a different tool?
  • Can you think of use-cases you would have for extracting data?
  • Can you think of use-cases you would have for a monitor?
  • What issues do you see here that I didn't already cover?

Can not run elm-review v2

Hi!

Thanks for your awesome work, I enjoyed the v1 version of elm-review, and now I'm trying to make v2 work.

Setup: elm-0.19.1, elm-review-2.0.0

$ npm i -g elm-review@2.0.0
$ mkdir tmp && cd tmp
$ elm init   # yes 
$ elm-review init   # yes

After that I run elm-review and face a wall of compilation errors, starting with

-- MODULE NOT FOUND ------------------------- elm-review-src\Elm\Review\Main.elm

You are trying to import a `Review.Fix` module:

13| import Review.Fix as Fix exposing (FixResult)
           ^^^^^^^^^^
I checked the "dependencies" and "source-directories" listed in your elm.json,
but I cannot find it! Maybe it is a typo for one of these names?

    Elm.Review.Main
    Elm.Review.Text
    Bitwise
    Elm.Error

Hint: If it is not a typo, check the "dependencies" and "source-directories" of
your elm.json to make sure all the packages you need are listed there!

-- MODULE NOT FOUND ------------------------- elm-review-src\Elm\Review\Main.elm

You are trying to import a `Review.Project` module:

14| import Review.Project as Project exposing (Project, ProjectModule)
           ^^^^^^^^^^^^^^
I checked the "dependencies" and "source-directories" listed in your elm.json,
but I cannot find it! Maybe it is a typo for one of these names?

    Elm.Review.Text
    Elm.Parser
    Elm.Writer
    Process

and so on.

Is there something off with my setup? If yes, what should I fix?

Cheers

Review.Test: display whitespace differences

The test failure message, when the result of a fix is the same except for some white-space, is not very useful.

    I found the following result after the fixes have been applied:
      ```
        module A exposing (..)
        a = thing 
      ```
    but I was expecting:
      ```
        module A exposing (..)
        a = thing
      ```

I think that we could do something like this:

  • Check if the fix result is the same as expected, and pass the test if it is.
  • If it isn't, remove all white-space (not newlines though) from both and re-compare.
  • If they are not the same, then the difference is not (solely) related to white-space, and we should display the current error message
  • If they are the same, then the difference is only white-space, and we can improve the error message.

I'm thinking we could improve the error message by splitting the texts in lines, and comparing each line. For the lines that are different, we could replace white-space by · (https://theasciicode.com.ar/extended-ascii-code/interpunct-space-dot-ascii-code-250.html), so we don't show the problem everywhere.

If someone wants to do a better diff and highlight the differences in color (like Git diffs), that could be useful too (but the · will probably still be useful for color-blind people?).
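
A rough sketch of the comparison described above, using plain String functions (the helper names are made up):

differsOnlyByWhitespace : String -> String -> Bool
differsOnlyByWhitespace actual expected =
    let
        stripWhitespace : String -> String
        stripWhitespace string =
            string
                |> String.lines
                -- Remove spaces and tabs, but keep the line structure intact
                |> List.map (String.filter (\char -> char /= ' ' && char /= '\t'))
                |> String.join "\n"
    in
    (actual /= expected)
        && (stripWhitespace actual == stripWhitespace expected)

makeWhitespaceVisible : String -> String
makeWhitespaceVisible line =
    -- Only meant to be applied to the lines that differ
    String.replace " " "·" line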

Pondering

  • Maybe we should accept white-space differences, since we run elm-format afterwards anyway. I'm thinking this could influence indentation, so maybe we should report differences at the beginning of lines, but for ones in the middle or at the end of lines we could give the user leeway without any impactful consequences?

How to reproduce

Build a rule that reports anything and introduces white-space somewhere at the end of a line.

declarationVisitor : Node a -> List (Error {})
declarationVisitor node =
    Rule.errorWithFix
        { message = "Foo"
        , details = [ "Foo" ]
        }
        (Node.range node)
        [ Fix.insertAt (Node.range node).end " " ]

and have a test that expects the fix result without the additional white-space.

Possible to use both elm-review and elm-test in existing project?

I'm experimenting with writing a custom rule within an existing Elm project, and as part of that I'd like to have tests for the custom rule.

However, it's not clear to me how to arrange the elm-review files in a way that is compatible with elm-test.

For instance, running elm-test init creates the files in the review directory as expected, but when I cd review and run elm-test, I get various errors, depending on what combination of things I've tried.

If I put the test file under review, elm-test says it can't find anything under the tests/ directory.

If I put the test file under review/tests, elm-test reports a "Confusing File" error.

If I keep the test there and move ReviewConfig.elm into review/src, changing the elm.json to have "source-directories": ["src"], then elm-test appears to work, while elm-review now complains about missing files.

I understand moving the rules and tests to their own package would solve this, but I'd prefer to keep it in the existing package if possible.

Ambiguous error location message is misleading

Describe the bug
The SSCCE code below produces this error message

AMBIGUOUS ERROR LOCATION

    Your test passes, but where the message appears is ambiguous.

    You are looking for the following error message:

      `Message`

    and expecting to see it under:

      ```
        a : Int
        a = 0
      ```

    I found 0 locations where that code appeared. Please use
    `Review.Test.atExactly` to make the part you were targetting unambiguous.

    Tip: I found them at:


At a glance it looks like it shouldn't be ambiguous where the error appears. What's probably causing problems is that on Windows, line breaks contain a \r which is getting inserted into the multiline string. I think this is confusing elm-review somehow.

Additionally, it's strange that at the end of the error message, there's Tip: I found them at: but nothing is listed.

SSCCE (Short Self-Contained Correct Example)

import Test exposing (..)
import Review.Rule as Rule
import Review.Test

test = test "test" <|
                \_ ->
                    """module A exposing (..)

a : Int
a = 0"""
      |> Review.Test.run
          (Rule.newModuleRuleSchema "Rule" ()
              |> Rule.withDeclarationListVisitor
                  (\_ _ ->
                      ( [ Rule.error
                              { message = "Message"
                              , details = [ "" ]
                              }
                              { start = { column = 0, row = 3 }, end = { column = 5, row = 4 } }
                        ]
                      , ()
                      )
                  )
              |> Rule.fromModuleRuleSchema
          )
      |> Review.Test.expectErrors
          [ Review.Test.error
              { message = "Message"
              , details = [ "" ]
              , under = "a : Int\na = 0"
              }
              |> Review.Test.whenFixed ""
          ]

Expected behavior
Not sure, but maybe elm-review could check whether there are Windows line endings present when it gets this error, and warn that that might be causing the issue.

Additional context
This happens on Windows only due to the way it uses \r\n as line breaks instead of just \n. So to reproduce this issue, you'll need to run it on Windows or replace

"""module A exposing (..)

a : Int
a = 0"""

with
"module A exposing (..)\r\n\r\na:Int\r\na = 0"

Report deprecated dependencies

What the rule should do:

Report deprecated dependencies. I see a few packages in the registry with DEPRECATED in their elm.json summary. We could use that as a sign that the package is deprecated and shouldn't be used anymore. Maybe we will be able to find other clues, like titles in the README containing "deprecated".

What problems does it solve:

Allowing you to detect when a package becomes deprecated, in order to use the latest/best package out there for a problem.

Example of things the rule would report:

  • A dependency of the project whose elm.json summary is "DEPRECATED: Use xyz instead"
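
However the rule ends up getting access to the dependencies' elm.json data, the deprecation check itself could start out as simple as this (a sketch; the helper name is made up):

isDeprecatedSummary : String -> Bool
isDeprecatedSummary summary =
    -- isDeprecatedSummary "DEPRECATED: Use xyz instead" == True
    String.contains "DEPRECATED" (String.toUpper summary)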

When (not) to enable this rule:

I am looking for:

Answers to the following questions:

  • Should this rule take as configuration a list of dependencies to allow? I'm thinking no, but if/when a package becomes deprecated, you may not want to fix it right away, as that can be a tedious/long task depending on the package, meaning that you'd have to ignore the rule for a while. If we allow configuration, users can keep the rule enabled and still notice new deprecated dependencies.

  • What do you think of the proposed deprecated signs? Can you think of others?

I am thinking it would fit very well with jfmengels/elm-review-rule-ideas#5 (which we have already enabled at my workplace), in something like elm-review-dependencies. But we might want to have a similar rule for noticing deprecated modules/functions, in which case they wouldn't be in the same package. And maybe that's okay.

Rule for no "NoOp" expressions

What the rule should do: Remove code that is effectively a noop.

Example of things the rule would report:

x == True
x /= False
(\x -> x) -- So long as removing this doesn't cause there to be a syntax error.
1 * x
0 + x
"" ++ x

Example of things the rule would not report:

(x) -- elm-format already does a good job of removing unneeded parenthesis and sometimes you want to keep them for grouping stuff

I am looking for:
Ideally someone who will do the work for me :)
But otherwise this is just so I don't forget about this so I can get around to doing it sometime.

I've also posted the issue here jfmengels/elm-review-simplify#1 as I think this would be a good place for this rule to exist.

Minimize floating point errors

There's this neat new tool https://herbie.uwplse.org that takes floating point math and determines if there's a way to reorder the expression so that it produces fewer rounding errors. This rule would port that tool into Elm and then run it against all floating point expressions in your Elm code.

Example of things the rule would report:

a = sqrt (x+1) - sqrt x

This floating point expression can be replaced with 1 / (sqrt (x+1) + sqrt x) in order to reduce precision errors. (The two forms are algebraically equivalent - multiply by the conjugate - but the second avoids subtracting two nearly equal numbers.)

When (not) to enable this rule:

If floating point rounding errors don't matter much, if you have floating point math structured in a specific way for readability, or if you don't like having a rule that takes minutes to run (this rule is probably a better fit for CI).

I am looking for:

  • Would you use this rule?
  • Would you use this tool but think it's not a good fit for an elm-review rule?

Usage as an information extraction tool for CSS

Hey Jeroen,

I have been reading your blogpost and the Usage as an information extraction tool paragraph caught my attention. Particularly this part:

For instance if you use CSS files and classes, you could extract the list of CSS classes currently being used in your application, and pass that to your CSS linter which would tell you to remove the unused classes.

which is exactly the idea I have had for a long time, and I ran into the need for this yesterday as well.

We are using the Weak-CSS methodology and class-namespaces to build our styles, which makes any tool like post-css useless :( The only solution would be to somehow parse the class-namespaces calls, reconstruct the CSS class names out of that, and then compare them with the class names from SASS to find out which classes are not used at all.

Another nice use-case would be tailwind-css, where the resulting CSS file can be stripped down by PostCSS to only the used classes, but only if you use the whole tailwind classname directly. You can't concat those names or do any other optimisations on them, which is something I'd definitely use :)

So my questions are:

  • can elm-review help in those situations?
  • how should it work with elm-review to have those custom use-cases (I bet there would be many different use-cases on how to put class names together)
  • what is currently the biggest blocker to making information extraction from elm-review possible?

NoConfusingPrefixOperator

What the rule should do:

Report for example (<) 0, such as in List.filter ((<) 0). My brain automatically reads that as “keep all negative numbers”, but it actually means List.filter (\x -> 0 < x) which does the opposite – keeping all positive numbers.

Example of things the rule would report:

(<) 0

Example of things the rule would not report:

(==) 0

More comprehensive thoughts:

The following operators are OK to partially apply, because the order of the operands does not matter:

(+) (*) (==) (/=)

The following are not OK:

(-) (/) (//) (^) (<) (>) (<=) (>=) (++) (|>) (<|) (>>) (<<)
(|.) (|=) (</>) (<?>)

I think these are technically OK, but it’s pretty weird partially applying them?

(&&) (||)

Trivia:

In Haskell you can also do (<) 0 and it means the same thing as in Elm. But Haskell has additional syntax:

  • (0 <) means \x -> 0 < x (just like (<) 0)
  • (< 0) means \x -> x < 0

When (not) to enable this rule:

I think this rule always makes sense!

I am looking for:

Is this a good fit for https://github.com/jfmengels/review-simplification? That package already has NoFullyAppliedPrefixOperator which is a bit similar.

I’d love to implement this myself, but I already have lots of other fun open source stuff to do so I might not get around to it.

Rule for no `text ""`

What the rule should do:
This rule should flag the use of Html.text with an empty string literal and suggest using one of the alternatives from elm-community/html-extra instead.

Example of things the rule would report:

stuff |> Maybe.map viewStuff |> Maybe.withDefault (text "")

Example of things the rule would not report:

import ModuleWithText exposing (text)

stuff |> Maybe.map viewStuff |> Maybe.withDefault (text "")
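
For reference, a sketch of the detection for fully qualified calls. A real rule (like the published one) also needs to track imports, so that a text exposed from Html is caught while a text coming from another module, as in the example above, is not.

import Elm.Syntax.Expression as Expression exposing (Expression)
import Elm.Syntax.Node as Node exposing (Node)
import Review.Rule as Rule exposing (Error)

expressionVisitor : Node Expression -> List (Error {})
expressionVisitor node =
    case Node.value node of
        Expression.Application [ function, argument ] ->
            case ( Node.value function, Node.value argument ) of
                ( Expression.FunctionOrValue [ "Html" ] "text", Expression.Literal "" ) ->
                    [ Rule.error
                        { message = "Replace `text \"\"` with `Html.Extra.nothing`"
                        , details = [ "Html.Extra.nothing (or viewIf/viewMaybe) makes the intent of rendering nothing clearer." ]
                        }
                        (Node.range node)
                    ]

                _ ->
                    []

        _ ->
            []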

When (not) to enable this rule:

  • if you do not want to use Html.Extra
  • if you do not mind using text ""

I am looking for:

  • Feedback
  • To know in which package(s) this would fit best or if you've other rules that would fit in.

At the moment the package is published here: https://package.elm-lang.org/packages/leojpod/review-no-empty-html-text/latest/

No `Parser.backtrackable`

This function is an antipattern and becomes inefficient when used together with Parser.oneOf. The thing is, it's a quick fix sometimes and leads to nicer-looking (more declarative) code, so people are tempted to use it.

Disallow usages of this function.

Note there are two variants: Parser.backtrackable and Parser.Advanced.backtrackable.

BONUS ROUND: have a fix that rewrites the code from backtrackable to non-backtrackable. (Likely hard? Spread across multiple declarations etc.)

No extensible records in Model

What the rule should do:

Report when extensible records are used for data. In practice, I think that will mean forbidding extensible records used directly or indirectly in Model (and Msg? Elsewhere?).

What problems does it solve:

As Evan Czaplicki once said (as quoted by Richard Feldman in Scaling Elm Apps):

Extensible records are useful to restrict arguments, not for Data modeling.

I find that code using extensible records for its data is often too coupled to where it is used, and a lot of unnecessary fields/values end up being required in places where they are not needed.

Example of things the rule would report:

type alias User a = { a | id : UserId, name : String }

type alias Model =
  { user : User { age : Int }
  --       ^^^^
  }

view : Model -> Html msg
view model =
  div [] [ viewUser model.user ]

viewUser : User a -> Html msg
viewUser user =
    text (user.name ++ ": " ++ String.fromInt user.age ++ " years old")

In this example, User would be reported because we don't want to store it in the model. Here, the id field is defined but not used, but we'll need to create it to be able to create a User.

In practice, I find that User is often defined in a separate module, so as to be able to re-use functions across modules, and that this increases the chance to have unused data (that needs to be fetched, initialized, ...).

Example of things the rule would not report:

type alias User = { name : String, age : Int }

type alias Model =
  { user : User
  }

viewUser : { a | name : String, age : Int } -> Html msg
viewUser user =
    text (user.name ++ ": " ++ String.fromInt user.age ++ " years old")

----

type alias Config user =
  { getUserName : { user | name : String } -> String
  }

viewUser : Config user -> user -> Html msg
viewUser config user =
    text (config.getUserName user)

When (not) to enable this rule:

🤷‍♂️

I am looking for:

  • Feedback. Is this a reasonable rule? Do you see any unwanted limitations this would create?
  • Someone to implement it with/for me
  • To know in which package(s) this would fit best

RFC: Remove direction from declarationVisitor

Both declaration and expression visitors are visited twice

  • on "enter", before the children have been visited
  • on "exit", after the children have been visited

This is quite useful to know in which "context" you are: Am I inside a let..in expression? Which function declaration am I in? etc.

For let..in expressions, this can be quite useful, as the direction-less alternative can be quite painful to write.

For declaration visitors though, the point of the "exit" visitor is, most often from what I have seen, to remove the "current function" information that was set in the "enter" visitor. But that information will be overridden anyway by the next "enter" visit.
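
For illustration, here is the pattern being described, using the current direction-based withDeclarationVisitor signature (Context and currentFunction are made-up names for this sketch):

import Elm.Syntax.Declaration as Declaration exposing (Declaration)
import Elm.Syntax.Node as Node exposing (Node)
import Review.Rule as Rule exposing (Error)

type alias Context =
    { currentFunction : Maybe String }

declarationVisitor : Node Declaration -> Rule.Direction -> Context -> ( List (Error {}), Context )
declarationVisitor node direction context =
    case ( direction, Node.value node ) of
        ( Rule.OnEnter, Declaration.FunctionDeclaration function ) ->
            ( []
            , { context
                | currentFunction =
                    function.declaration |> Node.value |> .name |> Node.value |> Just
              }
            )

        ( Rule.OnExit, _ ) ->
            -- Cleaning up on exit is usually redundant: the next OnEnter overwrites the value anyway.
            ( [], { context | currentFunction = Nothing } )

        _ ->
            ( [], context )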

We might avoid a lot of visits by making the declaration visitor "direction-less", thus gaining in performance.

This will likely happen, unless scenarios are found where this simplifies the visit of a module drastically. If you know of any, please write them down as comments.

No long lines

What the rule should do:
This rule is a generalization of https://package.elm-lang.org/packages/r-k-b/no-long-import-lines/latest/
It will report an error for any line that exceeds a specified character length. An exception is made for lines that can't be shortened without making changes beyond just inserting whitespace.

Example of things the rule would report:

a = [ "asdf", "asdf", "asdf", "asdf", "asdf", "asdf", "asdf", "asdf", "asdf", "asdf" ]

gets fixed to

a = 
    [ "asdf", 
     "asdf", 
     "asdf", 
     "asdf", "asdf", "asdf", "asdf", "asdf", "asdf", "asdf" ]

which elm-format can then fix up.

Example of things the rule would not report:

a = "A very long string that exceeds the specified max line length"

The only place a line break can be introduced is after the `=`, which doesn't make the line short enough to fit.

When (not) to enable this rule:

If it turns out to be too difficult to implement. There are some edge cases that might be tricky. For example:

a = case Nothing of 
    Just _ -> [ "asdf", "asdf", "asdf", "asdf", "asdf", "asdf", "asdf", "asdf", "asdf", "asdf" ]
    Nothing -> []

If we only apply a line break and a fixed amount of spacing to the long line, we'll end up with

a = case Nothing of 
    Just _ -> 
    [ "asdf", "asdf", "asdf", "asdf", "asdf", "asdf", "asdf", "asdf", "asdf", "asdf" ]
    Nothing -> []

which won't compile since the expression needs to be indented past the start of the pattern match.

I am looking for:

  • Is this rule useful?
  • Are there other edge cases that make this rule difficult to implement?
  • elm-syntax currently doesn't include line comments so a = "some text" ++ {- a comment -} "more text" will probably cause weird results. Does this seem like a big problem? Do people often place comments in lines of code?

Provide type information to rules

It would be really nice if we could provide type information to rules, so that they can provide more useful analyses.

Just like elm-review-scope and the module name lookup table, we could start by having some visitors we could add, and when we're happy with the result, we could turn this into native functionality.

What I'm looking for

Use-cases where this would be useful

If we don't find enough interesting use-cases, it's probably not worth implementing.

  • Knowing what the type of a function is would allow NoMissingTypeAnnotation and NoMissingTypeAnnotationInLetIn to provide an automatic fix. That said, it will probably not provide the exact type annotation you'd want (when dealing with records mostly) as I have experienced with IDE features that do this, but it will definitely be a helpful boost.

Advice on how to implement it

I know nothing of how type inference is done, and how it can be made efficient. I imagine that we can somehow take shortcuts, as we are not trying to validate types, since we already let the compiler handle that for us.

I know Evan said that the type inference algorithm used by Elm cannot be done in a performant manner without mutation, but I have no clue whether that applies to figuring out types and/or for validating them.

A note on compiler integration

I mentioned in #40 that it would be possible to fork the compiler and use the type inference information (and even AST) it provides and pass it directly to the rules. While that sounds attractive, it does bring a lot of lock-in which would have a lot of impact on how tests would be written and where elm-review could be used.

My current thoughts are: Get this working natively well, and later on we could potentially fork the compiler and re-use its information but only for performance concerns. That would allow elm-review to still be able to write the tests like it does and to be run wherever Elm can be run.

No hidden units

Forbid using _ in place of () - it's valuable to know when a pattern is meant to be a unit, and a _ could be hiding the fact that you're ignoring data when the value is not a unit (perhaps you refactored away from the unit).

One place this comes up is in tests:

-- Fail: `_` used where `()` would be better
test1add1 =
    test "1 + 1 = 2" <|
        \_ ->
            1 + 1 |> Expect.equal 2

-- Pass: () is used where () should be
test1add1 =
    test "1 + 1" <|
        \() ->
            1 + 1 |> Expect.equal 2

-- Note: when refactoring the above test(s) we might forget to use the fuzz value if it's `_`
testAdd1 =
    fuzz int "1 + 1" <|
        \_ ->
            1 + 1 |> Expect.equal 2

Another might be unused flags passed to an init function:

-- Fail: `_` used where `()` would be better
init : () -> ( Model, Cmd Msg )
init _ =
    ( emptyModel, Cmd.none )

-- Pass: () used where () should be
init : () -> ( Model, Cmd Msg )
init () =
    ( emptyModel, Cmd.none )

-- Note: when adding Flags you might forget to actually use the flags
init : Flags -> ( Model, Cmd Msg )
init flags =
    ( modelFromFlags flags, Cmd.none )

When (not) to enable this rule:

  • If you don't care about hiding units.
  • This rule is a nitpick - your team might find this rule annoying and it may slow down elm-review on large code bases for little value.

I am looking for:

  • Feedback - would you use this rule?
  • To know in which package(s) this would fit best
  • How to find out, programmatically, where units should be used in lambda functions (like in the tests above)

Report functions that are not tail-call-optimized

What the rule should do:

Report when functions that are tagged as, or supposed to be, tail-call optimized (TCO) are not.

What problems does it solve:

TCO functions are faster than their non-TCO counterparts (usually)

Example of things the rule would report:

Doing an operation on the result of a recursive call

reverse : List a -> List a
reverse list =
  case list of
    [] -> []
    x :: xs -> reverse xs ++ [ x ]

Using pipelines in the recursive call

tailCallWithPipeline : Int -> String -> String
tailCallWithPipeline count acc =
    if count <= 0 then
        acc

    else
        acc
            |> tailCallWithPipeline (count - 1)

Having the recursive call inside a boolean expression

isPrefixOf : List a -> List a -> Bool
isPrefixOf prefix list =
    case ( prefix, list ) of
        ( [], _ ) ->
            True
        ( _ :: _, [] ) ->
            False

        ( p :: ps, x :: xs ) ->
            -- Defeats TCO
            p == x && isPrefixOf ps xs
            -- The following would work
            -- if p == x then
            --     isPrefixOf ps xs
            -- else
            --     False

Example of things the rule would not report:

A properly tail-call optimized function that was tagged

last : List a -> Maybe a
last list =
  case list of
    [] -> Nothing
    x :: [] -> Just x
    _ :: xs -> last xs

Also, non-TCO functions that were not tagged, and non-recursive functions.

Configuration example:

This depends on how we wish to tag functions.

config =
  [ -- If we list them all explicitly in the configuration, then
    NoFailingTailCallOptimization.rule [ ( "List.Extra", "last" ) ]
    -- if we tag them through comments, then
  , NoFailingTailCallOptimization.rule
  ]

If we list them explicitly in the configuration, then we should report functions that could not be found in the project.

For tagging through comments, I imagine we can look for comments like TCO or whatever at the beginning of a function

last : List a -> Maybe a
last list =
  -- TCO
  case list of
    [] -> Nothing
    x :: [] -> Just x
    _ :: xs -> last xs

which is more fragile though. The exact string could be configurable.

When (not) to enable this rule:

If you do not care about performance and stack-safety, and/or have no knowledge about TCO.

I am looking for:

  • Expert knowledge on TCO
  • A good explanation on TCO, to put in and/or link to in the error message
  • Feedback on how to tag a function as TCO
  • A good rule name. I was thinking of but am not entirely convinced by NoFailingTailCallOptimization
  • To know in which package(s) this would fit best. I thought about elm-review-performance, but I don't know what else could go in there.
  • A discussion on whether we should report mutually recursive functions, which are not TCO

Command freezes

Hi @jfmengels 👋
Thanks for this great tool!

I seem to be running into an issue where the elm-review command gets "stuck" (or freezes, if you will). I've run the command with the --debug flag and got this:

 - /Users/icidasset/Projects/diffuse/src/Applications/Brain.elm
 - /Users/icidasset/Projects/diffuse/src/Applications/Brain/Ports.elm
 - /Users/icidasset/Projects/diffuse/src/Applications/Brain/Reply.elm
 - /Users/icidasset/Projects/diffuse/src/Applications/Brain/Sources/Processing.elm
 - /Users/icidasset/Projects/diffuse/src/Applications/Brain/Sources/Processing/Common.elm
 - /Users/icidasset/Projects/diffuse/src/Applications/Brain/Sources/Processing/Steps.elm
 - /Users/icidasset/Projects/diffuse/src/Applications/Brain/Tracks.elm
 - /Users/icidasset/Projects/diffuse/src/Applications/Brain/User/Layer.elm
 - /Users/icidasset/Projects/diffuse/src/Applications/UI.elm
 - /Users/icidasset/Projects/diffuse/src/Applications/UI/Alfred.elm
 - /Users/icidasset/Projects/diffuse/src/Applications/UI/Authentication.elm
 - /Users/icidasset/Projects/diffuse/src/Applications/UI/Authentication/ContextMenu.elm
 - /Users/icidasset/Projects/diffuse/src/Applications/UI/Backdrop.elm
 - /Users/icidasset/Projects/diffuse/src/Applications/UI/Console.elm
 - /Users/icidasset/Projects/diffuse/src/Applications/UI/ContextMenu.elm
 - /Users/icidasset/Projects/diffuse/src/Applications/UI/Css.elm
 - /Users/icidasset/Projects/diffuse/src/Applications/UI/Demo.elm
 - /Users/icidasset/Projects/diffuse/src/Applications/UI/DnD.elm
 - /Users/icidasset/Projects/diffuse/src/Applications/UI/Equalizer.elm
 - /Users/icidasset/Projects/diffuse/src/Applications/UI/Kit.elm
 - /Users/icidasset/Projects/diffuse/src/Applications/UI/List.elm
 - /Users/icidasset/Projects/diffuse/src/Applications/UI/Navigation.elm
 - /Users/icidasset/Projects/diffuse/src/Applications/UI/Notifications.elm
 - /Users/icidasset/Projects/diffuse/src/Applications/UI/Page.elm
 - /Users/icidasset/Projects/diffuse/src/Applications/UI/Playlists.elm
 - /Users/icidasset/Projects/diffuse/src/Applications/UI/Playlists/Alfred.elm
 - /Users/icidasset/Projects/diffuse/src/Applications/UI/Playlists/ContextMenu.elm
 - /Users/icidasset/Projects/diffuse/src/Applications/UI/Playlists/Directory.elm
 - /Users/icidasset/Projects/diffuse/src/Applications/UI/Playlists/Page.elm
 - /Users/icidasset/Projects/diffuse/src/Applications/UI/Ports.elm
 - /Users/icidasset/Projects/diffuse/src/Applications/UI/Queue.elm
 - /Users/icidasset/Projects/diffuse/src/Applications/UI/Queue/Common.elm
 - /Users/icidasset/Projects/diffuse/src/Applications/UI/Queue/ContextMenu.elm
 - /Users/icidasset/Projects/diffuse/src/Applications/UI/Queue/Fill.elm
 - /Users/icidasset/Projects/diffuse/src/Applications/UI/Queue/Page.elm
 - /Users/icidasset/Projects/diffuse/src/Applications/UI/Reply.elm
 - /Users/icidasset/Projects/diffuse/src/Applications/UI/Settings.elm
 - /Users/icidasset/Projects/diffuse/src/Applications/UI/Settings/ImportExport.elm
 - /Users/icidasset/Projects/diffuse/src/Applications/UI/Settings/Page.elm
 - /Users/icidasset/Projects/diffuse/src/Applications/UI/Sources.elm
 - /Users/icidasset/Projects/diffuse/src/Applications/UI/Sources/ContextMenu.elm
 - /Users/icidasset/Projects/diffuse/src/Applications/UI/Sources/Form.elm
 - /Users/icidasset/Projects/diffuse/src/Applications/UI/Sources/Page.elm
 - /Users/icidasset/Projects/diffuse/src/Applications/UI/Svg/Elements.elm
 - /Users/icidasset/Projects/diffuse/src/Applications/UI/Tracks.elm
 - /Users/icidasset/Projects/diffuse/src/Applications/UI/Tracks/ContextMenu.elm
 - /Users/icidasset/Projects/diffuse/src/Applications/UI/Tracks/Reply.elm
 - /Users/icidasset/Projects/diffuse/src/Applications/UI/Tracks/Scene.elm
 - /Users/icidasset/Projects/diffuse/src/Applications/UI/Tracks/Scene/List.elm
 - /Users/icidasset/Projects/diffuse/src/Library/Alfred.elm
 - /Users/icidasset/Projects/diffuse/src/Library/Alien.elm
 - /Users/icidasset/Projects/diffuse/src/Library/Chunky.elm
 - /Users/icidasset/Projects/diffuse/src/Library/Chunky/Styled.elm
 - /Users/icidasset/Projects/diffuse/src/Library/Classes.elm
 - /Users/icidasset/Projects/diffuse/src/Library/Color/Ext.elm
 - /Users/icidasset/Projects/diffuse/src/Library/Common.elm
 - /Users/icidasset/Projects/diffuse/src/Library/Conditional.elm
 - /Users/icidasset/Projects/diffuse/src/Library/ContextMenu.elm
 - /Users/icidasset/Projects/diffuse/src/Library/Coordinates.elm
 - /Users/icidasset/Projects/diffuse/src/Library/Cryptography/Hmac.elm
 - /Users/icidasset/Projects/diffuse/src/Library/Css/Classes.elm
 - /Users/icidasset/Projects/diffuse/src/Library/Css/Ext.elm
 - /Users/icidasset/Projects/diffuse/src/Library/Dict/Ext.elm
 - /Users/icidasset/Projects/diffuse/src/Library/Equalizer.elm
 - /Users/icidasset/Projects/diffuse/src/Library/Html/Ext.elm
 - /Users/icidasset/Projects/diffuse/src/Library/Icons.elm
 - /Users/icidasset/Projects/diffuse/src/Library/Json/Decode/Ext.elm
 - /Users/icidasset/Projects/diffuse/src/Library/List/Ext.elm
 - /Users/icidasset/Projects/diffuse/src/Library/Maybe/Ext.elm
 - /Users/icidasset/Projects/diffuse/src/Library/Notifications.elm
 - /Users/icidasset/Projects/diffuse/src/Library/Playlists.elm
 - /Users/icidasset/Projects/diffuse/src/Library/Playlists/Encoding.elm
 - /Users/icidasset/Projects/diffuse/src/Library/Playlists/Matching.elm
 - /Users/icidasset/Projects/diffuse/src/Library/Queue.elm
 - /Users/icidasset/Projects/diffuse/src/Library/Return2.elm
 - /Users/icidasset/Projects/diffuse/src/Library/Return3.elm
 - /Users/icidasset/Projects/diffuse/src/Library/Settings.elm
 - /Users/icidasset/Projects/diffuse/src/Library/Sources.elm
 - /Users/icidasset/Projects/diffuse/src/Library/Sources/Encoding.elm
 - /Users/icidasset/Projects/diffuse/src/Library/Sources/Pick.elm
 - /Users/icidasset/Projects/diffuse/src/Library/Sources/Processing.elm
 - /Users/icidasset/Projects/diffuse/src/Library/Sources/Processing/Encoding.elm
 - /Users/icidasset/Projects/diffuse/src/Library/Sources/Services.elm
 - /Users/icidasset/Projects/diffuse/src/Library/Sources/Services/AmazonS3.elm
 - /Users/icidasset/Projects/diffuse/src/Library/Sources/Services/AmazonS3/Parser.elm
 - /Users/icidasset/Projects/diffuse/src/Library/Sources/Services/AmazonS3/Presign.elm
 - /Users/icidasset/Projects/diffuse/src/Library/Sources/Services/Azure/Authorization.elm
 - /Users/icidasset/Projects/diffuse/src/Library/Sources/Services/Azure/BlobParser.elm
 - /Users/icidasset/Projects/diffuse/src/Library/Sources/Services/Azure/FileMarker.elm
 - /Users/icidasset/Projects/diffuse/src/Library/Sources/Services/Azure/FileParser.elm
 - /Users/icidasset/Projects/diffuse/src/Library/Sources/Services/AzureBlob.elm
 - /Users/icidasset/Projects/diffuse/src/Library/Sources/Services/AzureFile.elm
 - /Users/icidasset/Projects/diffuse/src/Library/Sources/Services/Btfs.elm
 - /Users/icidasset/Projects/diffuse/src/Library/Sources/Services/Common.elm
 - /Users/icidasset/Projects/diffuse/src/Library/Sources/Services/Dropbox.elm
 - /Users/icidasset/Projects/diffuse/src/Library/Sources/Services/Dropbox/Parser.elm
 - /Users/icidasset/Projects/diffuse/src/Library/Sources/Services/Google.elm
 - /Users/icidasset/Projects/diffuse/src/Library/Sources/Services/Google/Parser.elm
 - /Users/icidasset/Projects/diffuse/src/Library/Sources/Services/Ipfs.elm
 - /Users/icidasset/Projects/diffuse/src/Library/Sources/Services/Ipfs/Marker.elm
 - /Users/icidasset/Projects/diffuse/src/Library/Sources/Services/Ipfs/Parser.elm
 - /Users/icidasset/Projects/diffuse/src/Library/Sources/Services/WebDav.elm
 - /Users/icidasset/Projects/diffuse/src/Library/Sources/Services/WebDav/Marker.elm
 - /Users/icidasset/Projects/diffuse/src/Library/Sources/Services/WebDav/Parser.elm
 - /Users/icidasset/Projects/diffuse/src/Library/String/Ext.elm
 - /Users/icidasset/Projects/diffuse/src/Library/Task/Extra.elm
 - /Users/icidasset/Projects/diffuse/src/Library/Time/Ext.elm
 - /Users/icidasset/Projects/diffuse/src/Library/Tracks.elm
 - /Users/icidasset/Projects/diffuse/src/Library/Tracks/Collection.elm
 - /Users/icidasset/Projects/diffuse/src/Library/Tracks/Collection/Internal.elm
 - /Users/icidasset/Projects/diffuse/src/Library/Tracks/Collection/Internal/Arrange.elm
 - /Users/icidasset/Projects/diffuse/src/Library/Tracks/Collection/Internal/Harvest.elm
 - /Users/icidasset/Projects/diffuse/src/Library/Tracks/Collection/Internal/Identify.elm
 - /Users/icidasset/Projects/diffuse/src/Library/Tracks/Encoding.elm
 - /Users/icidasset/Projects/diffuse/src/Library/Tracks/Favourites.elm
 - /Users/icidasset/Projects/diffuse/src/Library/Tracks/Sorting.elm
 - /Users/icidasset/Projects/diffuse/src/Library/Tuple/Ext.elm
 - /Users/icidasset/Projects/diffuse/src/Library/Url/Ext.elm
 - /Users/icidasset/Projects/diffuse/src/Library/User/Layer.elm
 - /Users/icidasset/Projects/diffuse/src/Library/User/Layer/Methods/RemoteStorage.elm

And then nothing happens after that.
Here's the file it gets stuck on: https://github.com/icidasset/diffuse/blob/4af982288c5e46570f43de1117b07db0a87dfc92/src/Library/User/Layer/Methods/RemoteStorage.elm

Any ideas?
Thanks!

PS. You can try it out for yourself, if you want to, by cloning the project and running `yarn install` followed by `make quality`.

NoLeftOverPizza

What the rule should do:

Forbid simple uses of the right pizza operator (|>) that could be removed without adding any parentheses.

What problems does it solve:

Pizza can make the place look untidy.

Example of things the rule would report:

    context |> rememberElmJsonProject project

Example of things the rule would not report:

    getContext model |> rememberElmJsonProject project
    context
        |> rememberElmJsonProject project
        |> rememberSomethingElse something

When (not) to enable this rule:

If you really like pizza, then this rule might not be for you. The right pizza operator is often used to show a pipeline, but it can also be used to change the focus of an expression, and this rule may complain about such expressions.

I am looking for:

  • A rule called "NoLeftOverPizza". 😂
  • Someone to implement it (I'm happy to mentor and review)
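
If someone wants to pick this up, here is a minimal, untested sketch of a possible starting point, using elm-review's module rule API and elm-syntax. It only covers the single-step case shown above; avoiding a report on the first step of a longer pipeline (the second "would not report" example) would need extra logic, and the message and details are placeholders:

    module NoLeftOverPizza exposing (rule)

    import Elm.Syntax.Expression as Expression exposing (Expression)
    import Elm.Syntax.Node as Node exposing (Node)
    import Review.Rule as Rule exposing (Error, Rule)


    rule : Rule
    rule =
        Rule.newModuleRuleSchema "NoLeftOverPizza" ()
            |> Rule.withSimpleExpressionVisitor expressionVisitor
            |> Rule.fromModuleRuleSchema


    expressionVisitor : Node Expression -> List (Error {})
    expressionVisitor node =
        case Node.value node of
            -- Only look at `left |> right`
            Expression.OperatorApplication "|>" _ left _ ->
                case Node.value left of
                    -- A plain value on the left means the |> could be removed without adding parens.
                    -- Note: this naive check would also flag the first step of a longer pipeline.
                    Expression.FunctionOrValue _ _ ->
                        [ Rule.error
                            { message = "Left-over pizza"
                            , details = [ "This |> could be removed without adding parentheses." ]
                            }
                            (Node.range node)
                        ]

                    _ ->
                        []

            _ ->
                []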

Enumerate all variants in list

What the rule should do:
This rule checks that list literals the user has chosen contain every possible variant of a chosen type.

What problems does it solve:
Those times when you add a new variant to a type and forget to update a list that should contain every variant.

Example of things the rule would report:

Example 1:

type IceCreamFlavor = Vanilla | Chocolate | Strawberry

iceCreamFlavors : List IceCreamFlavor
iceCreamFlavors =
    [ Vanilla
    , Chocolate
    ]

gets fixed to this (assuming the user has configured the rule to check iceCreamFlavors):

type IceCreamFlavor = Vanilla | Chocolate | Strawberry

iceCreamFlavors : List IceCreamFlavor
iceCreamFlavors =
    [ Vanilla
    , Chocolate
    , Strawberry
    ]

Example 2:

type IceCreamFlavor = Vanilla | Chocolate | Strawberry | Custom String

iceCreamFlavors : List IceCreamFlavor
iceCreamFlavors =
    [ Vanilla
    , Chocolate
    , Strawberry
    ]

This would cause a configuration error because Custom can't be enumerated in a list (assuming the rule was configured to check iceCreamFlavors).

Example 3:

type IceCreamFlavor = Vanilla | Chocolate | Strawberry

This would cause a configuration error because iceCreamFlavors is missing (again, assuming that the rule is configured to check for it).

When (not) to enable this rule:

You don't have lists that need to enumerate every variant of a type.

I am looking for:

  • Would you use this rule?
  • An alternative to using rule configuration to determine which functions should be checked is to use a naming convention, for example checking top-level functions that start with all and have a type annotation of List <type name>. I personally don't think this is the way to go, since things might silently break if the function gets renamed. (A hypothetical configuration sketch is shown below.)
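
To make the configuration question concrete, here is a hypothetical sketch of what configuring such a rule could look like in review/src/ReviewConfig.elm. EnumerateAllVariants and its options record are invented names for illustration only; no such package exists:

    import EnumerateAllVariants -- hypothetical rule module, does not exist
    import Review.Rule exposing (Rule)


    config : List Rule
    config =
        [ EnumerateAllVariants.rule
            -- check that `iceCreamFlavors` enumerates every `IceCreamFlavor` variant
            [ { typeName = "IceCreamFlavor"
              , functionName = "iceCreamFlavors"
              }
            ]
        ]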

Add elm/* packages to Review.Project.Dependency

It's time-consuming to set up a Review.Project data structure when writing tests. One thing that could make this less time-consuming would be if elm/* packages were already available in Review.Project.Dependency. For example, Review.Project.Dependency.elmUrl would return elm/url as a Dependency that the user could pass to Review.Project.addDependency.
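
For illustration, here is roughly what test setup could look like if this were added. Review.Project.new and Review.Project.addDependency are existing elm-review functions; Dependency.elmUrl is the hypothetical helper proposed above:

    import Review.Project as Project exposing (Project)
    import Review.Project.Dependency as Dependency


    -- `Dependency.elmUrl` is the proposed (not yet existing) helper returning the elm/url dependency.
    projectWithElmUrl : Project
    projectWithElmUrl =
        Project.new
            |> Project.addDependency Dependency.elmUrl


    -- The resulting project could then be passed to Review.Test.runWithProjectData in a rule's tests.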
