
scala-collection-contrib's Introduction

This is Scala 2! Welcome!

This is the home of the Scala 2 standard library, compiler, and language spec.

For Scala 3, visit scala/scala3.

How to contribute

Issues and bug reports for Scala 2 are located in scala/bug. That tracker is also where new contributors may find issues to work on: good first issues, help wanted.

For coordinating broader efforts, we also use the scala/scala-dev tracker.

To contribute here, please open a pull request from your fork of this repository.

Be aware that we can't accept additions to the standard library, only modifications to existing code. Binary compatibility forbids adding new public classes or public methods. Additions are made to scala-library-next instead.

We require that you sign the Scala CLA before we can merge any of your work, to protect Scala's future as open source software.

The general workflow is as follows.

  1. Find/file an issue in scala/bug (or submit a well-documented PR right away!).
  2. Fork the scala/scala repo.
  3. Push your changes to a branch in your forked repo. For coding guidelines, see the Coding guidelines section below.
  4. Submit a pull request to scala/scala from your forked repo.

For more information on building and developing the core of Scala, read the rest of this README, especially for setting up your machine!

Get in touch!

In order to get in touch with other Scala contributors, join the #scala-contributors channel on the Scala Discord chat, or post on contributors.scala-lang.org (Discourse).

If you need some help with your PR at any time, please feel free to @-mention anyone from the list below, and we will do our best to help you out:

username talk to me about...
@lrytz back end, optimizer, named & default arguments, reporters
@retronym 2.12.x branch, compiler performance, weird compiler bugs, lambdas
@SethTisue getting started, build, CI, community build, Jenkins, docs, library, REPL
@dwijnand pattern matcher, MiMa, partest
@som-snytt warnings/lints/errors, REPL, compiler options, compiler internals, partest
@Ichoran collections library, performance
@viktorklang concurrency, futures
@sjrd interactions with Scala.js
@NthPortal library, concurrency, scala.math, LazyList, Using, warnings
@bishabosha TASTy reader
@joroKr21 higher-kinded types, implicits, variance

P.S.: If you have some spare time to help out around here, we would be delighted to add your name to this list!

Branches

Target the oldest branch you would like your changes to end up in. We periodically merge forward from older release branches (e.g., 2.12.x) to new ones (e.g. 2.13.x).

If your change is difficult to merge forward, you may be asked to also submit a separate PR targeting the newer branch.

If your change is version-specific and shouldn't be merged forward, put [nomerge] in the PR name.

If your change is a backport from a newer branch and thus doesn't need to be merged forward, put [backport] in the PR name.

Choosing a branch

Most changes should target 2.13.x. We are increasingly reluctant to target 2.12.x unless there is a special reason (e.g. if an especially bad bug is found, or if there is commercial sponsorship).

The 2.11.x branch is now inactive and no further 2.11.x releases are planned (unless unusual, unforeseeable circumstances arise). You should not target 2.11.x without asking maintainers first.

Repository structure

Most importantly:

scala/
+--build.sbt                 The main sbt build definition
+--project/                  The rest of the sbt build
+--src/                      All sources
   +---/library              Scala Standard Library
   +---/reflect              Scala Reflection
   +---/compiler             Scala Compiler
+--test/                     The Scala test suite
   +---/files                Partest tests
   +---/junit                JUnit tests
   +---/scalacheck           ScalaCheck tests
+--spec/                     The Scala language specification

but also:

scala/
   +---/library-aux          Scala Auxiliary Library, for bootstrapping and documentation purposes
   +---/interactive          Scala Interactive Compiler, for clients such as an IDE (aka Presentation Compiler)
   +---/intellij             IntelliJ project templates
   +---/manual               Scala's runner scripts "man" (manual) pages
   +---/partest              Scala's internal parallel testing framework
   +---/partest-javaagent    Partest's helper java agent
   +---/repl                 Scala REPL core
   +---/repl-frontend        Scala REPL frontend
   +---/scaladoc             Scala's documentation tool
   +---/scalap               Scala's class file decompiler
   +---/testkit              Scala's unit-testing kit
+--admin/                    Scripts for the CI jobs and releasing
+--doc/                      Additional licenses and copyrights
+--scripts/                  Scripts for the CI jobs and releasing
+--tools/                    Scripts useful for local development
+--build/                    Build products
+--dist/                     Build products
+--target/                   Build products

Get ready to contribute

Requirements

You need the following tools:

  • Java SDK. The baseline version is 8 for both 2.12.x and 2.13.x. It is almost always fine to use a later SDK such as 11 or 15 for local development. CI will verify against the baseline version.
  • sbt

macOS and Linux work. Windows may work if you use Cygwin. Community help with keeping the build working on Windows and documenting any needed setup is appreciated.


Build setup

Basics

During ordinary development, a new Scala build is built by the previously released version, known as the "reference compiler" or, slangily, as "STARR" (stable reference release). Building with STARR is sufficient for most kinds of changes.

However, a full build of Scala is bootstrapped. Bootstrapping has two steps: first, build with STARR; then, build again using the freshly built compiler, leaving STARR behind. This guarantees that every Scala version can build itself.

If you change the code generation part of the Scala compiler, your changes will only show up in the bytecode of the library and compiler after a bootstrap. Our CI does a bootstrapped build.

Bootstrapping locally: To perform a bootstrap, run restarrFull within an sbt session. This will build and publish the Scala distribution to your local artifact repository and then switch sbt to use that version as its new scalaVersion. You may then revert with reload. Note that restarrFull also writes the new version to buildcharacter.properties, so you can switch back to it with restarr without republishing. This switches the sbt session to use the build-restarr and target-restarr directories instead of build and target, which avoids wiping out classfiles and incremental metadata. IntelliJ will continue to be configured to compile and run tests using the STARR version in versions.properties.
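
A typical local session, using only the commands described above (the comments are illustrative annotations, not sbt input):

$ sbt
> restarrFull    // build and publish your compiler locally; switches scalaVersion to it
> compile        // recompile using the freshly built compiler
> reload         // revert to the STARR from versions.properties
> restarr        // later: switch back to the locally built version without republishing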

For history on how the current scheme was arrived at, see https://groups.google.com/d/topic/scala-internals/gp5JsM1E0Fo/discussion.

Building with fatal warnings: To make warnings in the project fatal (i.e. turn them into errors), run set Global / fatalWarnings := true in sbt (replace Global with the name of a module—such as reflect—to only make warnings fatal for that module). To disable fatal warnings again, either reload sbt, or run set Global / fatalWarnings := false (again, replace Global with the name of a module if you only enabled fatal warnings for that module). CI always has fatal warnings enabled.

Using the sbt build

Once you've started an sbt session you can run one of the core commands:

  • compile compiles all sub-projects (library, reflect, compiler, scaladoc, etc)
  • scala / scalac run the REPL / compiler directly from sbt (accept options / arguments)
  • enableOptimizer reloads the build with the Scala optimizer enabled. Our releases are built this way. Enable this when working on compiler performance improvements. When the optimizer is enabled the build will be slower and incremental builds can be incorrect.
  • setupPublishCore runs enableOptimizer and configures a version number based on the current Git SHA. Often used as part of bootstrapping: sbt setupPublishCore publishLocal && sbt -Dstarr.version=<VERSION> testAll
  • dist/mkBin generates runner scripts (scala, scalac, etc) in build/quick/bin
  • dist/mkPack creates a build in the Scala distribution format in build/pack
  • junit/test runs the JUnit tests; junit/testOnly *Foo runs a subset
  • scalacheck/test runs the ScalaCheck tests; use scalacheck/testOnly to run a subset
  • partest runs partest tests (accepts options, try partest --help)
  • publishLocal publishes a distribution locally (can be used as scalaVersion in other sbt projects; see the example after this list)
    • Optionally set baseVersionSuffix := "bin-abcd123-SNAPSHOT" where abcd123 is the git hash of the revision being published. You can also use something custom like "bin-mypatch". This changes the version number from 2.13.2-SNAPSHOT to something more stable (2.13.2-bin-abcd123-SNAPSHOT).
    • Note that the -bin string marks the version as binary compatible. Using it in sbt will cause the scalaBinaryVersion to be 2.13. If the version is not binary compatible, we recommend using -pre, e.g., 2.14.0-pre-abcd123-SNAPSHOT.
    • Optionally set ThisBuild / Compile / packageDoc / publishArtifact := false to skip generating / publishing API docs (speeds up the process).
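
For example, after publishing locally with a custom suffix, a downstream sbt project can pick up the build by version number alone (the suffix is illustrative; publishLocal writes to the local Ivy repository, which sbt resolves by default):

$ sbt
> set baseVersionSuffix := "bin-mypatch"
> publishLocal

and then, in the downstream project:

$ sbt
> set scalaVersion := "2.13.2-bin-mypatch-SNAPSHOT"
> console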

If a command results in an error message like a module is not authorized to depend on itself, it may be that a global sbt plugin is causing a cyclical dependency. Try disabling global sbt plugins (perhaps by temporarily commenting them out in ~/.sbt/1.0/plugins/plugins.sbt).

Sandbox

We recommend keeping local test files in the sandbox directory, which is listed in the .gitignore of the Scala repo.

Incremental compilation

Note that sbt's incremental compilation is often too coarse for the Scala compiler codebase and re-compiles too many files, resulting in long build times (check sbt#1104 for progress on that front). In the meantime you can:

  • Use IntelliJ IDEA for incremental compiles (see IDE Setup below) - its incremental compiler is a bit less conservative, but usually correct.

IDE setup

We suggest using IntelliJ IDEA (see src/intellij/README.md).

Metals may also work, but we don't yet have instructions or sample configuration for that. A pull request in this area would be exceedingly welcome. In the meantime, we are collecting guidance at scala/scala-dev#668.

In order to use IntelliJ's incremental compiler:

  • run dist/mkBin in sbt to get a build and the runner scripts in build/quick/bin
  • run "Build" - "Make Project" in IntelliJ

Now you can edit and build in IntelliJ and use the scripts (compiler, REPL) to directly test your changes. You can also run the scala, scalac and partest commands in sbt. Enable "Ant mode" to prevent sbt's incremental compiler from re-compiling (too many) files before each partest invocation.

Coding guidelines

Our guidelines for contributing are explained in CONTRIBUTING.md. It contains useful information on our coding standards, testing, documentation, how we use git and GitHub and how to get your code reviewed.


Scala CI

Build Status

Once you submit a PR your commits will be automatically tested by the Scala CI.

Our CI setup is always evolving. See scala/scala-dev#751 for more details on how things currently work and how we expect they might change.

If you see a spurious failure on Jenkins, you can post /rebuild as a PR comment. The scabot README lists all available commands.

If you'd like to test your patch before having everything polished for review, you can have Travis CI build your branch (make sure you have a fork and have Travis CI enabled for branch builds on it first, and then push your branch). Also feel free to submit a draft PR. In case your draft branch contains a large number of commits (that you didn't clean up / squash yet for review), consider adding [ci: last-only] to the PR title. That way only the last commit will be tested, saving some energy and CI resources. Note that inactive draft PRs will be closed eventually, which does not mean the change is being rejected.

CI performs a compiler bootstrap. The first task, validatePublishCore, publishes a build of your commit to the temporary repository https://scala-ci.typesafe.com/artifactory/scala-pr-validation-snapshots. Note that this build is not yet bootstrapped; its bytecode is built using the current STARR. The version number is 2.13.2-bin-abcd123-SNAPSHOT where abcd123 is the commit hash. For binary incompatible builds, the version number is 2.14.0-pre-abcd123-SNAPSHOT.

You can use Scala builds in the validation repository locally by adding a resolver and specifying the corresponding scalaVersion:

$ sbt
> set resolvers += "pr" at "https://scala-ci.typesafe.com/artifactory/scala-pr-validation-snapshots/"
> set scalaVersion := "2.13.2-bin-abcd123-SNAPSHOT"
> console

"Nightly" builds

The Scala CI builds nightly download releases and publishes them to https://scala-ci.typesafe.com/artifactory/scala-integration/.

Using a nightly build in sbt is explained in this Stack Overflow answer.
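
In short, it amounts to adding the resolver and picking a version (the version string below is illustrative; use one that actually exists in the repository listing):

$ sbt
> set resolvers += "scala-integration" at "https://scala-ci.typesafe.com/artifactory/scala-integration/"
> set scalaVersion := "2.13.2-bin-abcd123"
> console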

Although we casually refer to these as "nightly" builds, they aren't actually built nightly, but "mergely". That is to say, a build is published for every merged PR.

Scala CI internals

The Scala CI runs as a Jenkins instance on scala-ci.typesafe.com, configured by a chef cookbook at scala/scala-jenkins-infra.

The build bot that watches PRs, triggers testing builds and applies the "reviewed" label after an LGTM comment is in the scala/scabot repo.

Community build

The Scala community build is an important method for testing Scala releases. A community build can be launched for any Scala commit, even before the commit's PR has been merged. That commit is then used to build a large number of open-source projects from source and run their test suites.

To request a community build run on your PR, just ask in a comment on the PR and a Scala team member (probably @SethTisue) will take care of it. (details)

Community builds run on the Scala Jenkins instance. The jobs are named ..-integrate-community-build. See the scala/community-builds repo.

scala-collection-contrib's People

Contributors

adamgfraser, adriaanm, dwijnand, eed3si9n, epronovost, erikvanoosten, j-mie6, jeantil, jiminhsieh, joshlemer, julienrf, linasm, lrytz, nthportal, philippus, povder, scala-steward, sethtisue, som-snytt, szeiger, toromtomtom


scala-collection-contrib's Issues

New Release

Would it be possible to get a new release pushed out for the Scala Native support? Thanks.

OSGi support is missing

The problem is split packages. If this code wasn't under scala.collection, we'd be fine. See #51 (and linked tickets) for the gory details.

We (the core Scala team) don't plan to tackle this ourselves, but we're open to receiving a pull request on it.

comment removed from build.sbt:

    // TODO: osgi settings; not trivial because of split packages.                                                       
    // See https://github.com/scala/scala-collection-compat/pull/226                                                     
    // and https://github.com/scala/scala-collection-contrib/issues/51                                                   
    // (and issues linked from 51)                                                                                       
    // OsgiKeys.exportPackage := Seq(s"scala.collection.*;version=${version.value}"),                                    

Law-test collections implemented in contrib

We should reuse the property-based tests written in the scalacheck sub-project. We will have to adapt the build a bit because it is currently not cross-compatible with dotty and Scala.js.

Release for Scala 2.13.0

All tests pass when using Scala 2.13.0. Does this mean that this project is ready to be released?

Change values returned from SortedMultiDict to be a SortedSet[V]

Considering that the distinction between a SortedMultiDict and a MultiDict is the sorting, it seems reasonable to me that the returned values should also retain the relative ordering in which they were added to the collection.

Scaladoc: https://www.javadoc.io/doc/org.scala-lang.modules/scala-collection-contrib_2.13/latest/scala/collection/SortedMultiDict.html

As a reference, this is how the type signature is in Guava: https://guava.dev/releases/19.0/api/docs/index.html?com/google/common/collect/ListMultimap.html
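
A sketch of the proposed shape (the get accessor and the SortedSet result type are assumptions about how this could look, not the library's current API):

  import scala.collection.SortedSet

  trait SortedMultiDict[K, V] {
    // today the values for a key come back as a plain Set[V];
    // the proposal is to return them with an ordering instead
    def get(key: K): SortedSet[V]
  }

One design question this raises: a SortedSet[V] needs an Ordering[V], which the current MultiDict hierarchy doesn't require.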

Add shift operations on BitSet

Add bitwise shift (<< and >>) operations to the BitSet collection.

Motivation: << can be emulated with .map(_ + shiftBy) and >> with .collect { case b if b >= shiftBy => b - shiftBy }, but the performance difference is two orders of magnitude.

I have some existing code for it, will be able to submit a PR soon.
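
For reference, here is the emulation mentioned above spelled out as a small sketch (with shiftBy = 2):

  import scala.collection.immutable.BitSet

  val bs = BitSet(1, 3, 5)
  val shiftedLeft  = bs.map(_ + 2)                            // emulates bs << 2: BitSet(3, 5, 7)
  val shiftedRight = bs.collect { case b if b >= 2 => b - 2 } // emulates bs >> 2: BitSet(1, 3)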

MultiDict.add should return this if no changes are made

The add method is defined as follows

  def add(key: K, value: V): MultiDict[K, V] =
    new MultiDict(elems.updatedWith(key) {
      case None     => Some(Set(value))
      case Some(vs) => Some(vs + value)
    })

but elems.updatedWith could easily return the same map if the value is already in the Set[V].
It would help if the signature guaranteed that the same object is returned when nothing has changed, as that would allow one to avoid unnecessary object creation (both in the call and in the calling object).
Otherwise one ends up creating a lot of garbage.

Something like this seems better:

def add(key: K, value: V): MultiDict[K, V] = {
  val newElems = elems.updatedWith(key) {
    case None     => Some(Set(value))
    case Some(vs) => Some(vs + value) // if value is already in vs, vs + value returns vs itself, I believe
  }
  if (newElems eq elems) this
  else new MultiDict(newElems)
}

That is how incl in the standard scala library for HashSet works.

Rename MultiDict back to original MultiMap or SetMultiMap

I think it may make the most sense to rename MultiDict to SetMultiMap, and potentially make MultiMap into a generic trait.

  1. It helps remove the deprecation warning if renamed back to MultiMap.
  2. "MultiMap" is the more universally used terminology for this type of collection (e.g. Guava has ArrayListMultimap, ListMultimap, SetMultimap, etc.; the C++ STL has std::multimap; Rust has MultiMap; Apache Commons has MultiMap).
  3. The existing name also does not support or convey the collection type of the key's values (e.g. it currently implements Set, but there is no way to extend it to use an Array, Vector or List instead).
  4. The code still uses MultiMap in various locations (like in the hashcode!), and the CC[_] is MapFactory (not DictFactory).
  5. There are no other Dictionary terms in any of the Scala collections. It's Map for any Scala code; only the Java converters mention a dictionary. MultiDict isn't Scala idiomatic.

Collection-like operations on tuples

Sometimes I work with homogeneous tuples, which I use more like fixed-size collections. When working in this style, I often define a map extension for tuples. Would something like this be worth adding here?

Example:

  implicit class AnyTuple2Ops[T](v: (T, T)) {
    def map[X](f: T => X): (X, X) = (f(v._1), f(v._2))
  }

  implicit class AnyTuple3Ops[T](v: (T, T, T)) {
    def map[X](f: T => X): (X, X, X) = (f(v._1), f(v._2), f(v._3))
  }

  implicit class AnyTuple4Ops[T](v: (T, T, T, T)) {
    def map[X](f: T => X): (X, X, X, X) = (f(v._1), f(v._2), f(v._3), f(v._4))
  }

Other operations in similar style can be added, like:

  implicit class AnyTuple2Ops[T](v: (T, T)) {
    def combine(that: (T, T))(by: (T, T) => T): (T, T) = (by(v._1, that._1), by(v._2, that._2))
  }
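
Usage, assuming the extensions above are in scope (with the map and combine variants merged into a single implicit class, since both snippets reuse the AnyTuple2Ops name):

  val doubled = (1, 2).map(_ * 2)              // (2, 4)
  val summed  = (1, 2).combine((3, 4))(_ + _)  // (4, 6)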

MultiDict add many values at once for a single key

Hi, I've recently started using mutable.MultiDict, and I often find myself using this pattern.

iterable.foreach(v => dict.addOne((k, v)))

Could we add a function to add many values all at once for a single key instead of one at a time?

I was thinking of something like the following:

// mutable
  def addMany(key: K, values: Iterable[V]): this.type = {
    elems.updateWith(key) {
      case None     => Some(Set.from(values))
      case Some(vs) => Some(vs ++= values)
    }
    this
  }
// immutable
  def addMany(key: K, values: Iterable[V]): MultiDict[K, V] =
    new MultiDict(elems.updatedWith(key) {
      case None     => Some(Set.from(values))
      case Some(vs) => Some(vs ++ values)
    })

We might also want to skip the update when values is empty.

Document iteration order of multi-dict

The iteration order when iterating over a MultiDict (e.g., with map or values) is not specified anywhere in the documentation that I can find.

I suspect that the order is implementation dependent, which is fine. If that is the case, it should be documented.

In my current use case, I am hoping that the iteration order is the insertion order. Since I don't know the order, I don't know if I can use a MultiDict[A, B] or if I need to work with a Map[A, List[B]] which would be more cumbersome.

Possible bug in implementation of `MapDecorator.mergeByKeyWith`.

Seen in

0.3.0

Reproduction

Scala worksheet, so assertions are verified by eye.
Using mergeByKey for convenience, knowing it delegates to mergeByKeyWith.

import scala.collection.decorators.mapDecorator

val arthur = "arthur.txt"

val tyson = "tyson.txt"

val sandra = "sandra.txt"

val allKeys = Set(arthur, tyson, sandra)

val sharedValue = 1

val ourChanges = Map(
  (
    arthur,
    sharedValue
  ),
  (
    tyson,
    2
  )
)

val theirChanges = Map(
  (
    arthur,
    sharedValue
  ),
  (
    sandra,
    3
  )
)

ourChanges -> theirChanges

ourChanges.mergeByKey(theirChanges)

// Expect all the keys to appear in an outer join, and they do, good...
ourChanges.mergeByKey(theirChanges).keys == allKeys

theirChanges.mergeByKey(ourChanges)

// Expect all the keys to appear in an outer join, and they do, good...
theirChanges.mergeByKey(ourChanges).keys == allKeys

// Expect the same associated values to appear in the join taken either way around, albeit swapped around and not necessarily in the same key order. They are, good...
ourChanges
  .mergeByKey(theirChanges)
  .values
  .map(_.swap)
  .toList
  .sorted
  .sameElements(theirChanges.mergeByKey(ourChanges).values.toList.sorted)

// Expect these to be equal, and they are, good...
ourChanges.mergeByKey(theirChanges).keySet == theirChanges
  .mergeByKey(ourChanges)
  .keys

val theirChangesRedux = Map(
  (
    arthur,
    sharedValue
  ),
  (
    sandra,
    sharedValue // <<<<------- Ahem!
  )
)

ourChanges -> theirChangesRedux

ourChanges.mergeByKey(theirChangesRedux)

// Expect all the keys to appear in an outer join, but they don't...
ourChanges.mergeByKey(theirChangesRedux).keys == allKeys

theirChangesRedux.mergeByKey(ourChanges)

// Expect all the keys to appear in an outer join, and they do, good...
theirChangesRedux.mergeByKey(ourChanges).keys == allKeys

// Expect the same associated values to appear in the join taken either way around, albeit swapped around and not necessarily in the same key order. They aren't...
ourChanges
  .mergeByKey(theirChangesRedux)
  .values
  .map(_.swap)
  .toList
  .sorted
  .sameElements(theirChangesRedux.mergeByKey(ourChanges).values.toList.sorted)

// Expect these to be equal, but they aren't...
ourChanges.mergeByKey(theirChangesRedux).keySet == theirChangesRedux
  .mergeByKey(ourChanges)
  .keys

Expectation

The Scaladoc for mergeByKey states:

Perform a full outer join of this and that.
Equivalent to mergeByKeyWith(that) { case any => any }.

So all the keys from ourChanges and theirChanges should appear in the resulting map, albeit not necessarily in the same order if the specialised map implementations are created (which they are in the example above). The corresponding values from one or both of ourChanges and theirChanges should appear in the associated pairs, wrapped in Some.

Observed

ourChanges.mergeByKey(theirChangesRedux) drops key sandra, whereas theirChangesRedux.mergeByKey(ourChanges) preserves all keys.

Discussion

There is a set, traversed, that is used to prevent duplicate key processing when switching from iterating over coll to iterating over other. It is, however, typed as a set over the value type W of other, and is populated with values, not keys.

In the example above, the use of a shared value triggers the problem, causing the entry for key sandra to be dropped.
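
A minimal standalone illustration of that failure mode (not the library code itself): deduplicating by value drops the second key that happens to share a value.

  import scala.collection.mutable

  val seen    = mutable.Set.empty[Int]                        // value-typed, as in the suspected typo
  val entries = List("arthur.txt" -> 1, "sandra.txt" -> 1)    // two keys sharing a value
  val kept    = entries.filter { case (_, v) => seen.add(v) } // Set.add returns false for a duplicate
  // kept == List(("arthur.txt", 1)) -- "sandra.txt" is silently dropped, as observed above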

I am assuming this was a typo and not the desired behaviour.

If my diagnosis is correct, I can submit a PR with a proper test and the change in implementation to make it pass.

I'll wait until others chime in in case I've misinterpreted the original intent...

Eliminate usages of `ImmutableBuilder`

ImmutableBuilder is currently being used as the default builder implementation for:

  • immutable.MultiDict
  • immutable.SortedMultiDict
  • immutable.MultiSet
  • immutable.SortedMultiSet

However, it should be possible, for the sake of performance, to build mutating versions of these, which would mostly be delegating to already-implemented mutating builders of Maps in the standard library (i.e. scala.collection.immutable.{MapBuilderImpl, HashMapBuilder})
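
A minimal sketch of the idea, assuming a builder that buffers entries mutably and builds the immutable result once at the end (illustrative only; the real change would presumably delegate to the standard library's map builders as described above):

  import scala.collection.immutable.MultiDict
  import scala.collection.mutable

  final class MultiDictBuilder[K, V] extends mutable.Builder[(K, V), MultiDict[K, V]] {
    // buffer entries mutably instead of rebuilding an immutable MultiDict
    // on every addOne, as ImmutableBuilder does today
    private val buf = mutable.ListBuffer.empty[(K, V)]
    def addOne(elem: (K, V)): this.type = { buf += elem; this }
    def clear(): Unit = buf.clear()
    def result(): MultiDict[K, V] = MultiDict.from(buf)
  }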

Support scala.js

It would be good to make these classes available in scala.js.

Set.permutations

I would love to have permutations on Sets.

E.g. Set(1,2,3).permutations == Iterator(Set(1,2,3), Set(1,2), Set(1,3), Set(2,3), Set(1), Set(2), Set(3), Set.empty)
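
Note that the expected output above is the powerset rather than permutations in the List.permutations sense; the standard library already produces it (in some order) via subsets:

  val allSubsets: Iterator[Set[Int]] = Set(1, 2, 3).subsets()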

Project does not correctly import into IntelliJ

The project loads into IntelliJ, but then for some reason IntelliJ cannot understand that the tests require JUnit as a dependency, so everything JUnit-related is marked in red. This is confirmed with multiple fresh imports of the project, across multiple users and OSes (at least on Mac and Linux).

[Screenshot: JUnit references marked in red in IntelliJ]

Add Scala 3 crossbuild

Volunteer?

It probably works with withDottyCompat, but it's better to have a "real" Scala 3 artifact.
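
A sketch of the minimal build change (versions are illustrative):

  // build.sbt -- add a Scala 3 version to the cross-build axis
  crossScalaVersions := Seq("2.13.8", "3.1.2")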

Scala Native Support

This library supports scala-js via cross compilation, is there any chance it can support scala-native-0.4.x as well, or is there some blocker to it?
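
If the build uses sbt-crossproject for the existing Scala.js cross compilation, adding a Native axis might look roughly like this (a sketch; the project name is illustrative, and the sbt-scala-native plugin would be required):

  // build.sbt
  lazy val collectionContrib = crossProject(JVMPlatform, JSPlatform, NativePlatform)
    .in(file("."))
    .settings(name := "scala-collection-contrib")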

MultiDict and MultiSet should be either abstract traits or concrete implementations

Currently the patterns for MultiDict and MultiSet are different:

  • MultiDict is a concrete collection
  • MultiSet is an abstract data type (trait) with a public concrete implementation MultiSetImpl.

I lean more towards the abstract data type approach of MultiSet, since it is consistent with the rest of the collections. Though the name MultiSetImpl/BagImpl isn't very good. Perhaps CountedMultiSet/CountedBag extends MultiSet/Bag?

Add `between`, `betweenBy` functions to Seq

It would be nice to have a function similar to the SQL BETWEEN operator that can filter elements of a Seq that fall within some range. This is not an uncommon task when working with statistics, time series, location data, etc.
We can implement it as an extension on Seq, roughly like this:

extension [T: Ordering](seq: Seq[T])
  def between(a: T, b: T): Seq[T] = seq.filter(e => Ordering[T].gt(e, a) && Ordering[T].lt(e, b))

and then use it like so:

Seq(1, 2, 3, 4, 5, 6, 7, 7, 9, 10).between(4, 7) // will result in Seq(5, 6)

We could also make an extension that would allow filtering sequences of arbitrary product types that don't have an ordering but have some fields that we can use for comparison, similar to the sortBy function, for example:

extension [T](seq: Seq[T])
  def betweenBy[U: Ordering](a: U, b: U)(f: T => U): Seq[T] = seq.filter { e =>
    Ordering[U].gt(f(e), a) && Ordering[U].lt(f(e), b)
  }

locationDataList.betweenBy(hourAgoInstant, Instant.now)(_.timestamp)

IMHO this is much more intuitive, less prone to errors, and nicer to read than using a simple filter function.

P.S. Naming is debatable here, because I could also suggest a between on numeric types to check whether a value is in some range, e.g. 1.between(-1, 5) // returns true. That would be a useful addition too, but might cause confusion if paired with between on sequences.

publish 0.2.0

It's tagged but not published; the staging repos wouldn't close. I'll come back to it.

Link to scaladoc in README

Is it possible to link to the library's scaladoc at the top of the README? It would also be great if specific new types and operations in the README could link to their specific scaladoc.
