yarnpkg / berry
📦🐈 Active development trunk for Yarn ⚡
Home Page: https://yarnpkg.com
License: BSD 2-Clause "Simplified" License
Describe the bug
When I run `yarn build:cli`, I am getting this error: `An unexpected error occurred: "Invalid value type 2:0 in [root]/berry/.yarnrc"`. I don't get this error if I check out earlier commits in this repo.
To Reproduce
Run `yarn build:cli` in your berry directory.
Environment if relevant (please complete the following information):
Prettier is an opinionated code formatter which is widely used in the JavaScript and TypeScript ecosystems. It has a number of important advantages for any software engineering project, documented in fine detail over here: https://prettier.io/docs/en/why-prettier.html
I'll pick out a few parts that I think are most important for this project:
Enforcing a code style
Prettier is the only 100% automatic way to enforce code style requirements. Linters can get most of the way there, but can have trouble folding expressions over multiple lines. Prettier eliminates the need to even think about style during code reviews.
Lowering the bar for new contributors
Learning about the particular idiosyncrasies of a project's code style can be frustrating to new contributors if it is significantly different from what they are accustomed to, raising the bar higher than it needs to be. In addition, for the very large contingent of JS/TS devs who already use prettier every day, having to manually format code can feel like a pointless burden.
In the excellent introduction to the future of Yarn the rationale for moving to TypeScript was given as lowering the bar for contributors. Prettier would be another clear win for that cause.
It will save everybody time when actually writing code
It's staggering how much time one saves with prettier. Just hit a key combination and your code is formatted exactly how it should be ✨. In addition it prevents the mental context switching that happens when formatting code manually. This helps one to focus on solving the actual problems at hand.
Describe the drawbacks of your solution
The main drawback is that the first commit after adding prettier will be a pretty huge diff. In addition, people who don't currently use prettier will need to configure their editors to use it, and to become accustomed to its particular usage patterns and ways of formatting code. In my experience people start to feel like they can't live without it within a week :)
Prettier's output is not always as 'pretty' as one can achieve manually, but it is always readable. Subjectively speaking, the time saved is well worth it. In the extremely rare case that something should be formatted in a particular way, you can tell prettier to ignore particular AST nodes with inline comments.
Thanks for your time ❤️ I cannot wait to start using berry!
Describe the bug
When adding an unknown option to `.yarnrc`, yarn complains about the option and tells the user to run `yarn config -v` to see which options are available. However, running this command results in the same error. Usually I'm quite in love with fail-fast behaviours, but in this case an exception might be nice.
To Reproduce
yarn config -v
Screenshots
Stephans-MacBook-Pro:redacted stephan$ yarn
Error: Unrecognized configuration settings found: npmAlwaysAuth - run "yarn config -v" to see the list of settings supported in Yarn (in /Users/stephan/redacted/.yarnrc)
Stephans-MacBook-Pro:redacted stephan$ yarn config -v
Error: Unrecognized configuration settings found: npmAlwaysAuth - run "yarn config -v" to see the list of settings supported in Yarn (in /Users/stephan/redacted/.yarnrc)
Environment if relevant (please complete the following information):
`yarn --version` shows v2.0.0; I downloaded this using `yarn policies set-version berry`.
Describe the bug
When installing `sqlite3`, we get an error at the Link step.
To Reproduce
You can clone Embraser01/scratch-repo and run `yarn install`.
Environment if relevant (please complete the following information):
Additional context
stdout:
➤ BR0013: │ wrappy@npm:1.0.2 can't be found in the cache and will be fetched from the remote registry
➤ BR0013: │ yallist@npm:3.0.3 can't be found in the cache and will be fetched from the remote registry
➤ BR0000: └ Completed in 8.88s
➤ BR0000: ┌ Link step
➤ BR0001: │ Error: ENOTDIR: not a directory, scandir 'node_modules/sqlite3/cloudformation'
    at ZipFS.readdirSync (N:\scratch-repo\.yarn\releases\yarn-berry.js:2153:33)
    at ZipFS.readdirPromise (N:\scratch-repo\.yarn\releases\yarn-berry.js:2145:21)
    at NodeFS.copyPromise (N:\scratch-repo\.yarn\releases\yarn-berry.js:825:51)
➤ BR0000: └ Completed in 0.03s
➤ BR0000: Failed with errors in 11.82s
It seems that `ZipFS` doesn't consider an empty directory to be a directory, but I'm not sure! Could it be because in `ZipFS#registerListing` we add a directory to `this.listings` from its child elements (so with an empty directory, there are no child elements to add it)?
Being curious, I just tried to get started from yarn 1.13.0:
yarn policies set-version nightly
yarn policies set-version berry
then
yarn init
Unfortunately I get this error message:
C:\temp\yarn_berry>yarn
Error: Couldn't find the PnP package map at the root of the project - run an install to generate it
at PnpLinker.findPackageLocator (C:\temp\yarn_berry\.yarn\releases\yarn-berry.js:115987:19)
at Project.findLocatorForLocation (C:\temp\yarn_berry\.yarn\releases\yarn-berry.js:33834:42)
at Function.find (C:\temp\yarn_berry\.yarn\releases\yarn-berry.js:33667:39)
yarn install, yarn add typescript - same error.
Here's the output of `yarn config -v` - there appear to be some strange paths in it:
C:\temp\yarn_berry>yarn config -v
bstatePath Path of the file where the current state of the built packages must be stored 'C:\temp\yarn_berry/.yarn/build-state.yml'
cacheFolder Folder where the cache files must be written 'C:\temp\yarn_berry/.yarn/cache'
defaultLanguageName Default language mode that should be used when a package doesn't offer any insight 'node'
defaultProtocol Default resolution protocol used when resolving pure semver and tag ranges 'npm:'
enableColors If true, the CLI is allowed to use colors in its output true
enableNetwork If false, the package manager will refuse to use the network if required to true
enableScripts If true, packages are allowed to have install scripts by default true
enableTimers If true, the CLI is allowed to print the time spent executing commands true
frozenInstalls If true, prevents the install command from modifying the lockfile false
globalFolder Folder where are stored the system-wide settings 'C:\temp\yarn_berry/C:\temp\yarn_berry/C:\Users\username\AppData\Local'
httpProxy URL of the http proxy that must be used for outgoing http requests null
httpsProxy URL of the http proxy that must be used for outgoing https requests null
ignorePath If true, the local executable will be ignored when using the global one true
initLicense License used when creating packages via the init command null
initScope Scope used when creating packages via the init command null
initVersion Version used when creating packages via the init command null
lastUpdateCheck Last timestamp we checked whether new Yarn versions were available '1552409146419'
lockfilePath Path of the file where the dependency tree must be stored 'C:\temp\yarn_berry/yarn.lock'
npmRegistryServer URL of the selected npm registry (note: npm enterprise isn't supported) 'https://registry.yarnpkg.com'
pnpDataPath Path of the file where the PnP data (used by the loader) must be written 'C:\temp\yarn_berry/.pnp.data.json'
pnpEnableInlining If true, the PnP data will be inlined along with the generated loader true
pnpIgnorePattern Regex defining a pattern of files that should use the classic resolution null
pnpPath Path of the file where the PnP loader must be written 'C:\temp\yarn_berry/.pnp.js'
pnpShebang String to prepend to the generated PnP script '#!/usr/bin/env node'
pnpUnpluggedFolder Folder where the unplugged packages must be stored 'C:\temp\yarn_berry/.yarn/unplugged'
preferInteractive If true, the CLI will automatically use the interactive mode when called from a TTY false
rcFilename Name of the files where the configuration can be found '.yarnrc'
virtualFolder Folder where the symlinks generated for virtual packages must be written 'C:\temp\yarn_berry/.yarn/virtual'
yarnPath Path to the local executable that must be used over the global one 'C:\temp\yarn_berry/C:\temp\yarn_berry/.yarn\releases\yarn-berry.js'
Environment if relevant (please complete the following information):
What package is covered by this investigation?
husky
Investigation report
Husky uses `run-node` in order to execute the "right" version of Node. Since it only contains a bash script that cannot be executed straight from the archive, this package needs to be unplugged (why don't they use `process.execPath`?).
Husky accesses the `run-node` binary through `require.resolve('.bin/run-node')`, which is invalid (first because it uses `.bin` as a package name, which it isn't, and second because it references the `.bin` folder, which is an implementation detail). It should instead use the following construct: `require.resolve('run-node/run-node')`.
I had trouble googling the link to the new docs or finding it via a search here on GitHub so I thought I'd create this issue for better discoverability. Not sure if you wanted to add it to README at this point but I'd be happy to submit a PR if you wanted.
The v2 documentation (WIP) is published here: https://yarnpkg.github.io/berry/
Background
Several feature requests repeatedly pop up for using yarn with workspaces. Most solutions involve using external packages or weird script hacks, voiding some of the merit of yarn workspaces. It would be nice if `yarn workspaces` commands had slightly more flexibility to remove external dependencies.
Desired Features
List of `yarn workspaces foreach` (previously `yarn workspaces`) desired features (with potential API):
- `--parallel` Run scripts in parallel instead of sequentially
- `--skip-missing` Skip packages that don't have the specified script
- `--prefix` Prefix / colorize script output per package
Drawbacks
`yarn workspaces foreach` is a dead-simple command that simply forwards an arbitrary yarn command to each package. It has no knowledge of what sort of command is being executed (`npm run`, etc), so detecting missing scripts from here feels messy.
Additionally, I'm not sure how to `await` in parallel. Thinking about `Promise.all()`, assuming the Promise API is available within Yarn(?).
We'll need a unique way to isolate missing script errors from regular script errors.
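A minimal sketch of what `--parallel` could look like (all names here are hypothetical stand-ins, not the actual Yarn plugin API):

```typescript
// Hypothetical sketch: run one async task per workspace, either one at
// a time or all at once via Promise.all.
type Workspace = {name: string};

// Stand-in for spawning `yarn run <script>` inside the workspace folder.
async function runScript(workspace: Workspace, script: string): Promise<string> {
  return `${workspace.name}: ${script}`;
}

async function foreachRun(workspaces: Workspace[], script: string, parallel: boolean): Promise<string[]> {
  if (parallel)
    // Every script starts immediately; we wait for the whole batch.
    return Promise.all(workspaces.map(workspace => runScript(workspace, script)));

  const results: string[] = [];
  for (const workspace of workspaces)
    results.push(await runScript(workspace, script)); // one at a time

  return results;
}
```

The sequential branch preserves today's behavior; the `Promise.all` branch is where per-package output prefixing (`--prefix`) would become necessary, since the outputs interleave.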
Describe alternatives you've considered
- `Promise.all()` seems promising if the Promise API is available in yarn
- `yarn run [script] --skip-missing` might be a better place for the flag. While useless when used directly, it allows us to simply do: `yarn workspaces foreach [workspace] run [script] --skip-missing`
- `--skip-missing` when calling the `yarn workspaces foreach` command. This could potentially be stored in a config, but is that really a good idea?
Additional context
Trying to adopt Yarn Workspaces for my new project, lots of redundant issues for these features exist on the v1 repo.
I'd be willing to implement a solution if I could get some suggestions/consensus on how to tackle the problem. I'm not sure the best way to go about doing it right now.
Describe the user story
I'm a plugin author, I want to distribute my plugin without requiring my users to download a different Yarn bundle. I'm a plugin user, I want to use a specific plugin without switching to a different bundle.
Describe the solution you'd like
Yarn should automatically load the plugins located in a specific folder. Those plugins should also be manageable through a dedicated CLI (that would be part of a `plugin-dlopen` plugin):
`yarn plugin add <url>` would download the plugin located at the specified url and store it in the local project plugin folder. Adding a plugin with the same name as one that already exists would remove the old one. This command is very simple and wouldn't use npm or similar (at least not in the first iteration) - it would just download a single file.
`yarn plugin remove [... names]` would find and remove multiple plugins from the local plugin folder, matching them by name. Removing a plugin that doesn't exist would throw an exception.
`yarn plugin list` would print the list of plugins - both those that are embedded within the current bundle and those that have been dynamically loaded.
Implementing this feature also requires writing the bundler code (in `@berry/builder`) that allows compiling plugins independently from the main bundle. The subtlety here is that some packages from the plugin (in particular `@berry/core`, but also `@berry/fslib`, `@berry/parsers`, `@berry/pnp`, or `@berry/shell`) will have to be directly loaded from the main bundle - making it truly "dynamic linking".
Describe the drawbacks of your solution
This feature is relatively safe and minimal, and as such doesn't have many drawbacks in itself. Further iterations might be needed in order to solve other potential issues:
Should we support non-url installation schemes? For example `yarn plugin add arcanis/plugin-cpp`, which could automatically map to a GitHub url somehow.
Should a plugin be able to advertise a bundle version that must be matched in order for it to work properly? What should happen if the version isn't satisfied?
Describe the user story
I'm a visitor. When I go on the constraints page I'd like to see some code highlighting to better understand the type of each symbol - especially since it's a language I don't know!
Describe the solution you'd like
The blocks of code should be colored depending on the language.
Additional context
This is the kind of block I'm talking about:
Describe the bug
Following the installation instructions creates a noop script
To Reproduce
> git clone git@github.com:KwanMan/yarn-pnp-workspaces.git
> cd yarn-pnp-workspaces/with-pnp
> yarn policies set-version nightly # as per docs
> yarn policies set-version berry # as per docs
> cat .yarn/releases/yarn-berry.js
false
Expected behavior
There should be more code in .yarn/releases/yarn-berry.js, and running yarn commands such as `yarn --version` from the project folder should work.
Environment if relevant (please complete the following information):
Additional context
Following the instructions also causes yarn to stop working altogether within the folder, until the changes in .yarnrc are removed.
Describe the user story
I'm a developer, and each time I run `yarn stage -c` it creates a new commit with a generic name ("updates the project settings"), which makes it hard to figure out what actually changed.
Describe the solution you'd like
The generated commit name for `yarn stage -c` should be contextual. Some examples:
Describe the drawbacks of your solution
Figuring out what happens isn't easy and will require some extra logic to be added to the VCS drivers. The benefits are clear, however.
Describe alternatives you've considered
We could keep a generic title, but that would likely make it harder to understand the history of a project. Given that `yarn stage` is precisely made to make managing a project history more efficient than it would be without the command, we must implement this feature.
Additional context
Implementing this behavior will require asking the VCS for the previous version of the `package.json` files in the project, and diffing their parsed content (rather than just getting the diff provided by Git).
Describe the user story
I'm a project manager and I want to prevent my employees from adding code distributed under untested licenses (e.g. WTFPL) to our codebase.
Describe the solution you'd like
I'd like to set a list of accepted (or rejected) SPDX license strings in my `.yarnrc` that would cause a validation error at install time. For example:
valid-licenses:
- MIT
- BSD
Note that this would also require validating transitive dependencies.
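The check itself could be sketched like this (the `valid-licenses` setting comes from this issue; the shape of the hook it would plug into is an assumption):

```typescript
// Hypothetical validation pass: every resolved manifest - transitive
// dependencies included - must declare a license from the allowlist.
type Manifest = {name: string; license?: string};

function findRejectedPackages(manifests: Manifest[], validLicenses: string[]): string[] {
  return manifests
    .filter(manifest => !manifest.license || !validLicenses.includes(manifest.license))
    .map(manifest => manifest.name);
}
```

For example, `findRejectedPackages([{name: "left-pad", license: "WTFPL"}], ["MIT", "BSD"])` would flag `left-pad`; packages with no declared license at all would also be flagged, which is probably the safe default.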
Describe the drawbacks of your solution
While it might be important for various entities, I don't think it belongs in the "core". In that sense I'm pretty sure it should be distributed as a separate plugin (either developed on this repository or by an interested third-party).
It would also require a better plugin workflow than what we currently have (we would definitely need this `yarn plugin add` command!).
Describe alternatives you've considered
The exact validation mechanism is TBD as while there are multiple ways to achieve a similar result, all have their own drawbacks. In particular:
We could pass a `validate` named parameter to the resolvers, which they would have the responsibility to call themselves before returning, passing the manifest of the package they're currently handling. This `validate` function wouldn't be set in most cases, except for `yarn add`, which would implement it by calling the project validation hook (I think I like this option the most).
We could simply make it an extra command whose sole purpose would be to check the versions. I think it makes sense to do that regardless of the case (it might be especially important when changing the set of accepted dependencies, since in those cases the "validateNewPackage" hook wouldn't trigger), but the overall experience isn't great since it requires an extra command to be run after each install.
We could implement it as a validation hook that would be called from `Project` with the `Package` value returned by the resolvers, but that would require us to add the license field to the `Package` type - and thus serialize it within `yarn.lock`. It's not clear how scalable this is - if we need to add all the fields from the manifests to the `yarn.lock`, it kinda defeats the purpose (this might be alleviated if the plugins were able to specify a list of fields that need to be persisted, but even then it's dubious).
We could store the `Manifest` instance within the returned `Package` (at least for every resolver where it makes sense) and instruct the lockfile serializer not to save this field (then we would still have the validation hook I mentioned in `Project`, which would be able to access all values from the manifest). While that would solve the lockfile format scalability issue, it would make the behavior different from one execution to the next (sometimes the manifest for a package would be there and sometimes it wouldn't), and I don't like that very much.
We could add a validation hook to the cache ("if a file should be added, first validate it"), but that wouldn't work well with the global cache approach. At the same time, we don't want to validate the packages every time as that would be a waste of resources.
We could make the `resolveEverything` function accept a list argument that would be populated with the locators that couldn't be resolved from the lockfile (this is a bit tricky because the resolvers don't return this information at the moment). Then after calling `fetchEverything` we would iterate over those new locators and trigger a validation hook. The downside is that rejected packages (+ dependencies + dependents) would still have reached the cache, unless we somehow manage to remove them.
There's a reasonable case that this could be implemented through the constraints engine by exposing all the packages in the dependency tree (rather than just the workspaces). In practice I have some concerns it might grow the fact list exponentially, although I don't have numbers. It would also be an after-the-fact validation, which I'd like to avoid (it wouldn't be a good experience to use a package then at PR-time you figure out you can't use it).
Additional context
I received the stats from the npm survey and, interestingly enough, licenses were prominently featured:
Describe the context
In Yarn 2, we use something called "portable paths". The idea is simple: instead of using different APIs depending on the underlying filesystem (and having to deal with potentially different separators, etc), we always work with posix paths - even on Windows! On those platforms we simply work with a slightly different path than the regular ones - `/d:/foo` instead of `d:\foo`, for example. Then, before they get sent to the actual consumers that need native paths, we convert them back into their native form.
It works pretty well, but it's not always clear which APIs expect portable paths and which expect native paths. We should fix that by adding types to `@berry/fslib`.
Describe the solution you'd like
The `@berry/fslib` package should export a new type called `PortablePath`. This type should use the same trick as the one we use to type our hashes, which would prevent portable paths from being silently cast to and from native paths. The functions that manipulate paths would then be updated to match the true type they expect rather than the generic `string` type (we would keep `string`, but only for native paths).
In order to do this, we'll also have to turn `FakeFS` into `FakeFS<PathType>`. Most implementations (for example `ZipFS`) would extend `FakeFS<PortablePath>`, but some of them (for example `PosixFS`) would extend `FakeFS<string>`.
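The "same trick as our hashes" can be sketched with a branded type (a minimal sketch - the type name comes from this issue, while the conversion helpers are assumed, and the real `@berry/fslib` code may differ):

```typescript
// The phantom property exists only at the type level: a plain string
// can't be passed where a PortablePath is expected without an explicit
// conversion, so path mixups are caught at compile time.
type PortablePath = string & {__portablePath: true};
type NativePath = string;

// d:\foo -> /d:/foo (posix paths pass through unchanged)
function toPortablePath(p: NativePath): PortablePath {
  return p.replace(/^([a-zA-Z]):/, `/$1:`).replace(/\\/g, `/`) as PortablePath;
}

// /d:/foo -> d:\foo (only Windows-style portable paths are rewritten)
function fromPortablePath(p: PortablePath): NativePath {
  const match = p.match(/^\/([a-zA-Z]:.*)$/);
  return match ? match[1].replace(/\//g, `\\`) : p;
}
```

Since the brand is purely a compile-time construct, there is zero runtime cost: after type checking, a `PortablePath` is just a string.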
Describe the drawbacks of your solution
It's a significant amount of changes in the source code, and might lead to some conflicts before merge, but at the same time the changes are fairly well understood and carry very little risk - since we're talking about the type system and not the runtime implementation, if it typechecks, it works.
Describe alternatives you've considered
We could keep things as they are, but we've already made mistakes multiple times with a path being modified when it shouldn't have been (or the other way around), and it's definitely something our type system should prevent.
Describe the bug
It seems like the agents don't boot anymore. We didn't change anything as far as I know 🤔
Would you have an idea of who we could speak with to figure out the problem, @kaylangan?
To Reproduce
Open a PR against master.
Screenshots
https://dev.azure.com/yarnpkg/berry/_build/results?buildId=944
Environment if relevant (please complete the following information):
In the Generic packages section, the link Plug'n'Play-compatible returns 404 Page not found.
It's not obvious to me what the link target should be. Maybe https://yarnpkg.com/en/docs/pnp?
Describe the bug
The constraint feature implemented by the `plugin-constraints` package doesn't insert devDependencies into the fact database, so they can't be matched by the rules.
To Reproduce
package.json:
{
"devDependencies": {
"lodash": "*"
}
}
constraints.pro:
gen_invalid_dependency(WorkspaceCwd, DependencyName, _) :-
workspace_has_dependency(WorkspaceCwd, DependencyName).
Running `yarn constraints check` should report an error (since the constraints don't allow any dependency to pass), but since `lodash` is declared as a devDependency, nothing is reported.
Additional context
The fix would be to change the signature of `workspace_has_dependency` in order to add the dependency type. This would also be useful for peer dependencies (and would allow building rules such as "reject a devDependency if the package also lists a dependency of the same name").
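The proposed fix could look roughly like this on the fact-emitting side (the predicate name is from this issue; the emitter shape is an assumption about plugin-constraints internals, not its actual code):

```typescript
// Hypothetical fact emitter: each dependency contributes a
// workspace_has_dependency/3 fact carrying its type, so rules can now
// discriminate between dependencies and devDependencies.
type WorkspaceInfo = {
  cwd: string;
  dependencies: Record<string, string>;
  devDependencies: Record<string, string>;
};

function emitDependencyFacts(workspace: WorkspaceInfo): string[] {
  const facts: string[] = [];

  const sets: [string, Record<string, string>][] = [
    [`dependencies`, workspace.dependencies],
    [`devDependencies`, workspace.devDependencies],
  ];

  for (const [type, deps] of sets)
    for (const name of Object.keys(deps))
      facts.push(`workspace_has_dependency('${workspace.cwd}', '${name}', '${type}').`);

  return facts;
}
```

Rules that don't care about the dependency type could still match all three arguments with a wildcard in the last position.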
What package is covered by this investigation?
Describe the goal of the investigation
To figure out what our actions should be going forward. Find out how to provide a safe and sound user experience that protects against name squatting.
Should we move to the GitHub Package Registry as default registry?
I've seen this question here and there, so we probably should discuss it.
My opinion is: I don't think we need to change the default registry anytime soon, unless something changes dramatically on the npm side. There are three reasons why I think we should wait:
The GitHub registry doesn't mirror the packages from npm afaik. So it's not a replacement for the traditional registry.
The GitHub registry uses scopes to define which set of packages belong to it. Those scopes unfortunately conflict with the ones on npm, so switching the default would put users at risk (they would expect to download from npm, but would instead get the GitHub versions).
I tend to have a "wait and see" policy for this kind of large-scale change. Even once we've figured out a way to counterbalance the first two points, we will want to make sure the GitHub registry scales properly before enabling it for everyone.
First-class support
Something we need to consider is: should the GitHub registry be one registry amongst many (in the sense that it would piggy-back on the `npm:` protocol), or have first-class support (with a specific protocol, like `npm+gh:`)?
The first case will likely cause developer experience issues (how to depend on a GitHub package from an npm package?), the second doesn't scale very well if we need to do that for all the registries.
My perception is that we need to follow intent. For all purposes, our users will likely choose to depend on a package from one of the two sets of registry: npm or GitHub. Other registries will, I believe, merely be either 1/ mirrors of the first two, or 2/ private npm instances with specific workflows (which will be able to safely enforce the registry configuration for a given scope, for example).
In this light I'd be in favor of `npm+gh:` being a supported protocol (rather than just configuring the registry hostname in the settings). It wouldn't so much define the target hostname as the set of packages we're expected to download.
Use a specific package from the GitHub registry instead of npm
This would become possible with the `resolutions` field:
{
"resolutions": {
"foo": "npm+gh:^1.2.3"
}
}
Possible action points (please discuss)
Implement a new `npm+gh:` protocol that would inherit from the npm registry, but would instead target the GitHub registry (probably configurable the same way as the regular npm configuration).
Deprecate pure semver dependencies (without protocols). Yarn v2 already supports `npm:^x.y.z`. Npm doesn't, but we can solve that without making changes on their side: pure semver dependencies listed in npm packages can be defined as implicitly using the `npm:` protocol, while pure semver dependencies listed in GitHub packages can be defined as implicitly using the `npm+gh:` protocol.
We would probably need to extend the `resolutions` field in order to be able to change the protocol but not the range. Something like the following would then become possible, forcing Yarn to query the package `foo` from GitHub without modifying its semver range:
{
"resolutions": {
"foo": "npm+gh:..."
}
}
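To make the intent concrete, protocol-based registry selection could be sketched like this (`npm+gh:` is the proposal above, not a shipped protocol, and the hostnames here are assumptions for illustration):

```typescript
// Hypothetical mapping from a descriptor's range protocol to the
// registry that should serve it. The range keeps its semver meaning;
// only the targeted set of packages changes.
function registryForRange(range: string): string {
  if (range.startsWith(`npm+gh:`))
    return `https://npm.pkg.github.com`; // assumed GitHub registry host

  // Pure semver and npm: ranges keep going to the npm registry.
  return `https://registry.yarnpkg.com`;
}
```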
Should we implement a `yarn gh publish` command (via a new `plugin-github-cli` plugin?) that would always send the package to the GitHub registry? It might duplicate `yarn npm publish` 🤔
Paging @yarnpkg/berry, @bnb, @zkochan, @clarkbw for feedback (anyone else from @github interested?)
Describe the user story
I'm an application author and I want to add a dependency to a private package.
Describe the solution you'd like
We should add new configuration settings to plugin-npm: `npmUser`, `npmToken`, and `npmAlwaysAuth`.
We probably should add a new configuration type called `SECRET` that wouldn't appear when running `yarn config` but would appear when running `yarn config -v`. This would be helpful for CI systems that want to validate that the configuration is correctly set.
Describe the drawbacks of your solution
It adds some complexity, especially since it's not clear when the token should or shouldn't be sent.
Our test infrastructure doesn't support private packages at the moment, and emulating this behavior might be relatively complex since we would have to reverse-engineer what the npm registry is doing.
Describe alternatives you've considered
We could decide not to support private packages by default, but the maintenance cost seems low enough that it seems worthwhile.
Hi everyone,
This repository contains the source code for Yarn v2, which is described in more detail in the following issue. It's a major work in progress and, as you can expect, some things might be missing - whether by design or by oversight.
Please rest assured that Yarn v1 won't go anywhere (as shown by us using a different repository for the time being) and will continue to be maintained for the foreseeable future - after all we still use it ourselves!
I'll open a few issues on what I think would be good discussion topics - feel free to chime in and work on them if you feel like it. Yarn is an awesome project that I'm sure will continue to redefine how your projects are set up.
Describe the user story
I'm a package author, and I'd like to publish a package on the npm registry.
Describe the solution you'd like
This is a perfect example for a plugin. The best solution would be to implement `yarn pack` and `yarn publish` as their own plugins - this would allow us to iterate much faster, for example by adding validation checks on the submitted packages (one could imagine a check to make sure that licenses are correct).
Describe the drawbacks of your solution
This is a rare case of a feature with no drawback. Our users simply expect to have `yarn publish`, so there will be no surprise.
Describe alternatives you've considered
Similarly, not many alternatives available.
Describe the user story
I'm a package author, and I want to explicitly list the files that my users are allowed to require. I want to do this in order to prevent them from accessing my private files.
I'm a web architect, and I want a way to import packages installed through Yarn. This currently isn't possible because the Node resolution would require http requests to convert the `lodash` bare specifier into `lodash/index.js`.
Describe the solution you'd like
Packages would have access to a new `entryPoints` field that would list the files that users are allowed to require. If the user makes a `require` call to an unlisted file, the PnP resolver would throw an exception.
As a side effect, because `entryPoints` would list all entry points, we would be able to simplify the Node folder & extension resolution by checking which entry exists within the array rather than by querying the file system - yielding unprecedented runtime resolution speed and opening up the possibility to use the Node resolution within browsers (since no http lookup would be required anymore).
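A sketch of how a resolver could use such a field (`entryPoints` is the proposed field from this issue, not a shipped feature, and the exact shorthand list is an assumption):

```typescript
// Hypothetical resolver step: a require is only honored if the
// requested file - or one of the usual Node shorthands for it -
// appears in the package's declared entryPoints. No file system
// access is needed, which is what makes this browser-friendly.
type PackageEntryPoints = {entryPoints: string[]};

function resolveEntry(pkg: PackageEntryPoints, request: string): string {
  const candidates = [request, `${request}.js`, `${request}/index.js`];

  const match = candidates.find(candidate => pkg.entryPoints.includes(candidate));
  if (typeof match === `undefined`)
    throw new Error(`"${request}" isn't a declared entry point`);

  return match;
}
```

For example, with `entryPoints: ["index.js"]`, a request for `index` resolves to `index.js` purely from the array, while a request for any private file throws.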
Describe the drawbacks of your solution
Since this would affect how the `.pnp.js` file is generated, it would require us to add an additional field to the `Package` type (which would then have to be serialized in the lockfile).
Describe alternatives you've considered
The `entryPoints` field could potentially be a more complex feature that would map a require name to a require path (for example `"corejs/es5": "corejs/builds/es5.js"`). This doesn't look like a good idea, as it would not work under other package managers that don't use this standard.
Additional context
First referenced in yarnpkg/yarn#6945
Describe the user story
I'm a Yarn user and I want to learn more about its inner workings.
Describe the solution you'd like
The missing pages from the documentation should be implemented.
Describe the bug
The integration tests fail if `/tmp/package.json` exists:
✕ Commands › dlx › it should always update the binary between two calls
Temporary fixture folder: /private/var/folders/l6/vfs9p0z15mb_r53cvdpxh8l00000gn/T/tmp-28071lpABFS7ZA0b4
expect(received).resolves.toMatchObject()
Received promise rejected instead of resolved
Rejected to value: [Error: Command failed: /Users/bram/.nvm/versions/node/v10.15.3/bin/node /Volumes/Workspaces/private/berry/packages/acceptance-tests/../../scripts/run-yarn.js dlx -q -p has-bin-entries has-bin-entries-with-relative-require
Error: This command can only be run from within a workspace of your project.
Usage: add [... packages] [-E,--exact] [-T,--tilde] [-D,--dev] [-P,--peer] [-i,--interactive] [-q,--quiet] [--cached]
===== stdout:
```
```
===== stderr:
```
Error: This command can only be run from within a workspace of your project.
Usage: add [... packages] [-E,--exact] [-T,--tilde] [-D,--dev] [-P,--peer] [-i,--interactive] [-q,--quiet] [--cached]
```
]
To Reproduce
echo '{}' > /tmp/package.json
yarn test:integration
The issue only manifests when running the tests. The following works fine:
echo '{}' > /tmp/package.json
mkdir -p /tmp/foo
echo '{}' > /tmp/foo/package.json
cd /tmp/foo
node /path/to/berry/scripts/run-yarn.js dlx -p typescript tsc --init
I offered shiny new tshirts for our contributors, here they are! I just dropped them to the post office, please let me know once you have received them (hopefully a week or two? They come from France).
Note that they aren't exactly the "new" Yarn logo, just a slight variation for the occasion. Many thanks to Natacha for designing it based on my (very vague) requirement of "the Yarn cat with a pop culture reference"!
And a big thanks to @Embraser01, @rally25rs, @bgotink, @Vlasenko, and @deini for their work. I can seriously say this future release wouldn't be nearly as awesome without you all.
(If you read this and want a tshirt as well, I'd be happy to send you one in return for some meaningful contributions! The good first issues and help wanted tags are good sources of inspiration.)
Describe the user story
As a user I want to write constraints while hardcoding as little as possible in the constraints file.
Describe the solution you'd like
Introduce more predicates for the constraints file to consume. I can think of two new useful predicates right now:
workspace_private/1
gives the constraints file access to the private
property of the workspace. Use case: most packages in my repo use ^
in their dependencies and peerDependencies to allow non-breaking updates of these packages. However, this repo also contains demo packages that explicitly pin their versions. My constraints file should be able to handle that without me having to hardcode a list of all private packages.
workspace_root/1
gives the constraints file access to the root workspace's identifier. This allows accessing dependencies of the root package without having to hardcode the root package's location or name.
Describe the drawbacks of your solution
We don't want to litter the constraints with too many predicates, so we should think hard about the usefulness of every added predicate.
Describe alternatives you've considered
Hardcoding the names of private packages or the root package. This is prone to error, as new private packages can be added or renamed and public packages could be made private and vice versa.
Describe the user story
I'm a project owner and I'd like to forbid my workspace projects from declaring postinstall scripts.
Describe the solution you'd like
I should be able to write the following rule:
gen_workspace_field_requirement(_, 'scripts.install', null) :-
workspace_field(WorkspaceField, 'scripts.install', _).
This isn't possible at the moment because it would look in a field named "scripts.install"
rather than a field named "install"
within an object named "scripts"
.
Describe the drawbacks of your solution
It would require either writing some possibly complex logic, or adding a new dependency to the constraints plugin.
Describe alternatives you've considered
I don't see many alternatives apart from not supporting nested objects at all, which seems like a waste considering the potential use cases.
Additional context
The logic to access a specific field from an object is already implemented in _.get
. I haven't checked how many bytes it would add to the bundle, though.
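For illustration, the dotted-path lookup that `_.get` performs boils down to a few lines. The sketch below is a stand-in to show the idea, not the constraints plugin's actual code:

```typescript
// Minimal stand-in for lodash's `_.get`: walks a dotted path such as
// `scripts.install` through nested objects, returning undefined when any
// segment is missing along the way. Illustrative only.
function getPath(obj: unknown, path: string): unknown {
  return path.split(`.`).reduce<unknown>((current, segment) => {
    if (current !== null && typeof current === `object`)
      return (current as Record<string, unknown>)[segment];
    return undefined;
  }, obj);
}
```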
Describe the bug
I enforce a particular resolution using the resolutions
field. After removing this particular entry from the field, the overridden resolutions don't get updated back to their original values.
To Reproduce
{
"dependencies": {
"object-assign": "3.0.0"
},
"resolutions": {
"object-assign": "4.0.0"
}
}
Run yarn install
, then remove the resolutions
field from the package.json
, then run yarn install
again, then yarn why object-assign
. You should see 3.0.0
being used, but it got locked to 4.0.0
.
Additional context
We probably need to write into the lockfile which ranges got overridden so that if their matching resolution
entry disappears from the lockfile we can re-resolve them.
Describe the bug
If there's at least one unfixable error in the constraints, the entire yarn constraints fix
command is useless: it runs but doesn't change any of the package manifests.
To Reproduce
Run yarn constraints fix
in master, hit yes
a couple of times, notice nothing has changed after the command completes.
This is caused by the return in the if (result.invalidDependencies)
block:
Describe the bug
Some packages do not set an install
script in their package.json
even though they have to build a native module.
I tried to use https://github.com/ranisalt/node-argon2, but no argon2.node
file was created at installation.
It's because their package.json
does not contain any information about building the module.
It currently works with npm or Yarn v1 because they check whether the project contains a binding.gyp
file and create an install
script if needed:
To Reproduce
You can clone this repo https://github.com/Embraser01/scratch-repo and run yarn start
.
Screenshots
Error when starting the script.
Environment if relevant (please complete the following information):
Additional context
I also unplugged the package to be sure it wasn't related to the zip system...
It's not really a bug, but it will prevent an easy migration from Yarn v1
Describe the user story
I'm an application author; my current setup uses node_modules
directories and I don't want to spend time changing that. In order to make my migration as simple as possible, I'd like Yarn to continue installing my dependencies using the traditional node_modules
algorithm.
Describe the solution you'd like
Running yarn install
should generate node_modules
directories instead of a .pnp.js
file. This should be configurable through a configuration setting (such as preferred-linker-node: node-modules
).
Describe the drawbacks of your solution
This requires the implementer to spend time on it - time that won't be spent implementing new features.
This slightly waters down the long-term benefits of PnP by lowering the upgrade incentive our users might have to switch to it. How much of a concern this really is isn't clear though, especially if this behavior is hidden behind a setting or a plugin. In any case, users should be free to choose whatever approach works best for them, and supporting the default only makes sense.
The node_modules
install doesn't play very well in the grand vision of multi-languages dependency trees. My initial prototypes had significant issues caused by the complexity of supporting a package being installable multiple times.
Describe alternatives you've considered
We could suggest that our users stay on the v1 as long as they require node_modules
. In practice this might lead to a lower adoption if people have concerns about PnP's viability (note to anyone watching this issue: I'm not aware of any, and after having worked on it for some time now I'm more confident than ever that it solves very important problems that have been put aside for too long).
A middle ground option could be to add support for flat installations only. This would solve the multi-language problems (since every package would only have one single location on the disk) and would be relatively straightforward to implement, at the price of making the feature more uncomfortable for our users.
Another middle ground option could be to implement an installation mechanism similar to the one used by pnpm (i.e. use symlinks and hardlinks so that each package only lives once on disk), but I'm afraid this would have pretty much the same problems as PnP with fewer benefits, and would be quite complex to get right (even though the pnpm folks would surely be happy to share some of their wisdom on this).
Describe the bug
None of the builds pass for Node 8 on Windows. It's super weird because:
Here's an example of a failing job:
https://dev.azure.com/yarnpkg/berry/_build/results?buildId=781&view=logs
To Reproduce
Open a PR against master and check the CI results.
Environment if relevant (please complete the following information):
From yarnpkg/yarn#6953
If you're interested in implementing PHP, Python, or Ruby package installers without ever leaving Yarn, please open an issue and we'll help you get started!
I am interested in seeing how I can make yarn and Python work together as mentioned in the v2 issue.
Describe the bug
Configuration specified by the user in their ~/.yarnrc
is ignored by berry
To Reproduce
# setup
echo "lastUpdateCheck $(date +%s)" >> ~/.yarnrc
cd /tmp
# yarn v1
yarn config list
# notice the line
# lastUpdateCheck:
# 1555780695
# berry
echo 'yarn-path "/path/to/my/clone/of/berry/scripts/run-yarn.js"' > .yarnrc
yarn config --why
# notice the line
# lastUpdateCheck <default> null
Update: the rule itself got implemented in #120, but it isn't fixable yet.
Describe the user story
I'm a project owner and I'd like to enforce that all the packages within my repository are private.
Describe the solution you'd like
The package constraints should expose a gen_workspace_field_requirement
predicate that would instruct the constraint engine to make sure that a specific field from the package.json
contains a specific value.
Describe the drawbacks of your solution
I don't think there are many drawbacks. The main one would be cluttering the code, but I think this might be a common enough use case to warrant it (I'd be happy to use it on this very repository, for example).
Describe alternatives you've considered
We could leave it to our users to write their own rules (together with something like eslint
). That could get relatively complex to set up though, and given that the constraints plugin already does very similar tasks, it would make more sense to support it there imo.
Describe the user story
My TypeScript project uses Microsoft's API Extractor, which allows filtering my APIs and rolls up all .d.ts
files into one. Some of my APIs are internal and marked as such via @internal
in the tsdoc.
I want the packages in my workspace to be able to access them, but when published these APIs should be removed from the package's typings.
To go one step further: the API extractor allows marking APIs unstable (e.g. @beta
), and supports filtering out all internal, alpha or beta packages. For instance, if I publish a beta version of my package, I'd like to keep the @beta
APIs but remove the @alpha
and @internal
APIs.
Describe the solution you'd like
By allowing the typings
property to be overridden in publishConfig
, I can provide access to the unfiltered rolled up .d.ts
file for the workspace packages while removing internal APIs in the published packages.
Describe the drawbacks of your solution
main
and module
via publishConfig
Describe alternatives you've considered
Using a prepublishOnly
lifecycle script. That would break the build of all internal packages that consume my internal APIs. If we want to support publishing multiple packages at once this won't work.
Pointing main
to the typescript file instead of the built javascript, so that the @internal
APIs are available and the typings
property in the package manifest can just point to the trimmed rolled-up .d.ts
file.
Describe the bug
When I have a .yarnrc file like this (generated by Yarn v1):
# THIS IS AN AUTOGENERATED FILE. DO NOT EDIT THIS FILE DIRECTLY.
# yarn lockfile v1
cache-folder "N:\\Resources\\Yarn\\Cache"
email [email protected]
lastUpdateCheck 1554139102835
username embraser01
It doesn't work because the grammar will split the line by the :
inside quotes and not by the space after cache-folder
.
From https://pegjs.org/online:
{
"cache-folder \"N": "\\Resources\\Yarn\\Cache\"",
"email": "[email protected]",
"lastUpdateCheck": "1554139102835",
"username": "embraser01"
}
For now, the workaround is simply to add the :
after cache-folder
.
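A quote-aware split would avoid the mis-parse. Here's a minimal sketch of the idea; the real fix would live in the syml PEG grammar rather than in ad-hoc code like this:

```typescript
// Splits a legacy .yarnrc line into [key, value] at the first run of
// whitespace that is NOT inside double quotes, so a quoted value containing
// spaces or colons stays intact. Sketch only.
function splitRcLine(line: string): [string, string] {
  let inQuotes = false;
  for (let i = 0; i < line.length; i++) {
    const char = line[i];
    if (char === `"`)
      inQuotes = !inQuotes;
    else if (!inQuotes && (char === ` ` || char === `\t`))
      return [line.slice(0, i), line.slice(i + 1).trim()];
  }
  return [line, ``];
}
```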
To Reproduce
Go to https://pegjs.org/online and use the syml grammar and use the .yarnrc
sample.
Environment if relevant (please complete the following information):
Additional context
It happened because, while working on #57, a script was loading my Yarn v1 .yarnrc
placed in C:\Users\marca\
.
Also, I had to temporarily remove legacy settings even though I might use them in other projects; would it be possible to just print a warning instead of throwing an error?
Describe the user story
As a user I want to test and debug the constraints I wrote in case they misbehave.
Describe the solution you'd like
Two options, one more complicated than the other:
A new yarn constraints query
command that starts a tau-prolog
repl with the constraints rules loaded
A yarn constraints generate
command that creates the full prolog file containing all workspace data and my constraint rules, a file which I could then load into a prolog engine, e.g.
yarn constraints generate full-constraints.pro
swipl -f full-constraints.pro
# this gives me a REPL to query the constraints in
Describe the drawbacks of your solution
The second solution has a major drawback: it depends on another prolog engine to perform the actual debugging. This means users can start making mistakes, like testing with predicates that only exist in that other prolog engine. tau
is quite limited, e.g. no string predicates, so this can be a pain.
Describe alternatives you've considered
The alternative is listed above
Additional context
Prolog is not going to come easy for a lot of developers, so giving a debugging environment is going to be very important.
Describe the user story
The yarn link [...]
workflow is weird.
Describe the solution you'd like
I think yarn link [package-name]
should be replaced by yarn link <path-to-package>
. This would allow us to stop having to store data within the global folder when using yarn link
, which is a bit wasteful and arguably dangerous (the registered folder for a given name might have disappeared).
We also should add the [--all]
option which, when set, links all the named non-private workspaces from the target project to the current one. This would make it very easy to work on external monorepo projects like jest
.
Describe the drawbacks of your solution
Changing the behavior of an existing command is a bit extreme, but in this case we have the required tools to ensure a smooth upgrade path (we can print error messages that explain what changed and how to achieve the exact same result under the new workflow).
Describe alternatives you've considered
We could keep the existing workflow (first yarn link
in each linkable folder, then yarn link <package-name>
in each linked folder), but it's really bad from a devx perspective (twice the number of commands for no reason).
What package is covered by this investigation?
Gatsby
Describe the goal of the investigation
I'd like Gatsby to support PnP painlessly (note that this issue being opened doesn't necessarily mean that they aren't compatible, just that they could work better together).
Investigation report
Some problems encountered:
Gatsby is missing various peer dependencies in its packages
This got fixed in gatsbyjs/gatsby#11994, gatsbyjs/gatsby#11972, and gatsbyjs/gatsby#11971. No one should have this problem again.
Gatsby uses the
copyFile
function to generate its cache folder, and this function wasn't implemented in the fslib (so Gatsby couldn't copy the files from its zip archive to the cache folder)
This got fixed within Yarn itself, in arcanis@10b18f5. No one should have this problem again.
Gatsby needs to be aware of PnP in order to build its files.
I've added a gatsby-node.js
file with the following content:
const PnpWebpackPlugin = require(`pnp-webpack-plugin`);
module.exports = {
onCreateWebpackConfig: ({actions}) => {
actions.setWebpackConfig({
resolveLoader: {
plugins: [
PnpWebpackPlugin.moduleLoader(module),
],
},
resolve: {
plugins: [
PnpWebpackPlugin.bind(`${__dirname}/.cache`, module, `gatsby`),
PnpWebpackPlugin.bind(`${__dirname}/public`, module, `gatsby`),
PnpWebpackPlugin,
],
},
});
},
};
Gatsby generates a cache folder inside the project directory and, more annoyingly, uses it to store JS files that aren't compiled yet. This is problematic because PnP is not aware that this cache folder belongs to the
gatsby
package, so it doesn't allow the files it contains to require the dependencies of its source package.
Q: Why is this folder needed? Why can't Gatsby just reference the files directly from their location within the
gatsby
package? Why do they need to be copied?
I fixed that by adding a new feature in the pnp-webpack-plugin
package. It makes it possible to map a location on the disk to a specific locator (in this case the gatsby
package), which solves the problem.
A similar issue happens with the
public
folder, which contains generated files from the gatsby
package. Unfortunately those dependencies are loaded by Node, so the webpack plugin doesn't help here.
I fixed this issue simply by adding the referenced dependencies (core-js
, lodash
, react
, and react-dom
) in my own package.json, along with gatsby
itself. Since there aren't a lot of them it's relatively manageable; unfortunately it will have to be part of the regular usage directives when using Gatsby with PnP.
The ESLint problem occurred.
I fixed this by adding gatsby
in the list of allowed fallbacks. It's not great, but better than failing. We'll be able to deprecate this behavior starting from ESLint 6.
The Gatsby plugin loader loads plugins using
require.resolve
, but doesn't use the rightpaths
option. That causes the PnP loader to reject the call because the plugins aren't listed in Gatsby's dependencies.
Not solved yet.
So far with the changes described above it seems to work (I can access the website at localhost:8000
). I now have to figure out how to add themes and such and see whether other problems arise. I've put my current progress in the gatsby
branch, feedback welcome!
To test:
$> git clone <this repository>
$> cd packages/website
$> ../packages/berry-cli/bin/berry gatsby develop
Describe the bug
Running yarn add /foo/bar.tgz
used to work; it currently doesn't because the command only expects descriptors (so yarn add bar@/foo/bar.tgz
would work, for example).
Additional context
I believe the fix would have to be in suggestUtils
. We would have to add a branch to the LATEST
strategy to check whether the package name is unknown and its range is a file. If so, it would extract the package name from its manifest (this is a bit annoying because it'll need to go through the fetchers in order to get the archive fs).
https://github.com/yarnpkg/berry/blob/master/packages/plugin-essentials/sources/suggestUtils.ts#L139
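As a sketch of that branch, the first step could be a cheap check for file-like ranges before falling back to the descriptor logic. `looksLikeFileRange` is a hypothetical helper for illustration, not part of the actual suggestUtils API:

```typescript
// Hypothetical helper: decide whether a `yarn add` argument is a path to a
// tarball/folder rather than a package descriptor. The real fix would then
// go through the fetchers to read the actual package name from the archive.
function looksLikeFileRange(request: string): boolean {
  return request.startsWith(`./`)
    || request.startsWith(`../`)
    || request.startsWith(`/`)
    || /^[a-zA-Z]:[\\/]/.test(request); // Windows drive paths
}
```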
Do you want to request a feature or report a bug?
Feature
Since Yarn v2 is currently in development and has a focus on providing consumable APIs, I'd like to ask for the addition of some hooks that will enable pkgsign to automatically work with Yarn once it's installed. Specifically I need two hooks:
I'm pretty open to how these hooks could work - whether that's calling pkgsign on the command line, or require
'ing a plugin to call its APIs directly - as long as the hooks exist, I should be able to work with them.
In addition, I'd like to ask that Yarn keep its current behaviour of not modifying the contents of a package (including package.json) once it's extracted. npm adds metadata fields to the contents of a package, which means we can't securely verify that package.json hasn't been tampered with.
Context
We parse the lockfiles using a format managed by the @berry/parsers
package (we call it syml, for Simpler Yaml). The idea was that since we didn't need the whole power of yaml, a smaller parser should theoretically be faster.
Well, as it turns out, it's not so simple. Our current parsing time (based on a peg.js-generated parser) is ~180ms for the Berry lockfile. In contrast, using js-yaml
brought the parsing time to ~60ms.
It's very likely that I made the peg grammar non-optimal, so if someone wants to take a shot at optimizing it (or rewriting it with another tool like chevrotain, for example), that would be helpful. If we can't make it faster, I guess we'll switch to js-yaml
.
Describe alternatives you've considered
We could also switch to js-yaml
without looking back. That would be ok for me, but we will first have to check how much it adds/removes to the bundle size. Note that if we do that we'll still need to keep the peg parser in our tree as a fallback at least until the 3.0 (otherwise we wouldn't be able to support old-style lockfiles, which would be bad for the migration story).
Describe the user story
I'm a Yarn user and want to use glob patterns in my scripts. For example:
{
"scripts": {
"grep": "grep -F \"$1\" packages/*/sources"
}
}
Describe the solution you'd like
The shell should support a glob syntax (at least *
and **
).
Describe the drawbacks of your solution
Writing a good globbing mechanism is difficult. The simplest approach would be to reuse an existing library such as glob
, but I wonder how that would fit in the cross-platform story.
Describe alternatives you've considered
We could simply decide not to add support for glob patterns, and let users be more explicit when they wish to use them (grep -F \"$1\" $(bash -c 'echo packages/*/sources.js')
). I guess them not being supported would be a surprise to most users, though.
Additional context
Ideally the syntax and the glob mechanism should be separate - the shell should expose a new glob
option that would accept an async function returning the matching entries for a given argument and cwd, and default this option to whatever implementation we choose (likely the glob
package). This way the shell consumers will be free to choose a different implementation if they want to, and it keeps the shell dependencies on the fs
module low.
What package is covered by this investigations?
ESLint
Describe the goal of the investigation
I'd like ESLint to support PnP painlessly. It currently works, but requires a specific logic on our side, which is slightly suboptimal. This investigation aims to figure out how to improve this state.
Investigation report
ESLint currently loads its plugins using a raw
require.resolve
- meaning that from PnP's point of view, eslint
is the package that needs to list the plugins in its dependencies (instead of the package that contains the ESLint configuration file).
This problem is compounded by the fact that ESLint bugs out on absolute paths, meaning that the ESLint configuration files cannot call
require.resolve
themselves in order to explicitly tell ESLint what's the actual location of the plugins they use.
Because of the PnP fallback mechanism, this problem doesn't occur in most cases (i.e. when the ESLint configuration file is your own). In this case, even if ESLint doesn't list the plugin in its dependencies, PnP will still check which dependencies are listed at the top level of your application.
This problem only happens when the ESLint configuration is part of a package you depend on. For example react-scripts
, created by create-react-app
. In this instance, the plugins aren't listed in the application package.json, meaning that the fallback cannot kick in and do its job.
In order to fix this, a temporary solution has been put into place: react-scripts
and gatsby
(which are two of the most popular packages using this pattern) are used as secondary fallbacks if the top-level isn't enough. This is but a bandaid, since it causes other problems (other packages can forget to list their own dependencies if react-scripts
or gatsby
happen to depend on them), but should do the trick for now.
The real fix is eslint/eslint#11388, which is expected to land in ESLint 6.
Describe the user story
I'm a package author, and my package depends on another package which has a peer dependency on react
. Since I don't want to provide this package I must indicate that my own package has a peer dependency on react
, but I also need to convey that the required react
version simply is the same as the one requested by my dependency. I currently have to manually copy it, which is error-prone since it won't be automatically updated when my dependency is upgraded and starts using a different range.
Describe the solution you'd like
Yarn should support a new special range specifier for peer dependencies: inherit
. When inherit
is specified, the peer dependency will automatically become the union of all the matching peer dependency ranges declared by the package's dependencies.
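The union computation could be sketched as follows. `resolveInheritRange` is a hypothetical helper, and representing the union as a `||` range is an assumption of this sketch:

```typescript
// Sketch of the proposed `inherit` resolution: replace an `inherit` peer
// dependency range by the `||` union of the matching peer ranges declared
// by the package's own dependencies. Names are illustrative.
interface ResolvedDependency {
  peerDependencies?: Record<string, string>;
}

function resolveInheritRange(peerName: string, dependencies: Array<ResolvedDependency>): string {
  const ranges = new Set<string>();
  for (const dependency of dependencies) {
    const range = dependency.peerDependencies?.[peerName];
    if (range && range !== `inherit`)
      ranges.add(range);
  }
  // A `||` union is satisfied when any of the inherited ranges is satisfied;
  // fall back to `*` when no dependency declares the peer at all
  return [...ranges].join(` || `) || `*`;
}
```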
Describe the drawbacks of your solution
From a technical standpoint it can be seen as making a parent depend on its children (since the exact peer dependency will only be resolvable if the children are available). In practice this shouldn't be a problem though, because the peer dependencies aren't taken into account until after the dependency tree has been computed (because they are a non-binding suggestion that the user is responsible for getting right).
The inherit
range keyword will not be properly understood by lower Yarn versions, which might cause some warnings to appear in such cases. Given that only the packages that deemed this feature useful enough to warrant the potential warning will be affected I don't think it's a blocker.
Describe alternatives you've considered
Instead of an inherit
keyword the *
range could be repurposed. I'm afraid this would be a surprising change, and it would wildly break the semver expectations.
Instead of encoding the inherit status into the range, it could be moved into the peerDependenciesMeta
field. This would mean that the range described in the peerDependencies
field would have no actual impact, which would be quite unexpected. I believe the range is where this feature has the most sense semantically speaking.
Describe the context
The gen_invalid_dependency rules cannot be fixed. In fact they don't even contain enough information to potentially lead to a fix, unless users encode it into the reason. Even an interactive mode wouldn't allow fixing the problems.
Additionally, it might be confusing for our users to figure out whether they want to use the gen_enforced_dependency_range
predicate or the gen_invalid_dependency
one.
Describe the solution you'd like
The gen_invalid_dependency
predicate should disappear.
The engine should be smart enough to detect when gen_enforced_dependency_range
would have multiple conflicting solutions for the same range.
Doing this would have several advantages:
No reason to care about "invalid dependencies" vs "enforced dependencies"
All rules could be made autofixable unless they conflict
In a case of conflict, the interactive mode could be used to let the user figure out the right fix
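The conflict detection could be sketched as grouping the engine's answers by (workspace, dependency) pair and flagging pairs that receive more than one distinct range. All names below are illustrative, not the actual constraints engine:

```typescript
// Sketch: group the enforced-range answers produced by the constraints
// engine and flag (workspace, dependency) pairs that receive conflicting
// ranges; only those would need the interactive mode.
interface EnforcedRange {
  workspace: string;
  dependency: string;
  range: string | null; // null would mean "must not be listed"
}

function findConflicts(answers: Array<EnforcedRange>): Array<string> {
  const byPair = new Map<string, Set<string | null>>();
  for (const {workspace, dependency, range} of answers) {
    const key = `${workspace}|${dependency}`;
    if (!byPair.has(key))
      byPair.set(key, new Set());
    byPair.get(key)!.add(range);
  }
  // More than one distinct range for a pair means the rules conflict
  return [...byPair.entries()]
    .filter(([, ranges]) => ranges.size > 1)
    .map(([key]) => key);
}
```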
Describe the drawbacks of your solution
I might be missing some cases where specifying invalid dependencies without having conflicts is actually a good behavior. I don't really see any, though.
Describe the user story
I want to use the workspace:
protocol to enforce my local packages to use my workspaces rather than the remote versions from the npm registry, but that makes them unpublishable.
Describe the solution you'd like
The workspace:
protocol should be automatically changed at publish-time. Two cases:
workspace:<semver>
should become <semver>
workspace:<path>
should become <version of the workspace at path>
(no caret)
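The two rewrite cases can be sketched as follows, assuming a lookup from workspace paths to their current versions. The helper name is illustrative, not the actual publish code:

```typescript
// Sketch of the publish-time rewrite for the two `workspace:` cases
// described above. `workspaceVersions` maps a workspace path to its
// current version; unknown selectors are treated as plain semver ranges.
function transformWorkspaceRange(range: string, workspaceVersions: Map<string, string>): string {
  if (!range.startsWith(`workspace:`))
    return range;
  const selector = range.slice(`workspace:`.length);
  // workspace:<path> becomes the exact version of the workspace at that
  // path (no caret)
  const version = workspaceVersions.get(selector);
  if (version !== undefined)
    return version;
  // workspace:<semver> simply becomes <semver>
  return selector;
}
```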
Describe the drawbacks of your solution
Publishing multiple packages at once still requires #62 to be implemented (especially the topological sort one, so that publish can work properly).
Some people might want to use a caret when transforming the workspace:<path>
pattern.
Describe alternatives you've considered
We could manually list the replacements in publishConfig
(similar to what we do for main
and module
), but that seems extremely unintuitive.
We could use a caret when transforming workspace:<path>
, but it's not clear to me what are the implications and I prefer to keep a safe default for now.
We could support a caret prefix (workspace:^<path>
), but it's not clear whether this is a useful feature so I would table it for now.
Describe the solution you'd like
We currently have gen_workspace_dependency_requirement
and gen_enforced_field
. The naming is inconsistent, let's fix it while we still have the chance!
Additional context
Using gen_enforced_*
seems like a good way forward. The changes are mostly scoped to Constraints.ts
, bar a few changes to constraints.pro
and of course the tests.