oai / tools.openapis.org

A collection of open-source and commercial tools for creating your APIs with OpenAPI - Sourced from and published for the community

Home Page: https://tools.openapis.org/
Please use this template for tools that cannot be tagged on GitHub. If your tool is on GitHub, use the openapi3 and openapi31 tags to allow your data to be collected automatically.
Please replace all placeholders marked in bold in the bullets below with the requested information. Use plain text for your information.
Please indicate the versions of OpenAPI supported by your tool by marking them true or false below.
I'm trying to get this codebase running and noticed that yarn.lock asks for a version of ua-parser-js that was removed from npm. It's an indirect dependency, but updating to "@11ty/eleventy": "^1.0" seems to have it sorted. I'll send a PR shortly.
As a tooling user I want to see when new tools are added to the repository in the #tooling Discord channel.
As part of marketing activity it'd be great to get some outbound notifications when new tools are added to the repository.
To that end our new Discord environment seems like a good place to start:
This is most likely best done as a "summary", possibly with the statistics checked back into the repository to drive a changelog.
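The "summary" diffing step could be sketched as follows, assuming tool records carry a url field that identifies them (both the record shape and the function name are illustrative, not project code):

```javascript
// Sketch of deriving "newly added tools" between two builds, e.g. to drive a
// Discord notification and a changelog entry checked back into the repository.
// The url field is assumed to uniquely identify a tool record.
function newlyAddedTools(previousTools, currentTools) {
  const known = new Set(previousTools.map((t) => t.url));
  return currentTools.filter((t) => !known.has(t.url));
}
```

Running this against the previous and current builds of the dataset yields the list to post and to append to the changelog.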
As an architect, I would like to use an API instead of HTML pages when I do research. With queries, I could filter the tools according to my needs. The API would provide a data source for content creators and other data users (the reason why we build APIs).
Design and publish an OAI Tooling API in OpenAPI (preferably) or GraphQL.
The public API would provide:
- Filtering
- Detail of the tool
- Optionally, an administration API
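The filtering the API would provide could look roughly like this, expressed as a plain function over the tooling dataset (the query parameter names are hypothetical, not a committed API design):

```javascript
// Illustrative sketch of server-side filtering for the proposed API.
// Each filter is optional; a missing parameter matches everything.
function filterTools(tools, { category, language, name } = {}) {
  return tools.filter((tool) =>
    (!category || tool.category === category) &&
    (!language || tool.language === language) &&
    (!name || tool.name.toLowerCase().includes(name.toLowerCase()))
  );
}
```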
Looking at the page I see categories for "Parsers, Schema Validators, Validator" and at least the lonely "Validator" looks like it should be a "Schema Validator". I am wondering about the difference between Parsers and Schema Validators. And I am also wondering how tooling for linting and diffing would fit into that.
Maybe it would be helpful to add a brief description to each category?
As an end-user, I wish to know how to add a tool to the site when I come to it directly, not via the GitHub repo/README
As a tooling developer I want calls to the GitHub API to be made at a rate that allows me to expect a consistent response.
Both data builds implement a firehose-style approach to accessing the GitHub API, i.e. the data sources are collected up and then all requests are fired at once, wrapped in a native Promise.all. This appears to result in some occasional "spikey" behaviour in the GitHub API, with either Axios bailing completely or GitHub returning a 403 (Forbidden) or 503 (Service Unavailable).
It's hard to find any documentation on throttling behaviours at the GitHub end, but having some mechanism to "slow down" the rate of consumption is likely to help. Suggest using Bluebird's Promise.map instead and leveraging the concurrency property to only spin up a maximum number of concurrent calls at any one time.
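For reference, a dependency-free sketch of what Bluebird's Promise.map(items, fn, { concurrency: n }) provides (the helper name is illustrative; in practice Bluebird would be used directly):

```javascript
// Minimal concurrency-limited map: at most `concurrency` mapper calls are
// in flight at any one time, and results keep the input order.
async function mapWithConcurrency(items, mapper, concurrency) {
  const results = new Array(items.length);
  let next = 0;
  // Each worker repeatedly claims the next unclaimed index until none remain.
  async function worker() {
    while (next < items.length) {
      const i = next++;
      results[i] = await mapper(items[i], i);
    }
  }
  const workers = Array.from(
    { length: Math.min(concurrency, items.length) },
    () => worker()
  );
  await Promise.all(workers);
  return results;
}
```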
As a user of tooling data I want updates to source data to be regularly retrieved and applied.
A (GitHub Actions) workflow is required to retrieve source data at regular intervals. Suggest this is executed on a schedule, nominally once per day.
As the owner of the Tooling website
I want to implement Google Analytics
So I can track visits to the website
Add Google Analytics snippet as per Slack message from Marsh.
As a tooling maker I want to register my tool's support for a given version of OpenAPI.
The library gulpfile.js/lib/processors/awesome-openapi3-processor.js implements the method APIs.guru developed for collecting tooling through GitHub tags. This mechanism has legs as it is:
The proposal is to extend this so tooling makers can attest to a given version. We can dedupe across versions in the build process, i.e. the build may see a repo tagged several times, but the metadata will only be pulled once.
Proposed values (for discussion):
This should allow the publication/ingestion mechanism to remain fairly straightforward and automated. The approach of course needs to be socialised with the community in an effort to get traction on tagging across repositories and repository providers.
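The dedupe step described above could be sketched as follows (the function and field names are assumptions; the repo URL is used as the identity key):

```javascript
// Illustrative sketch: collapse repositories discovered under multiple
// topics (openapi3, openapi31, ...) into a single entry per repo URL,
// merging the set of topics each repo was tagged with, so metadata is
// only pulled once per unique repo.
function dedupeByRepo(taggedRepos) {
  const byUrl = new Map();
  for (const { url, topic } of taggedRepos) {
    if (!byUrl.has(url)) {
      byUrl.set(url, { url, topics: new Set() });
    }
    byUrl.get(url).topics.add(topic);
  }
  return [...byUrl.values()].map((r) => ({ url: r.url, topics: [...r.topics].sort() }));
}
```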
As a system owner I want to ensure my configuration is correct before starting build tasks
Currently, if GITHUB_USER and GITHUB_TOKEN are not set, the build does not fail until significantly into the build process, and then with a non-obvious error.
Add a simple Gulp task to check that both of these are set before starting the build.
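Such a pre-flight check could be sketched as below, assuming it is wired in as the first task in the gulp chain (the function name is hypothetical):

```javascript
// Fail fast with a clear message if required environment variables are
// missing, instead of erroring deep inside the build.
function checkRequiredEnv(env = process.env, required = ['GITHUB_USER', 'GITHUB_TOKEN']) {
  const missing = required.filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing required environment variables: ${missing.join(', ')}`);
  }
}

// In a gulpfile this might be the first task in the series, e.g.:
// exports.build = series(function validateEnv(done) { checkRequiredEnv(); done(); }, ...);
```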
As a tool developer, I'd like to be able to override the category classification given to my tool. Specifically I'd like https://github.com/mnahkies/openapi-code-generator to be labelled as a "Code Generator" rather than a "Parser"
Currently the category is assigned using https://www.npmjs.com/package/bayes which essentially uses the frequency of tokens in a provided text against the frequency of tokens in already classified text to assign a class.
However, because the current category/class distributions are pretty uneven (>30% are assigned to "Parsers") it seems to have ended up overly biasing assignment to "Parsers". For example, Redoc is assigned "User Interfaces" and "Parsers", but not "Documentation"
And these are all assigned to "Parsers" as well:
Rather than "Code Generator" / "Mock" / "Documentation" / "Testing Tools"
I'm not sure if this is inherent to the classification approach / problem space (eg: is the written language used for different types of tool lacking enough distinguishing tokens to give a good signal), or a negative feedback loop from the existing classifications, but either way I think it would be good to have a way to override this behavior.
I'm hopeful that introducing this would, over time, improve the accuracy of the bayes classification as a result of the accurately manually-labelled data.
Propose adding a way to manually label a primary category for a tool. I see two main options:
- tools.yaml entries like manualCategoryOverride
- topics alongside the openapi3 / openapi31 ones that indicate the primary category

I see the primary benefit of the first option being that it gives control of curation to the maintainers of this repository, whilst the second option allows tool writers to self-serve. It's possible that both might be desirable, especially to account for entries that aren't scraped from GitHub (though I guess their categories are essentially manually configured already).
I think some amount of rationalization (eg: Testing vs Testing Tools) of the existing categories may be useful as well, and potentially adding a description of each category explaining what is in/out of scope for it.
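The first option could be applied in the build roughly as follows (manualCategoryOverride is the field name proposed above; the function itself is illustrative, not project code):

```javascript
// Sketch of applying a manual override on top of the bayes-assigned
// category: a curated value in tools.yaml always wins over the classifier.
function resolveCategory(toolEntry, classifiedCategory) {
  if (toolEntry.manualCategoryOverride) {
    return toolEntry.manualCategoryOverride;
  }
  return classifiedCategory;
}
```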
As a user, I select a category, pick one of the tools and display its detail. I cannot see whether the tool also belongs to other categories. I would need to go to different categories to check (to which one?). For instance, Postman could come into Testing, Editor, Documentation, Mock, and maybe some others. Knowing which categories the tool covers is interesting for me.
As a maintainer, I want to enter a tool and select all applicable categories.
When entering the product, the maintainer selects all applicable categories.
When a user opens the tool detail, they will see all categories in one place.
As a tooling repository user I want the most up-to-date information possible so I can correctly understand how the site and data collection mechanisms function.
Currently the README is lagging behind the state of the code. It needs a general update that includes:
As a tooling owner
I want my tools to be available on a fully-functioning website
Which is great to look at
So that users of the site have a great user experience
The site currently has - and has had for some time - a bug that is stopping pop-ups from working correctly. This needs to be resolved.
Then the general look-and-feel needs to be revamped to make it easier to read and navigate.
As a user, I need to orient myself in the tool categories quickly. Currently, some categories are synonymous. Some categories are fine-grained (such as data validators), while others are coarse (Server, Security, Testing).
As a tool provider, I want to know which category to pick or which to check if my product is already listed.
I would suggest setting categories according to API lifecycle phases plus Security and Learning, which relates to all of them:
Maybe "Use Case Area" (or a more apt term) would describe the categories best.
As a user of tooling data I want to gather tooling data from all significant source code repository providers
The current data sources tend to focus - not necessarily intentionally - on GitHub. It would be great to expand the search to take in - for example - Gitlab, Bitbucket, npm, etc to gather new tools, data sources, and additional metrics (note the APIs.guru openapi3 source already tapped into some sources).
Research and analysis therefore needs to be undertaken to work out:
This should all be collated in a Google Doc (or similar) so the work can then be broken down into issues.
We've added the tags to Traefik Hub based on the "how can you help" section in the readme. However, it ended up in the "Server implementations" category, but it'd be a better fit in the "Gateway" category. Also, the metadata is incorrect (name, homepage link, 2.0 support), so I think it's better to add it through an issue now.
As a developer of the Tooling project I want to ensure that environment variables are validated before the rest of the build executes
Currently, when a build kicks off, the first step is to validate that environment variables are present, which is executed by validate-metadata.js. Any validation of the values themselves, however, is performed downstream.
To make processing more efficient this should be moved upstream, so that initialisation and type checking can be performed before any further steps.
Describe the bug
The Gateway category returns 404 when clicked on.
To Reproduce
Steps to reproduce the behaviour:
Additional context
Need to add git add docs/categories to the GitHub Actions metadata and full builds to sweep up newly discovered categories.
Describe the bug
The build currently fails due to a new restriction at the GitHub API.
A 403 is returned where it should not be, i.e. the permissions applied by GitHub should not result in a 403 being returned.
This behaviour was experienced in the past when there were too many active connections to the GitHub API, and the concurrency has previously been tuned down to 2 using the environment variables in the project.
To Reproduce
Steps to reproduce the behaviour:
Run either yarn run build:data:metadata or yarn run build:data:full via the GitHub Actions workflow.
Expected behaviour
The build should run to completion.
Additional context
This needs fixing to get the build up-and-running again, which will address #96 and #92 raised by @spacether as the JSON Schema Generator repository is correctly tagged using the openapi31 topic.
Actions:
Describe the bug
The doc here https://tools.openapis.org/categories/gui-editors lists Stoplight Studio as a GUI-based editing tool. It is no longer offered, and outstanding copies no longer let you load local files.
To Reproduce
Steps to reproduce the behavior:
Try to download and install Stoplight Studio.
As a tooling developer I want data to be collected consistently and without failing due to rate limits applied at any source code repository platform.
GitHub (obviously) applies rate limits on API calls, which we rely on heavily to collect data. As we expand the number of topics we are collecting we need to be cognisant of the limits and amend our approach to spread the collection period over multiple hours.
There are a few approaches:
Option 3 seems feasible. The most sensible option seems to be:
This approach should scale as we collect more data. The main thing to be aware of is the overall build time limits, although that should be "OK" as we have a fair amount of head room for the time being.
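One way to spread the collection period out, sketched with assumed batch sizes and pause durations (none of this is project code):

```javascript
// Illustrative sketch: process topics in small batches with a pause between
// batches so the GitHub rate-limit window can recover, rather than firing
// every request at once.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function collectInBatches(topics, fetchTopic, batchSize = 2, pauseMs = 1000) {
  const results = [];
  for (let i = 0; i < topics.length; i += batchSize) {
    const batch = topics.slice(i, i + batchSize);
    results.push(...(await Promise.all(batch.map(fetchTopic))));
    if (i + batchSize < topics.length) {
      await sleep(pauseMs); // let the rate-limit window breathe between batches
    }
  }
  return results;
}
```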
As a tooling repository user
I want to see all available tools in the repository
So I can make an informed decision based on the data therein
Currently the repository sweeps existing data sources (Implementations.md, openapi.tools, GitHub tags). We need a means to submit tools outside these sources.
As a first pass we'll use an Issue template and an automated process (OK, a bot) to sweep tool-request issues, run a DQ process, and raise a PR and merge to main. This will help facilitate the work to do off-main builds as well (which is nice).
As a user of tooling-related data I want to ignore anything that appears to be boilerplate code, unmaintained or archived.
As a maintainer of tooling-related data I want to reduce the amount of queries run against projects that appear to be boilerplate code, unmaintained or archived.
Given the "coarse-grained" nature of the data collection approach there is a great deal of opportunity for "dross" to clutter up the tooling dataset. Some examples:
We therefore need to decide on:
gulp build to sift it out.

As a system owner I want to ensure that metadata is retrieved judiciously and is not wasteful of machine time or resources.
The majority of repositories referenced in the build will change very slowly (if they change at all). Using a "one-in, all-in" full build approach is therefore inappropriate. A more granular approach is required.
The following is suggested:
There is also a fair amount of "noise" in the data. For example:
The existing full build therefore needs to be split out as above and then reflected in the GitHub Actions config.
Update openapi json schema generator
As a user, I want to quickly find quality and well supported tools that meet my needs and work in my environment. The current categories are broad and the lists contain too many items. I would like to reduce the number of matches using various filtering criteria.
note: suspect this is already on the roadmap....
How do tooling makers submit their own support of the spec?
As a consumer of tooling data I want to see updates to metadata merged into the master list when they are updated in source data.
The current implementation of the build process is only an initial build: it grabs all data from the in-scope repositories, merges and normalises based on some simple analytics, and then stores the result in the docs directory. Whilst this is fine as a repeatable process, it doesn't make for long-term state management of the Tooling repository, i.e.:
We therefore need to implement a merge process that mines the source data as now and does any updates, but then only selectively hits the GitHub (or other repository when implemented) to update the statistics on the tools.
A suggested design approach to investigate is using the cache control directives available on GitHub and seeing whether we can only selectively hit the API when new metadata is available.
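That conditional-request approach could be sketched as below using ETags; GitHub's REST documentation notes that a 304 Not Modified response does not count against the rate limit. The cache shape and function name here are assumptions:

```javascript
// Sketch of conditional GitHub API requests: send the stored ETag via
// If-None-Match, and only re-parse the body when GitHub says it changed.
async function fetchIfChanged(url, cache, fetchImpl = fetch) {
  const headers = { Accept: 'application/vnd.github+json' };
  const cached = cache.get(url);
  if (cached && cached.etag) {
    headers['If-None-Match'] = cached.etag; // "only if changed since last time"
  }
  const response = await fetchImpl(url, { headers });
  if (response.status === 304) {
    return cached.body; // unchanged: reuse stored metadata
  }
  const body = await response.json();
  cache.set(url, { etag: response.headers.get('etag'), body });
  return body;
}
```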
As a user of tooling data I want the repository to be captured consistently across all sources.
There are a couple of instances where the repository is not captured correctly, for example:
- source: IMPLEMENTATIONS.md
name: KaiZen OpenAPI Parser
homepage: https://github.com/RepreZen/KaiZen-OpenAPI-Parser
language: Java
curated_description: High-performance Parser, Validator, and Java Object Model for OpenAPI 3.x
category: Low-Level tooling
This instance is specific to the IMPLEMENTATIONS.md processor - and could be tackled in this array builder - but it would be good to implement it in the main gulp file so it works across all sources.
As a tooling data user I want the data to be accurate
The watchers value is currently incorrect and is being populated with the stars value. Correct it.
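A sketch of the likely fix: in the GitHub REST API, watchers_count mirrors the star count, while the actual "watching" figure is subscribers_count (returned by the single-repository endpoint). The mapping below is illustrative of the metadata shape, not the project's actual code:

```javascript
// Map GitHub repository fields onto the tooling metrics, taking watchers
// from subscribers_count rather than the star-mirroring watchers_count.
function mapRepoMetrics(repo) {
  return {
    stars: repo.stargazers_count,
    watchers: repo.subscribers_count ?? 0, // fall back to 0 if absent
    forks: repo.forks_count,
  };
}
```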
As a tooling site user I want to glean as much information as possible when I select a tool from the categories pages.
The modal pop-up only contains the barebones information at the moment. It should be updated to include:
Describe the bug
There are duplicates on the website under the Parsers section:
To Reproduce
Expected behavior
Should be no duplicates.
Additional context
As a tooling user I want all available tools to be uniquely identified for ease of understanding and referencing
At the moment a tool name isn't consistently present, as not all sources (e.g. data from GitHub) provide one.
In order to always provide a reference we'll use the URL instead - GitHub for open source, the homepage for commercial tools - and then hash that baby to create a unique and consistent reference.
As a tooling data user I want the data to be categorised to provide easy-to-use references for different tooling types.
In the APIs.guru repository Mike implemented Bayesian analysis to help with data categorisation. This should be implemented in this repository to provide equivalent functionality.
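For illustration, a minimal dependency-free naive Bayes classifier showing the idea (in practice the build uses the bayes npm package rather than a hand-rolled sketch like this):

```javascript
// Token frequencies per category with add-one (Laplace) smoothing:
// score each category by log prior + sum of log token likelihoods,
// and return the highest-scoring category.
function trainNaiveBayes(examples) {
  const counts = {}; // category -> token -> count
  const totals = {}; // category -> total token count
  const docs = {};   // category -> number of training documents
  const vocab = new Set();
  for (const { text, category } of examples) {
    counts[category] = counts[category] || {};
    totals[category] = totals[category] || 0;
    docs[category] = (docs[category] || 0) + 1;
    for (const token of text.toLowerCase().split(/\W+/).filter(Boolean)) {
      counts[category][token] = (counts[category][token] || 0) + 1;
      totals[category] += 1;
      vocab.add(token);
    }
  }
  const nDocs = examples.length;
  return function classify(text) {
    const tokens = text.toLowerCase().split(/\W+/).filter(Boolean);
    let best = null;
    let bestScore = -Infinity;
    for (const category of Object.keys(counts)) {
      let score = Math.log(docs[category] / nDocs);
      for (const token of tokens) {
        const c = counts[category][token] || 0;
        score += Math.log((c + 1) / (totals[category] + vocab.size));
      }
      if (score > bestScore) {
        bestScore = score;
        best = category;
      }
    }
    return best;
  };
}
```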
As a tooling data user I want to be able to browse and discover the available tooling data.
This is going to be light on detail... but this simply covers cutting in the first drop of the web interface for the tooling data.
Rough overview:
- Move tools.yaml away from docs to src/_data
- Add the eleventy, tailwindcss, etc. packages
- Build the site and eleventy assets
- Rebuild when tools.yaml has been updated

Describe the bug
When I run yarn install I get an error
To Reproduce
Steps to reproduce the behavior:
yarn install
[1/4] Resolving packages...
[2/4] Fetching packages...
[3/4] Linking dependencies...
[4/4] Building fresh packages...
warning Error running install script for optional dependency: "/Users/justinblack/programming/tooling/node_modules/glob-watcher/node_modules/fsevents: Command failed.
Exit code: 1
Command: node install.js
Arguments:
Directory: /Users/justinblack/programming/tooling/node_modules/glob-watcher/node_modules/fsevents
Output:
node:events:492
throw er; // Unhandled 'error' event
^
Error: spawn node-gyp ENOENT
at ChildProcess._handle.onexit (node:internal/child_process:286:19)
at onErrorNT (node:internal/child_process:484:16)
at process.processTicksAndRejections (node:internal/process/task_queues:82:21)
Emitted 'error' event on ChildProcess instance at:
at ChildProcess._handle.onexit (node:internal/child_process:292:12)
at onErrorNT (node:internal/child_process:484:16)
at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {
errno: -2,
code: 'ENOENT',
syscall: 'spawn node-gyp',
path: 'node-gyp',
spawnargs: [ 'rebuild' ]
}
Node.js v20.5.0"
info This module is OPTIONAL, you can safely ignore this error
Done in 47.66s.
Expected behavior
No errors
Screenshots
N/A
Desktop (please complete the following information):
Additional context
I see that the module is optional, but why is there an error?
Running build instructions before making a PR on my branch: https://github.com/spacether/Tooling/tree/updates_openapi_json_schema_generator