Snow, a JSON Schema Validator

Version: 0.16.0

The main goal of this project is to be a reference JSON Schema validator. While it provides a few working applications, it's meant primarily as an API for building your own toolset.

See: JSON Schema

Table of contents

  1. Features
    1. Additional features
  2. Quick start
  3. Under the covers
  4. Limitations
  5. Which specification is used for processing?
  6. Options for controlling behaviour
    1. AUTO_RESOLVE
    2. COLLECT_ANNOTATIONS_FOR_FAILED
    3. CONTENT
    4. DEFAULT_SPECIFICATION
    5. FORMAT
    6. SPECIFICATION
  7. Project structure
    1. Module information
    2. Complete programs
    3. API
      1. Annotations and errors
      2. Internal APIs
  8. Building and running
    1. Program execution with Maven
    2. Using Snow in your own projects
  9. The linter
    1. Doing your own linting
      1. Linting by traversing the tree
  10. The coverage checker
  11. Future plans
    1. Possible future plans
  12. References
  13. An ending thought
  14. Acknowledgements
  15. License

Features

This project has the following features:

  1. Full support for all drafts since Draft-06.
  2. Full "format" validation support, with a few platform-specific exceptions.
  3. Full annotation and error support.
    1. There is enough information to provide full output support. The calling app has to sort through and format what it wants, however.
  4. Written for correctness. This aims to be a reference implementation.
  5. Can initialize with known URIs. These can be any of:
    1. Parsed JSON objects.
    2. URLs. The URLs can be anything, including filesystem locations, web resources, and other objects. "URL" here means that the system knows how to retrieve something vs. "URI", which is just an ID.
  6. Options for controlling "format" validation, annotation collection, error collection, and default and non-default specification choice.
  7. There's rudimentary infinite loop detection, but only if error or annotation collection is enabled. It works by detecting that a previous keyword has been applied to the same instance location.
  8. Specification detection heuristics for when there's no $schema declared.
  9. Content validation support for the "base64" encoding and "application/json" media type.
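
For instance, "base64" checking amounts to attempting a strict decode of the string value. This stdlib sketch (illustrative only, not Snow's internal code) shows the idea; an "application/json" media-type check would additionally parse the decoded bytes:

```java
import java.util.Base64;

// Illustrative only: "contentEncoding": "base64" succeeds exactly when a
// strict Base64 decode of the string succeeds. A "contentMediaType" of
// "application/json" would additionally parse the decoded bytes as JSON.
public class ContentCheck {
  static boolean isBase64(String s) {
    try {
      Base64.getDecoder().decode(s);
      return true;
    } catch (IllegalArgumentException e) {
      return false;  // not valid Base64
    }
  }
}
```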

Additional features

These additional features exist:

  1. A rudimentary linter that catches simple but common errors.
  2. A coverage tool.

Quick start

There are more details below, but here are four commands that will get you started right away:

  1. Run the validator on an instance against a schema:
    mvn compile exec:java@main -Dexec.args="schema.json instance.json"
    The two files in this example are named schema.json for the schema and instance.json for the instance. The example assumes the files are in the current working directory.
  2. Clone and then run the test suite:
    mvn compile exec:java@test -Dexec.args="/suites/json-schema-test-suite"
    This assumes that the test suite is in /suites/json-schema-test-suite. Yours may be in a different location. The test suite can be cloned from JSON Schema Test Suite.
  3. Run the linter on a schema:
    mvn compile exec:java@linter -Dexec.args="schema.json"
    The schema file in this example is named schema.json. The example assumes the file is in the current working directory.
  4. Run the schema coverage checker after a validation:
    mvn compile exec:java@coverage -Dexec.args="schema.json instance.json"
    The two files in this example are named schema.json for the schema and instance.json for the instance. The example assumes the files are in the current working directory.

Under the covers

This project uses Google's Gson library under the hood for JSON parsing. ClassGraph is used to support class finding.

This means these things:

  1. The external API for this project uses Gson's JSON object model.

Limitations

This project follows just about everything it can from the latest JSON Schema specification draft. There are a few things it does slightly differently due to some implementation details.

  1. Regular expressions allow or disallow some things that ECMA 262 regular expressions do not. For example, Java allows the \Z boundary matcher but ECMA 262 does not.

Which specification is used for processing?

There are a few ways the validator determines which specification to use when processing and validating a schema. The steps are as follows:

  1. $schema value. If the schema explicitly specifies this value, and if it is known by the validator, then this is the specification that the validator will use.
  2. The SPECIFICATION option or any default.
  3. Guessed by heuristics.
  4. The DEFAULT_SPECIFICATION option or any default.
  5. Not known.
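
The fallback chain above can be sketched as a series of optional lookups, where each step is consulted only if the previous one produced nothing. The `Spec` enum and method names here are illustrative, not Snow's actual API:

```java
import java.util.Optional;

// Illustrative sketch of the specification-resolution order described
// above; the enum and method names are hypothetical, not Snow's API.
public class SpecResolution {
  enum Spec { DRAFT_07, DRAFT_2019_09 }

  static Optional<Spec> resolve(Optional<Spec> fromDollarSchema,
                                Optional<Spec> specificationOption,
                                Optional<Spec> guessed,
                                Optional<Spec> defaultSpecificationOption) {
    // Each step is consulted only if the previous one produced nothing.
    return fromDollarSchema
        .or(() -> specificationOption)
        .or(() -> guessed)
        .or(() -> defaultSpecificationOption);  // else: not known (empty)
  }
}
```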

Options for controlling behaviour

This section describes options that control the validator behaviour.

All options are defined in the com.qindesign.json.schema.Option class, and their use is in com.qindesign.json.schema.Options.

Some options are specification-specific, meaning they have different defaults depending on which specification is applied. Everything else works as expected: users set or remove options. It is only the internal defaults that have any specification-specific meanings.

There are two ways to retrieve an option. Both are similar, except one of the ways checks the specification-specific defaults before the non-specification-specific defaults. The steps are as follows, where subsequent steps are followed only if the current step is not successful.

Specification-specific consultation steps, using a specific specification:

  1. Options set by the user.
  2. Specification-specific defaults.
  3. Non-specification-specific defaults.
  4. Not found.

Non-specification-specific consultation steps:

  1. Options set by the user.
  2. Non-specification-specific defaults.
  3. Not found.
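
The two consultation orders can be sketched with plain maps. The names here are illustrative, not the internals of com.qindesign.json.schema.Options:

```java
import java.util.Map;
import java.util.Optional;

// Illustrative sketch of the two option-consultation orders described above.
public class OptionLookup {
  // Non-specification-specific lookup: user-set values, then defaults.
  static Optional<Object> get(String option,
                              Map<String, Object> userSet,
                              Map<String, Object> nonSpecDefaults) {
    if (userSet.containsKey(option)) return Optional.of(userSet.get(option));
    if (nonSpecDefaults.containsKey(option)) return Optional.of(nonSpecDefaults.get(option));
    return Optional.empty();  // not found
  }

  // Specification-specific variant: checks the per-specification defaults
  // before falling back to the non-specification-specific defaults.
  static Optional<Object> getForSpec(String option,
                                     Map<String, Object> userSet,
                                     Map<String, Object> specDefaults,
                                     Map<String, Object> nonSpecDefaults) {
    if (userSet.containsKey(option)) return Optional.of(userSet.get(option));
    if (specDefaults.containsKey(option)) return Optional.of(specDefaults.get(option));
    return get(option, Map.of(), nonSpecDefaults);
  }
}
```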

Option: AUTO_RESOLVE

Type: java.lang.Boolean

This controls whether the validator should attempt auto-resolution when searching for schemas or when otherwise resolving IDs. This entails adding the original base URI and any root $id as known URLs during validation.

Option: COLLECT_ANNOTATIONS_FOR_FAILED

Type: java.lang.Boolean

This controls, if annotations are collected, whether they should also be retained for failed schemas. This option only has an effect when annotations are being collected.

Option: CONTENT

Type: java.lang.Boolean

This controls whether to treat the "content" values as assertions in Draft-07. This only includes "contentEncoding" and "contentMediaType".

Option: DEFAULT_SPECIFICATION

Type: com.qindesign.json.schema.Specification

This option specifies the default specification to follow if one cannot be determined from a schema, either by an explicit indication, or by heuristics. This is the final fallback specification.

Option: FORMAT

Type: java.lang.Boolean

This is a specification-specific option meaning its default is different depending on which specification is being used. It controls whether to treat "format" values as assertions.

Option: SPECIFICATION

Type: com.qindesign.json.schema.Specification

This indicates which specification to use if one is not explicitly stated in a schema.

Project structure

This project is designed to provide APIs and tools for performing JSON Schema validation. Its main purpose is to do most of the work, but have the user wire in everything themselves. A few rudimentary and runnable test programs are provided, however.

The main package is com.qindesign.json.schema.

Module information

This project defines a module and exports these packages:

  1. com.qindesign.json.schema: This is the main validation package.
  2. com.qindesign.json.schema.net: Provides some URI and hostname processing tools.

It also transitively requires this module:

  1. com.google.gson

Complete programs

The first program is Main. This takes two arguments, a schema file and an instance file, and then performs validation of the instance against the schema.

The second program is Test. This takes one argument, a directory containing the JSON Schema test suite, and then runs all the tests in the suite. You can obtain a copy of the test suite by cloning the test suite repository.

The third program is Linter, a rudimentary linter for JSON Schema files. It takes one argument, the schema file to check.

The fourth program is Coverage, a simple coverage tool for JSON Schemas and instances. It's similar to Main, but prints different output.

API

The main entry point to the API is the Validator constructor and its validate method. In addition to the required schema, instance, and base URI, you can pass options, known IDs and URLs, and a place to put collected annotations and errors.

In this version, the caller must organize the errors into the desired output format. An example of how to convert them into the Basic output format is in the Main.basicOutput method.

Providing tools to format the errors into more output formats may happen in the future.
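
As a rough end-to-end sketch, with signatures inferred from the usage examples elsewhere in this document (consult the Javadocs for the exact API):

```java
import java.io.File;
import java.util.HashMap;
import java.util.Map;

import com.google.gson.JsonElement;
import com.qindesign.json.schema.*;
import com.qindesign.json.schema.net.URI;

// Hedged sketch, not verbatim API: the constructor and validate() calls
// below are inferred from the examples shown elsewhere in this document.
JsonElement schema = JSON.parse(new File("schema.json"));
JsonElement instance = JSON.parse(new File("instance.json"));

Options opts = new Options();
opts.set(Option.FORMAT, true);  // e.g. treat "format" as an assertion

// Schema, base URI, known IDs, known URLs, options:
Validator validator =
    new Validator(schema, URI.parse("https://example.com/schema"), null, null, opts);

// Optional maps to collect annotations and errors:
Map<JSONPath, Map<String, Map<JSONPath, Annotation>>> annotations = new HashMap<>();
Map<JSONPath, Map<JSONPath, Annotation>> errors = new HashMap<>();
boolean valid = validator.validate(instance, annotations, errors);
```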

Annotations and errors

Annotations and errors are collected by optionally providing maps to Validator.validate. They're maps from instance locations to an associated Annotation object, with some intervening values.

  • The annotations map follows this structure: instance location → name → schema location → Annotation. The Annotation value is dependent on the source of the annotation.
  • The errors map has this structure: instance location → schema location → Annotation. The Annotation value is a ValidationResult object, and its name will be "error" when the result is false and "annotation" when the result is true.

For annotations, Annotation.isValid() indicates whether the annotation is considered valid or auxiliary. When failed annotations are collected, invalid annotations indicate an annotation that would otherwise exist if the associated schema had not failed.

For errors, Error.isPruned() means that the result is not relevant to the schema result. For example, "oneOf" will pass validation if one subschema passes and all the other subschemas fail. All failing subschemas will indicate an error, but it will be marked as pruned.

This is useful to track coverage vs. a minimal set of useful errors.

The Results class provides some tools for sorting and collecting annotations and errors. It does the work of extracting a list of useful results.

The locations are given as JSON Pointers.
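
Reading the annotations map is then three nested lookups. This sketch uses String JSON Pointers and plain Objects in place of Snow's JSONPath and Annotation types:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Illustrative walk of the annotations map structure described above:
// instance location → name → schema location → value. Strings and plain
// Objects stand in for Snow's JSONPath and Annotation types.
public class AnnotationWalk {
  static List<String> flatten(
      Map<String, Map<String, Map<String, Object>>> annotations) {
    List<String> out = new ArrayList<>();
    annotations.forEach((instanceLoc, byName) ->
        byName.forEach((name, bySchemaLoc) ->
            bySchemaLoc.forEach((schemaLoc, value) ->
                out.add(instanceLoc + " " + name + " @ " + schemaLoc + " = " + value))));
    return out;
  }
}
```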

The annotation types for specific keywords are as follows:

  • "additionalItems": java.lang.Boolean, always true if present, indicating that the subschema was applied to all remaining items in the instance array.
  • "additionalProperties": java.util.Set<String>, the set of property names whose contents were validated by this subschema.
  • "contentEncoding": java.lang.String
  • "contentMediaType": java.lang.String
  • "contentSchema": com.google.gson.JsonElement
  • "default": com.google.gson.JsonElement
  • "deprecated": java.lang.Boolean
  • "description": java.lang.String
  • "examples": com.google.gson.JsonArray
  • "format": java.lang.String
  • "items": java.lang.Integer, the largest index in the instance to which a subschema was applied, or java.lang.Boolean (always true) if a subschema was applied to every index.
  • "patternProperties": java.util.Set<String>, the set of property names matched by this keyword.
  • "properties": java.util.Set<String>, the set of property names matched by this keyword.
  • "readOnly": java.lang.Boolean
  • "title": java.lang.String
  • "unevaluatedItems": java.lang.Boolean, always true if present, indicating that the subschema was applied to all remaining items in the instance array.
  • "unevaluatedProperties": java.util.Set<String>, the set of property names whose contents were validated by this subschema.
  • "writeOnly": java.lang.Boolean

Internal APIs

There are a few internal APIs that may be useful for your own projects, outside of schema validation. Note that these are subject to change.

  1. com.qindesign.json.schema.util.Base64InputStream: Converts a Base64-encoded string into a byte stream.
  2. com.qindesign.json.schema.util.LRUCache: An LRU cache implementation.
  3. com.qindesign.json.schema.net.Hostname: Parses regular and IDN hostnames.
  4. com.qindesign.json.schema.net.URI: An RFC 3986-compliant URI parser. As of this writing, Java's URI API is only RFC 2396-compliant and is not sufficient for processing JSON Schemas.
  5. com.qindesign.json.schema.net.URIParser.parseIPv6: Parses IPv6 addresses, per RFC 3986.
  6. com.qindesign.json.schema.net.URIParser.parseIPv4: Parses IPv4 addresses, per RFC 3986.

Please consult the Javadocs for those classes and methods for more information.

Building and running

This project uses Maven as its build tool because it makes managing the dependencies easy. It uses standard Maven commands and phases. For example, to compile the project, use:

mvn compile

To clean and then re-compile:

mvn clean compile

Maven makes it easy to build, execute, and package everything with the right dependencies; however, it's also possible to use your IDE or other tools to manage the project. This section only discusses Maven usage.

Program execution with Maven

Maven takes care of project dependencies for you so you don't have to manage the classpath or downloads.

Currently, there are four predefined execution targets:

  1. main: Executes Main. Validates an instance against a schema.
  2. test: Executes Test. Runs the test suite.
  3. linter: Executes Linter. Checks a schema.
  4. coverage: Executes Coverage. Does a schema coverage check after validation.

This section shows some simple execution examples. There's more information about the included programs below.

Note that Maven doesn't automatically build the project when running an execution target. The project either has to be pre-built using compile, or compile has to be added to the command line. For example, to compile and then run the linter:

mvn compile exec:java@linter -Dexec.args="schema.json"

To run the main validator without attempting a compile first, say because it's already built:

mvn exec:java@main -Dexec.args="schema.json instance.json"

To compile and run the test suite and tell the test runner that the suite is in /suites/json-schema-test-suite:

mvn compile exec:java@test -Dexec.args="/suites/json-schema-test-suite"

To execute a specific main class, say one that isn't defined as a specific execution, add an exec.mainClass property. For example, if the fully-qualified main class is my.Main and it takes some "program arguments":

mvn exec:java -Dexec.mainClass="my.Main" -Dexec.args="program arguments"

Using Snow in your own projects

Snow is available from the Maven Central Repository. To include it in your own programs, add the following dependency:

<dependency>
  <groupId>com.qindesign</groupId>
  <artifactId>snowy-json</artifactId>
  <version>0.16.0</version>
</dependency>

The linter

The linter's job is to provide suggestions about potential errors in a schema. It flags only potential problems; their presence does not necessarily mean the schema won't work.

The linter is rudimentary and does not check or validate everything about the schema. It does currently check for the following things:

  1. Unknown format values.
  2. Empty items arrays.
  3. additionalItems without a sibling array-form items.
  4. $schema elements inside a subschema that do not have a sibling $id.
  5. Unknown keywords. Similar keywords are noted by doing case-insensitive matching to known keywords.
  6. Property names that start with "$".
  7. Unnormalized $id values.
  8. Locally-pointing $ref values that don't exist.
  9. Any "minimum" keyword that is greater than its corresponding "maximum" keyword. For example, minLength and maxLength.
  10. exclusiveMinimum is not strictly less than exclusiveMaximum.
  11. Expected type checking for appropriate keywords. For example, minimum expects that the type is "number" or "integer" and format expects a "string" type.
  12. Implied type checking for default and const; a type is expected to exist and to match the implied type for these values.
  13. Non-unique enums.
  14. Empty enum, allOf, anyOf, or oneOf.
  15. Draft 2019-09 or later schemas having keywords that were removed in Draft 2019-09.
  16. Pre-Draft 2019-09 schemas having keywords that were added in Draft 2019-09.
  17. Pre-Draft-07 schemas having keywords that were added in Draft-07.
  18. Draft 2019-09 or later, or unspecified, schemas:
    1. minContains without a sibling contains.
    2. maxContains without a sibling contains.
    3. unevaluatedItems without a sibling array-form items.
    4. $id values that have an empty fragment.
  19. Draft-07 or later, or unspecified, schemas:
    1. then without if.
    2. else without if.
  20. Draft-07 or earlier schemas:
    1. $ref members with siblings.

Doing your own linting

It's possible to add your own rules to the linter. There are four important concepts to know about when adding rules:

  1. A rule may optionally be assigned to execute for a specific element type. For example, a rule added via Linter.addStringRule will execute if the current element is a primitive string.
  2. A rule learns about the current state of the JSON tree from a context object parameter, an instance of Linter.Context.
  3. Any detected issues are sent to the context.
  4. The rules operate in addition to the existing linter rules.

The following example snippet tests for the existence of any "anyOf" schema keywords:

JsonElement schema;
// ...load the schema...
Linter linter = new Linter();
linter.addRule(context -> {
  if (context.isKeyword() && context.is("anyOf")) {
    context.addIssue("anyOf detected");
  }
});
Map<JSONPath, List<String>> issues = linter.check(schema);
// ...print the issues...

Linting by traversing the tree

The JSON class has a traverseSchema method that does a preorder tree traversal for JSON schemas. It's what the linter uses internally. It's also possible to use this to write your own linting rules.

The following example snippet also tests for the existence of any "anyOf" schema keywords:

JsonElement schema;
// ...load the schema...
JSON.traverseSchema(schema, (e, parent, path, state) -> {
  if (!state.isNotKeyword() && path.endsWith("anyOf")) {
    System.out.println(path + ": anyOf keyword present");
  }
});

The coverage checker

The coverage checker works similarly to the main validator, except that after validation, it prints out some coverage results.

It outputs two JSON objects:

  1. Seen and unseen schema locations organized by instance location.
  2. Seen schema locations only.
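
The "unseen" locations are simply the set difference between all schema locations and those applied during validation. A stand-alone sketch (hand-made sets, not Snow's actual output format):

```java
import java.util.Set;
import java.util.TreeSet;

// Illustrative: coverage compares the schema locations applied during
// validation against all locations present in the schema.
public class CoverageSketch {
  static Set<String> unseen(Set<String> allLocations, Set<String> seen) {
    Set<String> result = new TreeSet<>(allLocations);  // sorted for stable output
    result.removeAll(seen);
    return result;
  }
}
```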

Future plans

There are plans to explore supporting more features, including:

  1. Custom vocabulary support.
  2. More output formatting. All the information is currently there, but the caller must process and organize it.
  3. Better caching. The current implementation doesn't cache things such as URLs and regex Patterns across different instances of ValidatorContext, i.e. across calls to Validator.validate.
  4. Compilation into an internal representation that provides both speed and optimizations for non-dynamic validation paths.
  5. A better representation than maps for annotations and errors.
  6. A better way of filtering (i.e. organizing) errors and annotations for human consumption. For example, not needing to manually prune parent errors. A more fleshed-out way to identify terminal and non-terminal errors, and also which are important.

Possible future plans

These are plans that may or may not be explored:

  1. Linter rule IDs for selective linting.

References

  1. JSON Schema Specification
  2. Gson
  3. ClassGraph
  4. ECMA 262
  5. JSON Schema Test Suite
  6. JSON Schema Draft Sources
  7. JSON Pointer
  8. URI Syntax
  9. IDN Hostnames

An ending thought

I'd love to say this: "The validator isn't wrong, the spec is ambiguous."™
Realistic? No, but fun to say anyway.

Acknowledgements

Thanks to JetBrains for providing an Open Source licence for IntelliJ, my favourite IDE since forever.

License

Snow, a JSON Schema validator
Copyright (c) 2020-2021  Shawn Silverman

This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published
by the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
GNU Affero General Public License for more details.

You should have received a copy of the GNU Affero General Public License
along with this program.  If not, see <https://www.gnu.org/licenses/>.



snowy-json's Issues

Performance and functional comparison

Hi,

I've recently needed to compare the performance and functionality of this and other JVM based JSON validation libraries, and I thought you might like to see how this implementation compares. I've published the code and the results here: https://github.com/creek-service/json-schema-validation-comparison.

While I've spent time trying to get the best out of each library under test, it's possible that the results for this implementation could be improved if someone with more in-depth knowledge were to look into it. If you feel I'm not showing it in its best light, please feel free to raise PRs to fix issues and improve your score.

Please feel free to link to the code and results.

I hope this is of some use to you.

Thanks,

Andy

Unknown schema ID

Using version 0.14.0.

Getting the following error when trying to validate a JSON document with a JSON schema:

Execution error (MalformedSchemaException) at com.qindesign.json.schema.ValidatorContext/schemaError (ValidatorContext.java:963).
http://example.com/product.schema.json#/$schema: unknown schema ID

These are a couple of lines from the JSON schema:

"$schema": "http://json-schema.org/draft/2019-09/
"$id": "http://example.com/product.schema.json"

I instantiate the Validator with the schema read via JSON.parse from a File, and the URI for the schema via URI.parse("http://example.com").

I've tried with different URLs passed to the URI.parse("..."), and with and without an Options instance - i.e. AUTO_RESOLVE set to both true and false.

The same error every time.

Any idea what I'm doing wrong?

MalformedSchemaException when using multiple schemas

Hi, it seems that snow is having a problem when parsing a schema that references another schema via $ref. Please see the attached files.

stage-league-table.schema.json: https://pastebin.com/WmQrJGjy
common.schema.json: https://pastebin.com/ae0mcNEF

Both files are in the same directory. CoreRef.apply() fails with an exception:

Exception in thread "main" com.qindesign.json.schema.MalformedSchemaException: file:/C:/Work/java/LiveScore-DATA/src/main/resources/schemas/stage-league-table.schema.json#/stageLeagueTable/properties/categoryId/$ref: unknown reference: common.schema.json#/providerString
    at com.qindesign.json.schema.ValidatorContext.schemaError(ValidatorContext.java:908)
    at com.qindesign.json.schema.ValidatorContext.schemaError(ValidatorContext.java:931)
    at com.qindesign.json.schema.keywords.CoreRef.apply(CoreRef.java:108)
    at com.qindesign.json.schema.ValidatorContext.applyKeyword(ValidatorContext.java:1196)
    at com.qindesign.json.schema.ValidatorContext.apply(ValidatorContext.java:1162)
    at com.qindesign.json.schema.keywords.Properties.apply(Properties.java:70)
    at com.qindesign.json.schema.ValidatorContext.applyKeyword(ValidatorContext.java:1196)
    at com.qindesign.json.schema.ValidatorContext.apply(ValidatorContext.java:1162)
    at com.qindesign.json.schema.keywords.CoreRef.apply(CoreRef.java:116)
    at com.qindesign.json.schema.ValidatorContext.applyKeyword(ValidatorContext.java:1196)
    at com.qindesign.json.schema.ValidatorContext.apply(ValidatorContext.java:1162)
    at com.qindesign.json.schema.Validator.validate(Validator.java:323)
    at com.qindesign.json.schema.Main.main(Main.java:123)

It seems that Validator.scanIds() does not find the reference to common.schema.json.

Disabling annotations and retrieving output JSON during JSON Schema Validator and Linter run

Hello Snowy Authors / Contributors,

Few questions -

  1. When running the following commands, how can I know the status of the JSON Schema Validator and JSON Schema Linter? When my Build pipeline runs these commands, I want to access only the output JSON string and nothing irrelevant from the console.

JSON Schema Validator command: mvn compile exec:java@main -Dexec.args="schema.json instance.json"

Basic Output: { "valid": true, "errors": [] }

JSON Schema Linter command: mvn compile exec:java@linter -Dexec.args="schema.json"

PS: Linter is a special case, it doesn't output anything if it passes. How can I tackle this Linter scenario?

  2. When the above commands are run, can I pass any additional arguments to the command to disable displaying annotations? I searched the documentation and the Main.java in the actual source code and didn't see any flag which I can use to disable displaying annotations.

Annotations: { "annotations": [ ] }

  3. Now assume I want to run multiple automated JSON Schema and Linter checks from the Maven Java application, how can I run these commands from within the application? Should I run the commands or use the API in the JAR to call methods? Some examples for both scenarios would help.

parseReader not found error

With the Main and the artifact added to my pom.xml I get:

Exception in thread "main" java.lang.NoSuchMethodError: 'com.google.gson.JsonElement com.google.gson.JsonParser.parseReader(java.io.Reader)'
at com.qindesign.json.schema.JSON.parse(JSON.java:132)
at com.qindesign.json.schema.JSON.parse(JSON.java:119)
at com.qindesign.json.schema.JSON.parse(JSON.java:91)
at de.elster.testing.common.validator.Main.main(Main.java:86)

Does that mean that, to use Main as an example, I need to copy all the classes over?
How are the classes meant to be used in my project? There aren't any class files when adding the artifact to the pom and rebuilding my project.

I guess I am being very noob or confused here.

Wait, is the scope of it being a scaffold for how to integrate JSON validation into mvn? As such it does an excellent job!

Invalid properties not flagged error

I used the following schema document. The values for 'a' and 'b' are not valid schemas.

{
  "properties": {
    "a": 1,
    "b": 2
  }
}

But the validator wouldn't flag an error. Isn't this a bug?

com.qindesign.json.schema.Main: Mon Jun 07 12:43:52 PDT 2021 [INFO] Actual spec=null
com.qindesign.json.schema.Main: Mon Jun 07 12:43:52 PDT 2021 [INFO] Guessed spec=DRAFT_2019_09
com.qindesign.json.schema.Main: Mon Jun 07 12:43:52 PDT 2021 [INFO] Validation result: true (0.201s)
Basic output:
{
  "valid": true,
  "errors": []
}

Annotations:
{
  "annotations": []
}

Duplicate root schemas issue

Hi,

In the below example I have two different files for an Organization and Person schema in the same directory. I want to be able to validate an Organization and a Person independently (so none of them are contained into the other's definitions annotation) though they both have a reference to each other.

{
    "$schema": "http://json-schema.org/draft-07/schema#",
    "$id": "http://example.com/schemas/organization.json",
    ...
    "employee": {
        "type": "array",
        "items": { "$ref": "http://example.com/schemas/person.json" }
    }
}

{
    "$schema": "http://json-schema.org/draft-07/schema#",
    "$id": "http://example.com/schemas/person.json",
    ...
    "worksFor": { "$ref": "http://example.com/schemas/organization.json" }
}

I am taking the Person schema as the Validators base schema and the Organization as knownURL:

            personSchema = JSON.parse(personSchemaStream);
            ...
            pknownURLs = Map.of(
                    new URI(new java.net.URI("http://example.com/schemas/organization.json")),
                    new URL(new File(organizationSchemaFile).toURI().toString()),

           ....
           personValidator = new Validator(personSchema, schemaID, null, pknownURLs, opts);

What I get is an java.lang.IllegalArgumentException: Duplicate root ID: file:/.../test-classes/: http://example.com/schemas/organization.json

Do you have any idea why that is? It seems to me that it should be possible to ref the other file's schema via its id even though it is not defined in the definitions section. Am I missing something?

Change the license?

Under the GNU License, this library is almost useless. Can you please change the license to Apache 2.0?

validating hl7 fhir schema - failure

I am looking at https://www.hl7.org/fhir/fhir.schema.json.zip through the lens of your product to see if I can use Snowy to validate JSON payloads against this schema. The schema purports to be conformant to Draft-06. When I run your linter against it, it fails immediately. I was able to get a part of the schema (https://www.hl7.org/fhir/patient.schema.json.html) to pass the linter, but I can't validate the JSON against it because of missing $ref(s). I am new to JSON schemas, so I have another question regarding validation of JSON. FHIR is organized according to resources; the full schema consists of all the resources in the data model. If I validate a JSON payload representing the Patient resource, would I run it against the whole schema? Thanks in advance for the education.

AGPL

Hi there @ssilverman

I know this is a bit of a sore subject, considering how the last discussion went. Apologies for A) bringing it up again and B) not adding to the previous issue, as it was locked against further discussion.

Before jumping in, I just wanted to say thank you for your hard work on this project -- it really is very much appreciated.

I would very much like to use your schema validator for both my open source and closed source projects, however, when it comes to the closed source projects I have similar concerns to those that are outlined in Google's AGPL policy:

https://opensource.google/docs/using/agpl-policy/

Specifically Google does not permit any use of AGPL licensed OSS code in its closed source projects.

I'm definitely not an expert of OSS licensing policies but it seems to me that the AGPL license requires corporate users to open source their closed source projects if they depend on your AGPL library.

Sadly, it's not very likely that we will be able to convince our business partners that all proprietary source code should be open sourced which means -- at least in a corporate setting -- it's unlikely that we will be able to benefit from your hard work.

Perhaps you have already considered this and feel very strongly that all code should be open source and that users who wish to use your code should also open source their code as well both in an OSS setting as well as a corporate setting.

If that is your stance, I totally respect it; please consider this issue resolved. If, on the other hand, you'd like to consider a more permissive OSS license that would enable folks to use your library in corporate settings, I'm sure I and many others would appreciate it.

Performance benchmark question

Hi! Awesome work! By any chance, do you have performance benchmarks comparing to other JSON Schema validators in the JVM ecosystem?
