Data Resource API

An elegant, opinionated framework for deploying BrightHive Data Resources with zero coding.

The "Data Resource" and its related elements

BrightHive is in the business of building Data Trusts, which are legal, technical, and governance frameworks that enable networks of organizations to securely and responsibly share and collaborate with data, generating new insights and increasing their combined impact.

Data Resources are a core element of Data Trusts and may be loosely defined as data that is owned or stewarded by members of Data Trusts.

From the technical perspective, a BrightHive Data Resource is an entity composed of the following elements:

  • Data Model - The data model consists of one or more database tables and associated Object Relational Mapping (ORM) objects for communicating with these tables.
  • RESTful API - Data managed by Data Resources are accessed via RESTful API endpoints. These endpoints support standard operations (i.e. GET, POST, PUT, PATCH, DELETE).
  • Data Resource Schemas - Data Resource Schemas are JSON documents that define the data model, API endpoints, and security constraints placed on the specific Data Resource. These schemas are what allow for new Data Resources to be created without new code being written.

Get started

Spinning up a fresh, out-of-the-box Data Resource API requires minimal setup. Follow these steps:

1. Download and install docker-cli and docker-compose as appropriate for your machine and preferences (e.g., you may choose to run docker-compose in a virtualenv).

2. Clone or download the project to your local machine

git clone git@github.com:brighthive/data-resource-api.git

3. Build the Docker image with the specified tag

# run this command in the root of the data-resource-api repo
docker build -t brighthive/data-resource-api:1.0.0-alpha .

4. Launch Docker containers

# run this command in the root of the data-resource-api repo
docker-compose up

Alternatively, you can run the Data Resource API in the background:

docker-compose up -d

Visit http://0.0.0.0:8000/programs to test the API URL (though expect an "Access Denied" message).
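
As a quick smoke test, assuming the docker-compose stack is listening on port 8000 and the default programs schema is loaded, you can also hit the endpoint from Python (the requests package is an assumption of this sketch, not a project dependency):

import requests

# Expect 401/403 with an "Access Denied"-style body while the endpoint is secured.
response = requests.get("http://0.0.0.0:8000/programs")
print(response.status_code)
print(response.json())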

Note! The default docker-compose.yml launches three containers: postgres, data-model-manager, and data-resource-api. Expect the postgres container to raise an error at first, something like: ERROR: relation "checksums" does not exist at character 171. This error is temporary: it appears while Docker spins up the data-model-manager (i.e., the service that creates and runs migrations). Once the model manager has finished its work, the data-resource-api can quietly listen for incoming schema files. A successful launch should continuously log the following happy messages:

data-resource-api_1   | 2019-06-21 21:31:21,392 - data-resource-manager - INFO - Data Resource Manager Running...
data-resource-api_1   | 2019-06-21 21:31:21,393 - data-resource-manager - INFO - Checking data resources...
data-resource-api_1   | 2019-06-21 21:31:21,413 - data-resource-manager - INFO - Completed check of data resources
data-resource-api_1   | 2019-06-21 21:31:21,414 - data-resource-manager - INFO - Data Resource Manager Sleeping for 60 seconds...

Customize your JSON schema

The Data Resource API auto-magically instantiates data stores and corresponding endpoints without any coding.

Each JSON file in the schema directory delineates a resource, namely its RESTful paths ("api") and its data model ("datastore"). Create a new resource by adding another JSON file to the schema directory. (Visit the schema directory for example JSON files.)
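
As a hedged illustration, the snippet below writes a minimal, hypothetical descriptor into the schema directory from Python. The resource name, field names, and the schema/ path are made up for this example; consult the example JSON files in the repo for the authoritative layout:

import json

descriptor = {
    "api": {
        "resource": "cats",  # hypothetical endpoint name
        "methods": [
            {
                "get": {"enabled": True, "secured": False, "grants": []},
                "post": {"enabled": True, "secured": False, "grants": []},
            }
        ],
    },
    "datastore": {
        "tablename": "cats",
        "restricted_fields": [],
        "schema": {
            "fields": [
                {"name": "id", "type": "integer", "required": True},
                {"name": "name", "type": "string", "required": False},
            ],
            "primaryKey": "id",
        },
    },
}

# Dropping the file into the schema directory is what triggers resource creation.
with open("schema/cats.json", "w") as f:
    json.dump(descriptor, f, indent=2)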

Enable or disable methods

A resource JSON blob defines RESTful methods. These can be enabled or disabled.

{
  "api": {
    "resource": "name_of_endpoint",
    "methods": [
      {
        "get": {
          // enable the method
          "enabled": true,
          "secured": true,
          "grants": ["get:users"]
        },
        "post": {
          // disable the method
          "enabled": false,
          "secured": true,
          "grants": ["get:users"]
        },
...

Toggle authentication

The Data Resource API uses BrightHive authlib to add authentication to endpoints. Authentication can be toggled on or off on a per-method basis.

{
  "api": {
    "resource": "name_of_endpoint",
    "methods": [
      {
        "get": {
          "enabled": true,
          // do not require authentication
          "secured": false,
          "grants": ["get:users"]
        },
        "post": {
          "enabled": false,
          // require authentication
          "secured": true,
          "grants": ["get:users"]
        },
...

Define the Table Schema

The Data Resource API utilizes the Table Schema from Frictionless Data. The Table Schema is represented by a "descriptor", or a JSON object with particular attributes. In a Data Resource schema, the descriptor occupies the value of "datastore" >> "schema". A schema can have up to four properties, among them primaryKey, foreignKeys, and fields.

The fields property must be an array of JSON objects, and each object must define a field on the data model. A field definition should include, at minimum, a name, type (defaults to String), and required (defaults to false).

Frictionless Data and, correspondingly, the Data Resource API can be particular about what goes inside a field descriptor. TABLESCHEMA_TO_SQLALCHEMY_TYPES defines the available types in the frictionless schema and the corresponding types in SQLAlchemy. Stick to these types, or anticipate unexpected behavior in your API! (See data-resource-api/data_resource_api/factories/table_schema_types.py for more context.)

TABLESCHEMA_TO_SQLALCHEMY_TYPES = {
    'string': String,
    'number': Float,
    'integer': Integer,
    'boolean': Boolean,
    'object': String,
    'array': String, # Not supported yet
    'date': Date,
    'time': DateTime, # Not supported yet
    'datetime': DateTime,
    'year': Integer,
    'yearmonth': Integer, # Not supported yet
    'duration': Integer,
    'geopoint': String,
    'geojson': String, # Not supported yet
    'any': String
}
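
As an illustrative sketch of how this mapping gets used, a field descriptor can be turned into a SQLAlchemy Column by looking up its type. The helper below is hypothetical; the project's real factory code lives alongside table_schema_types.py and may differ:

from sqlalchemy import Column, String

# Import path inferred from the file location mentioned above.
from data_resource_api.factories.table_schema_types import (
    TABLESCHEMA_TO_SQLALCHEMY_TYPES,
)

def column_from_field(field: dict) -> Column:
    # Unknown types fall back to String, mirroring the 'any' entry above.
    sa_type = TABLESCHEMA_TO_SQLALCHEMY_TYPES.get(field.get("type", "string"), String)
    return Column(field["name"], sa_type, nullable=not field.get("required", False))

column_from_field({"name": "location_id", "type": "integer", "required": False})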

Add foreign keys

Creating a foreign key involves making two additions to the JSON.

First, add the foreign key as a field in the schema:

  "datastore": {
    "tablename": "programs",
    "restricted_fields": [],
    "schema": {
      "fields": [
        {
          "name": "location_id",
          "title": "Provider ID",
          "type": "integer",
          "description": "Foreign key for provider",
          "required": false
        },
...

Second, define the foreign key in the foreignKeys array:

"datastore": {
  "tablename": "programs",
  "restricted_fields": [],
  "schema": {
    "fields": [
      // Descriptor for location_id field, plus other fields
...

    ],
    "foreignKeys": [
      {
        "fields": ["location_id"],
        "reference": {
          "resource": "locations",
          "fields": ["id"]
        }
      }
    ]
  }
}

Hide fields

Sometimes, a data resource has fields that contain important information (e.g., a source URL, a UUID) that should not be visible in the API. Hide these fields by adding the field name to the list of restricted_fields.

"datastore": {
  "tablename": "programs",
  "restricted_fields" : ["location_id"],
...

Many to many

To create a many-to-many relationship, add the related resource to the "api" section.

{
  "api": {
    "resource": "programs",
    "methods": [
      {
        "custom": [
          {
            "resource": "/programs/credentials"
          }
        ]
      }
    ]
  },
  ...

This will generate a many-to-many relationship.

POST

To add a resource, POST and include the child field as a parameter in the body.

GET

To query the relationship, go to /programs/1/credentials. The relationship will not currently show up if you simply query /programs/1.

PUT

To replace the relationship, perform a PUT to /programs/1/credentials with the full list of primary keys.

{
  "credentials": [2,3]
}

PATCH

To append a primary key to the relationship list, perform a PATCH. If you currently have a list of "credentials": [1] and you perform a PATCH with "credentials": [2,3], it will return "credentials": [1,2,3].

DELETE

You can perform a DELETE that will remove the given items from the list. If you wish to remove all items, perform a PUT with an empty list.

Given that you have a list of relationships, [1,2,3,4,5]:

Performing a DELETE with

{
  "credentials": [2]
}

Results in [1,3,4,5].

You can also give more than one item. Given that you have a list of relationships, [1,2,3,4,5], performing a DELETE with

{
  "credentials": [1,3,5]
}

Results in [2,4].
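
To tie the verbs above together, here is a hedged walk-through with Python's requests package, assuming the docker-compose stack on port 8000 and an existing program with id 1. Payload shapes follow the examples in this section:

import requests

BASE = "http://0.0.0.0:8000"

# GET: list the credentials attached to program 1
print(requests.get(f"{BASE}/programs/1/credentials").json())

# PUT: replace the relationship with a full list of primary keys
requests.put(f"{BASE}/programs/1/credentials", json={"credentials": [1, 2, 3, 4, 5]})

# PATCH: append primary keys to the existing list
requests.patch(f"{BASE}/programs/1/credentials", json={"credentials": [6]})

# DELETE: remove the given items from the list
requests.delete(f"{BASE}/programs/1/credentials", json={"credentials": [2]})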

Configuration

The following parameters can be adjusted to serve testing, development, or particular deployment needs.

  • RELATIVE_PATH
  • ABSOLUTE_PATH
  • ROOT_PATH
  • MIGRATION_HOME
  • DATA_RESOURCE_SLEEP_INTERVAL
  • DATA_MODEL_SLEEP_INTERVAL
  • SQLALCHEMY_TRACK_MODIFICATIONS
  • PROPAGATE_EXCEPTIONS
  • POSTGRES_USER
  • POSTGRES_PASSWORD
  • POSTGRES_DATABASE
  • POSTGRES_HOSTNAME
  • POSTGRES_PORT
  • SQLALCHEMY_DATABASE_URI
  • OAUTH2_PROVIDER
  • OAUTH2_URL
  • OAUTH2_JWKS_URL
  • OAUTH2_AUDIENCE
  • OAUTH2_ALGORITHMS
  • SECRET_MANAGER
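
Many of these are conventionally supplied as environment variables. As a hedged illustration of how the Postgres settings might compose into SQLALCHEMY_DATABASE_URI (the defaults shown are assumptions; check the project's configuration code for the real values):

import os

user = os.getenv("POSTGRES_USER", "test_user")
password = os.getenv("POSTGRES_PASSWORD", "test_password")
hostname = os.getenv("POSTGRES_HOSTNAME", "localhost")
port = os.getenv("POSTGRES_PORT", "5432")
database = os.getenv("POSTGRES_DATABASE", "data_resource_dev")

# An explicit SQLALCHEMY_DATABASE_URI would take precedence over the parts.
uri = os.getenv(
    "SQLALCHEMY_DATABASE_URI",
    f"postgresql://{user}:{password}@{hostname}:{port}/{database}",
)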


Running tests

A few tests within the suite currently require a Docker container to be spun up in order to pass. For these tests to pass, you simply need to have Docker running.

For developers, run the tests as follows:

  1. Install the requirements:
pipenv install --dev
  2. Run the tests:
pipenv run pytest

To leave the database up after the run:

DR_LEAVE_DB=true pipenv run pytest

Troubleshooting

If you have trouble starting the application with docker-compose up, the problem may lie in your postgres container.

Try running docker rm <postgres-container-id> to remove any previously saved data in the postgres container. This will allow the application to rebuild the database from scratch and should let it start successfully.

Or try running docker system prune.


Issues

Root does not publish endpoints

When performing a GET to the root of the application, it returns a list of 'endpoints'. These should be the resources that are available. Currently the endpoint list is empty.

It's expected that the root endpoint should publish a list of all of the available endpoints.

Alembic fails silently

Under a set of conditions that are not clear right now, Alembic attempted to drop tables it had created in a previous migration. This action caused Alembic to fail, and it failed silently.

Is there a way we can notify the user if Alembic fails? One option is sketched below.
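
A minimal sketch of surfacing the failure, assuming migrations are run through Alembic's command API; the function and config path are illustrative, not the project's actual code:

import logging

from alembic import command
from alembic.config import Config

logger = logging.getLogger("data-model-manager")

def run_upgrade(ini_path: str = "alembic.ini") -> None:
    try:
        command.upgrade(Config(ini_path), "head")
    except Exception:
        # Log loudly, then re-raise so the failure is visible to the operator.
        logger.exception("Alembic migration failed")
        raise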

Support JSONB type when using postgres

Background

In JDX API we use JSONB to store JobSchema+ files (JsonLD).

Current Behavior

Currently the data-resource-api will translate the frictionless object type to String.

This can be cumbersome and require a lot of additional processing, as became evident when we first tried to store JobSchema+ files as strings on JDX.

Preferred Behavior

I would like to be able to use the postgres JSONB and JSON types.
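
A minimal sketch of the proposal: remap the frictionless object type to Postgres JSONB. This is not current behavior, and the import path is inferred from the file location mentioned in the README above:

from sqlalchemy.dialects.postgresql import JSONB

from data_resource_api.factories.table_schema_types import (
    TABLESCHEMA_TO_SQLALCHEMY_TYPES,
)

# Store frictionless 'object' fields as JSONB instead of String.
TABLESCHEMA_TO_SQLALCHEMY_TYPES["object"] = JSONB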

Threading is unnecessary for data model manager

Threading is unnecessary for the data model manager. We can remove the threading calls and just call it synchronously.

This is a low-priority issue, as I don't see it really impacting anything. It's a low-effort change with minimal impact.

Change error responses

I think this should be resp["message"]. That's what I am seeing as the convention, e.g. here. But maybe everything should be wrapped inside an error, like this. Honestly, whoever ingests our APIs will figure it out, but maybe look into what's best practice?

Originally posted by @reginafcompton in #23

Oh, I found other resources for this: https://dzone.com/articles/rest-api-error-handling-problem-details-response and https://itnext.io/the-case-for-standardized-error-handling-in-your-web-application-6428ff60cc31

Originally posted by @reginafcompton in #23

End to end tests have long duration

Tests take extra time to run due to the way the app has to be recreated to run two separate sets of tests. As of this writing, the tests take roughly 1 minute 40 seconds to finish, which isn't long but could be improved.

We currently have two sets of descriptor files that need to be tested. For each set of descriptors, the app has to tear down and start from scratch. This process currently takes between 30 and 45 seconds.

Parallelizing the tests would cut the test time in half.

Update descriptors to use table schema constraints correctly

Currently

Currently we list required along with the other properties in an object within the fields array.

"datastore": {
      "schema": {
        "fields": [
          {
            "name": "id",
            "title": "Test ID",
            "description": "Test Desc",
            "type": "integer",
            "required": True
          },
          {
            "name": "name",
            "title": "namename",
            "description": "test name",
            "type": "string",
            "required": True
          }
        ],
        "primaryKey": "id"
      }
    }

Desired

Per the Table Schema spec, it appears that required should be nested within a constraints property: https://frictionlessdata.io/specs/table-schema/#constraints

"datastore": {
      "schema": {
        "fields": [
          {
            "name": "id",
            "title": "Test ID",
            "description": "Test Desc",
            "type": "integer",
            "constraints": {
                "required": True
            }
          },
          {
            "name": "name",
            "title": "namename",
            "description": "test name",
            "type": "string",
            "constraints": {
                "required": True
            }
          }
        ],
        "primaryKey": "id"
      }
    }

Raise error when many-to-many table is not found

DMM = DataModelManagerSync(base)
DMM.load_descriptor_into_sql_alchemy_model(skills_descriptor)
DMM.load_descriptor_into_sql_alchemy_model(frameworks_descriptor)  # this will break if we call this first and/or only call this
table_list = list(base.metadata.tables.keys())

expect(table_list).to(equal(['skills', 'frameworks/skills', 'frameworks']))

Support n:n relationships

Presently, the Data Resource API handles 1:1 and 1:n relationships. This enables the API to support queries such as /endpoint and /endpoint/{id}. In order to support deeper nesting of API queries, such as /endpoint/{id}/other_endpoint, the Data Resource API must be able to dynamically support n:n relationships that will be inferred from the schema.

Over reliance on end to end tests

Currently in the Data Resource API we have an over-reliance on end-to-end tests.

Testing much of the functionality at the interface level requires spinning up the database and building the ORM models.

I'd like to see more unit tests interacting with SQLAlchemy and the routing. In order to do that we need to create some new testing patterns, e.g. along the lines of the sketch below.
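
A hedged sketch of the kind of unit test this asks for: exercising a SQLAlchemy model against an in-memory SQLite engine, with no Docker or Postgres required. The model is illustrative, not one of the project's generated ORM classes:

from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class Program(Base):
    __tablename__ = "programs"
    id = Column(Integer, primary_key=True)
    name = Column(String)

def test_insert_program():
    engine = create_engine("sqlite://")  # in-memory database
    Base.metadata.create_all(engine)
    session = sessionmaker(bind=engine)()
    session.add(Program(name="GED Prep"))
    session.commit()
    assert session.query(Program).count() == 1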

API methods is an array but should not be

Current

Currently in the descriptor file, API methods are described as a list within api -> methods. We only seem to be using one object, and it's unclear what the expected behavior should be if you were to add additional method objects to the list.

The same thing is also happening with the custom property, within api -> methods -> custom -> methods.

"api": {
	"resource": "programs",
	"methods": [
		{
			"get": {
				"enabled": true,
				"secured": false,
				"grants": ["get:users"]
			},
			"post": {
				"enabled": true,
				"secured": false,
				"grants": ["get:users"]
			},
			"put": {
				"enabled": true,
				"secured": true,
				"grants": ["get:users"]
			},
			"patch": {
				"enabled": true,
				"secured": true,
				"grants": ["get:users"]
			},
			"delete": {
				"enabled": true,
				"secured": true,
				"grants": ["get:users"]
			},
			"custom": [
				{
					"resource": "/programs/credentials",
					"methods": [
						{
							"get": {
								"enabled": true,
								"secured": false,
								"grants": ["get:users"]
							},
							"put": {
								"enabled": true,
								"secured": false,
								"grants": []
							},
							"patch": {
								"enabled": true,
								"secured": false,
								"grants": []
							}
						}
					]
				}
			]
		}
	]
}

Expected

Should be

"api": {
	"resource": "programs",
	"methods": {
		"get": {
			"enabled": true,
			"secured": false,
			"grants": ["get:users"]
		},
		"post": {
			"enabled": true,
			"secured": false,
			"grants": ["get:users"]
		},
		"put": {
			"enabled": true,
			"secured": true,
			"grants": ["get:users"]
		},
		"patch": {
			"enabled": true,
			"secured": true,
			"grants": ["get:users"]
		},
		"delete": {
			"enabled": true,
			"secured": true,
			"grants": ["get:users"]
		},
		"custom": [
			{
				"resource": "/programs/credentials",
				"methods": {
					"get": {
						"enabled": true,
						"secured": false,
						"grants": ["get:users"]
					},
					"put": {
						"enabled": true,
						"secured": false,
						"grants": []
					},
					"patch": {
						"enabled": true,
						"secured": false,
						"grants": []
					}
				}
			}
		]
	}
}

pre-Alpha checklist

  • Documentation: Complete README starter template
    • GitHub repo is public (or documented exception in README)
    • License exists (MIT License, or other license with documented exception to MIT in README)
  • Tests: Some unit tests written
    • Configure coverage tools (Coveralls.io + pytest.cov)
    • Configure continuous integration (Circle CI)
    • Configure style checking (aka code formatting) tool (flake8)
  • Contribute: Ability for teammate to quickly contribute
    • “Getting Started” in README
    • Docker setup: Dockerfile
    • Docker setup: docker-compose.yml (or docker-compose-dev.yml)

Alembic tried to drop tables

Alembic will create migrations to drop existing tables in production if the Data Model Manager (DMM) is restarted and runs against a database that has had migrations run against it.

Steps to reproduce

  1. Create two or three descriptors in the schema directory.
  2. Start an empty database -- through this entire process do not allow the DB to restart.
  3. Start DMM.
  4. Allow DMM to run migrations and upgrade database to match the state represented by the schema directory.
  5. Take down the DMM but allow the database to remain running.
  6. Add one or more new schema descriptors to the schema folder.
  7. Start the DMM.
  8. Allow the new schemas to be processed by the DMM first before processing the original schemas

Expected

The DMM should be aware of the previous state of the database and create a new migration that only adds the new schema data and does not drop any tables from the database.

Actual

The DMM is unaware of the previous state of the database and creates a revision that includes dropping all of the existing tables in the database.

Stack trace does not print out during tests

Current

When running the application live through Docker if an error occurs it will print the stack trace to the console.

When running pytest and an error occurs the stack trace is not printed to console.

Expected

When I run pytest and an error occurs within the Data Resource API it should print out the stack trace to console when using pytest with -s.

Pagination error when offset and limit are specified

On the Virginia Programs data-resource-api, a user reports that when the offset and limit parameters are specified for a GET request, the API returns a null result. When the offset and limit are left off, the behavior (with the default parameters) is as expected.

Data Resource Manager crashes when non-JSON file is encountered

If a non-JSON file is encountered in the location where the Data Resource API retrieves files, it will crash. This sometimes happens when configurations are pulled from a repository that contains additional files (such as a README). The Data Resource Manager should gracefully ignore the invalid file and continue to process valid files. Ideally, the skipped file should be noted in the logs.
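
A minimal sketch of the graceful behavior this asks for, assuming descriptors are read from a directory; the function name is illustrative:

import json
import logging
import os

logger = logging.getLogger("data-resource-manager")

def load_descriptors(schema_dir: str) -> list:
    descriptors = []
    for name in os.listdir(schema_dir):
        path = os.path.join(schema_dir, name)
        if not name.endswith(".json"):
            logger.warning("Skipping non-JSON file: %s", path)
            continue
        try:
            with open(path) as f:
                descriptors.append(json.load(f))
        except json.JSONDecodeError:
            logger.exception("Skipping invalid JSON file: %s", path)
    return descriptors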

Fail to upgrade on sleep

Sometimes after starting the DR API I get the following set of error messages.

data-resource-api_1   | 2020-02-24 23:09:09,550 - data-resource-manager - INFO - Completed check of data resources
data-resource-api_1   | 2020-02-24 23:09:09,550 - data-resource-manager - INFO - Data Resource Manager Sleeping for 60 seconds...
data-resource-api_1   | 2020-02-24 23:09:09,557 - data-resource-manager - ERROR - Error checking data resource
data-resource-api_1   | Traceback (most recent call last):
data-resource-api_1   |   File "/data-resource/data_resource_api/app/data_resource_manager.py", line 276, in work_on_schema
data-resource-api_1   |     data_resource.data_resource_object.data_model = data_resource.data_model_object
data-resource-api_1   | AttributeError: 'NoneType' object has no attribute 'data_model'
data-resource-api_1   | 2020-02-24 23:09:09,580 - data-resource-manager - INFO - Completed check of data resources
data-resource-api_1   | 2020-02-24 23:09:09,581 - data-resource-manager - INFO - Data Resource Manager Sleeping for 60 seconds...

I suspect a set of descriptors that are not entirely valid will produce this. Based on the logs, it appears something isn't being created correctly and isn't raising an error.

The data_resource_object is a Flask-RESTful object, so that leads me to believe line 295 is failing and not returning an error.

                data_resource.data_resource_object = self.data_resource_factory.create_api_from_dict(
                    api_schema, data_resource_name, table_name, self.api, data_resource.data_model_object, table_schema, restricted_fields)

When the API encounters exceptions, they should be logged and the client should be notified.

Current

When the client hits an API route and the application encounters an exception, it will only return the exception to the client.

Additionally, we don't have a pattern that allows functions to return an error to the client when they encounter an exception.

Expected

resource_handler.py should allow exception logging and allow functions to return errors to the client.

Proposal

In the JDX API, I used a pattern where raising a custom exception would log it and return an error message to the client.

import logging

from flask import Flask, jsonify
from flask_restful import Resource


class ApiError(Exception):
    def __init__(self, message, status_code=400, payload=None):
        Exception.__init__(self)
        self.message = message
        self.status_code = status_code
        self.payload = payload or ()

    def get_response(self):
        # Log the exception, then serialize a message for the client.
        logger = logging.getLogger('inputoutput')
        logger.exception('ApiError:')
        ret = dict(self.payload)
        ret['message'] = self.message
        return jsonify(ret), self.status_code


def create_app():
    app = Flask(__name__)
    app = configure_app(app)  # configure_app is elided in the original snippet

    @app.errorhandler(ApiError)
    def handle_api_error(error):
        return error.get_response()

    return app


class Route(Resource):
    def get(self):
        ...
        self.check_if_match_table_data_exists(pipeline)
        ...

    def check_if_match_table_data_exists(self, pipeline):
        match_table_data = pipeline.match_table_data
        if not match_table_data:
            raise ApiError('Data not in yet', 402)
        return match_table_data

Error on startup

On startup, the data resource manager will print out a series of errors.

Additionally, the thread will continue to execute, and the sleep call will not take effect until later.

Table Schema validation failure only surfaces on an HTTP request and does not print errors.

Current
If the table schema in your descriptor file does not pass validation in the resource handler, it raises an unhandled error, and only when interacting with the API via an HTTP method.

Here is the output:

data-resource-api_1   | Traceback (most recent call last):
data-resource-api_1   |   File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1949, in full_dispatch_request
data-resource-api_1   |     rv = self.dispatch_request()
data-resource-api_1   |   File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1935, in dispatch_request
data-resource-api_1   |     return self.view_functions[rule.endpoint](**req.view_args)
data-resource-api_1   |   File "/usr/local/lib/python3.7/site-packages/flask_restful/__init__.py", line 458, in wrapper
data-resource-api_1   |     resp = resource(*args, **kwargs)
data-resource-api_1   |   File "/usr/local/lib/python3.7/site-packages/flask/views.py", line 89, in view
data-resource-api_1   |     return self.dispatch_request(*args, **kwargs)
data-resource-api_1   |   File "/usr/local/lib/python3.7/site-packages/flask_restful/__init__.py", line 573, in dispatch_request
data-resource-api_1   |     resp = meth(*args, **kwargs)
data-resource-api_1   |   File "/data-resource/data_resource_api/api/core/versioned_resource.py", line 171, in post
data-resource-api_1   |     return self.get_resource_handler(request.headers).insert_one(self.data_model, self.data_resource_name, self.table_schema, request)
data-resource-api_1   |   File "/data-resource/data_resource_api/api/v1_0_0/resource_handler.py", line 262, in insert_one
data-resource-api_1   |     if not validate(table_schema):
data-resource-api_1   |   File "/usr/local/lib/python3.7/site-packages/tableschema/validate.py", line 16, in validate
data-resource-api_1   |     Schema(descriptor, strict=True)
data-resource-api_1   |   File "/usr/local/lib/python3.7/site-packages/tableschema/schema.py", line 41, in __init__
data-resource-api_1   |     self.__build()
data-resource-api_1   |   File "/usr/local/lib/python3.7/site-packages/tableschema/schema.py", line 255, in __build
data-resource-api_1   |     raise exception
data-resource-api_1   |   File "/usr/local/lib/python3.7/site-packages/tableschema/schema.py", line 250, in __build
data-resource-api_1   |     self.__profile.validate(self.__current_descriptor)
data-resource-api_1   |   File "/usr/local/lib/python3.7/site-packages/tableschema/profile.py", line 64, in validate
data-resource-api_1   |     raise exceptions.ValidationError(message, errors=errors)
data-resource-api_1   | tableschema.exceptions.ValidationError: There are 1 validation errors (see exception.errors)
data-resource-api_1   | 2019-12-11 19:18:39,102 - data-model-manager - ERROR - Encountered an error while processing a request.
data-resource-api_1   | Traceback (most recent call last):
data-resource-api_1   |   File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1949, in full_dispatch_request
data-resource-api_1   |     rv = self.dispatch_request()
data-resource-api_1   |   File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1935, in dispatch_request
data-resource-api_1   |     return self.view_functions[rule.endpoint](**req.view_args)
data-resource-api_1   |   File "/usr/local/lib/python3.7/site-packages/flask_restful/__init__.py", line 458, in wrapper
data-resource-api_1   |     resp = resource(*args, **kwargs)
data-resource-api_1   |   File "/usr/local/lib/python3.7/site-packages/flask/views.py", line 89, in view
data-resource-api_1   |     return self.dispatch_request(*args, **kwargs)
data-resource-api_1   |   File "/usr/local/lib/python3.7/site-packages/flask_restful/__init__.py", line 573, in dispatch_request
data-resource-api_1   |     resp = meth(*args, **kwargs)
data-resource-api_1   |   File "/data-resource/data_resource_api/api/core/versioned_resource.py", line 171, in post
data-resource-api_1   |     return self.get_resource_handler(request.headers).insert_one(self.data_model, self.data_resource_name, self.table_schema, request)
data-resource-api_1   |   File "/data-resource/data_resource_api/api/v1_0_0/resource_handler.py", line 262, in insert_one
data-resource-api_1   |     if not validate(table_schema):
data-resource-api_1   |   File "/usr/local/lib/python3.7/site-packages/tableschema/validate.py", line 16, in validate
data-resource-api_1   |     Schema(descriptor, strict=True)
data-resource-api_1   |   File "/usr/local/lib/python3.7/site-packages/tableschema/schema.py", line 41, in __init__
data-resource-api_1   |     self.__build()
data-resource-api_1   |   File "/usr/local/lib/python3.7/site-packages/tableschema/schema.py", line 255, in __build
data-resource-api_1   |     raise exception
data-resource-api_1   |   File "/usr/local/lib/python3.7/site-packages/tableschema/schema.py", line 250, in __build
data-resource-api_1   |     self.__profile.validate(self.__current_descriptor)
data-resource-api_1   |   File "/usr/local/lib/python3.7/site-packages/tableschema/profile.py", line 64, in validate
data-resource-api_1   |     raise exceptions.ValidationError(message, errors=errors)
data-resource-api_1   | tableschema.exceptions.ValidationError: There are 1 validation errors (see exception.errors)
data-resource-api_1   | 
data-resource-api_1   | During handling of the above exception, another exception occurred:
data-resource-api_1   | 
data-resource-api_1   | Traceback (most recent call last):
data-resource-api_1   |   File "/usr/local/lib/python3.7/site-packages/gunicorn/workers/base_async.py", line 56, in handle
data-resource-api_1   |     self.handle_request(listener_name, req, client, addr)
data-resource-api_1   |   File "/usr/local/lib/python3.7/site-packages/gunicorn/workers/ggevent.py", line 160, in handle_request
data-resource-api_1   |     addr)
data-resource-api_1   |   File "/usr/local/lib/python3.7/site-packages/gunicorn/workers/base_async.py", line 107, in handle_request
data-resource-api_1   |     respiter = self.wsgi(environ, resp.start_response)
data-resource-api_1   |   File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 2463, in __call__
data-resource-api_1   |     return self.wsgi_app(environ, start_response)
data-resource-api_1   |   File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 2449, in wsgi_app
data-resource-api_1   |     response = self.handle_exception(e)
data-resource-api_1   |   File "/usr/local/lib/python3.7/site-packages/flask_restful/__init__.py", line 269, in error_router
data-resource-api_1   |     return original_handler(e)
data-resource-api_1   |   File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1866, in handle_exception
data-resource-api_1   |     reraise(exc_type, exc_value, tb)
data-resource-api_1   |   File "/usr/local/lib/python3.7/site-packages/flask/_compat.py", line 38, in reraise
data-resource-api_1   |     raise value.with_traceback(tb)
data-resource-api_1   |   File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 2446, in wsgi_app
data-resource-api_1   |     response = self.full_dispatch_request()
data-resource-api_1   |   File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1951, in full_dispatch_request
data-resource-api_1   |     rv = self.handle_user_exception(e)
data-resource-api_1   |   File "/usr/local/lib/python3.7/site-packages/flask_restful/__init__.py", line 269, in error_router
data-resource-api_1   |     return original_handler(e)
data-resource-api_1   |   File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1821, in handle_user_exception
data-resource-api_1   |     return handler(e)
data-resource-api_1   |   File "/data-resource/data_resource_api/app/exception_handler.py", line 92, in handle_errors
data-resource-api_1   |     return e.get_response()
data-resource-api_1   | AttributeError: 'ValidationError' object has no attribute 'get_response'

Expected
We should handle the error and print the validation messages.
We should probably run the check on load; a hedged sketch follows.
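
A minimal sketch of running the check at load time, using the same tableschema validate call that appears in the stack trace above; the function name is illustrative:

import logging

from tableschema import validate, exceptions

logger = logging.getLogger("data-resource-manager")

def check_table_schema(table_schema: dict) -> bool:
    try:
        validate(table_schema)  # raises ValidationError on a bad descriptor
        return True
    except exceptions.ValidationError as error:
        for validation_error in error.errors:
            logger.error("Table Schema validation error: %s", validation_error)
        return False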
