pecanproject / betydb-yaba

Yet Another BETYdb API (for metadata upload)
License: BSD 3-Clause "New" or "Revised" License
Right now a generic message is returned when a CSV file has too few or too many of the required columns. A message that lists the expected column names, and indicates that the CSV file must contain all of those columns and only those columns, would be more helpful.
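A more helpful check could be sketched like this (the column names and function name here are illustrative, not the actual YABA code):

```python
# Hypothetical required-column set; the real list lives in the YABA routes.
REQUIRED_COLUMNS = ["name", "start_date", "end_date", "description", "design"]

def check_columns(columns):
    """Return None if the CSV columns are exactly the required set,
    otherwise a message listing what was expected, missing, or unexpected."""
    missing = [c for c in REQUIRED_COLUMNS if c not in columns]
    extra = [c for c in columns if c not in REQUIRED_COLUMNS]
    if not missing and not extra:
        return None
    return (
        "CSV file must contain exactly these columns: "
        + ", ".join(REQUIRED_COLUMNS)
        + (". Missing: " + ", ".join(missing) if missing else "")
        + (". Unexpected: " + ", ".join(extra) if extra else "")
    )
```

The returned message names each missing or unexpected column instead of a generic error.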
The array needs to be initialized inside the loop at the following location:
Line 145 in 14625fb
Right now it accumulates every point across all geometry objects instead of starting fresh for each object.
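The bug class looks like the sketch below (variable and function names are hypothetical, not the exact code at Line 145):

```python
def collect_points(geometries):
    """Collect the points of each geometry object into its own list."""
    results = []
    for geom in geometries:
        coords = []           # must be re-initialized for EACH geometry;
        for point in geom:    # if declared outside the loop, points from
            coords.append(point)  # earlier geometries leak into later ones
        results.append(coords)
    return results
```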
This issue aims to generate a coverage report for the Python tests in the Py_test action. Right now it produces a warning and exits.
For now, you can see the runs for the actions here: BETYdb-YABA/actions --> YABA app and client tests
@gsrohde brought up the following points in this comment: PecanProject/betydbdoc-dataentry#4 (comment)
These questions should be answered in the README and/or the betydbdoc-dataentry documentation:
Need to define the API endpoints that will be used in the metadata upload workflow: https://osf.io/v7f9t/wiki/Metadata%20Entry%20Workflow/
The idea is that there will be an endpoint for each of the add_* functions in https://github.com/az-digitalag/betydbtools/pull/2/files
but it is probably easiest to follow the SQL here: https://osf.io/v7f9t/wiki/Metadata%20Entry%20Workflow/
Completion criteria
@dlebauer @kimberlyh66 @gsrohde I have added the CSV schema to Swagger under examples. For now I have implemented it for the experiments endpoint only.
https://app.swaggerhub.com/apis/y51/bet-ydb_yaba/1.0.5#/experiments/Meta.insert_experiments
I added only the columns and one example row.
Please review it.
The following line should be removed, since the data variable already has the notes:
Line 154 in 14625fb
It causes a crash because data_g probably does not have a notes field.
When loading plots through the YABA interface, plot names are not correctly matched with the shapes in the shapefile, leading to incorrect plot boundaries.
The goal of this issue is to implement data validation for shapefiles and spreadsheets before API data ingestion, by modifying the existing API route functionality.
Lines 52 to 72 in 64b540e
For example, the code in this function responds to /yaba/v1/experiments (for inserting experiments into the BETY database). We can modify it to accept one more parameter, status (which can be true or false): if false, the route validates the data and returns a response without inserting anything; otherwise it continues with the insertion. In this example, we can check status after line 64 and, if it is false, return a response from there and stop further execution:

    if all(x in accepted_columns for x in columns):
        if status == False:
            msg = {'Message': True}
            return make_response(jsonify(msg), 200)
So, what do you say?
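The proposed validate-only mode could be sketched without Flask plumbing like this (the function signature, column set, and messages are illustrative, not the actual Meta.py code):

```python
# Hypothetical accepted-column set for the experiments route.
ACCEPTED_COLUMNS = {"name", "start_date", "end_date", "description", "design"}

def insert_experiments(columns, rows, validate_only=False):
    """Return a (body, status_code) pair; skip insertion when validate_only."""
    if not set(columns) <= ACCEPTED_COLUMNS:
        return {"Message": "Unexpected columns"}, 400
    if validate_only:
        # Client asked only for validation: report success, insert nothing.
        return {"Message": "Data is valid"}, 200
    # ... otherwise continue with the actual database insertion here ...
    return {"Message": "Inserted {} rows".format(len(rows))}, 201
```

In the real route the `validate_only` flag would come from the request (the `status` parameter proposed above), and the tuples would be wrapped with `make_response(jsonify(...))`.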
@dlebauer @KristinaRiemer @saurabh1969 @Chris-Schnaufer
This is not needed because
Why isn't this display style part of the "Link" element's CSS? The bar may change its element spacing when the user clicks it, and it may be better to start out with correct spacing.
Originally posted by @Chris-Schnaufer in #50 (comment)
Need to draft what the UI will look like for an end user who wants to upload metadata to BETYdb.
This will be needed for Phase II of GSOC but it will help to have this sooner. See also https://osf.io/v7f9t/wiki/GSOC%202019%20Workplan/
This may require a few iterations.
Completion criteria - a document with images of each web page in the metadata upload workflow to be implemented.
There should be a page that I can point data providers to that describes how to prepare and upload data. This would loosely follow the steps outlined in the workflow that uses SQL: https://osf.io/v7f9t/wiki/Metadata%20Entry%20Workflow/
This can be drafted in Markdown and later moved to OSF.
Currently the design column is required in the CSV file. This requirement should be removed from the code:
Line 28 in 64b540e
Write stylesheets for application endpoints to upload metadata
Put together list of deliverables and specific tasks in a new OSF page for the first deadline
@kimberlyh66 please
Created shape-visual.js and gmap.js components that use react-leaflet to visualize the GeoJSON data.
We are going to merge #5. Then we can refactor to combine the contents of input_files and yaba_test/input_files, which are duplicates.
Actually, when I was inserting into the sites table using the client route,
curl -F "fileName=@input_files/sites.csv" \
-F "shp_file=@input_files/S8_two_row_polys.shp" \
-F "dbf_file=@input_files/S8_two_row_polys.dbf" \
-F "prj_file=@input_files/S8_two_row_polys.prj" \
-F "shx_file=@input_files/S8_two_row_polys.shx" \
http://localhost:6001/sites
the app returned a 410 response and logged this error:
ERROR:root:Traceback (most recent call last):
yaba_api_1 | File "fiona/_shim.pyx", line 74, in fiona._shim.gdal_open_vector
yaba_api_1 | File "fiona/_err.pyx", line 270, in fiona._err.exc_wrap_pointer
yaba_api_1 | fiona._err.CPLE_OpenFailedError: '/code/temp/shp_file' not recognized as a supported file format.
yaba_api_1 |
yaba_api_1 | During handling of the above exception, another exception occurred:
yaba_api_1 |
yaba_api_1 | Traceback (most recent call last):
yaba_api_1 | File "/code/Meta.py", line 132, in insert_sites
yaba_api_1 | data_g1=gpd.read_file(shp_file_target)
yaba_api_1 | File "/usr/local/lib/python3.7/site-packages/geopandas/io/file.py", line 77, in read_file
yaba_api_1 | with reader(path_or_bytes, **kwargs) as features:
yaba_api_1 | File "/usr/local/lib/python3.7/site-packages/fiona/env.py", line 398, in wrapper
yaba_api_1 | return f(*args, **kwargs)
yaba_api_1 | File "/usr/local/lib/python3.7/site-packages/fiona/__init__.py", line 254, in open
yaba_api_1 | layer=layer, enabled_drivers=enabled_drivers, **kwargs)
yaba_api_1 | File "/usr/local/lib/python3.7/site-packages/fiona/collection.py", line 154, in __init__
yaba_api_1 | self.session.start(self, **kwargs)
yaba_api_1 | File "fiona/ogrext.pyx", line 484, in fiona.ogrext.Session.start
yaba_api_1 | File "fiona/_shim.pyx", line 81, in fiona._shim.gdal_open_vector
yaba_api_1 | fiona.errors.DriverError: '/code/temp/shp_file' not recognized as a supported file format.
But when I use the yaba_api route for inserting into the sites table,
curl -F "fileName=@input_files/sites.csv" \
-F "shp_file=@input_files/S8_two_row_polys.shp" \
-F "dbf_file=@input_files/S8_two_row_polys.dbf" \
-F "prj_file=@input_files/S8_two_row_polys.prj" \
-F "shx_file=@input_files/S8_two_row_polys.shx" \
http://localhost:5001/yaba/v1/sites
it succeeded. Also, when I make the request from the browser, both routes log the same error. I don't know what is causing this.
Completion criteria:
The following line needs to check for a file name before trying to remove the file - there's a chance it hasn't been set:
Line 195 in 14625fb
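A guarded cleanup could look like this minimal sketch (the function name and path handling are illustrative, not the actual code at Line 195):

```python
import os

def remove_temp_file(path):
    """Delete a temporary upload file, tolerating an unset or missing path."""
    # Only attempt deletion if the name was actually set and the file exists;
    # otherwise os.remove() would raise TypeError or FileNotFoundError.
    if path and os.path.exists(path):
        os.remove(path)
```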
This issue aims at writing tests for the interface part of the YABA app, using React Testing Library and Jest.
Is BETYdb YABA only going to be used to load data from the Phoenix area? If not, a means of allowing the user to specify the GMT offset should be added.
Originally posted by @Chris-Schnaufer in #50 (comment)
The code at
Line 148 in 14625fb
needs to be changed to first check for a Z value and, if it is missing, add a 0 (zero) Z value.
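The padding step could be sketched in pure Python like this (an illustration of the idea, not the actual geopandas-based code at Line 148):

```python
def ensure_z(coords):
    """Return coordinates as (x, y, z) tuples, adding z=0 when it is missing."""
    # Coordinates from a 2-D shapefile arrive as (x, y); 3-D ones as (x, y, z).
    return [c if len(c) == 3 else (c[0], c[1], 0) for c in coords]
```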
Created a requests.js file to hit the client routes for data upload.
For GitHub actions, can you use a matrix for testing the app, interface, etc. folders? It's easier to maintain a single YAML file than multiples of the same with minor variations.
The idea was to see if matrix could be used to combine the two Python tests into one file. I feel that putting Python and Node tests into the same file defeats simplification. If putting the Python app-test and client-tests into the same file using matrix makes things less simple, then that shouldn't be done.
Originally posted by @Chris-Schnaufer in #63 (comment)
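A matrix over the two Python test folders could look something like the fragment below (folder names and versions are assumptions for illustration, not the repository's actual workflow file):

```yaml
jobs:
  py-test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        # one job per test folder instead of one YAML file per folder
        folder: [app_test, yaba_test]
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v2
      - run: pip install -r requirements.txt
      - run: pytest ${{ matrix.folder }}
```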
Draft design documents from: #1
Note that these are a very crude approximation of the core functionality. We can discuss design, layout, content in the review.
@saurabh1969 please describe the work done to date and put it in a feature branch.
A proposed validation / review page. The goal of this is to make sure that the person uploading the metadata has a chance to confirm that the metadata they are uploading is correct and consistent with how they think it should be. It can look like this:
Overview -
select name as cultivar, sitename as plot
from cultivars
join sites_cultivars on cultivars.id = sites_cultivars.cultivar_id
join sites on sites_cultivars.site_id = sites.id
Bonus: (can create a new issue for this): a page overlaying plots on a satellite or drone imagery to confirm the correct positions in space.
Implementation of metadata upload api with respect to: https://app.swaggerhub.com/apis-docs/y51/bet-ydb_yaba/1.0.1
SQL Query as per : https://osf.io/v7f9t/wiki/Metadata%20Entry%20Workflow/
Upload will be done to the following tables:
1. experiments
2. sites
3. treatments
4. cultivars
5. citations
6. experiments_sites
7. experiments_treatments
8. sites_cultivars
9. citations_sites
Completion criteria:
☐ Set up Flask
☐ Create endpoints for each table
☐ Open a pull request
This issue aims at
Also, address CORS security issue in #50 (comment)
We need to define the user story for how an extractor developer would define and enter this information. This is mostly a placeholder until we have a use case to determine the need and mechanism, e.g. from @cmbrown89 and/or @Chris-Schnaufer.
This should provide a GUI front end to the python client. https://github.com/PecanProject/BETYdb-YABA/blob/master/client/client.py#L27
It should accept the fields:
And should be inserted into the experiments table in BETYdb using the API client
Currently it appears elevation is set to 115 by default
Line 140 in 31d6287
The aim of this issue is to enable CORS (Cross-Origin Resource Sharing) for the application, to allow resource sharing from other origins. The frontend application (currently under development) needs to post data to the YABA app, but this is blocked by the cross-origin policy, so we need to enable CORS in the YABA app.
This can be done using the Flask-CORS extension:
https://flask-cors.readthedocs.io/en/latest/
Go through the process (perhaps next season) and take notes on what needs to be improved to streamline it.
This can include an improved template and also validation, e.g. at least send error messages back in the response.
Set up the development environment using Node.js and the create-react-app module.
This should be documented in the README so that someone else can quickly get the application up and running.
The aim of this issue is to make the Python codebase follow all of the linter's standard rules. Right now some warnings are ignored in the pylintrc file; the codebase should be made to follow all of these rules. This will make the YABA app better.
There were a lot of warnings for inconsistent-return-statements and missing-module-docstring, and the Pylint action treats them as errors rather than warnings, so it fails the linting test.
Originally posted by @im-prakher in #63 (comment)
Created an error page that is shown if an error occurs while uploading tables to the YABA app.
After the API key is entered on the Welcome page, it should be validated (#18):
select count(*) from users where apikey = $user_provided_apikey
should return 1
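The check above can be sketched with a parameterized query, which also avoids interpolating the user-provided key into the SQL string. This illustration uses sqlite3 for portability; BETYdb itself runs on PostgreSQL, and the function name is hypothetical:

```python
import sqlite3

def apikey_is_valid(conn, apikey):
    """Return True if exactly one users row matches the given API key."""
    cur = conn.execute(
        "SELECT count(*) FROM users WHERE apikey = ?",  # parameterized, not
        (apikey,),                                      # string-interpolated
    )
    return cur.fetchone()[0] == 1
```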
Implement functions in Python (like the R package) that enable use of the YABA API (unlike the existing R package, this will use the API rather than an ODBC database connection). This will be used both for scripted workflows and as a backend to the UI.
Completion criteria:
This should go in
User content:
Developer content:
Created a success page after successful upload.
Remove leading and trailing whitespace from all fields,
e.g. this: ABC DEF ,Lactuca sativa,,
should become: ABC DEF,Lactuca sativa,,
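One way to do this for a whole CSV is the sketch below (the function name is illustrative; stdlib `csv` is used so quoted fields survive the round trip):

```python
import csv
import io

def strip_fields(csv_text):
    """Strip leading/trailing whitespace from every field of every CSV row."""
    out = io.StringIO()
    writer = csv.writer(out, lineterminator="\n")
    for row in csv.reader(io.StringIO(csv_text)):
        writer.writerow([field.strip() for field in row])
    return out.getvalue()
```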
Would it make sense to set environment variables here that reference the other YABA server addresses, so they can be accessed through ReactJS's process.env? This way only one place needs to change if the server information changes.
Originally posted by @Chris-Schnaufer in #69 (comment)
The aim of this issue is to set port and host environment variables for the interface and visualization parts in the docker-compose file itself, so that they can be used in those parts through ReactJS's and Node.js's process.env.
For reference: Reading an environment variable in React which was set by docker
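A compose fragment for this could look something like the sketch below (the service and variable names are assumptions, not the repository's actual docker-compose file):

```yaml
services:
  interface:
    environment:
      # create-react-app only exposes variables prefixed with REACT_APP_
      - REACT_APP_API_HOST=yaba_api
      - REACT_APP_API_PORT=5001
```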
There are currently two issues:
In credentials.yaml, the PGHOST setting varies depending on the user's IP and breaks (at least on Macs) when set to localhost.
@Chris-Schnaufer might be able to help @saurabh1969 troubleshoot #1?