I was trained as an archaeologist but found my passion in data science. Here are some things I like and the public repos to prove it. You can read more details on everything at deppen8.github.io.
Building open source tools to enable data science and research
All survey points have a FieldNumber. Ideally, the Easting and Northing should lie geographically within that field. However, that is not always the case: sometimes field boundaries are too difficult to see, sometimes the GPS is not quite accurate enough near field edges, and sometimes there are simply errors in recording.
We should create a new column called geo_field that records the FieldNumber of the field each point actually falls within geographically. This will require loading the fields shapefile and using geopandas (or something similar) to find the intersection.
I am not sure whether this belongs in the checks.py module or somewhere else.
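The spatial join described above could be sketched roughly like this. The field polygons and survey points here are hypothetical stand-ins (in practice the polygons would come from the fields shapefile via gpd.read_file()), and the column handling is an assumption about how geo_field would be attached:

```python
import geopandas as gpd
from shapely.geometry import Point, Polygon

# Hypothetical field polygons; in practice, loaded from the fields shapefile,
# e.g. fields = gpd.read_file("fields.shp")
fields = gpd.GeoDataFrame(
    {"FieldNumber": ["F1", "F2"]},
    geometry=[
        Polygon([(0, 0), (10, 0), (10, 10), (0, 10)]),
        Polygon([(10, 0), (20, 0), (20, 10), (10, 10)]),
    ],
)

# Survey points with their recorded FieldNumber. The second point was
# recorded as F1 but actually falls inside F2.
points = gpd.GeoDataFrame(
    {"FieldNumber": ["F1", "F1"]},
    geometry=[Point(5, 5), Point(15, 5)],
)

# Spatial join: attach the FieldNumber of the polygon each point lies within.
# Overlapping column names get _left/_right suffixes.
joined = gpd.sjoin(
    points, fields[["FieldNumber", "geometry"]], how="left", predicate="within"
)
points["geo_field"] = joined["FieldNumber_right"].values

print(points[["FieldNumber", "geo_field"]])
```

Comparing FieldNumber against geo_field would then flag the points whose recorded field disagrees with their coordinates.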
There are still some key decisions to be made:

1. Use ipywidgets instead of bokeh widgets? ipywidgets can control bokeh plots (see the bokeh docs), so that seems promising.
2. Given question 1, should we stick with bokeh or try altair instead? My inclination is to stick with bokeh because of altair's problems with really large datasets (though this might be a problem for bokeh too).
When importing artifacts from the database, 10 of them did not have corresponding survey points. This raises a couple of questions:
How did these artifacts get entered if there are no corresponding points? Were the points deleted accidentally?
What should we do about it?
For now, I have added a .fillna(0) on the Easting and Northing columns in the leiap.spatial.find_geo_field() function. This allows the spatial join (gpd.sjoin()) to proceed without raising an error, but those artifacts will plot at (0, 0) if you try to map them, so it is not ideal behavior.
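The stopgap amounts to something like the sketch below. The artifact records and the missing_point flag are illustrative assumptions, not the actual internals of find_geo_field():

```python
import pandas as pd

# Hypothetical artifact records; the last one has no matching survey point,
# so its coordinates came through as NaN
artifacts = pd.DataFrame({
    "ArtifactID": [1, 2, 3],
    "Easting": [500100.0, 500250.0, None],
    "Northing": [4100050.0, 4100200.0, None],
})

# Flag orphaned records before filling, so they can be excluded from maps later
artifacts["missing_point"] = (
    artifacts["Easting"].isna() | artifacts["Northing"].isna()
)

# The stopgap: fill missing coordinates with 0 so the downstream spatial join
# (gpd.sjoin()) does not fail on null geometries. If mapped naively, these
# rows will plot at (0, 0).
artifacts["Easting"] = artifacts["Easting"].fillna(0)
artifacts["Northing"] = artifacts["Northing"].fillna(0)
```

Keeping a flag like missing_point (or dropping those rows before mapping) would make it easy to filter the placeholder coordinates out later, rather than discovering them as stray points at (0, 0).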