
deeposm's People

Contributors

andrewljohnson, clockwerx, dbdean, migurski, nvkelso, silberman, skylion007, zain


deeposm's Issues

Getting the import error

python bin/run_analysis.py
Traceback (most recent call last):
  File "bin/run_analysis.py", line 7, in <module>
    from src.run_analysis import analyze, render_results_as_images
  File "/DeepOSM/src/run_analysis.py", line 6, in <module>
    import label_chunks_cnn_cifar
  File "/DeepOSM/src/label_chunks_cnn_cifar.py", line 11, in <module>
    import tflearn
  File "/usr/local/lib/python2.7/dist-packages/tflearn/__init__.py", line 21, in <module>
    from .layers import normalization
  File "/usr/local/lib/python2.7/dist-packages/tflearn/layers/__init__.py", line 10, in <module>
    from .recurrent import lstm, gru, simple_rnn, bidirectional_rnn
  File "/usr/local/lib/python2.7/dist-packages/tflearn/layers/recurrent.py", line 8, in <module>
    from tensorflow.contrib.rnn.python.ops.core_rnn import static_rnn as _rnn
ImportError: No module named core_rnn

and this

File "bin/run_analysis.py", line 7, in
from src.run_analysis import analyze, render_results_as_images
File "/DeepOSM/src/run_analysis.py", line 6, in
import label_chunks_cnn_cifar
File "/DeepOSM/src/label_chunks_cnn_cifar.py", line 11, in
import tflearn
File "/usr/local/lib/python2.7/dist-packages/tflearn/init.py", line 4, in
from . import config
File "/usr/local/lib/python2.7/dist-packages/tflearn/config.py", line 5, in
from .variables import variable
File "/usr/local/lib/python2.7/dist-packages/tflearn/variables.py", line 7, in
from tensorflow.contrib.framework.python.ops import add_arg_scope as contrib_add_arg_scope
ImportError: cannot import name add_arg_scope

This happens while running the GitHub project https://github.com/zilongzhong/DeepOSM. In this one, I am able to create training data successfully, but I am not able to run run_analysis.py; running it produces the errors above.
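
Both tracebacks are symptoms of a TensorFlow/tflearn version mismatch: tensorflow.contrib.rnn.python.ops.core_rnn and contrib's add_arg_scope moved between TensorFlow releases, so a tflearn release written against one contrib layout fails to import against another. A quick diagnostic sketch, using pkg_resources since import tflearn is itself what fails:

import pkg_resources

# Print installed versions without importing tflearn (whose import is broken).
# These ImportErrors usually mean the installed tflearn expects a different
# tensorflow.contrib layout than the installed TensorFlow provides.
for pkg in ('tensorflow', 'tflearn'):
    print(pkg, pkg_resources.get_distribution(pkg).version)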

thoughts on better neural net

The current neural net uses an architecture designed for handwritten-digit classification (MNIST).

We could use AlexNet instead, or implement the neural nets described by Mnih and Hinton. The literature also describes using a sequence of pre- and post-processing neural nets, which can fill in gaps in road networks.

Expanding on these vague comments, there is a whole body of literature about how to use CNNs, RNNs, global topology, lidar elevation data, and much more to improve the accuracy of satellite imagery labeling. We should be able to get above 90% accuracy at the pixel level using just semi-local RGB data, and push past that with multiple neural nets, more data, and other improvements documented in the 2-3 years since Mnih's thesis.

See the README for a list of readings.

handle 504s in tile downloads

Traceback (most recent call last):
  File "label_chunks_cnn.py", line 109, in <module>
    odn.download_tiles()

    raise HTTPError(req.full_url, code, msg, hdrs, fp)

urllib.error.HTTPError: HTTP Error 504: Gateway Time-out
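
A hedged sketch of how the downloader could tolerate transient 504s: retry with exponential backoff and re-raise anything else. fetch_with_retries is hypothetical, not DeepOSM's actual API; the urllib imports match the urllib.error traceback above.

import time
from urllib.error import HTTPError
from urllib.request import urlopen

def fetch_with_retries(url, max_attempts=5, base_delay=2.0):
    # Retry only on HTTP 504 (Gateway Time-out), backing off exponentially;
    # any other HTTP error is re-raised immediately.
    for attempt in range(max_attempts):
        try:
            return urlopen(url).read()
        except HTTPError as err:
            if err.code != 504 or attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))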

thoughts on roadmap - start with Delaware

Improve Delaware OSM

Let’s get Delaware OSM improver running first, because:

  • it’s useful
  • it focuses dev efforts on product/launch
  • it tees up further work

Launch Requirements

  • front-end
    • two react views, fed by Django
      • list view of potential OSM errors
      • click in list view to show details for an error (a map view and a button to go to OSM to edit)
  • analysis - quick and dirty (use what we got)
    • train a brain with the current tech
      • more or less already done
      • run it on AWS and save the results (all of Delaware, all 4 bands, run the conv net option)
    • output predictions from brain for all test data, put in Postgres/django
      • UI shows these in order of confidence
      • database props:
        • ne_lat
        • ne_lon
        • sw_lat
        • sw_lon
        • grid_x (redundant with the lat/long but meh, data exists in tiles so stick in PG for now)
        • grid_y (ditto)
        • source_naip
        • percent_on
        • state (always delaware now)
      • attach lat/long to each prediction (using existing code)
      • this data drives the UI for deeposm.org/state/delaware
  • django back-end
    • login/signup user tables
    • thumbsdown votes on edits we suggest v1? maybe punt this too?

Punt for now?

  • need the following for scale, but not Delaware, punt?
    • naips in Postgres
    • OSM data via Overpass
  • better neural net
    • could do a ReLU + conv to be slightly better, or go even deeper
    • see how well our current brain (~86%) works in the Delaware scenario
    • the deeper it is, the longer the training and the slower the iteration
    • need to look at the paper to see how much computation another ~4% costs us

Other Doable Product Ideas

  • other states
    • might as well scope to one state for now
  • produce a geojson overlay to send over when they edit on iD
    • improvement on the tools for editing OSM
    • idea is to keep iterating on the tools until it’s automated
  • other features like tennis courts
    • this might be a cool next move
    • might be easier and quickly useful to generate the “best tennis court search”
  • provide training data as a service
    • also: provide hosted jupyter + training data as a service
    • unclear how people want this packaged?
    • serve ourselves first?
  • let people upload and classify drone imagery
    • would need production style data pipeline
    • would need more work on analysis
    • one of the harder product ideas

@zain @silberman Here are some thoughts on how to roadmap this... roadmap runs through Delaware and onwards?

Offer of additional imagery sources

I run a number of imagery sites... You are welcome to use any of them.

I can also create custom URLs for different tile sizes etc. as required, and can enable WMS if required.

Tiles are generated on demand and cached. Please set an HTTP User-Agent that describes the app if scraping.
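
For anyone wiring these tiles into DeepOSM, a minimal sketch of setting a descriptive User-Agent with urllib; the URL is a placeholder, not one of the offered sites:

from urllib.request import Request, urlopen

# A User-Agent that identifies the app, as the tile host requests.
req = Request('https://tiles.example.com/15/9649/12316.png',
              headers={'User-Agent': 'DeepOSM/0.1 (+https://github.com/trailbehind/DeepOSM)'})
tile_bytes = urlopen(req).read()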

Deeposm.org UI: Add "Flag as fixed" control

#59 talks a bit about "The scripts to gather data, train, and upload findings should run on a cycle, not manually when I press a button".

A simple 'this has been fixed' control could easily flag things for reassessment, or at least tag the record as probably fixed / shift it to another list.
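
A minimal Django view sketch of such a control, assuming a Finding model with flagged_count and solved_date fields (one is sketched in a later issue); the model import and URL wiring are hypothetical:

from django.http import JsonResponse
from django.utils import timezone
from django.views.decorators.http import require_POST

from findings.models import Finding  # hypothetical app/model

@require_POST
def flag_as_fixed(request, finding_id):
    # Tag the record as probably fixed so it drops off the error list
    # and can be reassessed on the next import cycle.
    finding = Finding.objects.get(pk=finding_id)
    finding.flagged_count += 1
    finding.solved_date = timezone.now()
    finding.save()
    return JsonResponse({'flagged_count': finding.flagged_count})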

add training/label data for buildings

The analysis could be simultaneously analyzing images for buildings, along with the road analysis it already does.

  1. download_labels.py needs to be extended to extract buildings, which isn't very hard
  2. way_bitmap_for_naip probably needs to be extended to draw and then shade in buildings. This still isn't very hard, maybe a tad harder than step 1.
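
For step 1, a sketch of the extraction using pyosmium rather than DeepOSM's own PBF code, just to show the shape of it; buildings are ways tagged building=*:

import osmium

class BuildingHandler(osmium.SimpleHandler):
    # Collects the node coordinates of every way tagged building=*.
    def __init__(self):
        osmium.SimpleHandler.__init__(self)
        self.buildings = []

    def way(self, w):
        if 'building' in w.tags:
            self.buildings.append([(n.lon, n.lat) for n in w.nodes])

handler = BuildingHandler()
# locations=True caches node locations so way nodes have lon/lat.
handler.apply_file('delaware-latest.osm.pbf', locations=True)
print(len(handler.buildings), 'buildings')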

Pretrained models

Are there any pretrained models available for DeepOSM to test out before training it locally?

Regards,
Abhishek

Getting warning while Training neural net

Getting this while running train_neural_net.py:

WARNING:tensorflow:Error encountered when serializing data_augmentation.
Type is unsupported, or the types of the items don't match field type in CollectionDef.
'NoneType' object has no attribute 'name'
WARNING:tensorflow:Error encountered when serializing summary_tags.
Type is unsupported, or the types of the items don't match field type in CollectionDef.
'dict' object has no attribute 'name'
WARNING:tensorflow:Error encountered when serializing data_preprocessing.
Type is unsupported, or the types of the items don't match field type in CollectionDef.
'NoneType' object has no attribute 'name'

Clock in docker is wrong, so aws doesn't allow download

This happens on both Mac and Linux (CentOS 7). After I do 'make dev' and the working container launches successfully, the clock in the container is far off from the local clock, and the offset seems random; I can't set the clock inside the container.
This can cause download failures when running create_training_data.py in the container.
I'm in timezone +8.

Getting Error while training the neural net !!!

  1. Installed Docker.
  2. make dev executed successfully.
  3. After that, when I execute python bin/create_training_data.py, I get errors like:

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
An unexpected error has occurred.
Please try reproducing the error using
the latest s3cmd code from the git master
branch found at:
https://github.com/s3tools/s3cmd
and have a look at the known issues list:
https://github.com/s3tools/s3cmd/wiki/Common-known-issues-and-their-solutions
If the error persists, please report the
following lines (removing any private
info as necessary) to:
[email protected]

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

Invoked as: /usr/local/bin/s3cmd ls --recursive --skip-existing s3://aws-naip/in/2014/1m/rgbir/ --requester-pays
Problem: gaierror: [Errno -2] Name or service not known
S3cmd: 1.6.0
python: 2.7.6 (default, Oct 26 2016, 20:30:19)
[GCC 4.8.4]
environment LANG=None

Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/s3cmd-1.6.0-py2.7.egg/EGG-INFO/scripts/s3cmd", line 2805, in <module>
  File "/usr/local/lib/python2.7/dist-packages/s3cmd-1.6.0-py2.7.egg/EGG-INFO/scripts/s3cmd", line 2713, in main
  File "/usr/local/lib/python2.7/dist-packages/s3cmd-1.6.0-py2.7.egg/EGG-INFO/scripts/s3cmd", line 120, in cmd_ls
  File "/usr/local/lib/python2.7/dist-packages/s3cmd-1.6.0-py2.7.egg/EGG-INFO/scripts/s3cmd", line 153, in subcmd_bucket_list
  File "build/bdist.linux-x86_64/egg/S3/S3.py", line 293, in bucket_list
    for dirs, objects in self.bucket_list_streaming(bucket, prefix, recursive, uri_params):
  File "build/bdist.linux-x86_64/egg/S3/S3.py", line 320, in bucket_list_streaming
    response = self.bucket_list_noparse(bucket, prefix, recursive, uri_params)
  File "build/bdist.linux-x86_64/egg/S3/S3.py", line 339, in bucket_list_noparse
    response = self.send_request(request)
  File "build/bdist.linux-x86_64/egg/S3/S3.py", line 1061, in send_request
    conn = ConnMan.get(self.get_hostname(resource['bucket']))
  File "build/bdist.linux-x86_64/egg/S3/ConnMan.py", line 179, in get
    conn.c.connect()
  File "/usr/lib/python2.7/httplib.py", line 1216, in connect
    self.timeout, self.source_address)
  File "/usr/lib/python2.7/socket.py", line 553, in create_connection
    for res in getaddrinfo(host, port, 0, SOCK_STREAM):
gaierror: [Errno -2] Name or service not known

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
An unexpected error has occurred.
Please try reproducing the error using
the latest s3cmd code from the git master
branch found at:
https://github.com/s3tools/s3cmd
and have a look at the known issues list:
https://github.com/s3tools/s3cmd/wiki/Common-known-issues-and-their-solutions
If the error persists, please report the
above lines (removing any private
info as necessary) to:
[email protected]
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
DOWNLOADING 1 PBFs...
downloads took 3.7s
EXTRACTED WAYS with locations from pbf file /DeepOSM/data/openstreetmap/delaware-latest.osm.pbf, took 7.1s


Please help. I'm using macOS Sierra.
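
The gaierror means DNS resolution failed inside the container, so s3cmd never reached S3 at all; it's a container networking problem rather than an s3cmd or DeepOSM bug. A quick check to run inside the container (the hostname is the standard S3 form for the aws-naip bucket):

import socket

# Should print address tuples when DNS works inside the container;
# raising gaierror here confirms the container can't resolve names.
print(socket.getaddrinfo('aws-naip.s3.amazonaws.com', 443, 0, socket.SOCK_STREAM))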

INSTALL ISSUES

Hi,
I have installed Docker, then did cd /DeepOSM-master and make dev, then ran python bin/create_training_data.py, and I get this error:

  File "bin/create_training_data.py", line 6, in <module>
    from src.training_data import download_and_serialize
ImportError: No module named src.training_data

Can you give me a detailed tutorial?

Simple offline prediction

I searched the code but couldn't find any simple function for offline prediction, rather than pushing results to S3.

database/models for deeposm.org (to import errors)

Finding Model

ne_lat
ne_lon
sw_lat
sw_lon
raster_filename
raster_tile_x
raster_tile_y
created_date
solved_date
flagged_count

Most props are self-explanatory, and:

  • solved_date - gets set if deeposm.org gets a new import from the same NAIP, which doesn't include that tile (makes some assumptions about the pipeline... like we will always tile NAIPs the same)
  • raster_tile_x, raster_tile_y - used to determine if a finding is solved
  • flagged_count - track how many people flag a prediction, to hide bad ones
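
A sketch of that model in Django terms; the field types are assumptions, only the prop names come from the list above:

from django.db import models

class Finding(models.Model):
    # Bounding box of the tile, in degrees.
    ne_lat = models.FloatField()
    ne_lon = models.FloatField()
    sw_lat = models.FloatField()
    sw_lon = models.FloatField()
    # Which NAIP raster the tile came from, and where in that raster;
    # used to decide whether a later import still flags this tile.
    raster_filename = models.CharField(max_length=255)
    raster_tile_x = models.IntegerField()
    raster_tile_y = models.IntegerField()
    created_date = models.DateTimeField(auto_now_add=True)
    # Set when a new import from the same NAIP no longer includes the tile.
    solved_date = models.DateTimeField(null=True, blank=True)
    # How many people flagged this prediction, to hide bad ones.
    flagged_count = models.IntegerField(default=0)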

Fail to run train_neural_net.py

“For output, DeepOSM will produce some console logs, and then JPEGs of the ways, labels, and predictions overlaid on the tiff.”
I ran train_neural_net.py successfully, but there are no JPEGs of the ways, labels, and predictions overlaid on the tiff.

Is web mercator transform correct?

In geo_util.py, this line gives me the correct web mercator coordinates:

x2, y2 = transform(in_proj, out_proj, ulon, ulat)

But this last line (just before the return) seems to transform them back to lat/lon in degrees:

x2, y2 = out_proj(x2, y2, inverse=True)

which I believe is incorrect. Am I missing something here?
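
A small check that supports this reading, assuming geo_util.py uses the pyproj 1.x API with EPSG:4326 in and EPSG:3857 (web mercator) out; calling out_proj with inverse=True does convert projected meters back to lon/lat degrees:

from pyproj import Proj, transform

in_proj = Proj(init='epsg:4326')   # lon/lat degrees (assumed)
out_proj = Proj(init='epsg:3857')  # web mercator meters (assumed)

ulon, ulat = -75.5, 39.0
x2, y2 = transform(in_proj, out_proj, ulon, ulat)  # -> web mercator meters
print(x2, y2)                                      # large values, in meters
lon2, lat2 = out_proj(x2, y2, inverse=True)        # -> degrees again
print(lon2, lat2)                                  # (-75.5, 39.0)

So if that inverse call is the last line before the return, the function returns degrees, not web mercator.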

Paths issues

The code on the master branch doesn't work; there are path issues.
Those issues seem to be solved in an unmerged pull request.
I merged the PR into my fork of the project, but more issues arose.

rotate training images

Benefits

  • produce more training data
  • make the analysis more transferable by using one data set to train and another to test (hopefully less overfitting, locally and regionally/globally)

Notes

  • each rotation needs to be unique, so that the same image/label pair doesn't accidentally get put into both training and test data
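
A minimal numpy sketch, assuming tiles arrive as image/label array pairs: generate the four 90° rotations together, and assign all rotations of a tile to one split so a rotated copy never leaks from training into test:

import numpy as np

def rotations(image, label):
    # Rotate image and label together so labels stay registered.
    return [(np.rot90(image, k), np.rot90(label, k)) for k in range(4)]

def split_with_rotations(pairs, test_fraction=0.1):
    # Every rotation of a tile goes to the same split, so the same
    # image/label pair can't end up in both training and test data.
    train, test = [], []
    for i, (image, label) in enumerate(pairs):
        bucket = test if i < len(pairs) * test_fraction else train
        bucket.extend(rotations(image, label))
    return train, test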

Training stops with 'NoneType' object has no attribute 'name'

..any idea? or a hint, where I might get deeper information about it?

Kind regards,

Daniel

Training samples: 199
Validation samples: 23

Training Step: 899 | total loss: 0.23639 | Momentum | epoch: 001 | loss: 0.23639 - acc: 0.9115 | val_loss: 0.29569 - val_acc: 0.8261 -- iter: 199/199
Training Step: 903 | total loss: 0.36168 | Momentum | epoch: 002 | loss: 0.36168 - acc: 0.8505 | val_loss: 0.23084 - val_acc: 0.9565 -- iter: 199/199
Training Step: 907 | total loss: 0.34085 | Momentum | epoch: 003 | loss: 0.34085 - acc: 0.8723 | val_loss: 0.11795 - val_acc: 0.9565 -- iter: 199/199
Training Step: 911 | total loss: 0.61323 | Momentum | epoch: 004 | loss: 0.61323 - acc: 0.7900 | val_loss: 0.16038 - val_acc: 0.9565 -- iter: 199/199
Training Step: 915 | total loss: 0.55020 | Momentum | epoch: 005 | loss: 0.55020 - acc: 0.7925 | val_loss: 0.23164 - val_acc: 0.9565 -- iter: 199/199

WARNING:tensorflow:Error encountered when serializing data_augmentation.
Type is unsupported, or the types of the items don't match field type in CollectionDef.
'NoneType' object has no attribute 'name'
WARNING:tensorflow:Error encountered when serializing summary_tags.
Type is unsupported, or the types of the items don't match field type in CollectionDef.
'dict' object has no attribute 'name'
WARNING:tensorflow:Error encountered when serializing data_preprocessing.
Type is unsupported, or the types of the items don't match field type in CollectionDef.
'NoneType' object has no attribute 'name'

classify cycleways and footways too

The training data and analyses could be expanded to include cycleways and footways.

The current analysis can only do roads (or alternately tennis courts). There is also an open issue to add buildings: #25

visualize training on TensorBoard

The label_chunks_cnn (MNIST) analysis doesn't write TensorBoard logs. The label_chunks_cnn_cifar (CIFAR-10) analysis does produce the necessary files.

This may just get closed as we throw away the initial, rudimentary analysis.
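
tflearn writes the TensorBoard event files itself through its DNN wrapper, so this is likely just configuration. A sketch; the network, input shape, and log directory are placeholders, not the actual cifar10 analysis:

import tflearn

# Any tflearn network gets TensorBoard logging via the DNN wrapper;
# tensorboard_verbose 0-3 controls how much is logged per step.
net = tflearn.input_data(shape=[None, 64, 64, 4])  # e.g. 4-band NAIP tiles
net = tflearn.fully_connected(net, 2, activation='softmax')
net = tflearn.regression(net)
model = tflearn.DNN(net, tensorboard_verbose=2, tensorboard_dir='/tmp/tflearn_logs')
# model.fit(...) writes the event files; then: tensorboard --logdir /tmp/tflearn_logs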

todos needed to make this actually work

Doodling about a production form of this, because the current version is a proof-of-proof-of-concept, and I cut many obvious corners while playing with TF and geodata.

Need to do to get this to work at all:

  • use NAIP instead of processed tiles, so we can get IR and better images in general (noted in issue #4)
  • build infrastructure for consuming more data (NAIPs and OSM data, probably a server running Postgres and some other stuff)
  • build a multi-layer neural net that outputs predictions about pixels/objects in the image, instead of just guessing whether the image has a feature or not
    • tinker with parameters like what color bands to care about, what size images to analyze, etc
    • see papers from Hinton and others

todos in progress

In Progress

Ideas

  • set aside validation images, when doing a state run (single_layer_network.py)
  • pickle models between runs
  • use same model through all states?
  • show view/fix-click count in table

Done

New Contributor

Hey guys, I am a B.Tech. CS graduate and I have been working on image processing for 1.5 years. I would like to contribute to this project. Can anyone assign me a task, no matter how simple or tough?

explain how NAIPs are indexed on the S3 bucket

I don't understand how I can make an index of the NAIP bucket on S3. The parameters are as follows: ['md', '2013', '1m', 'rgbir', '38077']

I understand all the parameters but the last one... I assume that is some sort of grid or ordinal ID.

If I knew what the final parameter was, I could write a script to get training data from any state/year. Right now, it's limited to a certain place in Maryland, around Washington, DC.
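
A heavily hedged guess: NAIP quarter-quads are named from the USGS quadrangle grid, and the directory looks like it encodes the 1x1-degree block as two digits of latitude plus three digits of west longitude (38077 = 38N, 77W, which matches the Washington DC area). Under that assumption:

def naip_prefix(state, year, resolution, bands, lat, lon):
    # Assumes the final path component is the 1-degree block:
    # 2 digits of latitude then 3 digits of (west) longitude.
    block = '{:02d}{:03d}'.format(int(lat), int(abs(lon)))
    return 's3://aws-naip/{}/{}/{}/{}/{}/'.format(state, year, resolution, bands, block)

print(naip_prefix('md', 2013, '1m', 'rgbir', 38, -77))
# s3://aws-naip/md/2013/1m/rgbir/38077/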

Pillow version is too low, causing errors when saving JPEG files

I hit this error when rendering findings to JPEG, using the render_results_for_analysis method in src/training_visualization.py. Traceback:

Wrong JPEG library version: library is 90, caller expects 80
Traceback (most recent call last):
  File "upload_data.py", line 31, in <module>
    main()
  File "upload_data.py", line 27, in main
    render_findings(raster_data_paths, model, training_info, model_info['bands'], True)
  File "/DeepOSM/src/s3_client_deeposm.py", line 30, in render_findings
    training_info['tile_size'])
  File "/DeepOSM/src/training_visualization.py", line 34, in render_results_for_analysis
    tile_size)
  File "/DeepOSM/src/training_visualization.py", line 54, in render_predictions
    predictions=predictions_by_naip)
  File "/DeepOSM/src/training_visualization.py", line 111, in render_results_as_image
    im.save(outfile, "JPEG")

Upgrading the Pillow Python library to version 4.0.0 solves this.

By the way, since the deeposm.org site no longer exists, can you provide the code that generates the local website contents? Is there a way to view the results other than as JPEGs, with the findings on a map?

gdal version mismatch?

Looks like the gdal swig binary is out of sync with newer python bindings? I'm running this inside the non-gpu docker image. I gather the GNM bits are relatively new, so it makes sense that they'd get hit.

root@f059367028e3:/DeepOSM# python ./bin/create_training_data.py 
Traceback (most recent call last):
  File "./bin/create_training_data.py", line 6, in <module>
    from src.training_data import download_and_serialize
  File "/DeepOSM/src/training_data.py", line 10, in <module>
    from osgeo import gdal
  File "/usr/local/lib/python2.7/dist-packages/osgeo/gdal.py", line 86, in <module>
    from gdalconst import *
  File "/usr/local/lib/python2.7/dist-packages/osgeo/gdalconst.py", line 148, in <module>
    OF_GNM = _gdalconst.OF_GNM
AttributeError: 'module' object has no attribute 'OF_GNM'

Also, I've tried going straight to https://hub.docker.com/r/homme/gdal/ and I can't reproduce there.

I'm wondering if one of the apt-get installs brought along an out-of-date gdal binary?

Here are a few more files:

root@f059367028e3:/usr/local/lib/python2.7/dist-packages# ls -lha /usr/local/lib/libgdal*
-rw-r--r-- 1 root root 288M Oct 28 11:24 /usr/local/lib/libgdal.a
-rwxr-xr-x 1 root root 1.6K Oct 28 11:24 /usr/local/lib/libgdal.la
lrwxrwxrwx 1 root root   17 Oct 28 11:24 /usr/local/lib/libgdal.so -> libgdal.so.20.1.0
lrwxrwxrwx 1 root root   17 Oct 28 11:24 /usr/local/lib/libgdal.so.20 -> libgdal.so.20.1.0
-rwxr-xr-x 1 root root 116M Oct 28 11:24 /usr/local/lib/libgdal.so.20.1.0
root@f059367028e3:/usr/local/lib/python2.7/dist-packages# ls -lha *gdal*
-rw-r--r-- 1 root staff 128 Oct 28 10:15 gdal.py
-rw-r--r-- 1 root staff 244 Oct 28 11:25 gdal.pyc
-rw-r--r-- 1 root staff 143 Oct 28 10:15 gdalconst.py
-rw-r--r-- 1 root staff 274 Oct 28 11:25 gdalconst.pyc
-rw-r--r-- 1 root staff 147 Oct 28 10:15 gdalnumeric.py
-rw-r--r-- 1 root staff 279 Oct 28 11:25 gdalnumeric.pyc

I think there is an error in the "false positives" wording

I think the word "false positive" in README file:

render, as JPEGs, "false positive" predictions in the OSM data - i.e. where OSM lists a road, but DeepOSM thinks there isn't one.

It is not correct usage. According to wiki, it should be false negative. I hope you do not mind if I have this issue.
Best regards,

Additional training set

NAIP offers free 1-meter-resolution images for the whole CONUS. They are georectified .tiff files organized by state, but not tiled on a z/x/y grid.

Data is accessible as "requester pays" on s3://aws-naip.

  • Is this an upgrade to the current training set? (NAIP also has NIR bands.)
    • If so, I understand there would need to be logic built to go from TIFF to z/x/y, or we'd request access to a NAIP layer from a provider that has done that, e.g. Mapbox.
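
A sketch of a requester-pays listing with boto3, equivalent to the s3cmd --requester-pays calls in other issues here; the prefix is just an example:

import boto3

# Requester pays: your AWS account is billed for the requests/transfer.
s3 = boto3.client('s3')
resp = s3.list_objects_v2(Bucket='aws-naip', Prefix='md/2013/1m/rgbir/',
                          RequestPayer='requester')
for obj in resp.get('Contents', []):
    print(obj['Key'])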

split data creation and analysis into separate Docker apps

Currently, Dockerfile.devel-gpu inherits from the GDAL Dockerfile, and then adds in both the stuff needed for 1) data creation and the stuff needed for 2) analysis... including a nested stack of TensorFlow Dockerfiles that got copied in.

Two Dockers - Data Creation and Data Analysis

It would be cleaner if one Dockerfile created training data and saved it to disk, and another used that training data from disk to analyze. The training-data Dockerfile could be mostly like the existing non-GPU Dockerfile, and the analysis Dockerfile would inherit from the stock TensorFlow Dockerfile and be short, simple, and easy to maintain/deploy to AWS.

In production, I guess one Dockerfile saves data to S3 and the other mounts that data bucket for analysis. In development, it creates a directory on disk, with no S3 involved.

API

The data creation Docker app could also serve an API, to give people training data for their own models, or to support an MNIST-like open research competition for maps.

import NAIP data into a postgres database

Putting the data in Postgres seems like a good mid-game/end-game move. Do this after we put up deeposm.org, want to scale, and/or want to provide a place for researchers to run arbitrary experiments.

Benefits include:

  • make rotating tiles easier (issue #24) - the current pipeline could be modified too, but that's more hacky
  • maybe easier to do bounding-box queries than when data is cached in a non-relational way from NAIPs to disk
  • enable an API that would allow for more arbitrary training data, in less disk space

automate/improve infrastructure

This issue describes how deeposm works now. Then it describes changes needed to improve the infrastructure. Also see notes/scripts on these issues: #8, #23, #30, and #39 (these issues were closed to merge with this issue, not completed).

Test Data and Training

  • one app does both the data prep, and neural net training
  • it uses a GPU/Tensorflow on a Linux box in my office
  • findings are then uploaded to S3

Display on deeposm.org

  • when a deeposm.org page is loaded, it checks S3, grabs findings, and updates the database
  • deeposm.org shows where DeepOSM detects mis-registered roads

Issues with this Setup

  • The scripts to gather data, train, and upload findings should run on a cycle, not manually when I press a button
  • The data prep and training modules should be separate - DeepOSM has one monster Dockerfile.gpu-devel that includes GDAL, Tensorflow, and more. This makes the build fragile and hard to deploy.
  • Actual work includes:
    • move the analysis to AWS, run on a cycle
    • set up a cron job to have deeposm.org check for new findings (see the sketch after this list)
    • parallelize the analysis, so we can do deeper nets and more area
    • import NAIPs into Postgres, instead of hacking them up and caching files
    • use Overpass or other approach to getting OSM data, instead of hacking up PBF extracts with Osmium
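
For the run-on-a-cycle and cron-job items above, a hypothetical sketch of the deeposm.org side as a boto3 poll; the bucket name, prefix, and bookkeeping are all assumptions:

import boto3

def new_finding_keys(last_seen_key, bucket='deeposm-findings', prefix='findings/'):
    # List the findings bucket and return keys newer than the last
    # one processed; a cron job would call this and import the rest.
    s3 = boto3.client('s3')
    resp = s3.list_objects_v2(Bucket=bucket, Prefix=prefix)
    keys = sorted(obj['Key'] for obj in resp.get('Contents', []))
    if last_seen_key in keys:
        return keys[keys.index(last_seen_key) + 1:]
    return keys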
