trailbehind / deeposm
Train a deep learning net with OpenStreetMap features and satellite imagery.
License: MIT License
Is it possible to use imagery other than the NAIP data mentioned in the "AWS Credentials" step?
python bin/run_analysis.py
Traceback (most recent call last):
File "bin/run_analysis.py", line 7, in <module>
from src.run_analysis import analyze, render_results_as_images
File "/DeepOSM/src/run_analysis.py", line 6, in <module>
import label_chunks_cnn_cifar
File "/DeepOSM/src/label_chunks_cnn_cifar.py", line 11, in <module>
import tflearn
File "/usr/local/lib/python2.7/dist-packages/tflearn/__init__.py", line 21, in <module>
from .layers import normalization
File "/usr/local/lib/python2.7/dist-packages/tflearn/layers/__init__.py", line 10, in <module>
from .recurrent import lstm, gru, simple_rnn, bidirectional_rnn, \
File "/usr/local/lib/python2.7/dist-packages/tflearn/layers/recurrent.py", line 8, in <module>
from tensorflow.contrib.rnn.python.ops.core_rnn import static_rnn as _rnn, \
ImportError: No module named core_rnn
and this
File "bin/run_analysis.py", line 7, in <module>
from src.run_analysis import analyze, render_results_as_images
File "/DeepOSM/src/run_analysis.py", line 6, in <module>
import label_chunks_cnn_cifar
File "/DeepOSM/src/label_chunks_cnn_cifar.py", line 11, in <module>
import tflearn
File "/usr/local/lib/python2.7/dist-packages/tflearn/__init__.py", line 4, in <module>
from . import config
File "/usr/local/lib/python2.7/dist-packages/tflearn/config.py", line 5, in <module>
from .variables import variable
File "/usr/local/lib/python2.7/dist-packages/tflearn/variables.py", line 7, in <module>
from tensorflow.contrib.framework.python.ops import add_arg_scope as contrib_add_arg_scope
ImportError: cannot import name add_arg_scope
I get these errors while running the GitHub project https://github.com/zilongzhong/DeepOSM. I can create the training data successfully, but run_analysis.py fails with the errors above.
I'm going to spin up a t2.small
EC2 instance, install Overpass, and load the north-america-latest.osm.bz2
OSM extract from geofabrik.de into it.
I've configured .s3cfg, and I can download NAIP images via s3cmd fine, but when running bin/create_training_data.py, it fails with a 403 error while downloading NAIP images.
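One likely cause of a 403 on s3://aws-naip is a request made without the requester-pays flag (note the --requester-pays option in the s3cmd invocation quoted later in this thread). A minimal Python sketch of building such a request for boto3; the bucket layout matches the create_training_data.py parameters, but the filename here is hypothetical:

```python
# Sketch: requester-pays access to the NAIP bucket. Only the parameter
# dict is built here; the actual boto3 call is left commented out.

def naip_request_params(state, year, resolution, band, grid_cell, filename):
    """Build get_object kwargs for the requester-pays s3://aws-naip bucket."""
    key = "/".join([state, year, resolution, band, grid_cell, filename])
    return {
        "Bucket": "aws-naip",
        "Key": key,
        "RequestPayer": "requester",  # omitting this typically yields a 403
    }

params = naip_request_params("md", "2013", "1m", "rgbir", "38077",
                             "example.tif")  # hypothetical filename
# import boto3
# body = boto3.client("s3").get_object(**params)["Body"].read()
```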
The current neural net uses an architecture originally designed for handwritten digit classification (MNIST).
We could use AlexNet instead, or implement the neural nets described by Mnih and Hinton. The literature also describes using a sequence of pre- and post-processing neural nets that can fill in gaps in road networks.
Expanding on these vague comments, there is a whole body of literature on using CNNs, RNNs, global topology, lidar elevation data, and much more to improve the accuracy of satellite imagery labeling. We should be able to get above 90% at the pixel level using just semi-local RGB data, and push past that with multiple neural nets, more data, and other improvements documented in the 2-3 years since Mnih's thesis.
See the README for a list of readings.
Traceback (most recent call last):
File "label_chunks_cnn.py", line 109, in <module>
odn.download_tiles()
raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 504: Gateway Time-out
Let’s get Delaware OSM improver running first, because:
@zain @silberman Here are some thoughts on how to roadmap this... roadmap runs through Delaware and onwards?
I run a number of imagery sites... You are welcome to use any of them.
http://[a|b|c].aerial.openstreetmap.org.za/ngi-aerial/{zoom}/{x}/{y}.jpg
http://[a|b|c].os.openstreetmap.org/layer/gb_os_sv_2016_04/{zoom}/{x}/{y}.png
http://[a|b|c].hampshire.aerial.openstreetmap.org.uk/layer/gb_hampshire_aerial_rgb/{zoom}/{x}/{y}.png
http://[a|b|c].hampshire.aerial.openstreetmap.org.uk/layer/gb_hampshire_aerial_fcir/{zoom}/{x}/{y}.png
http://[a|b|c].surrey.aerial.openstreetmap.org.uk/layer/gb_surrey_aerial/{zoom}/{x}/{y}.png
http://[a|b|c].agri.openstreetmap.org/layer/au_ga_agri/{zoom}/{x}/{y}.png
I can also create custom urls for different tile sizes etc, as required. I can also enable WMS if required.
Tiles are generated on-demand and cached. Please set a HTTP User-Agent that describes the app if scraping.
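As a sketch of consuming these templates, here is how the a|b|c subdomain rotation and the requested User-Agent might look in Python (the zoom/x/y values are arbitrary examples):

```python
# Rotate through the a|b|c subdomains and set a descriptive User-Agent,
# as asked above. The actual fetch is left commented out.
import itertools

TEMPLATE = "http://{sub}.aerial.openstreetmap.org.za/ngi-aerial/{zoom}/{x}/{y}.jpg"
_subdomains = itertools.cycle("abc")

def tile_url(zoom, x, y):
    return TEMPLATE.format(sub=next(_subdomains), zoom=zoom, x=x, y=y)

url = tile_url(12, 2305, 2470)
# import urllib.request
# req = urllib.request.Request(
#     url, headers={"User-Agent": "my-app/1.0 (contact@example.org)"})
# data = urllib.request.urlopen(req).read()
```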
#59 talks a bit about "The scripts to gather data, train, and upload findings should run on a cycle, not manually when I press a button".
A simple 'this has been fixed' control could easily flag things for reassessment, or at least tag the record as probably fixed / shift it to another list.
Is Travis free for FOSS projects?
Also, is it worth enforcing pep8 as a test on PRs?
The analysis could be simultaneously analyzing images for buildings, along with the road analysis it already does.
I'm having a hard time projecting pixels on the naip tiffs, into EPSG 3857. The flawed method is here.
I asked a question on GIS Stack Exchange to explain. Any help?
Are there any pretrained models available for deep OSM to test out before training it locally?
Regards,
Abhishek
Getting this while running train_neural_net.py:
WARNING:tensorflow:Error encountered when serializing data_augmentation.
Type is unsupported, or the types of the items don't match field type in CollectionDef.
'NoneType' object has no attribute 'name'
WARNING:tensorflow:Error encountered when serializing summary_tags.
Type is unsupported, or the types of the items don't match field type in CollectionDef.
'dict' object has no attribute 'name'
WARNING:tensorflow:Error encountered when serializing data_preprocessing.
Type is unsupported, or the types of the items don't match field type in CollectionDef.
'NoneType' object has no attribute 'name'
This happens on both macOS and Linux (CentOS 7): after I run 'make dev' and the working container launches successfully, the clock in the container is far from the local clock, the offset seems random, and I can't set the clock inside the container.
This may cause download failures when running create_training_data.py in the container.
I'm in timezone +8.
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
An unexpected error has occurred.
Please try reproducing the error using
the latest s3cmd code from the git master
branch found at:
https://github.com/s3tools/s3cmd
and have a look at the known issues list:
https://github.com/s3tools/s3cmd/wiki/Common-known-issues-and-their-solutions
If the error persists, please report the
following lines (removing any private
info as necessary) to:
[email protected]
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Invoked as: /usr/local/bin/s3cmd ls --recursive --skip-existing s3://aws-naip/in/2014/1m/rgbir/ --requester-pays
Problem: gaierror: [Errno -2] Name or service not known
S3cmd: 1.6.0
python: 2.7.6 (default, Oct 26 2016, 20:30:19)
[GCC 4.8.4]
environment LANG=None
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/s3cmd-1.6.0-py2.7.egg/EGG-INFO/scripts/s3cmd", line 2805, in
File "/usr/local/lib/python2.7/dist-packages/s3cmd-1.6.0-py2.7.egg/EGG-INFO/scripts/s3cmd", line 2713, in main
File "/usr/local/lib/python2.7/dist-packages/s3cmd-1.6.0-py2.7.egg/EGG-INFO/scripts/s3cmd", line 120, in cmd_ls
File "/usr/local/lib/python2.7/dist-packages/s3cmd-1.6.0-py2.7.egg/EGG-INFO/scripts/s3cmd", line 153, in subcmd_bucket_list
File "build/bdist.linux-x86_64/egg/S3/S3.py", line 293, in bucket_list
for dirs, objects in self.bucket_list_streaming(bucket, prefix, recursive, uri_params):
File "build/bdist.linux-x86_64/egg/S3/S3.py", line 320, in bucket_list_streaming
response = self.bucket_list_noparse(bucket, prefix, recursive, uri_params)
File "build/bdist.linux-x86_64/egg/S3/S3.py", line 339, in bucket_list_noparse
response = self.send_request(request)
File "build/bdist.linux-x86_64/egg/S3/S3.py", line 1061, in send_request
conn = ConnMan.get(self.get_hostname(resource['bucket']))
File "build/bdist.linux-x86_64/egg/S3/ConnMan.py", line 179, in get
conn.c.connect()
File "/usr/lib/python2.7/httplib.py", line 1216, in connect
self.timeout, self.source_address)
File "/usr/lib/python2.7/socket.py", line 553, in create_connection
for res in getaddrinfo(host, port, 0, SOCK_STREAM):
gaierror: [Errno -2] Name or service not known
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
An unexpected error has occurred.
Please try reproducing the error using
the latest s3cmd code from the git master
branch found at:
https://github.com/s3tools/s3cmd
and have a look at the known issues list:
https://github.com/s3tools/s3cmd/wiki/Common-known-issues-and-their-solutions
If the error persists, please report the
above lines (removing any private
info as necessary) to:
[email protected]
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
DOWNLOADING 1 PBFs...
downloads took 3.7s
EXTRACTED WAYS with locations from pbf file /DeepOSM/data/openstreetmap/delaware-latest.osm.pbf, took 7.1s
please help
Using macOS Sierra
hi~
I have installed Docker. After cd /DeepOSM-master, make dev, and then running python bin/create_training_data.py, I get this error:
File "bin/create_training_data.py", line 6, in <module>
from src.training_data import download_and_serialize
ImportError: No module named src.training_data
Can you give me a detailed tutorial?
I searched through the code but couldn't find any simple function for prediction, rather than pushing results to S3.
ne_lat
ne_lon
sw_lat
sw_lon
raster_filename
raster_tile_x
raster_tile_y
created_date
solved_date
flagged_count
Most props are self-explanatory, and:
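For illustration, the fields above could be sketched as a record like this; the types and defaults are assumptions, not taken from any actual schema:

```python
# Hypothetical record for one flagged error; the field names come from
# the list above, the types are guesses.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ErrorRecord:
    ne_lat: float
    ne_lon: float
    sw_lat: float
    sw_lon: float
    raster_filename: str
    raster_tile_x: int
    raster_tile_y: int
    created_date: str                  # ISO 8601 string in this sketch
    solved_date: Optional[str] = None  # set when marked fixed
    flagged_count: int = 0
```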
“For output, DeepOSM will produce some console logs, and then JPEGs of the ways, labels, and predictions overlaid on the tiff.”
I ran train_neural_net.py successfully, but there are no JPEGs of the ways, labels, and predictions overlaid on the tiff.
In geo_util.py, this line gives me the correct lat lon in web mercator.
x2, y2 = transform(in_proj, out_proj, ulon, ulat)
But this last line (just before the return) seems to transform it back to lat lon in degrees:
x2, y2 = out_proj(x2, y2, inverse=True)
which I believe is incorrect. Am I missing something here?
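To make the issue concrete, here is a minimal pure-Python sketch of the forward EPSG:4326 to EPSG:3857 transform; the first transform(...) call already produces these metre coordinates, so applying out_proj(..., inverse=True) afterwards just converts them back to degrees:

```python
import math

R = 6378137.0  # spherical radius used by EPSG:3857 (web mercator)

def to_web_mercator(lon, lat):
    """Forward transform: lon/lat in degrees -> web mercator metres."""
    x = math.radians(lon) * R
    y = math.log(math.tan(math.pi / 4.0 + math.radians(lat) / 2.0)) * R
    return x, y
```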
The code on the master branch doesn't work; there are path issues.
Those issues seem to be solved in the unmerged pull request.
I merged the PR into my fork of the project, but more issues arose.
Since the deeposm.org site no longer exists, can you provide the code that generates the local website content? Then users could have a presentation of the results (findings, errors, etc.) other than label files.
Here's a FOSS repo doing TensorFlow on AWS using Docker. This is a familiar stack.
Exists:
Missing:
I thought I'd start a list, probably other missing bits as alluded to in this Github issue.
There is a major highway not getting drawn/labeled in DC in this NAIP: m_3807708_se_18_1_20130924-20160510.jpeg
I just did this for convenience. Script should keep the unchopped tiles around so you can re-chop without re-downloading.
corollary: don't download rasters/vectors if already downloaded
maybe separate download/prep steps + cache
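The keep-the-raw-files idea above might look like this minimal sketch (the fetch callable and cache layout are hypothetical):

```python
import os

def cached_download(url, cache_dir, fetch):
    """Fetch url into cache_dir unless a local copy already exists."""
    os.makedirs(cache_dir, exist_ok=True)
    path = os.path.join(cache_dir, os.path.basename(url))
    if not os.path.exists(path):  # re-chop later without re-downloading
        fetch(url, path)
    return path
```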
..any idea? or a hint, where I might get deeper information about it?
Kind regards,
Daniel
WARNING:tensorflow:Error encountered when serializing data_augmentation.
Type is unsupported, or the types of the items don't match field type in CollectionDef.
'NoneType' object has no attribute 'name'
WARNING:tensorflow:Error encountered when serializing summary_tags.
Type is unsupported, or the types of the items don't match field type in CollectionDef.
'dict' object has no attribute 'name'
WARNING:tensorflow:Error encountered when serializing data_preprocessing.
Type is unsupported, or the types of the items don't match field type in CollectionDef.
'NoneType' object has no attribute 'name'
The training data and analyses could be expanded to include cycleways and footways.
The current analysis can only do roads (or alternately tennis courts). There is also an open issue to add buildings: #25
The label_chunks_cnn (mnist) analysis doesn't do TensorBoard. The label_chunks_cnn_cifar (cifar10) does produce the necessary files.
This may just get closed as we throw away the initial, rudimentary analysis.
Doodling about a production form of this, because the current version is like Proof-of-Proof-of-Concept, and I cut many obvious corners playing with TF and geodata.
Need to do to get this to work at all:
Hey guys, I am a B.Tech. CS graduate and I have been working on image processing for 1.5 years. I would like to contribute to this project. Can anyone assign me a task, no matter how simple or tough?
I don't understand how I can make an index of the NAIP bucket on S3. The parameters are as follows: ['md', '2013', '1m', 'rgbir', '38077']
I understand all the parameters but the last one... I assume that is some sort of grid or ordinal ID.
If I knew what the final parameter was, I could write a script to get training data from any state/year. Now it's limited to a certain place in Maryland, around Washington DC.
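For what it's worth, the last parameter looks like a USGS-style 1-degree block ID: two digits of latitude and three of longitude (38077 matches the DC-area filename m_3807708_se_... mentioned elsewhere in these issues). A hedged sketch, assuming the block is keyed by the floored whole degrees:

```python
import math

def naip_block(lat, lon):
    """Guess the 5-digit NAIP block ID for a point.

    Assumption: two digits of floored latitude plus three digits of
    floored absolute longitude; behaviour at block edges is unverified.
    """
    return "%02d%03d" % (math.floor(lat), math.floor(abs(lon)))
```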
I met this error when rendering findings to JPEG, using the render_results_for_analysis method in src/training_visualization.py; traceback:
Wrong JPEG library version: library is 90, caller expects 80
Traceback (most recent call last):
File "upload_data.py", line 31, in <module>
main()
File "upload_data.py", line 27, in main
render_findings(raster_data_paths, model, training_info, model_info['bands'], True)
File "/DeepOSM/src/s3_client_deeposm.py", line 30, in render_findings
training_info['tile_size'])
File "/DeepOSM/src/training_visualization.py", line 34, in render_results_for_analysis
tile_size)
File "/DeepOSM/src/training_visualization.py", line 54, in render_predictions
predictions=predictions_by_naip)
File "/DeepOSM/src/training_visualization.py", line 111, in render_results_as_image
im.save(outfile, "JPEG")
Upgrading the Pillow Python library to version 4.0.0 solves this.
By the way, since the deeposm.org site no longer exists, can you provide the code that generates the local website content? Is there a way to view the results other than JPEGs with findings on the map?
Looks like the gdal swig binary is out of sync with newer python bindings? I'm running this inside the non-gpu docker image. I gather the GNM bits are relatively new, so it makes sense that they'd get hit.
root@f059367028e3:/DeepOSM# python ./bin/create_training_data.py
Traceback (most recent call last):
File "./bin/create_training_data.py", line 6, in <module>
from src.training_data import download_and_serialize
File "/DeepOSM/src/training_data.py", line 10, in <module>
from osgeo import gdal
File "/usr/local/lib/python2.7/dist-packages/osgeo/gdal.py", line 86, in <module>
from gdalconst import *
File "/usr/local/lib/python2.7/dist-packages/osgeo/gdalconst.py", line 148, in <module>
OF_GNM = _gdalconst.OF_GNM
AttributeError: 'module' object has no attribute 'OF_GNM'
Also, I've tried going straight to https://hub.docker.com/r/homme/gdal/ and I can't reproduce there.
I'm wondering if one of the apt-get installs brought along an out-of-date gdal binary?
Here are a few more files:
root@f059367028e3:/usr/local/lib/python2.7/dist-packages# ls -lha /usr/local/lib/libgdal*
-rw-r--r-- 1 root root 288M Oct 28 11:24 /usr/local/lib/libgdal.a
-rwxr-xr-x 1 root root 1.6K Oct 28 11:24 /usr/local/lib/libgdal.la
lrwxrwxrwx 1 root root 17 Oct 28 11:24 /usr/local/lib/libgdal.so -> libgdal.so.20.1.0
lrwxrwxrwx 1 root root 17 Oct 28 11:24 /usr/local/lib/libgdal.so.20 -> libgdal.so.20.1.0
-rwxr-xr-x 1 root root 116M Oct 28 11:24 /usr/local/lib/libgdal.so.20.1.0
root@f059367028e3:/usr/local/lib/python2.7/dist-packages# ls -lha *gdal*
-rw-r--r-- 1 root staff 128 Oct 28 10:15 gdal.py
-rw-r--r-- 1 root staff 244 Oct 28 11:25 gdal.pyc
-rw-r--r-- 1 root staff 143 Oct 28 10:15 gdalconst.py
-rw-r--r-- 1 root staff 274 Oct 28 11:25 gdalconst.pyc
-rw-r--r-- 1 root staff 147 Oct 28 10:15 gdalnumeric.py
-rw-r--r-- 1 root staff 279 Oct 28 11:25 gdalnumeric.pyc
Literature indicates lidar helps a great deal with road/building classification (and also with obscured footpaths?)
Here are some sources to consider:
Got this error when importing tensorflow:
RuntimeError: module compiled against API version a but this version of numpy is 9
Installing the newest numpy (currently 1.10.4) made it behave.
I think the phrase "false positive" in the README file:
render, as JPEGs, "false positive" predictions in the OSM data - i.e. where OSM lists a road, but DeepOSM thinks there isn't one.
is not correct usage. According to Wikipedia, it should be "false negative". I hope you don't mind that I filed this issue.
Best regards,
NAIP offers free 1 meter resolution images for the whole CONUS. They are georectified .tiff files by state, but not on z/x/y.
Data is accessible as "requester pays" on s3://aws-naip.
Currently, Dockerfile.devel-gpu inherits from the GDAL Dockerfile, and then adds both the stuff needed for 1) data creation and 2) analysis, including a nested stack of TensorFlow Dockerfiles that got copied in.
It would be cleaner if one Dockerfile created training data and saved it to disk, and one Dockerfile used training data from disk to analyze. The training data docker file could be mostly like the existing non-GPU Dockerfile, and the analysis Docker file would inherit from stock Tensorflow Dockerfile and be short, simple, and easy to maintain/deploy to AWS.
In production, I guess one Dockerfile saves data to S3, and the other mounts that data bucket for analysis. In development, it creates a directory on disk, with no S3 involved.
The data creation Docker app could also serve an API, to give people training data for their own models, or to support an MNIST like open research competition for maps.
Putting the data in Postgres seems like a good mid-game/end-game move. Do this after we put up deeposm.org, want to scale, and/or want to provide a place for researchers to run arbitrary experiments.
Benefits include:
This issue describes how deeposm works now. Then it describes changes needed to improve the infrastructure. Also see notes/scripts on these issues: #8, #23, #30, and #39 (these issues were closed to merge with this issue, not completed).