songtaohe / sat2graph
Sat2Graph: Road Graph Extraction through Graph-Tensor Encoding
License: MIT License
Hi, very impressive work here! When I use "go run main.go ***" to evaluate the example files with the APLS metric, it works fine. But after converting my model outputs and ground-truth graphs for the test set to .json format and running the same go command on them, I get errors. I checked that the converted outputs and ground-truth JSONs are in the same format as the converted example files. Are there any other variables I should modify? Do you know what's wrong?
I want to run the model files (train.py, localserver.py, ...), but I keep getting this error:
Traceback (most recent call last):
File "localserver.py", line 22, in <module>
model = Sat2GraphModel(sess, image_size=352, resnet_step = 8, batchsize = 1, channel = 12, mode = "test")
File "/content/Sat2Graph/model/model.py", line 66, in __init__
self.imagegraph_output = self.BuildDeepLayerAggregationNetWithResnet(self.input_sat, input_ch = image_ch, output_ch =2 + MAX_DEGREE * 4 + (2 if self.joint_with_seg==True else 0), ch=channel)
File "/content/Sat2Graph/model/model.py", line 342, in BuildDeepLayerAggregationNetWithResnet
conv1, _, _ = common.create_conv_layer('cnn_l1', net_input, input_ch, ch, kx = 5, ky = 5, stride_x = 1, stride_y = 1, is_training = self.is_training, batchnorm = False)
File "/content/Sat2Graph/model/tf_common_layer.py", line 56, in create_conv_layer
input_tensor = tf.pad(input_tensor, [[0, 0], [kx/2, kx/2], [kx/2, kx/2], [0, 0]], mode="CONSTANT")
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/array_ops.py", line 2840, in pad
result = gen_array_ops.pad(tensor, paddings, name=name)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/gen_array_ops.py", line 6399, in pad
"Pad", input=input, paddings=paddings, name=name)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/op_def_library.py", line 632, in _apply_op_helper
param_name=input_name)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/op_def_library.py", line 61, in _SatisfiesTypeConstraint
", ".join(dtypes.as_dtype(x).name for x in allowed_list)))
TypeError: Value passed to parameter 'paddings' has DataType float32 not in list of allowed values: int32, int64
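The traceback points to a Python 2 to 3 porting issue: in Python 3, `kx/2` is float division, and `tf.pad` only accepts integer paddings. A minimal sketch of the fix in tf_common_layer.py is to switch to floor division:

```python
# In Python 3, "/" returns a float (5/2 == 2.5), which tf.pad rejects;
# "//" (floor division) restores the Python 2 integer behaviour.
kx = 5  # kernel width, as passed to create_conv_layer
paddings = [[0, 0], [kx // 2, kx // 2], [kx // 2, kx // 2], [0, 0]]
```

The same `kx/2` to `kx//2` change would be needed anywhere else the code derives padding or index values with `/`.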
Is there a PyTorch or TF2 version of the code?
This is really great work, and thank you so much for sharing your source code. I have a question:
if I want to try Sat2Graph on my own dataset, how do I obtain the sample points (_refine_gt_graph_samplepoints.json) and the neighbors (_refine_gt_graph.p) from the ground truth (_gt.png)?
I appreciate your help!
I cloned the repo inside the Docker container and ran it on a custom image from inside the container:
root@27442c0c7c4c:/usr/src/app/Sat2Graph/docker/scripts# python infer_custom_input.py -input /usr/src/app/BIAL_train/1.png -gsd 0.5 -model_id 3 -output /usr/src/app/BIAL_train/out1.json
But I keep getting this error:
<type 'str'>
Traceback (most recent call last):
File "infer_custom_input.py", line 56, in <module>
x = requests.post(url, data = json.dumps(msg))
File "/root/.local/lib/python2.7/site-packages/requests/api.py", line 119, in post
return request('post', url, data=data, json=json, **kwargs)
File "/root/.local/lib/python2.7/site-packages/requests/api.py", line 61, in request
return session.request(method=method, url=url, **kwargs)
File "/root/.local/lib/python2.7/site-packages/requests/sessions.py", line 542, in request
resp = self.send(prep, **send_kwargs)
File "/root/.local/lib/python2.7/site-packages/requests/sessions.py", line 655, in send
r = adapter.send(request, **kwargs)
File "/root/.local/lib/python2.7/site-packages/requests/adapters.py", line 498, in send
raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', BadStatusLine('No status line received - the server has closed the connection',))
Please guide.
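This error usually means the model server (localserver.py) inside the container is not running, or crashed before sending a response, so the POST from infer_custom_input.py is aborted. A small, hypothetical pre-flight check using only the standard library (the host and port are whatever infer_custom_input.py targets):

```python
import socket

def server_up(host, port, timeout=2.0):
    """Return True if a TCP connection to (host, port) succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If this returns False, check the server-side logs for a crash (for example, an unhandled exception while loading the model) before retrying the POST.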
When I run main.go, I get these errors:
apls/main.go:107:49: cannot use rtreego.Point literal.ToRect(tol) (type rtreego.Rect) as type *rtreego.Rect in return argument
apls/main.go:356:13: cannot use &gNode (type *gpsnode) as type rtreego.Spatial in argument to rt.Insert:
*gpsnode does not implement rtreego.Spatial (wrong type for Bounds method)
have Bounds() *rtreego.Rect
want Bounds() rtreego.Rect
apls/main.go:373:28: impossible type assertion:
*gpsnode does not implement rtreego.Spatial (wrong type for Bounds method)
have Bounds() *rtreego.Rect
want Bounds() rtreego.Rect
apls/main.go:377:25: impossible type assertion:
*gpsnode does not implement rtreego.Spatial (wrong type for Bounds method)
have Bounds() *rtreego.Rect
want Bounds() rtreego.Rect
apls/main.go:378:36: impossible type assertion:
*gpsnode does not implement rtreego.Spatial (wrong type for Bounds method)
have Bounds() *rtreego.Rect
want Bounds() rtreego.Rect
apls/main.go:381:32: impossible type assertion:
*gpsnode does not implement rtreego.Spatial (wrong type for Bounds method)
have Bounds() *rtreego.Rect
want Bounds() rtreego.Rect
Have you encountered the same errors? Thank you!
This is my first time working with GPS data. The propagation distance in the topo function is set to 300 meters. If my input image is 512*512 pixels and r = 0.00300, how many pixels does the propagation distance correspond to?
Looking forward to your reply.
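For reference, a hedged back-of-the-envelope conversion (assuming r = 0.00300 is in degrees of latitude and that the image's ground sampling distance in meters per pixel is known; both are assumptions, not values taken from the repo):

```python
# One degree of latitude is roughly 111,111 meters anywhere on Earth.
METERS_PER_DEGREE_LAT = 111111.0

def radius_in_pixels(r_degrees, gsd_m_per_px):
    """Convert a radius given in degrees of latitude to pixels at a given GSD."""
    return r_degrees * METERS_PER_DEGREE_LAT / gsd_m_per_px
```

At 1 m/pixel, 0.00300 degrees is about 333 pixels; at 0.5 m/pixel it is about 667 pixels.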
Great work you have done here. I wanted to see how you handled the distance computation for the two vertices connected to the same edge (whether you computed the distance for one edge or for both), but the paper doesn't say much about the implementation of the decoder. The code looks rather complicated; is there any decoder pseudocode to look at?
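For what it's worth, the decoding idea described in the paper can be sketched in short Python pseudocode (a hedged illustration only; the names, threshold, and matching rule here are assumptions, not the repo's actual decoder): keep pixels whose vertexness passes a threshold as candidate vertices, then snap each predicted edge vector to the nearest candidate vertex within a radius.

```python
import math

def decode(vertices, edge_vectors, match_radius=5.0):
    """vertices: list of (x, y) candidate vertex positions (already thresholded).
    edge_vectors: dict mapping vertex index -> list of (dx, dy) predicted edges.
    Returns a set of undirected edges as sorted index pairs (i, j)."""
    edges = set()
    for i, (x, y) in enumerate(vertices):
        for dx, dy in edge_vectors.get(i, []):
            tx, ty = x + dx, y + dy  # predicted endpoint of this edge
            # snap to the closest other candidate vertex within match_radius
            best, best_d = None, match_radius
            for j, (vx, vy) in enumerate(vertices):
                if j == i:
                    continue
                d = math.hypot(vx - tx, vy - ty)
                if d < best_d:
                    best, best_d = j, d
            if best is not None:
                # an edge may be proposed from both endpoints; storing it as a
                # sorted pair in a set deduplicates the two proposals
                edges.add((min(i, best), max(i, best)))
    return edges
```

In this sketch each edge can be discovered twice (once from each endpoint), and the set of sorted pairs merges the two, which is one simple way to reconcile per-endpoint distance computations.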
This is really great work, and thank you so much for sharing your source code. I have a question:
if I want to try Sat2Graph on my own dataset, how do I obtain the sample points (_refine_gt_graph_samplepoints.json) and the neighbors (_refine_gt_graph.p) from the ground truth (_gt.png)?
I appreciate your help!
In the current implementation, we take the ground truth graph (from OpenStreetMap) as input (in graph format) and generate the corresponding segmentation mask (_gt.png), the sample points (_refine_gt_graph_samplepoints.json), and the interpolated ground truth graphs (_refine_gt_graph.p). For this part, you can check the code in prepare_dataset/download.py
If your ground-truth is in segmentation format, then you may have to first convert it to graph format. Unfortunately, there is no code in this repo. I can try to add one if you need it.
That code also creates the sample points and the refined ground-truth graphs (_refine_gt_graph.p).
Originally posted by @songtaohe in #2 (comment)
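The interpolation ("refinement") step mentioned above can be sketched roughly as follows (a hedged illustration; the function name and spacing parameter are assumptions, not the repo's actual prepare_dataset code):

```python
import math

def sample_edge(p1, p2, spacing=1.0):
    """Place evenly spaced sample points along the edge from p1 to p2."""
    (x1, y1), (x2, y2) = p1, p2
    dist = math.hypot(x2 - x1, y2 - y1)
    n = max(1, int(dist // spacing))  # number of segments along the edge
    return [(x1 + (x2 - x1) * i / n, y1 + (y2 - y1) * i / n)
            for i in range(n + 1)]
```

Applying this to every edge of the OSM graph, and recording which samples are neighbors along the same edge, yields data in the spirit of _refine_gt_graph_samplepoints.json and _refine_gt_graph.p.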
From the code in model.py, I could not figure out which version of the Deep Layer Aggregation architecture was used for the Sat2Graph model. Your paper only mentions that you used residual blocks for the aggregation function.
Did you use one of the versions presented in the DLA paper, or did you implement an architecture of your own?
I need support and advice on how to train this network with 30 cm satellite imagery. Can anyone describe a free workflow to annotate data and explain how I can train this network, please?
Hi, really cool work here. Could you specify the training environment for this project? I understand that you are using TensorFlow 1.14.0, but does it require a specific CUDA and cuDNN version?
Thank you very much for the open source code. I get the error `ModuleNotFoundError: No module named 'mapdriver'` when running prepare_data/download.py. Is mapdriver your own file? I could not install it as a package. Looking forward to your reply.
Hi, really interesting project.
I'm having some problems running the CPU Docker version.
It works fine when I use
python infer_custom_input.py -input sample.png -gsd 0.5 -model_id 2 -output out.json
as written in the instructions,
but when I give it a different file, the whole system crashes:
python infer_custom_input.py -input test.png -gsd 0.5 -model_id 2 -output out.json
(I cropped the image to match the size of sample.png, 704*704.)
This is what I get on the Docker side:
(704, 704, 3)
INFO:root:POST request,
Path: /
Headers:
Host: localhost:8007
Connection: keep-alive
Accept-Encoding: gzip, deflate
Accept: */*
User-Agent: python-requests/2.25.1
Content-Length: 311
Progress (50.0%) >>>>>>>>>>>>>>>>>>>>--------------------('GPU time (pass 1):', 3.8675999641418457)
('Decode time (pass 1):', 0.06645011901855469)
Progress (100.0%) >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>('GPU time (pass 2):', 3.893293857574463)
('Decode time (pass 2):', 0.06614208221435547)
begin
It stops at "begin" forever!
Another thing: is there any specific input format for the custom file? It seems to only accept 24-bit-depth PNGs?
Any suggestions would be helpful, thanks.
Hi,
Thanks for your work. I've used the script to train the model with a 2048 dataset image size and a 352 input image size, but I'm struggling to get it to work with sizes different from those.
Is it possible? The images in my dataset are 512 x 512 and low-resolution, so I'd like to experiment with input images smaller than 352 x 352.
Could anyone please explain the theory behind the formula used in the GPSDistance function? I'm curious about how it is calculated and why it was chosen for this project.
Thanks for your time and expertise!
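For context, the textbook way to compute great-circle distance between two latitude/longitude points is the haversine formula, sketched below (this is the standard formula, not necessarily the exact code of GPSDistance in this repo):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2, radius_m=6371000.0):
    """Great-circle distance in meters between two (lat, lon) points in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * radius_m * math.asin(math.sqrt(a))
```

It is numerically stable for small separations, which matters when comparing nearby road vertices; an equirectangular approximation is a cheaper alternative at city scale.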
What is the number 16 used for in the code below?
Lines 189 to 190 in 93ee603
Hi,
This is great work, and I would like to train your model on SpaceNet. Could you please share the code for dataloader_spacenet.py?
It would be perfect if the trained model on SpaceNet could be released. I look forward to hearing back from you.
Thank you very much!
Best
Yang