orion-ai-lab / kurosiwo
Code and data for Kuro Siwo flood mapping dataset
Home Page: https://orion-ai-lab.github.io/publication/bountos-2023-kuro/
License: MIT License
To use the pretrained models, Sentinel-1 images must be preprocessed identically to the Kuro Siwo dataset; otherwise we risk degraded performance from a subtle data-distribution mismatch.
Section 3 of the paper describes it as:
[Using the Sentinel Application Platform (SNAP) to apply] precise orbit application, removal of thermal and border noise, land and sea masking, calibration, speckle filtering, and terrain correction using an external digital elevation model.
Could you provide a script or configuration file that would allow others to exactly replicate this preprocessing on other Sentinel-1 data?
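For reference (this is not from the authors): the steps quoted from Section 3 correspond to standard SNAP operators, so a GPT graph along these lines could be a starting point. The operator parameters are left empty and the file names are placeholders; the exact settings used for Kuro Siwo are an assumption this sketch does not resolve, and Remove-GRD-Border-Noise applies only to GRD products.

```xml
<!-- Hypothetical SNAP GPT graph sketching the quoted preprocessing chain.
     Parameters and file names are placeholders, not the authors' settings. -->
<graph id="kurosiwo-preprocessing-sketch">
  <version>1.0</version>
  <node id="read">
    <operator>Read</operator>
    <parameters><file>S1_input.zip</file></parameters>
  </node>
  <node id="orbit">
    <operator>Apply-Orbit-File</operator>
    <sources><sourceProduct refid="read"/></sources>
    <parameters/>
  </node>
  <node id="thermal">
    <operator>ThermalNoiseRemoval</operator>
    <sources><sourceProduct refid="orbit"/></sources>
    <parameters/>
  </node>
  <node id="border">
    <operator>Remove-GRD-Border-Noise</operator>
    <sources><sourceProduct refid="thermal"/></sources>
    <parameters/>
  </node>
  <node id="mask">
    <operator>Land-Sea-Mask</operator>
    <sources><sourceProduct refid="border"/></sources>
    <parameters/>
  </node>
  <node id="calibration">
    <operator>Calibration</operator>
    <sources><sourceProduct refid="mask"/></sources>
    <parameters/>
  </node>
  <node id="speckle">
    <operator>Speckle-Filter</operator>
    <sources><sourceProduct refid="calibration"/></sources>
    <parameters/>
  </node>
  <node id="tc">
    <operator>Terrain-Correction</operator>
    <sources><sourceProduct refid="speckle"/></sources>
    <parameters>
      <demName>External DEM</demName>
      <externalDEMFile>dem.tif</externalDEMFile>
    </parameters>
  </node>
  <node id="write">
    <operator>Write</operator>
    <sources><sourceProduct refid="tc"/></sources>
    <parameters>
      <file>S1_preprocessed.tif</file>
      <formatName>GeoTIFF</formatName>
    </parameters>
  </node>
</graph>
```

Such a graph would be run with SNAP's command-line tool, e.g. gpt graph.xml.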
P.S. Sorry for raising so many issues. I'm just excited to use the model.
Dear @ngbountos and colleagues,
First of all, congratulations and thank you for the impressive work. I have the following question: how should I proceed if I wanted to download only the labeled images related to the flood events that took place in Europe?
Thank you in advance and best regards,
Daniele Francario
Forgive me for submitting a speculative issue, but I thought the potential benefit was worth it. I was loading Sentinel-1 images from asf_search, and I noticed a few things. Images from asf_search have the 'complex_int16' dtype, i.e. each part of the complex number is an int16. numpy doesn't have a 'complex_int16' dtype, so rasterio loads the images as 'complex64' by default. utilities/kurosiwo_slc.py doesn't have code to change the dtype, so its outputs end up as 'float32'.
So perhaps the final result could be cast back to int8 or int16 without any effective loss of precision? Of course, it still makes sense to do all the processing as float32 so that no precision is lost between operations, but changing the output dtype could halve or quarter the total file size while also better reflecting the true precision of the data.
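If helpful, here is a minimal numpy sketch of the idea: quantize the float32 output to int16 and bound the round-trip error. The scale factor is a hypothetical choice for illustration, not anything from the repo; real data would need a documented scaling convention.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for a float32 processing output in [0, 1)
arr = rng.random((256, 256), dtype=np.float32)

scale = 10_000  # hypothetical fixed-point scale factor
# Values stay well below the int16 range (32767), so no clipping is needed here
packed = np.round(arr * scale).astype(np.int16)
restored = packed.astype(np.float32) / scale

# Quantization error is bounded by roughly 0.5 / scale
err = np.abs(arr - restored).max()
print(packed.dtype, err < 1e-4)
```

int16 storage is half the size of float32, which matches the "halve" estimate above; an int8 scheme would quarter it at the cost of a much coarser quantization step.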
Yassou @ngbountos & colleagues,
congratulations and thanks for your nice work! This is to ask whether the benchmarked models' checkpoints are available anywhere, or whether you are planning to share them. Specifically, it would be convenient to have access to the checkpoints of FloodViT and SNUNet, as these two perform very well and cover both segmentation and change detection.
I understand that the aim of the publication is to provide a benchmark rather than new SOTAs, and that the training scripts are kindly provided by you ... BUT it would be awesomely convenient to have access to these checkpoints for some rapid prototyping and quick inference-only purposes : )
Cheers
Patrick
Hello! Thanks for making an interesting dataset and providing pretrained weights. I'm trying to use the pretrained weights and I'm coming up against a problem.
The ViT model doesn't seem to run. Here's a small example to observe the problem (run from the root of this repository):
import torch
model = torch.load('./best_segmentation.pt', map_location=torch.device('cpu'))
inp = torch.randn(4, 6, 224, 224)
out = model(inp)
It gives an einops error:
einops.EinopsError: Error while processing rearrange-reduction pattern "b (h w) c -> b (c) h w".
Input tensor shape: torch.Size([4, 1024]). Additional info: {'h': 14, 'w': 14}.
Wrong shape: expected 3 dims. Received 2-dim tensor.
The error points me to models/model_utilities.py:86. As far as I can see, this line is supposed to take the output of a vit_pytorch.vit.ViT model and put it through einops.rearrange(x, "b (h w) c -> b (c) h w") to reshape it into an image-like structure. But the ViT always outputs a tensor shaped [B, C], not [B, N, C]. So I'm a bit confused: the shapes don't match! Am I using it wrong? What should it be doing?
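To illustrate the mismatch in plain torch (shapes taken from the traceback above; this is an illustration, not the repo's code):

```python
import torch

B, C, h, w = 4, 1024, 14, 14

# What a ViT with a classification head returns after pooling: [B, C]
pooled = torch.randn(B, C)

# What the rearrange pattern "b (h w) c -> b (c) h w" needs: patch tokens [B, N, C]
tokens = torch.randn(B, h * w, C)

# The einops pattern, written with plain tensor ops:
img_like = tokens.transpose(1, 2).reshape(B, C, h, w)
print(tuple(img_like.shape))  # (4, 1024, 14, 14)

# A 2-dim tensor like `pooled` cannot satisfy the 3-dim pattern,
# which is exactly the EinopsError reported above.
```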
P.S. I am writing a hubconf.py so that these pretrained models can be more easily used by others (and me). I will put in a pull request later, once I sort this out. I reckon I've figured out the SNUNet already.