sentinel-hub / eo-learn
Earth observation processing framework for machine learning in Python
Home Page: https://eo-learn.readthedocs.io/en/latest/
License: MIT License
Hello,
I'm about to integrate Sentinel-1 into the workflow. Requesting data via S1IWWCSInput on its own works smoothly, but when I request both Sentinel-1 and Sentinel-2 layers into the same eopatch, the workflow fails to execute (IndexError: During the execution of task S1IWWCSInput: index -1 is out of bounds for axis 0 with size 0). The requests work when S1 and S2 are done individually. Any idea on how to do this right?
It was also briefly mentioned in the tree cover example notebook as "include Sentinel-1 after harmonized with Sentinel-2".
Could you give some suggestions on how to do this in steps?
Many thanks in advance!
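Not an official recipe, but one reading of the "harmonize" step is to keep only the Sentinel-1 acquisitions that fall within some tolerance of an existing Sentinel-2 timestamp. A pure-Python sketch (the function name and the one-day tolerance are my own assumptions):

```python
from datetime import datetime, timedelta

def match_timestamps(s1_times, s2_times, tolerance=timedelta(days=1)):
    """Return indices of s1_times that lie within `tolerance` of some s2_times entry."""
    return [i for i, t1 in enumerate(s1_times)
            if any(abs(t1 - t2) <= tolerance for t2 in s2_times)]

s1 = [datetime(2019, 1, 1), datetime(2019, 1, 7), datetime(2019, 1, 20)]
s2 = [datetime(2019, 1, 2), datetime(2019, 1, 19)]
print(match_timestamps(s1, s2))  # → [0, 2]
```

The resulting indices could then be used to subset the Sentinel-1 frames before adding them to the eopatch that already holds the Sentinel-2 data.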
Enhancement suggestion.
The SI_LULC_pipeline example is written expecting the land-cover class values to form an increasing integer sequence, while a real-life example in my region has values like
# land_cover_val = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
land_cover_val = [0, 1, 4, 5, 7, 10, 11, 12, 14, 16, 18, 19, 22, 23, 24, 26, 28, 29, 31, 32, 34, 35, 36, 41, 45, 51]
since a smaller study area inevitably does not contain all land-use classes.
A good proportion of the code needs to be rewritten, as much of it implicitly relies on an increasing integer sequence and fails on an arbitrary list.
Here is an example of my land-use data:
https://hlamap.org.uk/data-download/data-download-form
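One way to make such non-consecutive class values usable by code that assumes consecutive integers is to remap them up front; a minimal numpy sketch (the small raster is illustrative):

```python
import numpy as np

land_cover_val = [0, 1, 4, 5, 7, 10, 11, 12, 14, 16, 18, 19, 22, 23, 24,
                  26, 28, 29, 31, 32, 34, 35, 36, 41, 45, 51]

# map each original class value to a consecutive index 0..N-1
remap = {v: i for i, v in enumerate(sorted(land_cover_val))}

raster = np.array([[0, 4], [51, 10]])
consecutive = np.vectorize(remap.get)(raster)
print(consecutive.tolist())  # → [[0, 2], [25, 5]]
```

The inverse mapping can be kept around to translate predictions back to the original class codes at the end of the pipeline.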
The Slovenia link is dead.
http://rkg.gov.si/GERK/documents/RABA_2018_10_31.RAR
linked from
https://eo-learn.readthedocs.io/en/latest/examples/land-cover-map/SI_LULC_pipeline.html
Since Sentinel Hub introduced rate limiting, some eo-learn (or sentinelhub-py?) requests result in 429 errors, as they trigger a large number of requests behind the scenes.
It would be good to be able to configure the rate limit and then take it into account when querying Sentinel Hub.
Whenever a 429 error comes back, the system should retry along the lines of exponential backoff:
https://en.wikipedia.org/wiki/Exponential_backoff
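This is not sentinelhub-py's actual retry logic, just a generic sketch of exponential backoff with jitter around a download call that may raise a 429-style error (all names are illustrative):

```python
import random
import time

class TooManyRequests(Exception):
    """Stand-in for an HTTP 429 response."""

def download_with_backoff(request_fn, max_attempts=5, base_delay=0.01):
    """Retry request_fn, doubling the wait (with jitter) after each 429."""
    for attempt in range(max_attempts):
        try:
            return request_fn()
        except TooManyRequests:
            if attempt == max_attempts - 1:
                raise  # give up after the last attempt
            time.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.5))

# a fake request that fails twice with 429, then succeeds
calls = {'n': 0}
def fake_request():
    calls['n'] += 1
    if calls['n'] < 3:
        raise TooManyRequests()
    return 'data'

print(download_with_backoff(fake_request))  # → data
```

A configurable rate limit would then just be a cap on how many such calls are issued per time window before the backoff kicks in.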
Hi
I'm trying to run the CloudMaskTask notebook, but I get a bad request when downloading images.
See the message:
DownloadFailedException: During execution of task S2L1CWMSInput: Failed to download from:
https://services.sentinel-hub.com/ogc/wms/MY_INSTANCE_ID?SERVICE=wms&BBOX=37.656454%2C128.689942%2C37.677434%2C128.722946&FORMAT=image%2Ftiff%3Bdepth%3D32f&CRS=EPSG%3A4326&WIDTH=252&HEIGHT=200&LAYERS=TRUE-COLOR-S2-L1C&REQUEST=GetMap&VERSION=1.3.0&TIME=2016-01-06T02%3A17%3A17%2F2016-01-06T02%3A17%3A17&MAXCC=100.0&ShowLogo=False&Transparent=True
with HTTPError:
400 Client Error: Bad Request for url: https://services.sentinel-hub.com/ogc/wms/MY_INSTANCE_ID?SERVICE=wms&BBOX=37.656454%2C128.689942%2C37.677434%2C128.722946&FORMAT=image%2Ftiff%3Bdepth%3D32f&CRS=EPSG%3A4326&WIDTH=252&HEIGHT=200&LAYERS=TRUE-COLOR-S2-L1C&REQUEST=GetMap&VERSION=1.3.0&TIME=2016-01-06T02%3A17%3A17%2F2016-01-06T02%3A17%3A17&MAXCC=100.0&ShowLogo=False&Transparent=True
Server response: "Layer TRUE-COLOR-S2-L1C not found"
I replaced my actual instance id with MY_INSTANCE_ID. The instance id was generated from the "FULL WMS Instance" template in Sentinel Hub Configurations.
Is this a problem with the instance id value? Is another configuration necessary?
I am working on another reference dataset which has characteristics similar to your data. When I applied masking for the no_data label, I got the following error:
ValueError: y contains previously unseen labels
This error generally occurs when the train and test data contain different categories, even though we mask no_data in both datasets.
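This is a general label-encoding issue rather than an eo-learn one. One workaround is to drop (or separately handle) test samples whose label never appears in training; a numpy sketch with illustrative arrays:

```python
import numpy as np

train_labels = np.array([1, 2, 2, 5, 5, 7])
test_labels = np.array([1, 2, 5, 7, 9])   # 9 never appears in training

# keep only test samples whose label was seen during training
seen = np.isin(test_labels, np.unique(train_labels))
filtered_test = test_labels[seen]
print(filtered_test)  # → [1 2 5 7]
```

Alternatively, the encoder can be fitted on the union of train and test labels before transforming either set.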
I'm new to this tool, so when I experimented with the file /eo-learn/examples/water-monitor/WaterMonitorWorkflow.ipynb, I got the following error in Step 3:
TypeError Traceback (most recent call last)
<ipython-input> in <module>
    result = workflow.execute({
        input_task: {
            'bbox': dam_bbox,
            'time_interval': time_interval
        },
    })
~/.local/lib/python3.6/site-packages/eolearn/core/eoworkflow.py in execute / _execute_tasks / _execute_task
~/.local/lib/python3.6/site-packages/eolearn/core/eotask.py in __call__ / _execute_handling
~/.local/lib/python3.6/site-packages/eolearn/io/sentinelhub_service.py in execute
    images = request.get_data(raise_download_errors=self.raise_download_errors, data_filter=download_frames)
~/.local/lib/python3.6/site-packages/sentinelhub/data_request.py in get_data / _execute_data_download
/usr/lib/python3.6/concurrent/futures/ (exception re-raised from the download thread)
~/.local/lib/python3.6/site-packages/sentinelhub/download.py in decode_data / decode_image
    image = tiff.imread(bytes_data)
~/.local/lib/python3.6/site-packages/tifffile/tifffile.py in imread / asarray
    strip = decompress(strip, out=outsize)

TypeError: During execution of task S2L1CWCSInput: 'out' is an invalid keyword argument for this function
Please tell me if anything more is needed.
The eo-learn docs page (https://eo-learn.readthedocs.io/en/latest/) states in two places that local data can be loaded into eo-learn, for example:
eo-learn-io - Input/output subpackage that deals with obtaining data from Sentinel Hub services or saving and loading data locally.
However, I have not discovered a way to load local data. Is this possible? If so, how?
Thank you.
Hi,
I am trying to add a DATA_TIMELESS DEM layer to a patch using:
add_dem = DEMWCSInput('DEM')
However, my patch already contains multitemporal DATA from:
input_task = S2L1CWCSInput('BANDS-S2-L1C', resx=f'{resolution}m', resy=f'{resolution}m', maxcc=0.8, instance_id=INSTANCE_ID)
When I try to access the timestamps of DATA via eopatch.timestamp, I get None in return, and as a result I cannot access the DATA layers properly.
This might be an underlying problem of adding DATA_TIMELESS to a patch which already contains time-dependent DATA; this combination should be feasible. In my case, I want to use the DEM to compute hillshades for the individual DATA layers based on the DATA_TIMELESS DEM and the solar angle (a separate problem I found a work-around for using pysolar, because the native .SAFE metadata is not fully accessible through eo-learn, or at least this is undocumented). Merging DATA_TIMELESS and DATA in the same patch, however, does not let me iterate through the DATA layers because of the missing timestamps.
As a work-around, I create one eopatch for the DATA_TIMELESS DEM and one for the DATA BANDS-S2-L1C, and merge them into a single patch after executing the workflows separately. This is inconvenient, however.
An example to support my claim:
Patch with DATA_TIMELESS DEM:
EOPatch(
data: {
CLP: <class 'numpy.ndarray'>, shape=(26, 1304, 1095, 1), dtype=float32
NDWI: <class 'numpy.ndarray'>, shape=(26, 1304, 1095, 1), dtype=float32
TRUE-COLOR-S2-L1C: <class 'numpy.ndarray'>, shape=(26, 1304, 1095, 3), dtype=float32
}
mask: {
CLM: <class 'numpy.ndarray'>, shape=(26, 1304, 1095, 1), dtype=uint8
IS_DATA: <class 'numpy.ndarray'>, shape=(26, 1304, 1095, 1), dtype=uint8
VALID_DATA_SH: <class 'numpy.ndarray'>, shape=(26, 1304, 1095, 1), dtype=bool
}
scalar: {}
label: {}
vector: {}
data_timeless: {
DEM: <class 'numpy.ndarray'>, shape=(1304, 1095, 1), dtype=float32
}
mask_timeless: {
VALID_COUNT_SH: <class 'numpy.ndarray'>, shape=(1304, 1095, 1), dtype=int64
}
scalar_timeless: {}
label_timeless: {}
vector_timeless: {}
meta_info: {
maxcc: 0.8
service_type: 'wcs'
size_x: '10m'
size_y: '10m'
time_difference: datetime.timedelta(-1, 86399)
time_interval: <class 'list'>, length=2
}
bbox: BBox(((633073.0940938871, 5642152.392471639), (644020.5514449456, 5655192.965695879)), crs=EPSG:32631)
**timestamp: <class 'list'>, length=1**
)
Patch without DATA_TIMELESS DEM:
EOPatch(
data: {
CLP: <class 'numpy.ndarray'>, shape=(26, 1304, 1095, 1), dtype=float32
NDWI: <class 'numpy.ndarray'>, shape=(26, 1304, 1095, 1), dtype=float32
TRUE-COLOR-S2-L1C: <class 'numpy.ndarray'>, shape=(26, 1304, 1095, 3), dtype=float32
}
mask: {
CLM: <class 'numpy.ndarray'>, shape=(26, 1304, 1095, 1), dtype=uint8
IS_DATA: <class 'numpy.ndarray'>, shape=(26, 1304, 1095, 1), dtype=uint8
VALID_DATA_SH: <class 'numpy.ndarray'>, shape=(26, 1304, 1095, 1), dtype=bool
}
scalar: {}
label: {}
vector: {}
data_timeless: {}
mask_timeless: {
VALID_COUNT_SH: <class 'numpy.ndarray'>, shape=(1304, 1095, 1), dtype=int64
}
scalar_timeless: {}
label_timeless: {}
vector_timeless: {}
meta_info: {
maxcc: 0.8
service_type: 'wcs'
size_x: '10m'
size_y: '10m'
time_difference: datetime.timedelta(-1, 86399)
time_interval: <class 'list'>, length=2
}
bbox: BBox(((633073.0940938871, 5642152.392471639), (644020.5514449456, 5655192.965695879)), crs=EPSG:32631)
**timestamp: <class 'list'>, length=26**
)
DEM patch + DATA patch summed up afterwards:
EOPatch(
data: {
CLP: <class 'numpy.ndarray'>, shape=(26, 1304, 1095, 1), dtype=float32
NDWI: <class 'numpy.ndarray'>, shape=(26, 1304, 1095, 1), dtype=float32
TRUE-COLOR-S2-L1C: <class 'numpy.ndarray'>, shape=(26, 1304, 1095, 3), dtype=float32
}
mask: {
CLM: <class 'numpy.ndarray'>, shape=(26, 1304, 1095, 1), dtype=uint8
IS_DATA: <class 'numpy.ndarray'>, shape=(27, 1304, 1095, 1), dtype=uint8
VALID_DATA_SH: <class 'numpy.ndarray'>, shape=(26, 1304, 1095, 1), dtype=bool
}
scalar: {}
label: {}
vector: {}
data_timeless: {
DEM: <class 'numpy.ndarray'>, shape=(1304, 1095, 1), dtype=float32
}
mask_timeless: {
VALID_COUNT_SH: <class 'numpy.ndarray'>, shape=(1304, 1095, 1), dtype=int64
}
scalar_timeless: {}
label_timeless: {}
vector_timeless: {}
meta_info: {
maxcc: 0.8
service_type: 'wcs'
size_x: '10m'
size_y: '10m'
time_difference: datetime.timedelta(-1, 86399)
time_interval: <class 'list'>, length=2
}
bbox: BBox(((633073.0940938871, 5642152.392471639), (644020.5514449456, 5655192.965695879)), crs=EPSG:32631)
**timestamp: <class 'list'>, length=27**
)
The timestamp of the DEM, which is None, was added to the merged patch:
[datetime.datetime(2018, 4, 6, 10, 53, 31), datetime.datetime(2018, 4, 8, 10, 45, 39), datetime.datetime(2018, 4, 16, 10, 56, 19), datetime.datetime(2018, 4, 18, 10, 45, 12), datetime.datetime(2018, 4, 21, 10, 56, 29), datetime.datetime(2018, 4, 23, 10, 44, 41), datetime.datetime(2018, 4, 26, 10, 52, 2), datetime.datetime(2018, 5, 1, 10, 50, 29), datetime.datetime(2018, 5, 3, 10, 42, 9), datetime.datetime(2018, 5, 6, 10, 54, 23), datetime.datetime(2018, 5, 8, 10, 40, 25), datetime.datetime(2018, 5, 11, 10, 55, 18), datetime.datetime(2018, 5, 18, 10, 40, 24), datetime.datetime(2018, 5, 21, 10, 54, 16), datetime.datetime(2018, 5, 23, 10, 41, 24), datetime.datetime(2018, 5, 26, 10, 56, 35), datetime.datetime(2018, 5, 28, 10, 46, 13), datetime.datetime(2018, 5, 31, 10, 52, 56), datetime.datetime(2018, 6, 7, 10, 40, 22), datetime.datetime(2018, 6, 10, 10, 50, 26), datetime.datetime(2018, 6, 15, 10, 54, 1), datetime.datetime(2018, 6, 20, 10, 52, 11), datetime.datetime(2018, 6, 22, 10, 40, 21), datetime.datetime(2018, 6, 25, 10, 52, 53), datetime.datetime(2018, 6, 27, 10, 40, 23), datetime.datetime(2018, 6, 30, 10, 54, 40), None]
However, this shouldn't happen: DATA_TIMELESS should have no impact on the metadata of time-dependent variables.
To conclude:
Can we make sure that the timestamps of the time-dependent variables always override those of *_TIMELESS features?
Cheers!
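Until the merging behaviour is fixed, one workaround after combining patches is to drop the spurious None entry and keep only the indices of real acquisitions, which can then also be used to slice the time-dependent arrays; a small sketch with illustrative timestamps:

```python
from datetime import datetime

timestamps = [datetime(2018, 4, 6, 10, 53, 31),
              datetime(2018, 4, 8, 10, 45, 39),
              None]  # the None contributed by the timeless task

# indices of real acquisitions; slice DATA/MASK arrays with these too
valid = [i for i, t in enumerate(timestamps) if t is not None]
clean = [timestamps[i] for i in valid]
print(len(clean))  # → 2
```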
after cell 16:
`%%time
pbar = tqdm(total=len(patchIDs))
for i in range(9):
    if save_choice:
        result = workflow.execute({
            load: {'eopatch_folder': 'eopatch_{}'.format(i)},
            save: {'eopatch_folder': 'eopatch_{}'.format(i)}
        })
        del result
    else:
        result = workflow.execute({
            load: {'eopatch': eopatches[i]}
        })
        # update old patches
        eopatches[i] = result[list(result.keys())[-1]]
        del result
    pbar.update(1)`
the script raises this error:
TypeError: During execution of task PointSamplingTask: __init__() got an unexpected keyword argument 'disk_radius'
Hi
I've been trying to rework your land-cover-map example to classify another region, which means I'm using a different shape for the region and a different reference map.
When I get to part 2, where I have to use the SimpleFilterTask, I get this error:
Traceback (most recent call last):
File "<ipython-input-13-b40e61641a96>", line 8, in <module>
workflow.execute(extra_param)
File "/opt/anaconda3/lib/python3.7/site-packages/eolearn/core/eoworkflow.py", line 167, in execute
_, intermediate_results = self._execute_tasks(input_args=input_args, out_degs=out_degs, monitor=monitor)
File "/opt/anaconda3/lib/python3.7/site-packages/eolearn/core/eoworkflow.py", line 206, in _execute_tasks
monitor=monitor)
File "/opt/anaconda3/lib/python3.7/site-packages/eolearn/core/eoworkflow.py", line 239, in _execute_task
return task(*inputs, **kw_inputs, monitor=monitor)
File "/opt/anaconda3/lib/python3.7/site-packages/eolearn/core/eotask.py", line 52, in __call__
return self._execute_handling(*eopatches, **kwargs)
File "/opt/anaconda3/lib/python3.7/site-packages/eolearn/core/eotask.py", line 73, in _execute_handling
exception)).with_traceback(traceback)
File "/opt/anaconda3/lib/python3.7/site-packages/eolearn/core/eotask.py", line 66, in _execute_handling
return_value = self.execute(*eopatches, **kwargs)
File "/opt/anaconda3/lib/python3.7/site-packages/eolearn/features/feature_manipulation.py", line 61, in execute
idx in good_idxs])
File "/opt/anaconda3/lib/python3.7/site-packages/eolearn/core/eodata.py", line 787, in __setitem__
value = self._parse_feature_value(value)
File "/opt/anaconda3/lib/python3.7/site-packages/eolearn/core/eodata.py", line 819, in _parse_feature_value
'dimension{}'.format(self.feature_type, self.ndim, 's' if self.ndim > 1 else ''))
ValueError: During execution of task SimpleFilterTask: Numpy array of FeatureType.DATA feature has to have 4 dimensions
I run the same workflow as in the example.
I don't understand the error, since it appears to me that the arrays have the correct dimensions.
When I run EOPatch.load I get the following:
EOPatch(
data: {
BANDS: numpy.ndarray(shape=(60, 1004, 998, 6), dtype=float32)
CLP: numpy.ndarray(shape=(60, 1004, 998, 1), dtype=float32)
NDVI: numpy.ndarray(shape=(60, 1004, 998, 1), dtype=float32)
NDWI: numpy.ndarray(shape=(60, 1004, 998, 1), dtype=float32)
NORM: numpy.ndarray(shape=(60, 1004, 998, 1), dtype=float32)
}
mask: {
CLM: numpy.ndarray(shape=(60, 1004, 998, 1), dtype=bool)
IS_DATA: numpy.ndarray(shape=(60, 1004, 998, 1), dtype=bool)
IS_VALID: numpy.ndarray(shape=(60, 1004, 998, 1), dtype=bool)
}
scalar: {}
label: {}
vector: {}
data_timeless: {}
mask_timeless: {
LULC: numpy.ndarray(shape=(1005, 999, 1), dtype=uint8)
VALID_COUNT: numpy.ndarray(shape=(1004, 998, 1), dtype=int64)
}
scalar_timeless: {}
label_timeless: {}
vector_timeless: {}
meta_info: {
maxcc: 0.8
service_type: 'wcs'
size_x: '10m'
size_y: '10m'
time_difference: datetime.timedelta(days=-1, seconds=86399)
time_interval: ['2017-01-01', '2017-12-31']
}
bbox: BBox(((582613.2881555471, 6351934.719896286), (592595.572278152, 6361973.034025557)), crs=EPSG:32632)
timestamp: [datetime.datetime(2017, 1, 3, 10, 44, 28), ..., datetime.datetime(2017, 12, 29, 10, 44, 34)], length=60
)
I have carried out this process on one patch. In the next step, how can I change the patch_id number, and in which order? Because it is not simply 0, 1, 2, 3.
Previously, on a Mac in my local environment, I successfully ran your example and also successfully adapted it to our own data.
But now I'm installing the project in a new Ubuntu environment, running an official example. I didn't change anything, but the project stops at this place.
System: Ubuntu 18, GPU: NVIDIA 1080 Ti
I was trying to run the example land-cover-map over Switzerland on my local machine but ran into a problem:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-20-54b437ee3056> in <module>()
8 results = workflow.execute({input_task:{'bbox':bbox, 'time_interval':time_interval},
9 export_val_sh:{'filename':f'CH_data/valid_count-L1C/{tiff_name}'},
---> 10 save:{'eopatch_folder':patch_name}
11 })
~\Anaconda3\lib\site-packages\eolearn\core\eoworkflow.py in execute(self, input_args)
280 input_args = {WorkflowResult.get_key(k): v for k, v in input_args.items()} if input_args else {}
281
--> 282 _, intermediate_results = self._execute_tasks(input_args=input_args, outdegs=outdegs)
283
284 return WorkflowResult(intermediate_results)
~\Anaconda3\lib\site-packages\eolearn\core\eoworkflow.py in _execute_tasks(self, input_args, outdegs)
303 result = self._execute_task(input_args=input_args,
304 intermediate_results=intermediate_results,
--> 305 task_id=t_id)
306
307 intermediate_results[t_id] = result
~\Anaconda3\lib\site-packages\eolearn\core\eoworkflow.py in _execute_task(self, input_args, intermediate_results, task_id)
331 inputs = tuple(intermediate_results[t_dep] for t_dep in self.deps[task_id])
332 LOGGER.debug("Computing %s(*%s, **%s)", str(task), str(inputs), str(kw_inputs))
--> 333 return task(*inputs, **kw_inputs)
334
335 def _relax_dependencies(self, *, intermediate_results, out_degrees, current_task_id):
~\Anaconda3\lib\site-packages\eolearn\core\eotask.py in __call__(self, *eopatches, **kwargs)
39
40 def __call__(self, *eopatches, **kwargs):
---> 41 return self.execute(*eopatches, **kwargs)
42
43
~\Anaconda3\lib\site-packages\eolearn\mask\cloud_mask.py in execute(self, eopatch)
246 # Raise error if last channel dimension is less than required
247 if new_data.shape[-1] < len(self.classifier.band_idxs):
--> 248 raise ValueError("Data field has less than the required 10 bands")
249
250 # Compute cloud mask and add as feature to EOPatch
ValueError: Data field has less than the required 10 bands
This message was shown when running cell [20] in eo-learn/examples/land-cover-map/2_eopatch-L1C.ipynb with some minor tweaks.
I know it might be an issue caused by my own adaptation to the region of Switzerland, so I would like to try running the notebook exactly as shown in this repository. For that I need to set up the layer in the configurator. What is the specification of the TRUE-COLOR-S2-L1C layer you mention in 2_eopatch-L1C.ipynb?
I worked through the Python notebooks and got everything to work the way I wanted, so a big thanks for sharing!
In the process, I thought of something that would be particularly useful: a class which performs the prediction on a patch basis instead of on the reshaped raster for the entire AOI. The problem is that if the time series is very long and the AOI is very large, a MemoryError is almost guaranteed.
I came up with this instead:
class PredictOutput(EOTask):
    """The task performs the ML prediction patch-wise."""

    def __init__(self, model):
        self.model = model

    def execute(self, eopatch):
        feature = eopatch.data['FEATURES']
        t, w, h, f = feature.shape
        # flatten (t, w, h, f) into one row of t*f features per pixel
        feature = np.swapaxes(feature, 0, 2).reshape(h * w, t * f)
        plabels = self.model.predict(feature)
        # restore the spatial layout and add a channel axis
        plabels = np.swapaxes(plabels.reshape(h, w), 0, 1)
        plabels = plabels[..., np.newaxis]
        eopatch.add_feature(FeatureType.DATA_TIMELESS, 'PRED', plabels)
        return eopatch
I do not know whether you think it is worth including in the eo-learn framework, or whether it should just be included in the notebook as a way to show how to implement patch-wise prediction. Of course, only 9 patches were predicted, so there is no real need for it in the notebook, but the notebooks are all about showcasing what eo-learn is capable of.
Regards
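The reshaping in PredictOutput above can be checked in isolation; a small numpy sketch with made-up shapes and a dummy model that returns a constant label per pixel:

```python
import numpy as np

t, w, h, f = 3, 4, 5, 2
features = np.random.rand(t, w, h, f)

# (t, w, h, f) -> (h, w, t, f) -> (pixels, per-pixel feature vector)
flat = np.swapaxes(features, 0, 2).reshape(h * w, t * f)

class DummyModel:
    def predict(self, X):
        return np.zeros(len(X), dtype=np.uint8)

plabels = DummyModel().predict(flat)
# back to the (w, h) spatial layout, plus a channel axis
pred = np.swapaxes(plabels.reshape(h, w), 0, 1)[..., np.newaxis]
print(pred.shape)  # → (4, 5, 1)
```

The round trip confirms that the prediction array lines up with the patch's width and height, which is what makes the patch-wise approach memory-safe.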
While using EOExecutor, my use-case requires the results of the last task in the workflow, as I need to do some processing on those results. As of now, the executor doesn't return anything; it only executes the workflow for multiple arguments.
A workaround is to add the processing logic in a custom task and append that task to the existing workflow, which works fine. Still, I wanted to know whether there is a way for the executor to return the results of the last task performed in the workflow.
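In the meantime, a minimal sketch of that workaround: a final "task" that passes its input through but also appends it to a shared list, so the results survive execution (plain-Python stand-ins, not the real EOTask/workflow classes):

```python
class CollectResults:
    """Final pipeline step: passes data through and stores it in a shared sink."""

    def __init__(self, sink):
        self.sink = sink

    def execute(self, result):
        self.sink.append(result)
        return result

results = []
collect = CollectResults(results)

# stand-in for running the workflow over multiple execution arguments
for arg in ({'patch': 0}, {'patch': 1}):
    collect.execute(arg)

print(len(results))  # → 2
```

With multiprocessing executors the sink would need to be a process-safe structure (or files on disk) rather than a plain list.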
Currently the documented pip install and install_all.py break for me.
Core installs OK; then, when moving on to the other subpackages:
pip3 install git+https://github.com/sentinel-hub/eo-learn#subdirectory=coregistration
Collecting git+https://github.com/sentinel-hub/eo-learn#subdirectory=coregistration
Cloning https://github.com/sentinel-hub/eo-learn to /tmp/pip-8bt4doir-build
Collecting eo-learn-core (from eo-learn-coregistration==0.1.0)
Could not find a version that satisfies the requirement eo-learn-core (from eo-learn-coregistration==0.1.0) (from versions: )
No matching distribution found for eo-learn-core (from eo-learn-coregistration==0.1.0)
I got it working by cloning the repo and doing the typical
python3 setup.py build
python3 setup.py install
on each module, and it seems to work OK.
Ubuntu 18.04.1 fresh install with updates.
eo-learn is a really great Python package, and your LULC classification example in particular is very helpful. Thanks for the good work!
However, I often get nonsense results in the classified map. This applies to the most recent eo-learn versions 0.4.0 and 0.4.1 and the most recent LULC notebook; I am not entirely sure whether my issue applies to previous versions, though.
I managed to trace the issue back to the linear interpolation task; when I display one of the features in the notebook after the interpolation, I can clearly see that something does not work as it is supposed to:
I am experiencing this error only on Windows 10 (tested on 2 different machines).
Running the exact same notebook works just fine on Ubuntu 18.10. The same cell displayed in Ubuntu:
I wonder if anyone else is experiencing the same problem on Windows?
Code snippet (you can insert it just before "6. Model construction and training"):
# Load and display the 7th feature (should be NDVI) for the 19th timestep
time_idx = 18
eopatch = EOPatch.load(f'{path_out_sampled}eopatch_0/')
timestamp = eopatch.timestamp[time_idx]
print(f'NDVI at {timestamp}')
img = eopatch.data['FEATURES'][time_idx, :, :, 6]
plt.imshow(img)
Hi,
I have a set of points (not a polygon) in WGS84 that I want to map onto an S2 L1C image and display, not with a mask, but by manually plotting them (a projection from WGS84 to image coordinates).
I found this: https://github.com/sentinel-hub/sentinelhub-py/blob/master/sentinelhub/geo_utils.py#L178
Then I also found a utility function to get a transform from the bbox that is used for EOPatch. I think there is a bug in the code in the line I have linked below (min_x vs. x_min etc.):
https://github.com/sentinel-hub/sentinelhub-py/blob/master/sentinelhub/geometry.py#L309
I manually created the transform and it worked, though using a UTM CRS is really important, as with a WGS84 patch there is quite a bit of projection error.
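For reference, the plain affine arithmetic for mapping a point in the patch's CRS to pixel coordinates. This sketch assumes a north-up image whose first row corresponds to the bbox's max_y edge, and all names are my own:

```python
def point_to_pixel(x, y, bbox, width, height):
    """Map (x, y) in the bbox's CRS to (column, row) image coordinates."""
    min_x, min_y, max_x, max_y = bbox
    col = (x - min_x) / (max_x - min_x) * width
    row = (max_y - y) / (max_y - min_y) * height  # row 0 is the top edge
    return col, row

bbox = (0.0, 0.0, 100.0, 50.0)
print(point_to_pixel(50.0, 25.0, bbox, 200, 100))  # → (100.0, 50.0)
```

With the points reprojected to the patch's UTM CRS first, the resulting (col, row) pairs can be handed straight to plt.scatter on top of the image.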
Additionally, I would like to ask whether there are any limitations on the size of the requested area. For example, I tried getting the area of San Francisco (in one EOPatch, without splitting the area into smaller patches) and got back a 5000x5000 EOPatch which was mostly black. When I reduced the size, I got the satellite image I expected, for the same dates.
When creating EOPatches and filling them with Sentinel-2 data, I would also be interested in accessing the corresponding metadata for each capture date (e.g. the Solar Irradiance List, U). Can this be achieved through eo-learn?
Thanks
When I try to make the executor report, my notebook's kernel dies. I'm following the documentation examples shown here.
I don't know why this is happening. I'm working with Python 3.6 with:
eo-learn-core v0.3.1
eo-learn-coregistration v0.3.0
eo-learn-features v0.3.1
eo-learn-geometry v0.3.1
eo-learn-io v0.3.1
eo-learn-mask v0.3.1
eo-learn-ml-tools v0.3.0
Fiona v1.7.11
rasterio v0.36.0
I'm trying to install eo-learn on a Windows machine. After I run pip install eo-learn, I get an error.
The error claims that more information is available in the README, but the README does not mention anything about Windows installation. I did add the path to my GDAL_VERSION as a system variable. I am using a Python 3.5 environment with Anaconda.
I have been acquiring Sentinel-2 data and saving the EOPatch to disk using the following:
input_task = S2L1CWCSInput('TRUE-COLOR-S2-L1C', resx='20m', resy='20m', maxcc=maxcc,
                           instance_id=INSTANCE_ID, time_difference=datetime.timedelta(days=1))
add_bands = S2L1CWCSInput(layer='BANDS-S2-L1C', resx='20m', resy='20m', maxcc=maxcc,
                          instance_id=INSTANCE_ID, time_difference=datetime.timedelta(days=1))
save = SaveToDisk('data/eopatch', overwrite_permission=OverwritePermission.OVERWRITE_PATCH)
time_interval = [start_dat, end_dat]
workflow = LinearWorkflow(input_task, add_bands, save)
result = workflow.execute({
    input_task: {'bbox': bbox, 'time_interval': time_interval},
    add_bands: {'bbox': bbox, 'time_interval': time_interval},
    save: {'eopatch_folder': 'eopatch_{}'.format(sitename)}
})
I now wish to change start_dat and end_dat in my time_interval, saving any additional data to the already created EOPatch on disk. Changing the OverwritePermission to e.g. OverwritePermission.ADD_ONLY has been unsuccessful.
Is there an efficient way to achieve this?
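If ADD_ONLY does not cooperate, one manual approach is to load the saved patch, request the new interval into a second patch, and concatenate along the time axis yourself; a numpy sketch of that last step (the array names and shapes are illustrative):

```python
import numpy as np

old_bands = np.random.rand(10, 50, 50, 13)   # data already on disk
new_bands = np.random.rand(4, 50, 50, 13)    # data for the new time interval

# time is axis 0 in eo-learn's (time, height, width, channel) layout
merged = np.concatenate([old_bands, new_bands], axis=0)
print(merged.shape)  # → (14, 50, 50, 13)
```

The timestamp lists would need to be concatenated, deduplicated, and sorted in the same order before saving the merged patch back.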
Hello guys,
I'm currently trying to replicate the water monitoring study, but I have some issues when I try to import the classes AddS2L1CFeature, LoadFromDisk, and SaveToDisk from eolearn.io. I took a look inside "eo-learn/io/eolearn/io/sentinelhub_service.py" and couldn't manage to find them. Could it be that they are not on the master branch?
If you need further details, I'm very happy to provide them.
Thank you very much for your attention, and good luck with the great work!
Cheers,
Constantin (big fan of EO Browser)
Hello,
This is not an issue as such, but I'm wondering whether anyone has tried implementing methods other than LightGBM? I'd like to see how different scikit-learn models perform in comparison, such as SVM, MLP, etc.
I'd appreciate it if anyone could help me rewrite the model definition and the model.fit parameters so that it's the same pipeline but with a different ML method.
Thank you in advance.
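Any estimator exposing fit/predict can be dropped into the same pipeline in place of LightGBM. To illustrate the interface without depending on scikit-learn, here is a toy nearest-centroid classifier with the same shape of API; for real use you would substitute e.g. sklearn.svm.SVC or sklearn.neural_network.MLPClassifier and keep the rest of the notebook unchanged:

```python
import numpy as np

class NearestCentroid:
    """Toy classifier with the sklearn-style fit/predict interface."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        # one mean feature vector per class
        self.centroids_ = np.stack([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        # distance from each sample to each class centroid
        dists = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.classes_[dists.argmin(axis=1)]

X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
y = np.array([0, 0, 1, 1])

model = NearestCentroid().fit(X, y)
print(model.predict(np.array([[0.05, 0.0], [5.0, 4.9]])))  # → [0 1]
```

Because the notebook only ever calls model.fit(features, labels) and model.predict(features), swapping the estimator is a one-line change at the model definition.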
Hi,
I'm trying to generate my own masks for the patches that go through a workflow, using a GeoJSON file.
This is the task I'm trying to execute:
vec_to_raster = VectorToRaster((FeatureType.MASK, 'REGION'), region_geojson, 1, (1000, 1000))
The vector data, region_geojson, is a polygon that I used to generate the splits for the patches. It should either totally encompass or intersect each patch.
When I try to run it, I get this error:
File "/home/karlis/PythonProjects/sentinel_mining/L1C_patches.py", line 151, in <module>
save: {'eopatch_folder': patch_name}
File "/home/karlis/PythonProjects/sentinel_mining/venv/lib/python3.6/site-packages/eolearn/core/eoworkflow.py", line 165, in execute
_, intermediate_results = self._execute_tasks(input_args=input_args, out_degs=out_degs, monitor=monitor)
File "/home/karlis/PythonProjects/sentinel_mining/venv/lib/python3.6/site-packages/eolearn/core/eoworkflow.py", line 188, in _execute_tasks
monitor=monitor)
File "/home/karlis/PythonProjects/sentinel_mining/venv/lib/python3.6/site-packages/eolearn/core/eoworkflow.py", line 215, in _execute_task
return task(*inputs, **kw_inputs, monitor=monitor)
File "/home/karlis/PythonProjects/sentinel_mining/venv/lib/python3.6/site-packages/eolearn/core/eotask.py", line 49, in __call__
return self.execute(*eopatches, **kwargs)
File "/home/karlis/PythonProjects/sentinel_mining/venv/lib/python3.6/site-packages/eolearn/geometry/utilities.py", line 86, in execute
eopatch[self.feature_type][self.feature_name] = raster[..., np.newaxis]
File "/home/karlis/PythonProjects/sentinel_mining/venv/lib/python3.6/site-packages/eolearn/core/eodata.py", line 653, in __setitem__
raise ValueError('{} feature has to be {} of dimension {}'.format(self.feature_type, np.ndarray, self.ndim))
ValueError: FeatureType.MASK feature has to be <class 'numpy.ndarray'> of dimension 4
Am I doing something wrong, or could it be a bug?
Thanks a lot :)
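A likely cause, judging from the error message: FeatureType.MASK holds time-dependent 4D arrays (time, height, width, channels), while VectorToRaster produces a single (height, width, 1) raster, which matches the 3D FeatureType.MASK_TIMELESS instead. Illustrated with plain numpy:

```python
import numpy as np

# Stand-in for the raster VectorToRaster produces for a (1000, 1000) patch
region_raster = np.zeros((1000, 1000), dtype=np.uint8)

# 3D (height, width, channels): the shape MASK_TIMELESS features expect
mask_timeless = region_raster[..., np.newaxis]

# 4D (time, height, width, channels): the shape MASK features expect
mask = region_raster[np.newaxis, ..., np.newaxis]
```

So, if this reading is right, switching the task to `(FeatureType.MASK_TIMELESS, 'REGION')` should make the dimensions line up for a static region mask.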
I'm trying to get S2-L1C bands for a custom area with slightly changed code from the examples directory.
input_task = S2L1CWCSInput('TRUE-COLOR-S2-L1C', resx='10m', resy='10m', maxcc=0.8, instance_id=INSTANCE_ID)
workflow = LinearWorkflow(input_task)
time_interval = ['2017-01-01','2017-12-31']
from sentinelhub import WebFeatureService, BBox, CRS, DataSource
bbox = BBox(bbox=[ 150.7450033,-32.5219569,150.7583071,-32.5383657], crs=CRS.WGS84)
results = workflow.execute({input_task:{'bbox':bbox, 'time_interval':time_interval}})
but got an error:
Server response: "Layer TRUE-COLOR-S2-L1C not found"
Is there any way to fix this?
Hi,
This is related to the 'Land Use and Land Cover (LULC) classification' example - more specifically 2_eopatch-L1C.ipynb
notebook. When using a shape file of my own region, generated via 1_split-AOI.ipynb
notebook - for example this geojson of California, I get the following error when trying to execute the workflow in 2_eopatch-L1C.ipynb:
File "/home/karlis/PythonProjects/sentinel_mining/L1C_patches.py", line 108, in <module>
save:{'eopatch_folder':patch_name}})
File "/home/karlis/PythonProjects/sentinel_mining/venv/lib/python3.6/site-packages/eolearn/core/eoworkflow.py", line 165, in execute
_, intermediate_results = self._execute_tasks(input_args=input_args, out_degs=out_degs, monitor=monitor)
File "/home/karlis/PythonProjects/sentinel_mining/venv/lib/python3.6/site-packages/eolearn/core/eoworkflow.py", line 188, in _execute_tasks
monitor=monitor)
File "/home/karlis/PythonProjects/sentinel_mining/venv/lib/python3.6/site-packages/eolearn/core/eoworkflow.py", line 215, in _execute_task
return task(*inputs, **kw_inputs, monitor=monitor)
File "/home/karlis/PythonProjects/sentinel_mining/venv/lib/python3.6/site-packages/eolearn/core/eotask.py", line 49, in __call__
return self.execute(*eopatches, **kwargs)
File "/home/karlis/PythonProjects/sentinel_mining/venv/lib/python3.6/site-packages/eolearn/io/sentinelhub_service.py", line 220, in execute
self._add_data(eopatch, np.asarray(images))
File "/home/karlis/PythonProjects/sentinel_mining/venv/lib/python3.6/site-packages/eolearn/io/sentinelhub_service.py", line 133, in _add_data
valid_mask = data[..., -1]
IndexError: index -1 is out of bounds for axis 0 with size 0
In the 1_split-AOI.ipynb notebook, in the BBoxSplitter() call, I try to set the numbers so that the patch sizes are approximately 1000x1000 pixels, which seemed to be the case in the original example with Slovenia.
I've tried multiple locations and have so far observed this with the USA (California, Texas, Nevada, Maine) and also North Korea. The time interval was usually set across several months in 2018, but changing it didn't seem to affect anything, so it might be related to the location or the shapefiles coming from the 1_split-AOI.ipynb notebook.
Thanks
Please mention the possibility of getting a free R&D account at this link:
https://earth.esa.int/aos/OSEO
This should be added somewhere here:
In order to run the example you'll need a Sentinel Hub account. You can get a trial version here.
Anyone got this error?
ValueError                                Traceback (most recent call last)
<ipython-input> in <module>
      2     input_task: {
      3         'bbox': dam_bbox,
----> 4         'time_interval': time_interval
      5     },
      6 })

~/.local/lib/python3.6/site-packages/eolearn/core/eoworkflow.py in execute(self, input_args, monitor)
    166         input_args = self.parse_input_args(input_args)
    167
--> 168         _, intermediate_results = self._execute_tasks(input_args=input_args, out_degs=out_degs, monitor=monitor)
    169
    170         return WorkflowResults(intermediate_results)

~/.local/lib/python3.6/site-packages/eolearn/core/eoworkflow.py in _execute_tasks(self, input_args, out_degs, monitor)
    205                                         input_args=input_args,
    206                                         intermediate_results=intermediate_results,
--> 207                                         monitor=monitor)
    208
    209             intermediate_results[dep] = result

~/.local/lib/python3.6/site-packages/eolearn/core/eoworkflow.py in _execute_task(self, dependency, input_args, intermediate_results, monitor)
    238
    239         LOGGER.debug("Computing %s(*%s, **%s)", str(task), str(inputs), str(kw_inputs))
--> 240         return task(*inputs, **kw_inputs, monitor=monitor)
    241
    242     def _relax_dependencies(self, *, dependency, out_degrees, intermediate_results):

~/.local/lib/python3.6/site-packages/eolearn/core/eotask.py in __call__(self, monitor, *eopatches, **kwargs)
     50         # return self.execute_and_monitor(*eopatches, **kwargs)
     51
---> 52         return self._execute_handling(*eopatches, **kwargs)
     53
     54     def execute_and_monitor(self, *eopatches, **kwargs):

~/.local/lib/python3.6/site-packages/eolearn/core/eotask.py in _execute_handling(self, *eopatches, **kwargs)
     71             exception, traceback = caught_exception
     72             raise type(exception)('During execution of task {}: {}'.format(self.__class__.__name__,
---> 73                                                                            exception)).with_traceback(traceback)
     74
     75         self.private_task_config.end_time = datetime.datetime.now()

~/.local/lib/python3.6/site-packages/eolearn/core/eotask.py in _execute_handling(self, *eopatches, **kwargs)
     64         caught_exception = None
     65         try:
---> 66             return_value = self.execute(*eopatches, **kwargs)
     67         except BaseException as exception:
     68             caught_exception = exception, exc_info()[2]

~/.local/lib/python3.6/site-packages/eolearn/geometry/utilities.py in execute(self, eopatch)
    132         if not bbox_map.empty:
    133             rasterio.features.rasterize([(bbox_map.cascaded_union.buffer(0), self.raster_value)], out=raster,
--> 134                                         transform=data_transform, dtype=self.raster_dtype)
    135
    136         eopatch[self.feature_type][self.feature_name] = raster[..., np.newaxis]

~/.local/lib/python3.6/site-packages/rasterio/env.py in wrapper(*args, **kwds)
    371         else:
    372             with Env.from_defaults():
--> 373                 return f(*args, **kwds)
    374     return wrapper
    375

~/.local/lib/python3.6/site-packages/rasterio/features.py in rasterize(shapes, out_shape, fill, out, transform, all_touched, merge_alg, default_value, dtype)
    264         if not is_valid_geom(geom):
    265             raise ValueError(
--> 266                 'Invalid geometry object at index {0}'.format(index)
    267             )
    268

ValueError: During execution of task VectorToRaster: Invalid geometry object at index 0
I think it happened because of my WKT file. My WKT is at: https://pastebin.com/aWSUawvH
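Self-intersecting input geometries often trigger exactly this rasterio error. A hedged sketch of a common repair, run on the geometry before handing it to VectorToRaster (buffer(0) is a well-known shapely trick, though note it can drop parts of a badly self-intersecting shape):

```python
from shapely.geometry import Polygon

# A self-intersecting "bowtie" polygon, standing in for a broken WKT geometry
bowtie = Polygon([(0, 0), (2, 2), (2, 0), (0, 2)])
assert not bowtie.is_valid

# buffer(0) rebuilds the geometry into a valid one
fixed = bowtie.buffer(0)
```

Loading the WKT with `shapely.wkt.loads` and checking `.is_valid` first would confirm whether the file is really the culprit.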
What is the license for this library?
Hi, I was digging into the package over the last couple of days and really like it, especially the patch container concept!
One thing I could not yet figure out: if I have multiple satellite datasets in the same patch (e.g. Sentinel-2 and Landsat 8), they will have different timestamps and array dimensions (n_times), but patch.timestamps is a single list. Is this practical?
What I found in the eolearn.core.eodata.EOPatch documentation is that "Currently the EOPatch object doesn't enforce that the length of timestamp be equal to n_times dimensions of numpy arrays in other attributes.", so I guess you have put some thought into this.
I came across this when experimenting with manually ingesting multiple local dataset arrays into the patch. Unfortunately I wasn't able to combine two input tasks from the documentation examples for the same patch to see how you are handling this. I guess this is not the right way to do it.
input_task_1 = S2L1CWCSInput('TRUE-COLOR-S2-L1C', resx='10m', resy='10m', maxcc=0.8)
input_task_2 = L8L1CWCSInput('TRUE-COLOR-L8', resx='30m', resy='30m')
...
workflow = LinearWorkflow(input_task_1, input_task_2, save)
results = workflow.execute({input_task_1:{'bbox':bbox, 'time_interval':time_interval},
input_task_2:{'bbox':bbox, 'time_interval':time_interval},
save:{'eopatch_folder':patch_name}})
# IndexError: index -1 is out of bounds for axis 0 with size 0
Hi eo-learn team,
it would be super to have pre-built eo-learn Docker images packaging up everything a user will need, for both CPU and GPU environments. This would greatly ease eo-learn adoption.
Any plans on this topic?
@mlubej following your talk at GeoPython ;)
I noticed this pattern in the geometry subpackage that uses geopandas:
eo-learn/geometry/eolearn/geometry/transformations.py
Lines 127 to 129 in d57fb48
I wanted to give a heads-up that we may change the return value of the crs attribute to a pyproj.CRS object, after which such code will no longer work.
See geopandas/geopandas#1003 for a related issue.
Seeing how it is used here, we might need to think about backwards compatibility when making such a change, but I am not sure how that could be achieved.
For Sentinel-1, it is important to be able to use descending or ascending passes separately from one another, so a parameter for orbitDirection
should be included in sentinelhub_service.py
to do this, at least for the S1IWWCSInput()
and S1IWWMSInput()
methods.
The WMS configurator caters for this, so I am sure it would be a quick fix to implement.
Hi eo-learn team,
is it possible to export the segmented area (only the area with tree cover) as polygons? I'm referring to the example https://github.com/sentinel-hub/eo-learn/blob/934b4e6328706b4d44905d54e58b5fa7dc267ec1/examples/tree-cover-keras/tree-cover-keras.ipynb
Best, Tomaz
I just tried running one of the demo notebooks — examples/water-monitor/WaterMonitorWorkflow.ipynb
— using Binderhub (direct link) and get the following error on trying to run code cell 28:
result = workflow.execute({input_task:{'bbox':dam_bbox, 'time_interval':time_interval},
})
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-28-643ac18faa0d> in <module>()
----> 1 result = workflow.execute({input_task:{'bbox':dam_bbox, 'time_interval':time_interval},
2 })
/srv/conda/lib/python3.6/site-packages/eolearn/core/eoworkflow.py in execute(self, input_args, monitor)
165 raise ValueError('Invalid input argument {}, should be an instance of EOTask'.format(task))
166
--> 167 _, intermediate_results = self._execute_tasks(input_args=input_args, out_degs=out_degs, monitor=monitor)
168
169 return WorkflowResults(intermediate_results)
/srv/conda/lib/python3.6/site-packages/eolearn/core/eoworkflow.py in _execute_tasks(self, input_args, out_degs, monitor)
188 input_args=input_args,
189 intermediate_results=intermediate_results,
--> 190 monitor=monitor)
191
192 intermediate_results[dep] = result
/srv/conda/lib/python3.6/site-packages/eolearn/core/eoworkflow.py in _execute_task(self, dependency, input_args, intermediate_results, monitor)
215 inputs = tuple(intermediate_results[self.uuid_dict[input_task.uuid]] for input_task in dependency.inputs)
216 LOGGER.debug("Computing %s(*%s, **%s)", str(task), str(inputs), str(kw_inputs))
--> 217 return task(*inputs, **kw_inputs, monitor=monitor)
218
219 def _relax_dependencies(self, *, dependency, out_degrees, intermediate_results):
/srv/conda/lib/python3.6/site-packages/eolearn/core/eotask.py in __call__(self, monitor, *eopatches, **kwargs)
47 return self.execute_and_monitor(*eopatches, **kwargs)
48
---> 49 return self.execute(*eopatches, **kwargs)
50
51 @staticmethod
/srv/conda/lib/python3.6/site-packages/eolearn/io/sentinelhub_service.py in execute(self, eopatch, bbox, time_interval)
197 request_params, service_type = self._prepare_request_data(eopatch, bbox, time_interval)
198 request = {ServiceType.WMS: WmsRequest,
--> 199 ServiceType.WCS: WcsRequest}[service_type](**request_params)
200
201 request_dates = request.get_dates()
/srv/conda/lib/python3.6/site-packages/sentinelhub/data_request.py in __init__(self, resx, resy, **kwargs)
469 """
470 def __init__(self, *, resx='10m', resy='10m', **kwargs):
--> 471 super().__init__(service_type=ServiceType.WCS, size_x=resx, size_y=resy, **kwargs)
472
473
/srv/conda/lib/python3.6/site-packages/sentinelhub/data_request.py in __init__(self, layer, bbox, time, service_type, data_source, size_x, size_y, maxcc, image_format, instance_id, custom_url_params, time_difference, **kwargs)
304 self.wfs_iterator = None
305
--> 306 super().__init__(**kwargs)
307
308 def _check_custom_url_parameters(self):
/srv/conda/lib/python3.6/site-packages/sentinelhub/data_request.py in __init__(self, data_folder)
39 self.download_list = []
40 self.folder_list = []
---> 41 self._create_request()
42
43 @abstractmethod
/srv/conda/lib/python3.6/site-packages/sentinelhub/data_request.py in _create_request(self)
321 acceptable cloud coverage.
322 """
--> 323 ogc_service = OgcImageService(instance_id=self.instance_id)
324 self.download_list = ogc_service.get_request(self)
325 self.wfs_iterator = ogc_service.get_wfs_iterator()
/srv/conda/lib/python3.6/site-packages/sentinelhub/ogc.py in __init__(self, **kwargs)
97 """
98 def __init__(self, **kwargs):
---> 99 super().__init__(**kwargs)
100
101 self.wfs_iterator = None
/srv/conda/lib/python3.6/site-packages/sentinelhub/ogc.py in __init__(self, base_url, instance_id)
36
37 if not self.instance_id:
---> 38 raise ValueError('Instance ID is not set. '
39 'Set it either in request initialization or in configuration file. '
40 'Check http://sentinelhub-py.readthedocs.io/en/latest/configure.html for more info.')
ValueError: Instance ID is not set. Set it either in request initialization or in configuration file. Check http://sentinelhub-py.readthedocs.io/en/latest/configure.html for more info.
Is there some config info that needs setting somewhere?
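The error message itself points at the fix: the instance ID has to be supplied either per request or in the sentinelhub configuration file. A minimal sketch, with a hypothetical layer name standing in for the notebook's:

```python
# Option 1: pass the instance ID to the input task directly.
from eolearn.io import S2L1CWCSInput

INSTANCE_ID = ''  # paste your Sentinel Hub configuration instance ID here

input_task = S2L1CWCSInput('TRUE-COLOR-S2-L1C', resx='10m', resy='10m',
                           instance_id=INSTANCE_ID)

# Option 2: store it once in the sentinelhub-py config file, from a shell:
#   sentinelhub.config --instance_id <your-instance-id>
```

On Binder the config file does not persist between sessions, so passing `instance_id` explicitly in the notebook is the more reliable route there.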
If you save logs for the report, you also get the DEBUG output printed to the notebook. This prints out a lot of unnecessary info and, if I understood correctly, this was also not the planned behaviour, but I seem to get it locally in Jupyter Notebook as well as JupyterLab.
I tried to fix this by playing around with the logging package, but I was unsuccessful...
Hello eo-learn team,
I am still facing this issue, can you please help?
I have installed the latest eo-learn version. But still, when I do this:
from eolearn.io import L8L1CWMSInput
I get the following
from eolearn.io import L8L1CWMSInput
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\hp\Anaconda3\envs\tensorflow\lib\site-packages\eolearn\io\__init__.py", line 8, in <module>
    from .geopedia import AddGeopediaFeature
  File "C:\Users\hp\Anaconda3\envs\tensorflow\lib\site-packages\eolearn\io\geopedia.py", line 7, in <module>
    from rasterio import transform, warp
  File "C:\Users\hp\Anaconda3\envs\tensorflow\lib\site-packages\rasterio\__init__.py", line 23, in <module>
    from rasterio._base import gdal_version
ImportError: DLL load failed: The specified module could not be found.
What am I doing wrong here?
Even when I import S2L1CWCSInput, I get the following:
ImportError                               Traceback (most recent call last)
<ipython-input> in <module>
----> 1 from eolearn.io import *
~\Anaconda3\envs\tensorflow\lib\site-packages\eolearn\io\__init__.py in <module>
6 S2L1CWCSInput, L8L1CWMSInput, L8L1CWCSInput, S2L2AWMSInput, S2L2AWCSInput, DEMWMSInput, DEMWCSInput,
7 AddSen2CorClassificationFeature
----> 8 from .geopedia import AddGeopediaFeature
9 from .local_io import ExportToTiff
10
~\Anaconda3\envs\tensorflow\lib\site-packages\eolearn\io\geopedia.py in <module>
5 import numpy as np
6 import logging
----> 7 from rasterio import transform, warp
8
9 from sentinelhub import MimeType, CustomUrlParam, CRS, GeopediaWmsRequest, transform_bbox
~\Anaconda3\envs\tensorflow\lib\site-packages\rasterio\__init__.py in <module>
21 pass
22
---> 23 from rasterio._base import gdal_version
24 from rasterio.drivers import is_blacklisted
25 from rasterio.dtypes import (
ImportError: DLL load failed: The specified module could not be found.
Different numbers of samples for Sentinel-1
key = 'TRUE-COLOR-S1-IW'
input_task = S1IWWCSInput(key, resx='10m', resy='10m', instance_id=INSTANCE_ID, maxcc=0.8)
workflow = LinearWorkflow(input_task)
time_interval = ['2010-01-01','2018-12-31']
bbox = BBox(bbox=[ 150.6450033,-32.4219569,150.6583071,-32.4383657], crs=CRS.WGS84)
results = workflow.execute({input_task:{'bbox':bbox, 'time_interval':time_interval}})
For this request I got 41 results for the bbox area, starting from 2017, but in EO Browser results are available from 2014 (126 samples). Is there any way to get all results for this period of time (2014-2017) from the eo-learn package?
Hello,
I've been downloading some eopatches, following the 'Land Use and Land Cover (LULC) classification' example, but most of the time, depending on the location, they are just black/empty. For example, a patch with a bounding box (near Moscow): 1862275.3544260084, 6357435.0055978615, 1881901.5999472605, 6377236.73728714
- in epsg:32633, for the time interval starting from 2018-01-01, contains 75 timestamped entries, which match the ones that come up in the 'Sentinel Hub Playground'. But the true-colour layer seems to be just full of zeros for all the entries.
I've tried 'TRUE-COLOR-S2-L1C' and 'TRUE-COLOR-S2-L2A' and both seem to have this problem. I've tried multiple locations in the USA, Europe and Africa. Several locations in the UK, Latvia and Africa worked fine, but I haven't had a single successful result within the USA. The workflow settings are the same as provided in the LULC example with Slovenia. I also have a trial account, if that might have anything to do with it.
Thank you
While creating EOPatches, the Sentinel Hub service returned a 503 error. This happens occasionally and it would make sense to handle it somehow:
- retry the request (it happens rarely enough that retrying once should suffice; if the error persists, it is probably best to stop and check what is happening)
- this error stopped my process, and when I restarted it, it went from the start. It would make sense to save the data at some intermediate steps, otherwise there is a significant chance the process will crash every time for large areas.
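Until retries are built into the library, a small wrapper around the workflow call can paper over transient 5xx responses. A minimal sketch, where `run_workflow` is a hypothetical stand-in for whatever call raises on a 503 (the real exception type from sentinelhub-py may be more specific than `Exception`):

```python
import time

def execute_with_retry(run_workflow, retries=3, delay=10):
    """Call run_workflow, retrying on failure with doubling delay."""
    for attempt in range(retries):
        try:
            return run_workflow()
        except Exception:
            # Give up only after the last attempt
            if attempt == retries - 1:
                raise
            time.sleep(delay)
            delay *= 2
```

Combining this with a per-patch SaveToDisk task in the workflow would also cover the second point: already-saved patches can simply be skipped on restart.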
During the tutorial "Land Cover Classification with eo-learn: Part 1" we encountered the following error:
TypeError: During execution of task S2L1CWCSInput: 'out' is an invalid keyword argument for decompress()
This was resolved by conda install tifffile, so tifffile should probably be added as an additional dependency.
My question is: can I prepare the L2A-level data with S2L2AWCSInput and then add cloud data via AddCloudMaskTask?
Thanks a lot!
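A hedged sketch of that combination, following the pattern the LULC example used in this era: pull L2A bands with S2L2AWCSInput, add an extra lower-resolution L1C request for the bands s2cloudless expects, then run AddCloudMaskTask on those. The layer names ('BANDS-S2-L2A', 'BANDS-S2CLOUDLESS') are assumptions borrowed from the examples and must exist in your Sentinel Hub configuration.

```python
from eolearn.core import LinearWorkflow
from eolearn.io import S2L2AWCSInput, S2L1CWCSInput
from eolearn.mask import AddCloudMaskTask, get_s2_pixel_cloud_detector

# L2A bands as the main data feature
input_l2a = S2L2AWCSInput('BANDS-S2-L2A', resx='10m', resy='10m', maxcc=0.8)

# s2cloudless works on L1C bands, so they are fetched separately at coarse resolution
add_clm_bands = S2L1CWCSInput('BANDS-S2CLOUDLESS', resx='60m', resy='60m')

cloud_classifier = get_s2_pixel_cloud_detector(average_over=2, dilation_size=1,
                                               all_bands=False)
add_clm = AddCloudMaskTask(cloud_classifier, 'BANDS-S2CLOUDLESS',
                           cm_size_y='60m', cm_size_x='60m',
                           cmask_feature='CLM', cprobs_feature='CLP')

workflow = LinearWorkflow(input_l2a, add_clm_bands, add_clm)
```

The resulting 'CLM' mask and 'CLP' probabilities then sit alongside the L2A bands in the same EOPatch.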