
csis's People

Contributors

fgeyer16, ghilbrae, p-a-s-c-a-l


csis's Issues

Define workflow for generating maps of extreme event patterns for prior and future decades cities

On Climate twins adaptation
On Climate Twins adaptation
The initial Climate Twins paper, published in 2011, describes the climate twins application as “being designed to communicate climate changes in an intuitive and understandable way by showing regions which have now similar climate conditions as a given Point of Interest (POI) in the future to learn from adaptation options”.

Using the climate twins concept for CLARITY, with its climate services focus on climate risk (i.e. extreme events), vulnerability and adaptation, requires adapting the research approach: instead of finding similarities in average climate signals (daily temperature maximum distribution and daily precipitation volume patterns for selected decades), the objective must be to find similarities in extreme event occurrence. Cities can then learn from cities which have experienced similar events earlier and more frequently, in order to cope better with climate risk, adapt better to climate change and better mitigate vulnerability.

The CLARITY concept shall focus on extracting extreme event occurrence patterns (number of heat waves of a certain length, number of summer days or tropical nights during prior decades and years) and on extracting the same extreme event indicators when screening future climate data at the European scale from appropriate climate simulations (in our case the EURO-CORDEX ensembles).
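
For illustration, a minimal sketch of how such indicators could be extracted per decade, assuming daily EURO-CORDEX temperature fields in NetCDF and the Python xarray library (file names, variable names and the decade chosen below are placeholders, not agreed conventions):

# Sketch only: derive simple extreme event indicators per decade from
# daily EURO-CORDEX NetCDF data. File and variable names are placeholders.
import xarray as xr

ds = xr.open_dataset("tasmax_EUR-11_daily.nc")   # placeholder file name
tasmax = ds["tasmax"] - 273.15                   # Kelvin -> degrees Celsius

# annual number of summer days (daily maximum temperature > 25 degC)
summer_days = (tasmax > 25.0).resample(time="1YS").sum()

# decadal average of the annual count, e.g. for 2051-2060
decade = summer_days.sel(time=slice("2051", "2060")).mean("time")
decade.to_netcdf("summer_days_2051-2060.nc")

# Tropical nights (tasmin > 20 degC) follow the same pattern; heat wave
# counts need an additional run-length step over consecutive days.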

Identifying cities or regions that have already experienced a certain number of extreme events of critical magnitude, and extracting the expected extreme events for one's own city, makes it possible to review the activities of those critically exposed cities that showed a similar environmental and spatial setting (terrain and land-use characteristics) in earlier years or decades.

By reviewing the damage repair and adaptation activities in cities with similar earlier experience, each target city can learn from those earlier exposed cities how to be better prepared for, and cope better with, future climate hazards.

Besides generating maps of extreme event patterns for prior and future decades, cities can mark their location and provide a link to where their adaptation measures have successfully responded to such extreme events. In this way those cities can actively help other cities facing future climate hazards to learn from their earlier experience.

Best practice: "relevant" taxonomy terms in Drupal 8

Due to our taxonomy design (e.g. hazards and elements at risk), only a subset of the "relevant" taxonomy terms should be shown to the users when choosing e.g. hazards or elements at risk. This issue is illustrated below using the hazards taxonomy as an example:


Currently, the user can only choose "heat", not "temperature", "extreme-heat" or "hot-days-75p", for filling in certain fields, for example in the "Adaptation Options Effects" bundle.

A proposed best practice for ensuring that users can only use the chosen terms from this taxonomy when adding/editing entities, AND can also only choose these terms in view filters, is described in the following posts.

CLARITY Data Package: Adaptation Options Resource specification

This task is responsible for defining the data structure of the Adaptation Options resource that will be included in the CLARITY data package. This includes indicating:

• how the resource is encoded (e.g. in raster or vector format); in the case of vector-based data, a set of "standard" attributes must also be defined to contain the information, so that all resources of this type always have the same set of attributes (and names) and CLARITY tools can process them in the same manner
• what formats are supported for this specific type of resource (e.g. for raster data: NetCDF, GeoTIFF, etc.; for vector data: GeoJSON, shapefile, etc.)
• what metadata attributes are used to describe this resource (refer to the CLARITY General Data Package specification and https://frictionlessdata.io/specs/data-resource/)

The definition of the Adaptation Options resource will be based on EMIKAT.

In this task the following partners MUST be involved: AIT
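
As a rough illustration of the points above, a hypothetical data-resource descriptor for this resource, assuming a vector (GeoJSON) encoding is chosen; the name, path and attribute set below are assumptions, not the agreed specification:

# Hypothetical Frictionless Data resource descriptor for an Adaptation
# Options resource; every name and field below is illustrative only.
import json

adaptation_options_resource = {
    "name": "adaptation-options",                  # placeholder resource name
    "path": "data/adaptation_options.geojson",     # placeholder path
    "format": "geojson",
    "mediatype": "application/geo+json",
    "schema": {                                    # the "standard" attributes
        "fields": [
            {"name": "option_id", "type": "string"},
            {"name": "hazard", "type": "string"},
            {"name": "description", "type": "string"},
        ]
    },
}

print(json.dumps(adaptation_options_resource, indent=2))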

CLARITY Data Package: Risk Maps Resource specification

This task is responsible for defining the data structure of the Risk Map(s) that will be included in the CLARITY data package. This includes indicating:

• how the resource is encoded (e.g. in raster or vector format); in the case of vector-based data, a set of "standard" attributes must also be defined to contain the information, so that all resources of this type always have the same set of attributes (and names) and CLARITY tools can process them in the same manner
• what formats are supported for this specific type of resource (e.g. for raster data: NetCDF, GeoTIFF, etc.; for vector data: GeoJSON, shapefile, etc.)
• what metadata attributes are used to describe this resource (refer to the CLARITY General Data Package specification and https://frictionlessdata.io/specs/data-resource/)
In this task the following partners MUST be involved: PLINIVS

CLARITY Data Package: Vulnerability Maps Resource specification

This task is responsible for defining the data structure of the Vulnerability Map(s) that will be included in the CLARITY data package. This includes indicating:

• how the resource is encoded (e.g. in raster or vector format); in the case of vector-based data, a set of "standard" attributes must also be defined to contain the information, so that all resources of this type always have the same set of attributes (and names) and CLARITY tools can process them in the same manner
• what formats are supported for this specific type of resource (e.g. for raster data: NetCDF, GeoTIFF, etc.; for vector data: GeoJSON, shapefile, etc.)
• what metadata attributes are used to describe this resource (refer to the CLARITY General Data Package specification and https://frictionlessdata.io/specs/data-resource/)

In this task the following partners MUST be involved: METEOGRID/PLINIVS/ZAMG

Implement GUI functions for viewing and retrieval

GUI / Tagging functions:
• Mark areas / cities with an extreme event marker (heavy precipitation, extreme heat)
• Provide a link to where information on the local event is stored:

  • year(s) of occurrence,
  • observed hazards, approximate costs to repair
  • photos depicting the impact
  • adaptation measures taken
  • photos depicting the adaptation measures
    Links can be stored in a database with the variables: coordinates, city/region name, event keyword, and a link to the documentation (see the sketch below)
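
A minimal sketch of such a link table, assuming it lives in a PostGIS database and is created with psycopg2; all table and column names are placeholders:

# Sketch of the link table described above; names are placeholders.
import psycopg2

conn = psycopg2.connect("dbname=csis user=csis")   # placeholder connection
with conn, conn.cursor() as cur:
    cur.execute("""
        CREATE TABLE IF NOT EXISTS event_links (
            id          serial PRIMARY KEY,
            city_region text,                   -- city / region name
            event_type  text,                   -- event keyword, e.g. 'extreme heat'
            doc_link    text,                   -- link to the local documentation
            geom        geometry(Point, 3035)   -- marker coordinates
        );
    """)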

GUI / Viewing functions:
View areas where the number of extreme event days exceeds a threshold x:
• average annual number of summer days
• average annual number of tropical nights
• average annual number of heat episodes > 3 days, > 5 days, > 10 days
• average annual number of frost days
• average annual number of frost episodes > 3 days, > 5 days, > 10 days
• average annual number of storm days

GUI / Retrieval functions
(1) Pre-selection of events by event type above a threshold (frequency, magnitude)
(2) Spatial selection of the pre-selected events by drawing a polygon or a rectangle
(3) Printing a map of the selection (pdf)
(4) Click on the event markers to search for links where hazards and adaptation measures are documented, or
(5) List the links provided in the selection for reading or printing (pdf)
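
A rough sketch of retrieval step (2), assuming the event markers live in the link table sketched above (table and column names remain placeholders):

# Sketch: select pre-selected event markers inside a user-drawn polygon
# (WKT, EPSG:3035). Table and column names are placeholders.
import psycopg2

polygon_wkt = ("POLYGON((4647500 1947000, 4720500 1947000, "
               "4720500 2008000, 4647500 2008000, 4647500 1947000))")

conn = psycopg2.connect("dbname=csis user=csis")
with conn, conn.cursor() as cur:
    cur.execute("""
        SELECT city_region, event_type, doc_link
        FROM event_links
        WHERE event_type = %s
          AND ST_Within(geom, ST_GeomFromText(%s, 3035));
    """, ("extreme heat", polygon_wkt))
    for row in cur.fetchall():
        print(row)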

See also: https://github.com/clarity-h2020/map-component

Implement Table Components

Implement the different Table Components for HC, EE, VA, ... as React Web Applications:

The following 4-step approach, which can also be applied to the development of the Map Component or any other external AJAX application that is integrated into the main Drupal site, has to be followed:

  1. GUI implementation ("html wireframe") as a React AJAX web application following the respective Product Mock-Ups
  2. Integration into the respective GL-Step as an External ReactMount Application
  3. Design of a simplified internal state data model in JSON (the "table model") and definition of static JSON constants (example) as initial reference / example data, based on
    a) the example table content from the mock-up screens and, when available,
    b) this Excel sheet or
    c) the content of the Reference Data Package
  4. Mapping of the actual Data Package data model to the internal JSON state data model with the help of the JSON:API and/or REST Views and additional REST micro services for simple data aggregation and transformation.

Modelling Workflow Implementation

In order to perform automated screening studies, the complete modelling workflow should IMHO be implemented as follows:

Ideally, all I/O data we need for the HC+LE, impact & adaptation calculations is in one database (or a cluster). But at the moment a "database" (AIT EMIKAT) is only used for the impact calculation. So either implement the whole process in EMIKAT (which cannot be done by AIT personnel alone), or perform/store the HC+LE, impact & adaptation calculations in a PostGIS database and let EMIKAT access these data via a Postgres REST API or GeoServer/WCS.

Hazard Events / Indexes / ...

Pre-calculate (all?) hazard events (heat wave occurrence matrices, 12x12 km raster) for Europe and put them into a PostGIS database.
This process should be semi-automated (= a script). We have to re-calculate when bias-corrected EURO-CORDEX data becomes available, right? The (R) scripts should be made available on GitHub, if possible (open science and such ...). I don't think that size matters here, the NetCDF files are less than 3 MB. Apart from the re-calculation on bias-corrected data, this process has to be performed only once; the result is stored permanently in the database and used for HC+LE downscaling.
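
A minimal sketch of the loading step, assuming the pre-calculated matrices are exported as GeoTIFF and the standard PostGIS raster2pgsql/psql tools are available; file and table names are placeholders:

# Sketch: load a pre-calculated hazard event raster into PostGIS so it can
# be reused for HC+LE downscaling; re-run once bias-corrected data arrives.
import subprocess

load = subprocess.Popen(
    ["raster2pgsql", "-s", "3035", "-I", "-C",
     "hazard_events/heat_wave_occurrence.tif",      # placeholder file
     "hazard.heat_wave_occurrence"],                # placeholder table
    stdout=subprocess.PIPE,
)
subprocess.run(["psql", "-d", "csis"], stdin=load.stdout, check=True)
load.wait()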

Local Effects Input Layers

Pre-calculate (important: as 500x500 m (?) grids, not features) the Local Effects input layers (those derived by Mario from pan-European datasets like Urban Atlas) for Europe / major European cities and put them into a PostGIS database.
The process is semi-automated; the feature extraction scripts are on GitHub, so doing the calculation for the whole of Europe / major European cities shouldn't be a problem. Re-calculation is only needed when e.g. the Urban Atlas data is updated, so the results are stored permanently in the database and used for HC+LE downscaling and adaptation.
However, size does matter: urban_atlas_shp.zip is > 40 GB, and processing power (CPU, RAM, ...) might be a limiting factor, too. But in theory we just need to pre-calculate and store the analysis grid (500x500 m) for the whole of Europe / the supported major cities or regions, right? We don't need single features (buildings, roads, ...) for the HC+LE and adaptation calculations, right? Those feature layers are nice to have for visualisation (but OSM should be good enough), but they are not needed for the impact calculation.
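
As an illustration of the "grid, not features" idea, a rough sketch that aggregates Urban Atlas polygons onto a 500x500 m analysis grid with geopandas; the file names, the land-use class filter and the derived attribute are assumptions only:

# Sketch: turn Urban Atlas polygons into one attribute of a 500 m analysis
# grid (here: a sealed-surface fraction). All names are placeholders.
import geopandas as gpd
import numpy as np
from shapely.geometry import box

ua = gpd.read_file("urban_atlas_naples.shp").to_crs(epsg=3035)

xmin, ymin, xmax, ymax = ua.total_bounds
cells = [box(x, y, x + 500, y + 500)
         for x in np.arange(xmin, xmax, 500)
         for y in np.arange(ymin, ymax, 500)]
grid = gpd.GeoDataFrame(geometry=cells, crs="EPSG:3035")

# placeholder column / class names from the land-use dataset
sealed = ua[ua["class_2012"].str.contains("urban fabric", case=False, na=False)]
inter = gpd.overlay(grid.reset_index(), sealed, how="intersection")
frac = inter.area.groupby(inter["index"]).sum() / (500 * 500)
grid["sealed_fraction"] = frac.reindex(grid.index).fillna(0.0)

grid.to_file("le_input_grid_naples.gpkg", driver="GPKG")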

Local Effects Hazard

Does pre-calculation make sense? Possibly for predefined regions. Currently the process is manual (ArcGIS) but has to be automated: a script, or even better a stored procedure (e.g. using PostGIS GDAL functions), has to be developed. Moreover, re-calculation must be supported on the fly, in real time, on a limited number of grid cells, taking as parameters the user-defined study area (or even better a predefined region -> then we can reuse all materialised views for the regions, of course without adaptation options applied) and possibly the hazard event ids (events selected by the user or preselected in the Data Package), the RCP and the time period. Then publish the materialised views as layers on GeoServer; EMIKAT can query them via WCS. This should be fast, especially if stored database procedures are used to do the HC-LE calculation (materialised views are cached!). Then expose a REST (e.g. http://postgrest.org) or GraphQL (e.g. https://github.com/graphile/postgraphile) API on top of those views for e.g. the Table Component; standard SQL queries (aggregate functions, etc.) can be used to feed the API.
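
A minimal sketch of this pattern, assuming the HC-LE calculation can be expressed in SQL over the pre-calculated tables; the view, table and column names (and the toy formula) are placeholders, and the PostgREST URL is hypothetical:

# Sketch: materialise an HC-LE result for one predefined region, then let
# e.g. the Table Component read it through PostgREST. Names are placeholders.
import psycopg2
import requests

conn = psycopg2.connect("dbname=csis user=csis")
with conn, conn.cursor() as cur:
    cur.execute("""
        CREATE MATERIALIZED VIEW IF NOT EXISTS le_hazard_naples AS
        SELECT g.cell_id, g.geom,
               he.tmrt + g.local_effect_delta AS tmrt_local  -- toy formula
        FROM le_input_grid_naples g
        JOIN hazard_events he ON he.cell_id = g.cell_id
        WHERE he.rcp = 'rcp45' AND he.period = '2041-2070';
    """)

# hypothetical PostgREST endpoint exposing the view
resp = requests.get("https://example.org/le_hazard_naples",
                    params={"select": "cell_id,tmrt_local", "limit": "10"})
print(resp.json())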

Impact

Currently managed and calculated in EMIKAT. Does pre-calculation make sense? IMHO not -> see Adaptation Options.

Adaptation Options

Adaptation Options modify properties of the Local Effects input layer and are applied to a single analysis grid cell, not a single feature (buildings, roads, ...), right? If the cells are in a database, simply make a spatial select (user-defined study area polygon or, better, a predefined region) from the Local Effects input layers, apply +/- to the properties (e.g. albedo += 0.5) and store the result in a materialised view. If we decide to support only predefined regions (e.g. for major European cities like the metropolitan region of Naples) instead of arbitrary user-defined study areas, we can even re-use the Local Effects Hazard materialised views for the respective region and create an 'adapted' materialised view on top of it. Storage might become a problem (if the system is used), but we can clean up those temporary adapted views after a certain time period (e.g. 14 days), unless the user has bought a subscription (= if you don't pay for it, studies are deleted after 14 days).

Then re-calculate the Local Effects Hazard layers for the selected study area using a script, or even better the stored database procedure, that makes use of the 'adapted' (those with adaptation options applied!) LE input materialised views. If the user makes a preselection of one RCP, one time period or even just a few events (e.g. just low and high), then there is even less to re-calculate. All of this should IMHO be possible in real time if all I/O happens in the same database instance. Then publish the HC+LE materialised views as layers on GeoServer; EMIKAT can query them via WCS.
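
A rough sketch of the 'adapted' view idea for a predefined region; the +0.05 albedo shift and all names are illustrative only:

# Sketch: derive an 'adapted' Local Effects input view by shifting selected
# properties, so the HC-LE procedure can be re-run on it. Names are placeholders.
import psycopg2

conn = psycopg2.connect("dbname=csis user=csis")
with conn, conn.cursor() as cur:
    cur.execute("""
        CREATE MATERIALIZED VIEW IF NOT EXISTS le_input_naples_adapted AS
        SELECT cell_id, geom,
               LEAST(albedo + 0.05, 1.0) AS albedo,  -- adaptation option applied
               green_fraction
        FROM le_input_grid_naples;
    """)
# A scheduled job could drop such temporary 'adapted' views after e.g. 14 days.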

Implement indicators for viewing and retrieving

Indicators:
Extreme events by decade 2051-2060, 2071-2080, 2091-2100:
• decadal average of the annual number of summer days (> 25°C maximum temperature)
• decadal average of the annual number of tropical nights (> 20°C minimum temperature)
• decadal average of the annual number of heat episodes (> 3 days, > 5 days, > 10 days with maxima > 25°C)
• decadal average of the annual number of heat days (> 25°C)
• decadal average of the annual number of frost days (< 0°C minimum temperature)
• decadal average of the annual number of frost episodes (> 3 days, > 5 days, > 10 days with maxima < 0°C)
• decadal average of the annual number of storm days (maximum wind speed > y m/s)
(indicators shall be confirmed with the climate experts)

CLARITY Data Package: Hazard Maps Resource specification

This task is responsible for defining the data structure of the Hazard Map(s) that will be included in the CLARITY data package. This includes indicating:

  • how the resource is encoded (e.g. in raster or vector format); in the case of vector-based data, a set of "standard" attributes must also be defined to contain the information, so that all resources of this type always have the same set of attributes (and names) and CLARITY tools can process them in the same manner
  • what formats are supported for this specific type of resource (e.g. for raster data: NetCDF, GeoTIFF, etc.; for vector data: GeoJSON, shapefile, etc.)
  • what metadata attributes are used to describe this resource (refer to the CLARITY General Data Package specification and https://frictionlessdata.io/specs/data-resource/)

In this task the following partners MUST be involved: METEOGRID/PLINIVS/ZAMG

CLARITY Data Package: Impact Maps Resource specification

This task is responsible for defining the data structure of the Impact Map(s) that will be included in the CLARITY data package. This includes indicating:

• how the resource is encoded (e.g. in raster or vector format); in the case of vector-based data, a set of "standard" attributes must also be defined to contain the information, so that all resources of this type always have the same set of attributes (and names) and CLARITY tools can process them in the same manner
• what formats are supported for this specific type of resource (e.g. for raster data: NetCDF, GeoTIFF, etc.; for vector data: GeoJSON, shapefile, etc.)
• what metadata attributes are used to describe this resource (refer to the CLARITY General Data Package specification and https://frictionlessdata.io/specs/data-resource/)
In this task the following partners MUST be involved: PLINIVS

Produce Heat Wave Hazard Events for Europe

We need some sort of look-up table for determining, for each area in Europe, the event probability (maybe implemented as a REST service).
--> Action: ZAMG will take care of generating it. --> For each grid point we have one frequent, one occasional and one rare event (this is done for the baseline scenario + the 3 RCP scenarios, and for each RCP there are 3 future scenarios).
--> The idea is to keep it as simple as possible for the screening phase. A more complex solution for selecting/applying the hazard events to the study area could be part of the definition of a future business model (e.g. providing better methods of calculation in a paid mode).
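
A sketch of how such a look-up might be queried if it were exposed as a REST service; the endpoint, parameters and response shape below are hypothetical, not an agreed API:

# Hypothetical look-up: per grid point, one frequent / occasional / rare event
# for the baseline and each RCP. Endpoint and fields are placeholders.
import requests

resp = requests.get("https://example.org/api/hazard-events",
                    params={"lat": "40.85", "lon": "14.27",
                            "rcp": "rcp85", "period": "2041-2070"})
for event in resp.json():
    # illustrative shape: {"frequency": "occasional",
    #                      "duration_days": 6, "tmax_threshold": 28}
    print(event)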

Hazard Local Effects Modelling Workflow Implementation

Implementation of the Modelling Workflow for Hazard Local Effects with help of

  • Building Blocks like Data Repositories (PostgreSQL RDBMS, GeoServer, Rasdaman (?), etc.), the Catalogue of Elements at Risk and Adaptation Options (EMIKAT), etc.
  • custom scripts (to be stored in the data-package repository)
  • etc.

The results are stored in the respective data / metadata repositories of the CSIS. See also: Configure the CSIS with some datasets

The implementation has to be documented in this wiki in the CSIS Architecture.

Roles and Responsibilities

  • ATOS prepares the input layers for the HC-LE calculation and makes them available on the ATOS GeoServer instance via WMS interfaces so that they can also be used as static Background Layers.
  • METEOGRID transforms the HC and EE input layers (from PLINIVS, ZAMG) to the harmonised grid and makes them available on the METEOGRID GeoServer via WCS and WMS interfaces.
  • METEOGRID prepares the HC-LE layers based on the HC layers and the input layers and makes them available on the METEOGRID GeoServer via WCS and WMS interfaces.
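
For illustration, a minimal sketch of fetching one of these layers through a GeoServer WCS interface (the coverage name and bounding box are taken from the WMS example elsewhere in this tracker; the endpoint path and output file are assumptions):

# Sketch: WCS 1.0.0 GetCoverage request for a harmonised HC-LE grid.
import requests

params = {
    "service": "WCS", "version": "1.0.0", "request": "GetCoverage",
    "coverage": "Local_Effects:LE_Tmrt_FrequentEvent_1971-2000_observations_Naples",
    "bbox": "4647500,1947000,4720500,2008000",
    "crs": "EPSG:3035", "width": "725", "height": "768", "format": "GeoTIFF",
}
resp = requests.get("https://clarity.meteogrid.com/geoserver/wcs", params=params)
with open("le_tmrt_naples.tif", "wb") as f:
    f.write(resp.content)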

Reference Modelling Workflow Documentation

Documentation of the implementation of the Modelling Workflow in the Data Package wiki and the CSIS Architecture. See also clarity-h2020/csis-architecture#4

The documentation should answer the following questions:

@negroscuro @ghilbrae @stefanon

  • what scripts are involved in processing the input data (if possible we should put the scripts in the data-package repository to make the process transparent and repeatable)
  • where the output of the scripts is stored (e.g. PostGIS DB = Data Repository BB)
  • how the processed data is made available in the CSIS (e.g. GeoServer connected to PostGIS)
  • etc.

@humerh @bernhardsk

  • how is the processed data accessed by EMIKAT? (on demand? from which data repository? etc.)
  • how is the data rasterised and further processed (Catalogue of ER & AO = EMIKAT)
  • how is the rasterised data made available (e.g. GeoServer connected to EMIKAT?!)
  • how is impact calculated (EMIKAT)
  • how are the results of the impact model ...
    • made available so that they can be shown on a map
    • aggregated to a list of performance indicators so that they can be shown in a table and used for Scenario Analysis & MCDA
  • etc.

The documentation will serve as the main input for D4.3 "Technology Support Report" and will contribute to the documentation of the Emergent Architecture (see D4.2 CLARITY CSIS Architecture v1.0) of the CSIS. Thus, we should be able to create an architecture diagram that looks similar to this one (example from the CRISMA project):

(figure: CRISMA architecture example)

Need first data package for the demo

Need a data package with

  • at least one hazard layer (e.g. heat layer)
  • at least one exposure layer (e.g. population)
  • at least one vulnerability combination (e.g. heat/population)

The meta-information has to be published as a data package.
The data has to be published on relevant services (grid maps for hazard & population)

Integration of EEA's Urban Adaptation Map Viewer

We should integrate maps and data from EEA's Urban Adaptation Map Viewer into the CSIS.

We could show some of their Urban Adaptation datasets on our maps and embed the city profiles as an external application via iFrame.

I don't expect that this will save us any work regarding local effects data processing and impact calculation, but it will enhance the CSIS with a lot of useful information, especially the city profiles:


WDYT?

But we need a formal / service level agreement to use their data and APIs (WMS, WFS, etc.). Denis?

Sketch workflow for extracting extreme event occurrence patterns

(The description of this issue is identical to that of "Define workflow for generating maps of extreme event patterns for prior and future decades cities" above.)

Geonode does not allow for uploading layers

When trying to upload a shapefile through the GeoNode interface, I get this error:

401,
HTTP Status 401 – Unauthorized
Type Status Report
Message No AuthenticationProvider found for org.springframework.security.authentication.UsernamePasswordAuthenticationToken

Description The request has not been applied because it lacks valid authentication credentials for the target resource.
Apache Tomcat/9.0.10

Might be solvable like this: GeoNode/geonode#3175
Could you have a look? Thanks!

GRID harmonization

In order to do the re-projection, we'd like to know more about the output GRID.

We already know that it must be EPSG:3035, but we also need the reference raster.

I guess we'd also need a reference raster for Europe, one for Naples, and, in the future, one for each DC and probably, one for each study area we'd like to cover. In this case, who will be defining this reference raster? (Is this specified in the Data Package? Is it created when the user selects an area? Any other option?)
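
A minimal sketch of the re-projection step, assuming such a reference raster exists and the Python rasterio library is used; file names are placeholders:

# Sketch: warp an input layer onto the EPSG:3035 grid defined by a reference raster.
import rasterio
from rasterio.enums import Resampling
from rasterio.warp import reproject

with rasterio.open("reference_grid_3035.tif") as ref, \
     rasterio.open("input_layer.tif") as src:
    profile = ref.profile.copy()
    profile.update(count=src.count, dtype=src.dtypes[0])
    with rasterio.open("input_layer_3035.tif", "w", **profile) as dst:
        for band in range(1, src.count + 1):
            reproject(
                source=rasterio.band(src, band),
                destination=rasterio.band(dst, band),
                dst_transform=ref.transform,
                dst_crs=ref.crs,
                resampling=Resampling.bilinear,
            )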

CSIS Permissions to create Data Package

Hi! We are starting with the DC4 Data Package but our (my) user does not have permissions to edit or create one. The user we are using at Meteogrid right now is the one I have (username: angela). Could someone give this account the permissions to do that?

Thanks!!

myclimateservice.eu DNS Entries

Register (sub) domain names for myclimateservice.eu and point DNS entries to IP of csis.clarity-h2020.eu

  • myclimateservice.eu
  • www.myclimateservice.eu
  • csis.myclimateservice.eu
  • geoserver.myclimateservice.eu
  • erdapp.myclimateservice.eu
  • tiles.myclimateservice.eu
  • api.myclimateservice.eu
  • geonode.myclimateservice.eu

Check and add background data

Background data
• open street map content (source: OSM)
• satellite image (source: Google)
• digital elevation map / terrain shading (source: EEA)
• NUTS 1 and NUTS3 region borders (source: Eurogeographics)
• Population distribution map (source: EU pop grid, Eurostat)
• selected event indicator(s) as choropleth maps
(source: extracted from EURO-CORDEX model runs)

Subtasks

Collect and document data sources

This task involves identifying and collecting information about the different datasets necessary in the project and, more particularly, in the DCs.
The task must also clarify which datasets are private/public and can therefore be made publicly available via the CSIS platform to other users (or must have their access restricted).

This task is led by METEOGRID but REQUIRES contributions from PLINIVS, AIT, SMHI, ZAMG and AEMET

DataPackage: Add "references" property in resource

@fgeyer16, @patrickkaleta I need to create a "references" property (in the dp_resource object) which is an array of an unlimited number of sub-arrays (each of length 3); any idea on how I can model this in Drupal?

For the moment it would be fine if we cannot limit the length of the sub-arrays. The content of the subarrays is just text.

See below an example with 2 sub-arrays:
"references": [
["@mapview:ogc:wms", "#/path", "https://clarity.meteogrid.com/geoserver/Local_Effects/wms?service=WMS&version=1.1.0&request=GetMap&layers=Local_Effects:LE_Tmrt_FrequentEvent_1971-2000_observations_Naples&bbox=4647500,1947000,4720500,2008000&width=725&height=768&srs=EPSG:3035&format=image/png"],
["@resource:hazard-event:heat-wave-duration:$temperature=28:$duration=6", "#/resources/[name='hazard_event-HW.6d_28C']"]
]

CLARITY Data Package: Exposure Maps Resource specification

This task is responsible for defining the data structure of the Exposure Map(s) that will be included in the CLARITY data package. This includes indicating:

• how the resource is encoded (e.g. in raster or vector format); in the case of vector-based data, a set of "standard" attributes must also be defined to contain the information, so that all resources of this type always have the same set of attributes (and names) and CLARITY tools can process them in the same manner
• what formats are supported for this specific type of resource (e.g. for raster data: NetCDF, GeoTIFF, etc.; for vector data: GeoJSON, shapefile, etc.)
• what metadata attributes are used to describe this resource (refer to the CLARITY General Data Package specification and https://frictionlessdata.io/specs/data-resource/)
In this task the following partners MUST be involved: METEOGRID/PLINIVS/ZAMG
