In order to perform automated screening studies, the complete modelling workflow should IMHO be implemented as follows:
Ideally, all i/o data we need for the HC+LE, Impact & Adaptation calculations is in one database (or a cluster). But ATM a "database" (AIT EMIKAT) is used only for the Impact calculation. So either implement the whole process in EMIKAT (which cannot be done by AIT personnel alone), or perform / store the HC+LE, Impact & Adaptation calculations in a PostGIS database and let EMIKAT access these data via a Postgres REST API or Geoserver/WCS.
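To make this concrete, here is a minimal sketch of how such a shared PostGIS instance could be set up (the schema name and extension choices are my assumptions, nothing is decided yet):

```sql
-- One shared PostGIS instance for all model i/o (hypothetical setup).
CREATE EXTENSION IF NOT EXISTS postgis;         -- vector support
CREATE EXTENSION IF NOT EXISTS postgis_raster;  -- raster support (PostGIS >= 3)

-- All HC+LE / Impact / Adaptation data lives in one schema, so EMIKAT
-- can reach everything through a single REST API or Geoserver store.
CREATE SCHEMA IF NOT EXISTS clarity;
```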
Hazard Events / Indexes / ...
Pre-calculate (all?) Hazard Events (HeatWaveOccurence matrices, 12x12 km raster) for Europe and put them into a PostGIS database.
This process should be semi-automated (= a script). We have to re-calculate when the bias-corrected EU-CORDEX data is available, right? The (R) scripts should be made available on GitHub, if possible (open science and such ...). I don't think that size matters here, the NetCDF files are less than 3 MB. Apart from the re-calculation on bias-corrected data, this process has to be performed only once; the result is stored permanently in the database and used for HC+LE downscaling.
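A sketch of how the pre-calculated events could be stored, building on the schema above (table layout, column names and the raster2pgsql import are my assumptions; the real attribute set depends on the index definitions):

```sql
-- Hypothetical storage for pre-calculated hazard events
-- (e.g. HeatWaveOccurence matrices, 12x12 km, pan-European).
CREATE TABLE clarity.hazard_event (
    event_id    serial PRIMARY KEY,
    hazard_type text NOT NULL,   -- e.g. 'heat-wave'
    rcp         text NOT NULL,   -- e.g. 'rcp26' / 'rcp45' / 'rcp85'
    time_period text NOT NULL,   -- e.g. '2041-2070'
    severity    text,            -- e.g. 'low' / 'medium' / 'high'
    rast        raster NOT NULL  -- the 12x12 km occurrence matrix
);
-- The (R) pre-calculation script could load its NetCDF/GeoTIFF output
-- into a staging table with the stock raster2pgsql tool, e.g.:
--   raster2pgsql -s 4326 -a heat_wave_rcp45.tif clarity.hazard_event_staging | psql
-- and an INSERT ... SELECT would then tag each raster with its metadata.
```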
Local Effects Input Layers
Pre-calculate (important: as 500x500 m (?) grids, not features) the Local Effects Input Layers (those derived by Mario from pan-European datasets like Urban Atlas) for Europe / major European cities and put them into a PostGIS database.
The process is semi-automated; the feature extraction scripts are on GitHub. So doing the calculation for the whole of Europe / the major European cities shouldn't be a problem. Re-calculation is only needed when the underlying data (e.g. Urban Atlas) is updated. So the results are permanently stored in the database and used for HC+LE downscaling and adaptation.
However, size does matter here: urban_atlas_shp.zip is > 40 GB. And processing power (CPU, RAM, ...) might be a limiting factor, too. But in theory, we just need to pre-calculate and store the analysis grid (500x500 m) for the whole of Europe / the supported major cities or regions, right? We don't need single features (buildings, roads, ...) for the HC+LE and adaptation calculations, right? Those feature layers are nice to have for visualisation (but OSM should be good enough), they are not needed for the impact calculation.
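A sketch of what the grid storage could look like, again with hypothetical names; the property columns and the EPSG:3035 (LAEA Europe) grid are just plausible examples, the real attribute set is whatever the feature extraction scripts produce:

```sql
-- Hypothetical 500x500 m analysis grid for the Local Effects input
-- layers: one row per grid cell, properties aggregated from Urban
-- Atlas & co. -- no single features (buildings, roads, ...) needed.
CREATE TABLE clarity.le_input_grid (
    cell_id       bigserial PRIMARY KEY,
    city          text,                              -- supported city / region
    geom          geometry(Polygon, 3035) NOT NULL,  -- 500x500 m cell
    albedo        real,   -- aggregated surface albedo
    emissivity    real,
    run_off_coeff real,
    build_frac    real,   -- built-up fraction of the cell
    green_frac    real    -- vegetation fraction of the cell
);
-- Spatial index so study-area selects stay fast at pan-European scale.
CREATE INDEX le_input_grid_geom_idx
    ON clarity.le_input_grid USING gist (geom);
```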
Local Effects Hazard
Does pre-calculation make sense? Possibly for predefined regions. Currently the process is manual (ArcGIS) but has to be automated: a script, or even better a stored procedure (e.g. using the PostGIS GDAL functions), has to be developed. Moreover, re-calculation must be supported on-the-fly in real-time on a limited number of grid cells, taking as parameters the user-defined study area (or even better a predefined region -> then we can reuse all materialized views for the regions, of course without adaptation options applied) and possibly the hazard event ids (events selected by the user or preselected in the Data Package), the RCP and the time period. Then publish the materialized views as layers on Geoserver; EMIKAT can query them via WCS. This should be fast, especially if stored database procedures are used to do the HC+LE calculation (materialized views are cached!). Finally, expose a REST (e.g. http://postgrest.org ) or GraphQL (e.g. https://github.com/graphile/postgraphile) API on top of those views, e.g. for the Table Component; standard SQL queries (aggregate functions, etc.) can then feed the API.
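A sketch of what the on-the-fly part could look like on top of the tables above. The function body is only a placeholder (the real HC+LE downscaling formula has to be plugged in), and it assumes the event rasters are stored in EPSG:4326; the region name, bounding box and event id are made up:

```sql
-- Hypothetical stored function computing the Local Effects hazard for
-- a study area and one event.
CREATE OR REPLACE FUNCTION clarity.le_hazard(study_area geometry, ev_id integer)
RETURNS TABLE (cell_id bigint, le_value double precision) AS $$
    SELECT g.cell_id,
           -- placeholder: sample the 12x12 km event raster at the cell
           -- centre and modulate it with a local property; the real
           -- HC+LE formula goes here
           ST_Value(e.rast, ST_Transform(ST_Centroid(g.geom), 4326))
               * (1 - g.albedo)
    FROM clarity.le_input_grid g
    JOIN clarity.hazard_event e ON e.event_id = ev_id
    WHERE ST_Intersects(g.geom, study_area);
$$ LANGUAGE sql STABLE;

-- Cache the result per predefined region as a materialized view; this
-- is what Geoserver/WCS and PostgREST would then publish.
CREATE MATERIALIZED VIEW clarity.le_hazard_naples AS
SELECT * FROM clarity.le_hazard(
    ST_SetSRID(ST_MakeEnvelope(4650000, 1950000, 4700000, 2000000), 3035),
    1);
```

Because the heavy lifting happens inside the database, EMIKAT and the Table Component would only ever see the (cached) views.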
Impact
Currently managed and calculated in EMIKAT. Does pre-calculation make sense? IMHO not. -> see Adaptation Options.
Adaptation Options
Adaptation Options modify properties of the Local Effects Input Layers and are applied on single analysis grid cells, not on single features (buildings, roads, ...), right? If the cells are in a database, simply make a spatial select (user-defined study area polygon or, better, a predefined region) from the Local Effects Input Layers, apply +/- to the properties (e.g. albedo += 0.5) and store the result in a materialized view. If we decide to support only predefined regions (e.g. for major European cities like the metropolitan region of Naples) instead of arbitrary user-defined study areas, we can even re-use the Local Effects Hazard materialized views for the respective region and create an 'adapted' materialized view on top of them. Storage might become a problem (if the system is actually used), but we can clean up those temporary adapted views after a certain time period (e.g. 14 days), unless the user has bought a subscription (= if you don't pay for it, studies are deleted after 14 days).
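As a sketch, applying an adaptation option could then be a single spatial select into a materialized view (the view name, study id, bounding box and the cap at 1.0 are my assumptions):

```sql
-- Hypothetical 'adapted' copy of the LE input grid for one study:
-- same cells, some properties shifted by the chosen adaptation option
-- (here: albedo += 0.5 on every cell of the study area).
CREATE MATERIALIZED VIEW clarity.le_input_study_4711 AS
SELECT g.cell_id,
       g.city,
       g.geom,
       LEAST(g.albedo + 0.5, 1.0) AS albedo,  -- option applied, capped at 1
       g.emissivity,
       g.run_off_coeff,
       g.build_frac,
       g.green_frac
FROM clarity.le_input_grid g
WHERE ST_Intersects(
        g.geom,  -- study area polygon or predefined region
        ST_SetSRID(ST_MakeEnvelope(4650000, 1950000, 4700000, 2000000), 3035));
```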
Then re-calculate the Local Effects Hazard Layers for the selected study area using a script, or even better the stored database procedure, that makes use of the 'adapted' (those with adaptation options applied!) LE Input materialized views. If the user makes a preselection for one RCP, one time period or even just a few events (e.g. just low and high), there is even less to re-calculate. All of this should IMHO be possible in real-time if all i/o happens in the same database instance. Then publish the HC+LE materialized views as layers on Geoserver; EMIKAT can query them via WCS.
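For the 14-day clean-up of unpaid studies, a small bookkeeping table plus a scheduled job could do; pg_cron is my assumption here, any scheduler works. A sketch:

```sql
-- Hypothetical bookkeeping of the temporary 'adapted' study views.
CREATE TABLE clarity.study_view (
    view_name  text PRIMARY KEY,   -- e.g. 'le_input_study_4711'
    created_at timestamptz NOT NULL DEFAULT now(),
    subscribed boolean NOT NULL DEFAULT false  -- subscribers keep their studies
);

-- Drop unpaid study views older than 14 days, e.g. nightly via pg_cron:
--   SELECT cron.schedule('clean-studies', '0 3 * * *',
--                        'SELECT clarity.clean_studies()');
CREATE OR REPLACE FUNCTION clarity.clean_studies() RETURNS void AS $$
DECLARE
    v text;
BEGIN
    FOR v IN
        SELECT view_name FROM clarity.study_view
        WHERE NOT subscribed AND created_at < now() - interval '14 days'
    LOOP
        EXECUTE format('DROP MATERIALIZED VIEW IF EXISTS clarity.%I', v);
        DELETE FROM clarity.study_view WHERE view_name = v;
    END LOOP;
END;
$$ LANGUAGE plpgsql;
```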