iiasa / climate-assessment
Home Page: https://climate-assessment.readthedocs.io/en/latest
License: MIT License
At the moment, if your input data already contains a variable such as `AR6 climate diagnostics|Infilled|Emissions|Kyoto Gases (AR6-GWP100)`, then at the end of the workflow you will probably see an error like the following:
Emissions|VOC|Energy|Demand|Transportation, Emissions|VOC|Energy|Supply, Emissions|VOC|Industrial Processes are being ignored for the calculation of the GWP100.
2022-06-14 22:54:40 pyam.logging MainThread - ERROR: Duplicate rows in `data`:
model scenario region variable unit year
0 IMAGE 3.2 SSP1_SPA1_19I_LIRE_LB World AR6 climate diagnostics|Infilled|Emissions|Kyo... Mt CO2-equiv/yr 2015
1 IMAGE 3.2 SSP1_SPA1_19I_LIRE_LB World AR6 climate diagnostics|Infilled|Emissions|Kyo... Mt CO2-equiv/yr 2016
2 IMAGE 3.2 SSP1_SPA1_19I_LIRE_LB World AR6 climate diagnostics|Infilled|Emissions|Kyo... Mt CO2-equiv/yr 2017
3 IMAGE 3.2 SSP1_SPA1_19I_LIRE_LB World AR6 climate diagnostics|Infilled|Emissions|Kyo... Mt CO2-equiv/yr 2018
4 IMAGE 3.2 SSP1_SPA1_19I_LIRE_LB World AR6 climate diagnostics|Infilled|Emissions|Kyo... Mt CO2-equiv/yr 2019
...
Traceback (most recent call last):
File "scripts/run_workflow.py", line 7, in <module>
climate_assessment.cli.workflow()
File "C:\Users\kikstra\.conda\envs\ca-unep\lib\site-packages\click\core.py", line 1130, in __call__
return self.main(*args, **kwargs)
File "C:\Users\kikstra\.conda\envs\ca-unep\lib\site-packages\click\core.py", line 1055, in main
rv = self.invoke(ctx)
File "C:\Users\kikstra\.conda\envs\ca-unep\lib\site-packages\click\core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "C:\Users\kikstra\.conda\envs\ca-unep\lib\site-packages\click\core.py", line 760, in invoke
return __callback(*args, **kwargs)
File "c:\users\kikstra\documents\2022_unep\climate-assessment\src\climate_assessment\cli.py", line 446, in workflow
gwp=gwp,
File "c:\users\kikstra\documents\2022_unep\climate-assessment\src\climate_assessment\cli.py", line 639, in run_workflow
model=model,
File "c:\users\kikstra\documents\2022_unep\climate-assessment\src\climate_assessment\postprocess.py", line 65, in do_postprocess
prefixes=prefixes,
File "c:\users\kikstra\documents\2022_unep\climate-assessment\src\climate_assessment\utils.py", line 313, in add_gwp100_kyoto_wrapper
df = add_gwp100_kyoto(df, gwp_instance=gwp, prefix=prefix)
File "c:\users\kikstra\documents\2022_unep\climate-assessment\src\climate_assessment\utils.py", line 278, in add_gwp100_kyoto
return pyam.IamDataFrame(pyam.concat([df, kyoto]))
File "C:\Users\kikstra\.conda\envs\ca-unep\lib\site-packages\pyam\core.py", line 2812, in concat
return IamDataFrame(pd.concat(ret_data, verify_integrity=False), meta=ret_meta)
File "C:\Users\kikstra\.conda\envs\ca-unep\lib\site-packages\pyam\core.py", line 158, in __init__
self._init(data, meta, index=index, **kwargs)
File "C:\Users\kikstra\.conda\envs\ca-unep\lib\site-packages\pyam\core.py", line 174, in _init
_data = format_data(data.copy(), index=index, **kwargs)
File "C:\Users\kikstra\.conda\envs\ca-unep\lib\site-packages\pyam\utils.py", line 374, in format_data
"Duplicate rows in `data`", df[rows].index.to_frame(index=False)
File "C:\Users\kikstra\.conda\envs\ca-unep\lib\site-packages\pyam\logging.py", line 32, in raise_data_error
raise ValueError(msg)
ValueError: Duplicate rows in `data`:
model scenario region variable unit year
0 IMAGE 3.2 SSP1_SPA1_19I_LIRE_LB World AR6 climate diagnostics|Infilled|Emissions|Kyo... Mt CO2-equiv/yr 2015
1 IMAGE 3.2 SSP1_SPA1_19I_LIRE_LB World AR6 climate diagnostics|Infilled|Emissions|Kyo... Mt CO2-equiv/yr 2016
2 IMAGE 3.2 SSP1_SPA1_19I_LIRE_LB World AR6 climate diagnostics|Infilled|Emissions|Kyo... Mt CO2-equiv/yr 2017
3 IMAGE 3.2 SSP1_SPA1_19I_LIRE_LB World AR6 climate diagnostics|Infilled|Emissions|Kyo... Mt CO2-equiv/yr 2018
4 IMAGE 3.2 SSP1_SPA1_19I_LIRE_LB World AR6 climate diagnostics|Infilled|Emissions|Kyo... Mt CO2-equiv/yr 2019
...
It would be good to:
a. check whether this only happens for Kyoto gases, or for all AR6 climate diagnostics variables;
b. throw an error earlier in the workflow, by adding an input data check, so this surprise is not discovered only after all the calculations have been done.
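For (b), a minimal sketch of such an input check, written in plain Python for illustration (the function name and dict-based row handling are assumptions, not the package's actual API). It flags rows sharing the full IAMC index, which is what later makes `pyam.concat` raise "Duplicate rows in `data`":

```python
from collections import Counter

# IAMC long-format index columns that must be unique per data row.
INDEX_COLS = ("model", "scenario", "region", "variable", "unit", "year")

def find_duplicate_rows(rows):
    """rows: iterable of dicts with at least the IAMC index columns.
    Returns the index keys that occur more than once."""
    counts = Counter(tuple(row[c] for c in INDEX_COLS) for row in rows)
    return sorted(key for key, n in counts.items() if n > 1)
```

Running a check like this right after reading the input file would surface the problem before harmonization, infilling, and the climate runs.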
After merging #31, the test to reproduce the WG3 database failed at the Get infiller database stage (https://github.com/iiasa/climate-assessment/actions/runs/5000486659/jobs/8957948310).
I'll investigate.
FYI @jkikstra, @znicholls, @lewisjared
I realized today that the current build of the docs is failing.
https://readthedocs.org/projects/climate-assessment/builds/22716524/
Hi all, I have been trying to run MAGICC through climate-processor (which calls climate-assessment) in a fresh environment on Windows 10.
But I run into an issue with the function `.get_version()`, which is not compatible with Windows applications.
This happens whether I run from the CLI or from a script:
run_magicc(
    input_data_file="ENGAGE_SSP2_v4.1.8.3.1_T4.5v2_r3.1_NPi2020_GDP_CI_rcp2p6_PS.xlsx",
    results_file="MAGICC_rcp6p0_res2.xlsx",
    input_data_directory=path_all,
    results_directory=path_all,
    logging_directory=path_all,
)
[69 rows x 86 columns]
2024-02-13 15:27:55 INFO Python-dotenv could not find configuration file .env.
Traceback (most recent call last):
Cell In[17], line 1
run_magicc(input_data_file="ENGAGE_SSP2_v4.1.8.3.1_T4.5v2_r3.1_NPi2020_GDP_CI_rcp2p6_PS.xlsx",
File ~\Documents\Github\climate-processor\climate_processor\__init__.py:210 in run_magicc
).apply(IamDataFrame(input_data_directory / input_data_file)).to_excel(
File ~\Documents\Github\climate-processor\climate_processor\__init__.py:115 in apply
self.apply_magicc(
File ~\Documents\Github\climate-processor\climate_processor\__init__.py:137 in apply_magicc
run_workflow(
File ~\AppData\Local\anaconda3\envs\message_gdp\Lib\site-packages\climate_assessment\cli.py:608 in run_workflow
df_climate = climate_assessment(
File ~\AppData\Local\anaconda3\envs\message_gdp\Lib\site-packages\climate_assessment\climate\__init__.py:126 in climate_assessment
climate_model_cfgs, climate_models_out_config = _get_model_configs_and_out_configs(
File ~\AppData\Local\anaconda3\envs\message_gdp\Lib\site-packages\climate_assessment\climate\__init__.py:434 in _get_model_configs_and_out_configs
magicc7_cfgs, magicc7_out_config = get_magicc7_configurations(
File ~\AppData\Local\anaconda3\envs\message_gdp\Lib\site-packages\climate_assessment\climate\magicc7.py:23 in get_magicc7_configurations
if MAGICC7.get_version() != magicc_version:
File ~\AppData\Local\anaconda3\envs\message_gdp\Lib\site-packages\openscm_runner\adapters\magicc7\magicc7.py:220 in get_version
check_output([cls._executable(), "--version"]) # nosec
File ~\AppData\Local\anaconda3\envs\message_gdp\Lib\subprocess.py:466 in check_output
return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
File ~\AppData\Local\anaconda3\envs\message_gdp\Lib\subprocess.py:548 in run
with Popen(*popenargs, **kwargs) as process:
File ~\AppData\Local\anaconda3\envs\message_gdp\Lib\site-packages\spyder_kernels\customize\spydercustomize.py:109 in __init__
super(SubprocessPopen, self).__init__(*args, **kwargs)
File ~\AppData\Local\anaconda3\envs\message_gdp\Lib\subprocess.py:1026 in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File ~\AppData\Local\anaconda3\envs\message_gdp\Lib\subprocess.py:1538 in _execute_child
hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
OSError: [WinError 193] %1 is not a valid Win32 application
If I comment out the part with `.get_version()` in `climate_assessment\climate\magicc7.py`, I still get the same error from openscm_runner:
C:\Users\vinca\AppData\Local\anaconda3\envs\message_gdp\Lib\site-packages\openscm_runner\adapters\magicc7\magicc7.py:184 in _write_scen_files_and_make_full_cfgs

  181             scen_file_name = os.path.join(out_directory, scen_file_name)
  182             writer.write(
  183                 scen_file_name,
> 184                 magicc_version=self.get_version()[1],
  185             )
  186
  187             scenario_cfg = [
@jkikstra @znicholls Do you have a quick fix or solution we could implement? Hopefully it is not a huge fix.
Thanks!
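`WinError 193` usually means the file at the configured MAGICC path is not actually a Windows binary (e.g. a Linux/macOS build, or a wrapper script). A hypothetical pre-flight check, run before openscm-runner ever calls `get_version()`, could catch this with a clearer message (the helper name and the idea of checking the PE magic bytes are assumptions, not part of either package):

```python
def looks_like_windows_executable(path):
    """Windows PE executables start with the b'MZ' magic bytes;
    anything else will fail CreateProcess with WinError 193."""
    with open(path, "rb") as f:
        return f.read(2) == b"MZ"
```

In practice the fix is typically to point the MAGICC executable path at the Windows `.exe` build rather than a binary compiled for another platform.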
Desired behaviour
The current code should allow a scenario to be run with only `Emissions|CO2|Energy and Industrial Processes`, or even only `Emissions|CO2`.
Issue
If running from an emissions file where all scenarios have only one of these variables (only `"Emissions|CO2*"`), we get the following error (I used `run-example-fair.ipynb` to produce this error, with changed input data `EMISSIONS_INPUT_FILE = "ar6_minimum_emissions.csv"`):
run_workflow(
    input_emissions_file,
    outdir,
    model=model,
    model_version=model_version,
    probabilistic_file=probabilistic_file,
    fair_extra_config=fair_extra_config,
    num_cfgs=num_cfgs,
    infilling_database=infilling_database_file,
    scenario_batch_size=scenario_batch_size,
)
2022-12-12 11:55:49 climate_assessment.cli MainThread - INFO: Outputs will be saved in: ..\data\output-fair-example-notebook
2022-12-12 11:55:49 climate_assessment.cli MainThread - INFO: Outputs will be saved with the ID: ar6_minimum_emissions
2022-12-12 11:55:49 climate_assessment.cli MainThread - INFO: Loading ..\tests\test-data\ar6_minimum_emissions.csv
2022-12-12 11:55:49 pyam.core MainThread - INFO: Reading file ..\tests\test-data\ar6_minimum_emissions.csv
2022-12-12 11:55:49 climate_assessment.cli MainThread - INFO: Converting to basic columns i.e. removing any extra columns
2022-12-12 11:55:49 climate_assessment.cli MainThread - INFO: Performing input data checks
2022-12-12 11:55:49 climate_assessment.checks MainThread - INFO: CHECK: if no non-co2 negatives are reported.
2022-12-12 11:55:49 pyam.core MainThread - WARNING: Filtered IamDataFrame is empty!
...
c:\users\kikstra\documents\github\climate-assessment\src\climate_assessment\checks.py in perform_input_checks(df, output_csv_files, output_filename, lead_variable_check, historical_check, reporting_completeness_check, outdir)
875
876 LOGGER.info("CHECK: if no non-co2 negatives are reported.")
--> 877 df = check_negatives(df, output_filename, outdir=outdir)
878
879 LOGGER.info("CHECK: report emissions for all minimally required years.")
c:\users\kikstra\documents\github\climate-assessment\src\climate_assessment\checks.py in check_negatives(df, filename, negativethreshold, outdir, prefix)
558 # set small non-negative non-CO2 values to zero
559 df_co2 = df.filter(variable=f"{prefix}Emissions|CO2*").timeseries()
--> 560 df_nonco2 = df.filter(variable=f"{prefix}Emissions|CO2*", keep=False).timeseries()
561 df_nonco2 = df_nonco2.where(
562 (df_nonco2 > 0) | (df_nonco2 < negativethreshold) | df_nonco2.isnull(), other=0
~\.conda\envs\ca-testing\lib\site-packages\pyam\core.py in timeseries(self, iamc_index)
782 """
783 if self.empty:
--> 784 raise ValueError("This IamDataFrame is empty!")
785
786 s = self._data
ValueError: This IamDataFrame is empty!
Proposed minimum solution
Add if-statement(s) where necessary, e.g. only call `.timeseries()` when `df.filter(variable=f"{prefix}Emissions|CO2*", keep=False)` is not `.empty` (calling `.timeseries()` on an empty IamDataFrame is itself what raises the error).
Proposed ideal solution
Add a test that takes in a minimum emissions file like ar6_minimum_emissions.csv, and that checks either:
i. whether all checks are passed
OR
ii. that a complete infilled emissions set is provided based on this input.
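As a plain-Python illustration of the guard from the proposed minimum solution, here is the "set small negative non-CO2 values to zero" step re-expressed with an explicit empty-input guard (the function name and default threshold are assumptions for illustration; the real code operates on a pyam timeseries):

```python
def zero_small_negatives(values, threshold=-0.1):
    """Set small negative values to zero; leave values below `threshold`
    untouched so they can be flagged later. Guarded against empty input,
    which is the case that currently crashes `.timeseries()`."""
    if not values:  # empty selection: nothing to adjust
        return []
    return [0.0 if threshold < v < 0 else v for v in values]
```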
After merging #31, I think it would be a good moment to issue a new release.
When trying to include climate-assessment in the Scenario Explorer processing infrastructure, I ran into the issue that this package requires `pyam-iamc==1.4.0`, while the also-required nomenclature package requires `pyam-iamc>=1.7.0`.
I'll open up a PR, updating the dependencies to see if the tests run.
FYI @jkikstra, @znicholls
MAGICC spews warnings when running (about reading a sub-annual file). This shouldn't be the case but doesn't actually affect performance at all.
There are currently a number of issues when trying to build the Docker container.
Including:
FYI @jkikstra, @lewisjared, @znicholls, @gidden
This is a placeholder to point out that the logging/command line output can be improved.
For instance:
Priority: low.
Many of our requirements are pinned. This no longer makes sense; we shouldn't force users to use specific versions just to install the package.
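For example, the pin that clashes with nomenclature (see the dependency-conflict issue above) could be relaxed from an exact pin to a lower bound (exact bounds to be decided):

```
# before (pinned to one exact version):
pyam-iamc==1.4.0

# after (any compatible newer release):
pyam-iamc>=1.7
```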
At the moment it looks like we need to do:
--model "magicc" --version "v7.5.3"
and
--model "fair" --version "1.6.2"
It's confusing to have a "v" prefix for MAGICC but not for FaIR.
P.S. Note that CICERO uses an entirely different version naming scheme: "v2019vCH4".
Line for FAIR:
Line for MAGICC:
Line for CICERO:
(ping @phackstock)
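One low-effort option would be to tolerate both spellings at the CLI layer before dispatching to the model adapters. A sketch (this helper does not exist in the package; it deliberately leaves schemes like CICERO's "v2019vCH4" untouched):

```python
def normalize_version(version):
    """Accept both "v7.5.3" and "7.5.3": strip a leading "v" only when
    the remainder is a plain dotted number, so CICERO-style strings
    such as "v2019vCH4" pass through unchanged."""
    rest = version[1:]
    if version.startswith("v") and rest and all(c.isdigit() or c == "." for c in rest):
        return rest
    return version
```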
The documentation is currently still missing a clear description of all possible (CSV and XLSX) output files.
I propose adding a list of those files, describing what each contains and when it appears as output.
This could be added, for instance in the following section:
https://climate-assessment.readthedocs.io/en/latest/general.html#expected-output
Now that we're public, should we make CI public again? I think CI minutes are free again now. @phackstock @khaeru @danielhuppmann do you know better what rules about minutes are for public repos?
As pointed out by @znicholls in #50, there are now breaking changes in cicero, so the nightly tests would need to be adjusted/removed.
Currently climate-assessment and openscm-runner are pinned to run v1.6.2 of FaIR. This is great for AR6, but the model has moved on, and the scenarios that are hard-coded in the historical data are quickly becoming out of date. From our end we're developing annually updated calibrations. Many people use climate-assessment and openscm-runner in their workflows, so it would be nice to keep the climate assessment current. Yep, this is me volunteering.
Updating FaIR and its calibration will cause issues with reproducing AR6. There is no simple solution other than for a user to request an older version of climate-assessment in their workflows (e.g. v0.1.1).
see also openscm/openscm-runner#93
Bumping to the new version of pyam should yield a significant speed increase when bigger files are read in.
See https://github.com/IAMconsortium/pyam/releases/tag/v1.8.0 (~3x speed-up for the AR6 ensemble, from 90s down to 30s).
Hi all,
Basic question here. I'm wondering what the intended way is to use the CLI and the scripts contained in `scripts/`. With a development version, I can just call e.g. `python /path/to/cloned/climate-assessment/scripts/run_harm_inf.py --flags`. However, the `scripts/` folder is not installed when using pip. So, what was the original plan for installing and using the CLI scripts? Are they supposed to be called only from inside Python, or is Anaconda the only supported way to use the CLI?
I ask because we are trying to avoid actual Python code in the REMIND implementation and to stick to the end-user scripts as much as possible. Of course, if that's not supposed to be supported, we can get some Python in there, or I can do the work to make the CLI more "installable" if someone points me in the right direction.
Thanks anyway.
There are a number of pyam improvements that would greatly benefit us downstream.
One of our checks is whether Kyoto-gas emissions are greater when infilled than when harmonised. This is generally sensible, but not when CO2 AFOLU emissions have been infilled (as they can be negative).
This is a bug and/or something we should probably mention in the docs.
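A sketch of how the check could be made AFOLU-aware (the function name and scalar inputs are hypothetical; the real check compares pyam timeseries):

```python
def infilled_kyoto_check_ok(harmonized_kyoto, infilled_kyoto, infilled_co2_afolu=None):
    """Expect infilled Kyoto-gas totals >= harmonized totals, but skip
    the comparison when CO2|AFOLU was infilled with negative values,
    since negative AFOLU legitimately lowers the infilled total."""
    if infilled_co2_afolu is not None and infilled_co2_afolu < 0:
        return True  # comparison not meaningful in this case
    return infilled_kyoto >= harmonized_kyoto
```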
Desired behaviour
It should be possible to perform a climate run with either `Emissions|CO2` or `Emissions|CO2|Energy and Industrial Processes` reported, at minimum.
Issue
With the input ar6_minimum_emissions_co2.csv, after resolving the issue described in #19, we get a bit further in the processing chain, but at the end of harmonisation_and_infilling we get an error like the one below (running `run-example-fair.ipynb`).
This error is thrown by `sanity_check_hierarchy()`, which takes `co2_infill_db` as input - so that's where the investigation should start.
...
2022-12-12 14:40:47 climate_assessment.infilling MainThread - INFO: Post-processing for climate models
2022-12-12 14:40:47 climate_assessment.infilling MainThread - INFO: Checking infilled results have required years and variables
2022-12-12 14:40:47 climate_assessment.infilling MainThread - INFO: Check that there are no non-CO2 negatives introduced by infilling
2022-12-12 14:40:47 climate_assessment.harmonization_and_infilling MainThread - INFO: Writing infilled data as csv to: ..\data\output-fair-example-notebook\ar6_minimum_emissions_co2_harmonized_infilled.csv
2022-12-12 14:40:47 climate_assessment.harmonization_and_infilling MainThread - INFO: Writing infilled data as xlsx to: ..\data\output-fair-example-notebook\ar6_minimum_emissions_co2_harmonized_infilled.xlsx
2022-12-12 14:40:47 pyam.utils MainThread - WARNING: Formatted data is empty!
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_2548\3282750836.py in <module>
8 num_cfgs=num_cfgs,
9 infilling_database=infilling_database_file,
---> 10 scenario_batch_size=scenario_batch_size,
11 )
...
c:\users\kikstra\documents\github\climate-assessment\src\climate_assessment\harmonization_and_infilling.py in harmonization_and_infilling(df, key_string, infilling_database, prefix, instance, outdir, do_harmonization)
133 infilled,
134 out_afolu="Emissions|CO2|AFOLU",
--> 135 out_fossil="Emissions|CO2|Energy and Industrial Processes",
136 )
137
c:\users\kikstra\documents\github\climate-assessment\src\climate_assessment\checks.py in sanity_check_hierarchy(co2_inf_db, harmonized, infilled, out_afolu, out_fossil)
1101 )
1102
-> 1103 if not (np.isclose(infill_db_pivot, harmonized_pivot)).all():
1104 raise ValueError(
1105 "The sum of AFOLU and Energy and Industrial Processes "
<__array_function__ internals> in isclose(*args, **kwargs)
~\.conda\envs\ca-testing\lib\site-packages\numpy\core\numeric.py in isclose(a, b, rtol, atol, equal_nan)
2356 yfin = isfinite(y)
2357 if all(xfin) and all(yfin):
-> 2358 return within_tol(x, y, atol, rtol)
2359 else:
2360 finite = xfin & yfin
~\.conda\envs\ca-testing\lib\site-packages\numpy\core\numeric.py in within_tol(x, y, atol, rtol)
2337 def within_tol(x, y, atol, rtol):
2338 with errstate(invalid='ignore'):
-> 2339 return less_equal(abs(x-y), atol + rtol * abs(y))
2340
2341 x = asanyarray(a)
ValueError: operands could not be broadcast together with shapes (86,1) (0,1)
Proposed minimum solution
tbd.
First check how `sanity_check_hierarchy()` is implemented, and figure out why it throws the error.
Proposed desired solution
tbd.
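The broadcast error with shapes (86,1) and (0,1) says one of the two pivoted series is empty. A first step could be to fail with a clearer message before the numpy comparison; a plain-Python sketch (names assumed):

```python
import math

def series_match(infill_db, harmonized, rel_tol=1e-9):
    """Element-wise closeness check that reports an empty or misaligned
    input explicitly, instead of numpy's opaque broadcast error."""
    if len(infill_db) != len(harmonized):
        raise ValueError(
            f"Cannot compare series of lengths {len(infill_db)} and "
            f"{len(harmonized)}: one input to sanity_check_hierarchy() is likely empty"
        )
    return all(math.isclose(a, b, rel_tol=rel_tol) for a, b in zip(infill_db, harmonized))
```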
Desired behaviour
Both `[2015] + list(range(2020, 2101, 10))` and `[2010] + list(range(2020, 2101, 10))` are supposed to be acceptable sets of reported years.
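Spelled out, the two year vectors that should both be accepted are:

```python
# The two sets of reported years that should both be valid input:
years_with_2015 = [2015] + list(range(2020, 2101, 10))
years_with_2010 = [2010] + list(range(2020, 2101, 10))
# i.e. [2015, 2020, 2030, ..., 2100] and [2010, 2020, 2030, ..., 2100]
```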
Issue
With input that has no 2010 value reported, but does have `[2015] + list(range(2020, 2101, 10))` (ar6_minimum_emissions_co2eip.csv), we get an error like the following (running `run-example-fair.ipynb`):
run_workflow(
    input_emissions_file,
    outdir,
    model=model,
    model_version=model_version,
    probabilistic_file=probabilistic_file,
    fair_extra_config=fair_extra_config,
    num_cfgs=num_cfgs,
    infilling_database=infilling_database_file,
    scenario_batch_size=scenario_batch_size,
)
2022-12-12 14:31:06 climate_assessment.cli MainThread - INFO: Outputs will be saved in: ..\data\output-fair-example-notebook
2022-12-12 14:31:06 climate_assessment.cli MainThread - INFO: Outputs will be saved with the ID: ar6_minimum_emissions_co2eip
2022-12-12 14:31:06 climate_assessment.cli MainThread - INFO: Loading ..\tests\test-data\ar6_minimum_emissions_co2eip.csv
2022-12-12 14:31:06 climate_assessment.cli MainThread - INFO: Converting to basic columns i.e. removing any extra columns
2022-12-12 14:31:06 climate_assessment.cli MainThread - INFO: Performing input data checks
2022-12-12 14:31:06 climate_assessment.checks MainThread - INFO: CHECK: if no non-co2 negatives are reported.
2022-12-12 14:31:06 climate_assessment.checks MainThread - INFO: CHECK: report emissions for all minimally required years.
2022-12-12 14:31:06 pyam.utils MainThread - WARNING: Formatted data is empty!
2022-12-12 14:31:06 climate_assessment.checks MainThread - INFO: CHECK: combine E&IP if reported separately.
2022-12-12 14:31:06 climate_assessment.checks MainThread - INFO: CHECK: reclassify Waste and Other CO2 under E&IP.
2022-12-12 14:31:06 climate_assessment.checks MainThread - INFO: CHECK: delete rows only reporting zero for the entire timeframe.
2022-12-12 14:31:06 climate_assessment.checks MainThread - INFO: CHECK: check if co2 lead variables are reported.
2022-12-12 14:31:06 climate_assessment.harmonization MainThread - INFO: Using ar6 instance for harmonization
2022-12-12 14:31:06 scmdata.run MainThread - INFO: Reading c:\users\kikstra\documents\github\climate-assessment\src\climate_assessment\harmonization\history_ar6.csv
2022-12-12 14:31:06 climate_assessment.harmonization MainThread - INFO: Not harmonizing set()
2022-12-12 14:31:06 climate_assessment.harmonization MainThread - INFO: harmonization_year 2015
2022-12-12 14:31:06 climate_assessment.harmonization MainThread - INFO: Stripping equivalent units for processing
2022-12-12 14:31:06 climate_assessment.harmonization MainThread - INFO: Creating pd.DataFrame's for aneris
2022-12-12 14:31:06 climate_assessment.harmonization MainThread - INFO: Adding 2015 values based on historical percentage offset from 2010
...
c:\users\kikstra\documents\github\climate-assessment\src\climate_assessment\harmonization_and_infilling.py in harmonization_and_infilling(df, key_string, infilling_database, prefix, instance, outdir, do_harmonization)
68
69 if do_harmonization:
---> 70 harmonized = run_harmonization(df, instance=instance, prefix=prefix)
71 else:
72 LOGGER.info("Not performing harmonization")
c:\users\kikstra\documents\github\climate-assessment\src\climate_assessment\harmonization\__init__.py in run_harmonization(df, instance, prefix)
193 history,
194 yr=historical_offset_add_year,
--> 195 low_yr=historical_offset_base_year,
196 )
197
c:\users\kikstra\documents\github\climate-assessment\src\climate_assessment\harmonization\__init__.py in add_year_historical_percentage_offset(df, dfhist, yr, low_yr)
118 df = pd.concat([df2015, dfno2015])
119 else:
--> 120 raise KeyError(f"{low_yr} not in `dfno2015`")
121
122 return df
KeyError: '2010 not in `dfno2015`'
Proposed minimum solution
In the function `add_year_historical_percentage_offset()`, add a check for the case where ALL scenarios have 2015 while NONE have 2010.
Proposed desired solution
tbd.
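A sketch of such a check (the function and data layout are hypothetical; in the real code this would inspect the years actually present in `df`):

```python
def only_2015_reported(scenario_years):
    """scenario_years: mapping of scenario name -> set of reported years.
    True when every scenario reports 2015 and none reports 2010, in which
    case the 2010-based percentage offset must not be attempted."""
    years = scenario_years.values()
    return all(2015 in ys for ys in years) and not any(2010 in ys for ys in years)
```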
It would probably be good to run a GitHub Actions workflow that checks whether the docs build successfully. This way we can avoid a PR accidentally breaking the docs. Unless we have that in place already, in which case we can close this issue.
In our workflows, particularly the WG3 workflow, there is quite a lot of repetition. This violates DRY. As GitHub Actions YAML doesn't support anchors, other options have to be used, e.g. https://stackoverflow.com/questions/64895637/what-are-the-dry-options-for-github-action-yml-workflows
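One option GitHub supports natively is reusable workflows (`workflow_call`); a minimal sketch (file names and inputs are made up for illustration):

```yaml
# .github/workflows/reusable-setup.yml (hypothetical)
name: Reusable setup
on:
  workflow_call:
    inputs:
      python-version:
        required: true
        type: string
jobs:
  setup:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: ${{ inputs.python-version }}

# A caller workflow then replaces the duplicated steps with:
#   jobs:
#     wg3:
#       uses: ./.github/workflows/reusable-setup.yml
#       with:
#         python-version: "3.11"
```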
The final publication is online as of today: https://gmd.copernicus.org/articles/15/9075/2022/
Links/citations should be updated (both the updated paper link and the code/data assets).