switch-model / switch
A Modern Platform for Planning High-Renewable Power Systems
Home Page: http://switch-model.org/
License: Other
I am working with some of the set arrays, and realized that we have two different naming conventions going. We should probably standardize on one or the other.
Option 1: (index_set)_(listed_items), e.g.,
TS_TPS
PROJ_FUEL_USE_SEGMENTS
PERIOD_RELEVANT_TRANS_BUILDS
Option 2: (listed_items)_(preposition)_(index_set), e.g.,
PROJECTS_ACTIVE_IN_TIMEPOINT
CONNECTIONS_DIRECTED_INTO_WN
ACTIVE_PERIODS_FOR_PROJECT
Option 1 is more terse, but it is a little unclear where the index name ends and the item name begins. These can also have naming conflicts with product-style sets, e.g., if PROJ_DISPATCH_POINTS is a set of tuples from within PROJECTS x TIMEPOINTS, what should we call the indexed set of TIMEPOINTS for each PROJECT? So I would recommend standardizing on Option 2.
With Option 2, we might also want to decide whether the name should be a noun_adjective_in_something (PROJECTS_ACTIVE_IN_TIMEPOINT) or an adjective_noun_for_something (ACTIVE_PROJECTS_FOR_TIMEPOINT). I doubt we can standardize on which preposition to use (ACTIVE_PROJECTS_FOR_TIMEPOINT or ACTIVE_PROJECTS_IN_TIMEPOINT?), but that should be OK.
Another naming issue: we sometimes write out "project" or "PROJECT" in names of components and files, and sometimes write "proj" or "PROJ". We should probably standardize on the long or the short form. I don't really have an opinion about which.
I think if we straighten these out, it will help people in learning what each set does, and guessing the name of the set they should use (or create) for a particular task.
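To make the naming clash concrete, here is a plain-Python sketch (dicts standing in for Pyomo Sets; all names hypothetical) of why Option 1 runs out of names once a product-style set exists:

```python
# Hypothetical data: a set of (project, timepoint) tuples, Option-1 style name.
PROJ_DISPATCH_POINTS = {("Wind1", 1), ("Wind1", 2), ("Solar1", 2)}

# Under Option 1, the indexed set of timepoints per project would also want to
# be called something like PROJ_TPS, colliding with the tuple set above.
# Option 2 stays unambiguous:
TPS_FOR_PROJECT = {}
for proj, tp in PROJ_DISPATCH_POINTS:
    TPS_FOR_PROJECT.setdefault(proj, set()).add(tp)
```

Here TPS_FOR_PROJECT["Wind1"] gives {1, 2}, and the name says exactly what is listed and what it is indexed by.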
Various functions now produce standardized output files. However, when multiple scenarios are run in the same directory, the later output files usually overwrite the earlier ones, making them irretrievable.
We should modify these output functions so that if a scenario name is defined in model.options.scenario_name, it is appended to the file name or used to define a subdirectory within outputs/ where all the output files are stored. Either of these would make it possible to save results from multiple scenarios and then retrieve them later.
The reporting routines in the switch.hawaii modules append the scenario name to the end of each file. This makes it easy to browse for a particular output file, e.g., to open it in Excel. The subdirectory option would be neater, but would also take a little more digging to inspect the results.
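A minimal sketch of the two options discussed above (the helper name and signature are hypothetical, not the actual Switch API):

```python
import os

def output_path(base_dir, filename, scenario_name=None, use_subdir=False):
    """Build an output path, either suffixing the scenario name onto the
    file name (as the switch.hawaii modules do) or nesting the file in a
    per-scenario subdirectory."""
    if scenario_name and use_subdir:
        # outputs/<scenario>/<filename>
        return os.path.join(base_dir, scenario_name, filename)
    if scenario_name:
        # outputs/<filename-root>_<scenario><ext>
        root, ext = os.path.splitext(filename)
        return os.path.join(base_dir, root + "_" + scenario_name + ext)
    return os.path.join(base_dir, filename)
```

For example, output_path("outputs", "dispatch.csv", "high_re") yields outputs/dispatch_high_re.csv, while passing use_subdir=True yields outputs/high_re/dispatch.csv.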
Switch currently requires there to be at least one existing build.
Steps to reproduce: Take examples/copperplate0 and modify it by deleting all rows from proj_existing_builds.tab and proj_build_costs.tab. Run switch_mod.solve.
Expected result: No error.
Actual result: Switch gives the following error:
ERROR: Constructing component 'proj_existing_cap' from data={None: 'PROJECT'} failed:
RuntimeError: Failed to set value for param=proj_existing_cap, index=None, value=PROJECT.
source error message="Error setting parameter value: Index 'None' is not valid for array Param 'proj_existing_cap'"
Traceback (most recent call last):
...
File ".../pyomo/core/base/param.py", line 802, in construct
% (self.cname(True), str(key), str(val), str(msg)) )
RuntimeError: Failed to set value for param=proj_existing_cap, index=None, value=PROJECT.
source error message="Error setting parameter value: Index 'None' is not valid for array Param 'proj_existing_cap'"
I debugged this and I discovered that Pyomo has a dubious special case for 1-line .tab files. (See the elif len(tmp) == 1 case in pyomo/core/plugins/data/text.py.)
I would expect Pyomo to treat a 1-line file as containing no rows of data (just a row of headings). However, instead Pyomo seems to be converting the file to the declaration param proj_existing_cap := PROJECT.
A possible fix would be to change load_aug() in utilities.py so that it skips any .tab files that contain zero rows. load_aug() already has a check for that, but it's only enabled when it's called with optional=True.
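A sketch of the proposed guard (the function name is hypothetical; load_aug() would run something like this before handing the file to Pyomo):

```python
def has_data_rows(lines):
    """Return True only if the .tab content has at least one data row
    beyond the header line; a header-only file should be skipped so Pyomo
    never sees its 1-line special case."""
    non_blank = [line for line in lines if line.strip()]
    return len(non_blank) > 1
```

load_aug() could then skip the load whenever has_data_rows(open(path)) is False, instead of only doing so when optional=True.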
I was wondering if there were plans to update to Python 3?
https://docs.python.org/3/howto/pyporting.html
If not, is there an appetite for such a move?
Reed had trouble with run_tests.py on a fresh install. I'll ask him to assign himself as well to join in the conversation.
Reeds-MacBook-Air-2:/ reedHaubenstock$ ls -l /Users/reedHaubenstock/Desktop/switch_py
/Users/reedHaubenstock/Desktop/switch_py:
total 72
-rwxr-xr-x@ 1 reedHaubenstock staff 342 Jul 8 20:15 AUTHORS
-rwxr-xr-x@ 1 reedHaubenstock staff 1104 Jul 8 20:15 INSTALL
-rwxr-xr-x@ 1 reedHaubenstock staff 11555 Jul 8 20:15 LICENSE
-rwxr-xr-x@ 1 reedHaubenstock staff 732 Jul 8 20:15 LICENSE.BOILERPLATE
-rwxr-xr-x@ 1 reedHaubenstock staff 956 Jul 8 20:15 README
drwxr-xr-x@ 30 reedHaubenstock staff 1020 Jul 8 20:15 doc
drwxr-xr-x@ 9 reedHaubenstock staff 306 Jul 14 12:24 examples
-rwxr-xr-x@ 1 reedHaubenstock staff 2257 Jul 8 20:15 run_tests.py
-rw-r--r--+ 1 reedHaubenstock staff 1609 Jul 14 12:19 run_tests.pyc
drwxr-xr-x@ 15 reedHaubenstock staff 510 Jul 8 20:15 sandbox_dev
drwxr-xr-x@ 32 reedHaubenstock staff 1088 Jul 14 12:18 switch_mod
drwxr-xr-x@ 4 reedHaubenstock staff 136 Jul 8 20:15 tests
Reeds-MacBook-Air-2:/ reedHaubenstock$ echo $PYTHONPATH
:/Users/reedHaubenstock/Desktop/switch_py
Reeds-MacBook-Air-2:/ reedHaubenstock$ cd Users/reedHaubenstock/Desktop/switch_py
Reeds-MacBook-Air-2:switch_py reedHaubenstock$ ./run_tests.py
----------------------------------------------------------------------
Ran 0 tests in 0.000s
OK
If arguments that set options for the solve module are entered on the command line along with the python -m switch_mod.solve command, they are also passed to the model's _ArgumentParser, which gathers command-line options for the other modules. This raises an error, since no module uses the option verbose, for example.
I think a way to get around this would be to modify the define_AbstractModel() and create_model() functions in the utilities module so their default values are empty lists instead of all of the command-line options: change sys.argv[1:] to []. That way, the solve module will be able to parse the basic options, and none of them will be passed to the model's module option parser.
If a developer is writing a new module that requires a command-line option, they can pass an explicit argument list instead of relying on the default.
An alternative would be to write some lines to filter the system's argument list for non-solve options and pass that list as the default.
I'm not sure which approach would be more suitable given pending pull request [https://github.com//pull/17]. Maybe defaulting to an empty list would be better in the very short term, so that new users can get SWITCH working on their first run.
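The empty-default idea can be sketched as follows (the real define_AbstractModel() signature differs; this just illustrates the sys.argv[1:] → [] change):

```python
import sys

def define_AbstractModel(*module_names, args=None):
    # Hypothetical signature. Default to an empty argument list instead of
    # sys.argv[1:], so solve-level flags such as --verbose never reach the
    # per-module option parser.
    if args is None:
        args = []  # previously: args = sys.argv[1:]
    return module_names, args
```

Callers that do need command-line options would pass args=sys.argv[1:] (or a pre-filtered list) explicitly.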
New installation fails because Pandas deprecated sort() in favor of sort_values() and sort_index(). Should I require a pandas version earlier than 0.20, or update the calls from sort() to sort_values()?
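If updating the calls, the change is mechanical; e.g. (illustrative data):

```python
import pandas as pd

df = pd.DataFrame({"gen": ["b", "c", "a"], "cost": [2, 3, 1]})

# Before pandas 0.20 this might have been written as df.sort("cost");
# the modern replacement is:
sorted_df = df.sort_values("cost")
```

sort_values() has been available since pandas 0.17, so updated code would still work with fairly old installations.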
We need more robust methods for re-using reference input sets, that let us:
a) specify permutations of inputs for exploring a wider space
b) avoid duplicating data on disk
c) produce clean and compact diffs
d) deploy readily
e) track development, history, and the stakeholder approval process
f) easily produce derivative work
These issues of permutation and history tracking may have distinct solutions, but I am wondering if we could design a way to use git and data-organization conventions to accomplish all of these.
Matthias has many recurring use cases for a)-c).
Sergio and his team are actively working through d) as they compile data for Switch-Mexico. They have a chance to do it well, and could use some help figuring out how to navigate the tools. They are using Google Drive; I suggested moving to git (and GitHub, if their repositories don't hit a size restriction).
I've had separate conversations with Mark and Ana about these issues lately.
That's it for now. I wanted to start a thread on this topic before leaving on vacation for the week.
-Josiah
This is an update of the stale pull request #97 . Either this is a bug that needs to be fixed, or a confusing modeling formulation that needs clarification in documentation and/or formulation.
The minimum downtime constraint is currently:
CommitGen[t] <= max(CommitUpperLimit[t_prior] for t_prior in time_window + 1) - sum(m.ShutdownGenCapacity[t_prior] for t_prior in time_window)
I think it needs to be:
CommitGen[t] <= CommitUpperLimit[t] - sum(m.ShutdownGenCapacity[t_prior] for t_prior in time_window)
Matthias has stated that using max() is necessary to track a band of capacity that needs to stay down for maintenance. To me, it looks like it will overestimate available capacity if more capacity was available in prior timepoints.
Matthias's comments from prior documentation & a post May 19, 2017 on Pull Request #96. The max(...) term finds the largest fraction of capacity that could have been committed in the last x hours, including the current hour. We assume that everything above this band must remain turned off (e.g., on maintenance outage). Note: this band extends one step prior to the first relevant shutdown, since that capacity could have been online in the prior step.
... This implements a band of capacity that does not participate in the minimum downtime constraint. Without that term, the model can turn off some capacity, and then get around the min-downtime rule by turning on other capacity which is actually forced off by gen_max_commit_fraction, e.g., due to a maintenance outage.
My response: Any forced maintenance outage encoded by gen_max_commit_fraction will be directly reflected into ShutdownGenCapacity and subject to minimum downtime, if that capacity was online when the maintenance event started. If sufficient capacity was offline prior to maintenance, then min downtime would not need to be tracked separately. I don't see a separate band of capacity that needs to be implicitly tracked. The way I read it, the max(...) term would overestimate available capacity if prior timepoints in the window had more capacity available. I think the max(...) term needs to be replaced by m.CommitUpperLimit[g, t].
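A small numeric illustration of the disagreement (all numbers hypothetical, plain Python standing in for the Pyomo expressions):

```python
# Available committable capacity per timepoint, and capacity shut down per
# timepoint (hypothetical data).
commit_upper_limit = {1: 100, 2: 100, 3: 80}
shutdown = {2: 30, 3: 0}

t = 3
window = [2, 3]  # timepoints inside the min-downtime window ending at t

# Current formulation: max over the window plus one prior step.
current_bound = (
    max(commit_upper_limit[tp] for tp in [1] + window)
    - sum(shutdown[tp] for tp in window)
)

# Proposed formulation: only the current timepoint's upper limit.
proposed_bound = commit_upper_limit[t] - sum(shutdown[tp] for tp in window)
```

Here current_bound is 70 while proposed_bound is 50: when prior timepoints had more capacity available (100 vs. 80), the max() term allows commitment above what gen_max_commit_fraction permits at t, which is the overestimate described above.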
When building a long-term scenario (2050) I included renewable energy plants installed in 2015, however I kept getting the following error:
RuntimeError: Failed to set value for param=gen_max_capacity_factor, index=('renewPlant', 338), value=0.248987. source error message="Error setting parameter value: Index '('renewPlant', 338)' is not valid for array Param 'gen_max_capacity_factor'"
It seems that SWITCH can't assign capacity factors for legacy plants (and hence can't construct the model to be run) if their maximum age has been reached before an investment period. In this particular case, no capacity factors could be assigned after 2040 (2015 initial period + 25-year lifetime = 2040) because the plants weren't meant to exist then.
A quick workaround to construct and run was to extend these projects' lifetime.
Hi all,
I am interested in using your SWITCH tool for research on alternative ES and generation systems. I tried setting up SWITCH, but am running into the following errors when I run run_tests.py:
invalid literal for int() with base 10: 'c'
invalid literal for int() with base 10: 'c'
invalid literal for int() with base 10: 'c'
invalid literal for int() with base 10: 'c'
invalid literal for int() with base 10: 'c'
invalid literal for int() with base 10: 'c'
invalid literal for int() with base 10: 'c'
invalid literal for int() with base 10: 'c'
invalid literal for int() with base 10: 'c'
invalid literal for int() with base 10: 'c'
invalid literal for int() with base 10: 'c'
invalid literal for int() with base 10: 'c'
invalid literal for int() with base 10: 'c'
To exit: use 'exit', 'quit', or Ctrl-D.
An exception has occurred, use %tb to see the full traceback.
SystemExit: True
I will admit, trying to get SWITCH set up was a bit confusing for me, so this could very well be an issue on my end. To help debug this issue, here are some details about what I have installed and the process I took to set up SWITCH:
Environment:
Process to set up SWITCH:
After all of this, I try running run_tests.py in Spyder and it gives me the above errors. Any ideas what the problem could be? Thanks for any help.
Miles
Edit: I forgot to also mention that I have not been able to successfully run make_doc.sh. I downloaded cygwin and from there went to the doc folder and tried
bash make_doc.sh
It then gives me:
no Python documentation found for '../\r'
Various standard modules now have post_solve() functions with built-in report-writing behavior. There are no command-line options to control this, so users have no easy way to turn it off.
This can be a problem when running iterative models on an HPC system, e.g., I am currently running a model that solves for 2 years of hourly data in each of 6 study periods, which makes about 1 GB of output per iteration. None of that is needed, because I only use a few kB of diagnostic statistics each iteration, which are written by a different module. But all this output burdens the HPC's network file system and could slow down the iterations -- they only take a couple of minutes each when running a lot of solutions in parallel, but may take much longer when the file system is backlogged.
I am able to turn off a lot of output by leaving out switch_model.reporting, but it's not so easy to turn off the post_solve() functions in the standard modules. I am currently doing that from one of my custom modules via monkey-patching, as follows:
# suppress standard reporting to minimize disk access (ugh)
from importlib import import_module
for module in [
    'balancing.load_zones', 'generators.core.build', 'generators.core.dispatch',
    'generators.extensions.storage',
]:
    imported_module = import_module('switch_model.' + module)
    del imported_module.post_solve
But this is not a good long-term solution.
I would recommend moving these standard outputs into the reporting module, and having it 'duck type' the outputs, i.e., only generate outputs for components that are present in the model, and not worry about the others. Then these outputs can be suppressed simply by omitting the reporting module. This would also move reporting to a higher level ("report this element if present, regardless of what module created it"), which would make the output more standardized (roughly the same outputs whether you use the standard modules or some alternative replacements) and avoid repetition of reporting code in alternative models.
I would also recommend adding some command-line flags to control the level of reporting -- per-variable outputs, per-expression outputs, only certain variables and expressions, horizontal-table output or only certain horizontal-table output.
Pyomo 5.6.1 introduces some changes that can break Switch models. One is that it adds a util object to pyomo.environ, which masks a util module imported in switch_model.hawaii.save_results, used by the main Hawaii model (users get "AttributeError: 'module' object has no attribute 'write_table'").
I also saw problems (maybe the same one) when trying to run the advanced demand response model with Pyomo 5.6.1.
A quick fix is to roll back to Pyomo 5.1.1, but that is not sustainable long-term.
The following lines of code seem to be wrong. With the current implementation, StorageEnergyInstallCosts is the same in every period.
switch/switch_model/generators/extensions/storage.py, lines 136 to 143 in a68b637
I think the correct expression would be similar to how GenCapitalCosts is implemented, where only build years active in the period are included in the sum.
switch/switch_model/generators/core/build.py, lines 467 to 471 in a68b637
For example, if we consider a 20-year project, the overnight cost is currently annualized over 20 years but included in every period (possibly many more than 20 years).
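The proposed fix can be illustrated with plain Python (hypothetical data; the real code would use a Pyomo Expression over the model's build-year sets):

```python
# Annualized cost per (generator, build_year), and a uniform service life.
annualized_cost = {("Battery", 2020): 100.0, ("Battery", 2030): 150.0}
max_age = 20  # years each build remains in service

def storage_costs_in_period(period):
    """Sum annualized storage energy costs only over build years still in
    service during the period, mirroring the GenCapitalCosts treatment."""
    return sum(
        cost
        for (gen, build_year), cost in annualized_cost.items()
        if build_year <= period < build_year + max_age
    )
```

With this filter, the 2020 build contributes 100.0/yr only through 2039 and then drops out, instead of being charged in every period of the study.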
In switch_model.transmission.transport, we generally refer to entities called "transmission lines". These are actually transfer corridors: aggregated transfer capacity between two zones, calculated from the capabilities of the underlying network. Users often estimate this as a derated sum of the ratings of the transmission lines connecting these zones, but there are other ways to do it (e.g., consult the WECC Path Rating Catalog). At any rate, these are definitely not transmission lines, and it could bug electrical engineers to hear them called that. So we should probably rename them as transfer corridors, transmission corridors, transfer paths or similar. That will also keep the name space open for actual transmission line modeling later.
It would also be helpful to add bidirectional ratings for these paths (at least for initial capacity, and possibly different costs to update the capacity in each direction, based on a detailed transmission study). This reflects the fact that transfer paths often have different ratings in each direction.
Reed: Hey Josiah,
I just wanted to give you an update on investigating the GLPK issue.
I first ran all of the example models from the examples folder with cbc as the solver and they all worked. Then I tried to run each of them using GLPK as the solver which resulted in the models 3zone_toy, copperplate1, and discrete_build producing errors and copperplate0, custom_extension, and production_cost_models running without error.
Furthermore, the errors from 3zone_toy, copperplate1, and discrete_build were all actually the same error: "missing upper bound". I looked into the GLPK code to see where this message was produced and found it in the glpcpx.c file, specifically in the parsing_bounds function: the token found after the "<=" token was a symbolic name token, which doesn't match any of the if statements and falls through to the else case, which throws the error we see.
I haven't been able to figure out more than that, but I will keep working on it. In all honesty I'm not sure whether I should be examining the python code or how pyomo works or how glpk works in order to find out what is going wrong, but I'll check the web to see if anything similar has happened to anyone.
Josiah: That's weird. I think the problem lies in the fuel_markets module, because it only appears in the problematic examples. The strange thing is that neither of the two constraints in fuel_markets uses a "<="; both are equality constraints.
I'm not sure the best way to proceed either, so checking for similar problems on the web and/or making a simple example that reproduces this error and posting it with a question on the developer's list or stackoverflow makes sense.
Reed: I haven't tried looking into the fuel_markets module, but I'll look into it when I get the chance.
I think the error is fairly simple, but I'm not sure how to fix it. So the actual error I was getting looked something like this
ERROR: "[base]/site-packages/pyomo/opt/base/solvers.py", 428, solve
Solver (glpk) returned non-zero return code (1)
ERROR: "[base]/site-packages/pyomo/opt/base/solvers.py", 433, solve
Solver log:
GLPSOL: GLPK LP/MIP Solver, v4.55
Parameter(s) specified in the command line:
--write /var/folders/fp/86bgl9md7q90fc9jztz5m4980000gn/T/tmpfKTxhX.glpk.raw
--wglp /var/folders/fp/86bgl9md7q90fc9jztz5m4980000gn/T/tmpA8GzVb.glpk.glp
--cpxlp /var/folders/fp/86bgl9md7q90fc9jztz5m4980000gn/T/tmpKOUxAv.pyomo.lp
Reading problem data from '/var/folders/fp/86bgl9md7q90fc9jztz5m4980000gn/T/tmpKOUxAv.pyomo.lp'...
/var/folders/fp/86bgl9md7q90fc9jztz5m4980000gn/T/tmpKOUxAv.pyomo.lp:5257: missing upper bound name
CPLEX LP file processing error
Traceback (most recent call last):
File "solve.py", line 30, in <module>
results = opt.solve(switch_instance, keepfiles=False, tee=False)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pyomo/opt/base/solvers.py", line 435, in solve
"Solver (%s) did not exit normally" % self.name )
pyutilib.common._exceptions.ApplicationError: Solver (glpk) did not exit normally
So I went into the file /var/folders/fp/86bgl9md7q90fc9jztz5m4980000gn/T/tmpKOUxAv.pyomo.lp and looked at line 5257 to see what was causing the error, and sure enough the line was just 0 <= x1294 <= inf. For some reason GLPK only seems to recognize infinity when it is preceded by a + or - sign, which is kind of interesting because in the 5,256 lines before it there were plenty of +inf's that didn't cause GLPK any problems.
Thanks for the advice regarding what to do next, and I might end up posting something on stack exchange if I can't figure it out.
Josiah: I replicated your process on copperplate1, edited that line from inf to +inf, and got glpsol to solve it, which supports your assessment:
siah-macbookpro:copperplate1 josiah$ ./solve.py
ERROR: "[base]/site-packages/pyomo/opt/base/solvers.py", 428, solve
Solver (glpk) returned non-zero return code (1)
ERROR: "[base]/site-packages/pyomo/opt/base/solvers.py", 433, solve
Solver log:
GLPSOL: GLPK LP/MIP Solver, v4.52
Parameter(s) specified in the command line:
--write /var/folders/yv/q91xdmmh8xj3z80059_xrqzr0000gn/T/tmptblBpo.glpk.raw
--wglp /var/folders/yv/q91xdmmh8xj3z80059_xrqzr0000gn/T/tmpcrjXQm.glpk.glp
--cpxlp /var/folders/yv/q91xdmmh8xj3z80059_xrqzr0000gn/T/tmpWmfzOA.pyomo.lp
Reading problem data from `/var/folders/yv/q91xdmmh8xj3z80059_xrqzr0000gn/T/tmpWmfzOA.pyomo.lp'...
/var/folders/yv/q91xdmmh8xj3z80059_xrqzr0000gn/T/tmpWmfzOA.pyomo.lp:337: missing upper bound
CPLEX LP file processing error
Traceback (most recent call last):
File "./solve.py", line 26, in <module>
results = opt.solve(switch_instance, keepfiles=False, tee=False)
File "/Library/Python/2.7/site-packages/pyomo/opt/base/solvers.py", line 435, in solve
"Solver (%s) did not exit normally" % self.name )
pyutilib.common._exceptions.ApplicationError: Solver (glpk) did not exit normally
siah-macbookpro:copperplate1 josiah$ bbedit /var/folders/yv/q91xdmmh8xj3z80059_xrqzr0000gn/T/tmpWmfzOA.pyomo.lp
siah-macbookpro:copperplate1 josiah$ glpsol --cpxlp /var/folders/yv/q91xdmmh8xj3z80059_xrqzr0000gn/T/tmpWmfzOA.pyomo.lp
GLPSOL: GLPK LP/MIP Solver, v4.52
Parameter(s) specified in the command line:
--cpxlp /var/folders/yv/q91xdmmh8xj3z80059_xrqzr0000gn/T/tmpWmfzOA.pyomo.lp
Reading problem data from `/var/folders/yv/q91xdmmh8xj3z80059_xrqzr0000gn/T/tmpWmfzOA.pyomo.lp'...
52 rows, 44 columns, 111 non-zeros
342 lines were read
GLPK Simplex Optimizer, v4.52
52 rows, 44 columns, 111 non-zeros
Preprocessing...
23 rows, 27 columns, 55 non-zeros
Scaling...
A: min|aij| = 5.586e-01 max|aij| = 4.383e+03 ratio = 7.846e+03
GM: min|aij| = 6.357e-01 max|aij| = 1.573e+00 ratio = 2.474e+00
EQ: min|aij| = 4.067e-01 max|aij| = 1.000e+00 ratio = 2.459e+00
Constructing initial basis...
Size of triangular part is 23
0: obj = 2.029492572e+07 infeas = 7.797e+01 (0)
* 9: obj = 3.939574534e+07 infeas = 0.000e+00 (0)
* 15: obj = 2.667139398e+07 infeas = 1.339e-30 (0)
OPTIMAL LP SOLUTION FOUND
Time used: 0.0 secs
Memory used: 0.1 Mb (74014 bytes)
siah-macbookpro:copperplate1 josiah$
I skimmed through pyomo's interface to GLPK (pyomo/solvers/plugins/solvers/GLPK.py), but had trouble tracing down where the cpxlp file actually gets written out. I finally grep'ed their codebase for cpxlp and found pyomo/repn/plugins/cpxlp.py. Searching that file for 'inf' brought me to line 771-774, which has an if statement that translates no upper bound to " <= +inf\n" and otherwise prints the value of the upper bound. In python, positive infinity is printed as 'inf'. I edited the if statement to be:
if vardata_ub is not None and value(vardata_ub) != float('inf'):
    output_file.write(ub_string_template % value(vardata_ub))
else:
    output_file.write(" <= +inf\n")
and that fixed the problem.
I looked at fuel_markets.py and found the source of the problem. The upper bound on fuel that can be purchased in a given supply tier at a particular price defaults to infinity if no upper bound was provided, effectively giving an unlimited supply at that price. This parameter is used as the upper bound for the decision variable FuelConsumptionByTier and is translated into a constraint. Since the upper bound was defined, the LP writer used the Python representation of its value, which comes out as 'inf'. I added some logic to the lines that specify the upper bound to replace float('inf') with None, and it works now. I just pushed the fix to GitHub.
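The pattern of that fix can be sketched as (helper name hypothetical):

```python
def clean_upper_bound(limit):
    """Map an infinite upper bound to None, so Pyomo's LP writer takes its
    'no upper bound' branch and emits '<= +inf' (which GLPK accepts)
    instead of printing the bare Python string 'inf'."""
    return None if limit == float("inf") else limit
```

Applying this wherever a bound parameter defaults to infinity keeps the LP files solver-agnostic without patching Pyomo itself.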
I would really like to have documentation for each module with a number of features:
We have a lot of this in the Supplemental Information file for the Switch 2.0 paper. But ideally these elements would be automatically extracted from the source code, to make sure we cover everything. In the near term, this would be helpful for cross-checking that everything is covered in the Supplemental Information file, and possibly to add some extra detail there (cross-reference Python names for the Latex terms, list the tables that parameters are defined in, etc.) In the longer term, this could help us create web-based documentation that uses Python terms rather than Latex, is more readable than our current source code, and allows cross-referencing terms between modules.
I've been playing around this week to see what might be possible along these lines. I'm pretty sure now that we could do this by inserting our detailed comments throughout the main code in each module, using docstring format (triple-quotes). This text would be similar (often identical) to the comments currently written at the top of each module, but would be dispersed throughout the module instead. Once that is done, I'm pretty sure we (I) could automatically generate documentation pages for each module by following these steps:
(e.g., insert text like "Constraint_rule is given by:" to improve readability). Once this is done, it's fairly easy to convert the rst file into HTML, Tex, PDF, etc.
At a later stage, we may be able to use standard translations to convert the Python component names (and eventually maybe even the sum()-type expressions) from this rst file into equivalent Tex terms (i.e., shorter variable names), and write additional Tex-oriented rst files. Then it might be pretty quick work to tweak that to good Tex code and/or we could use a system to retain any translated code that has been manually tweaked, until the corresponding Python code changes (this would probably be something for later in the year).
To help you see how this could work, I have attached a zip file commit_autodoc.zip (but see new version in comment below) containing three example files:
So my questions are:
There are a number of ideas in pull request #115 that won't make it into the 2.0.6 release, so I'm gathering them here for future consideration.
First, unaddressed goals from the start of that pull request:
- An optional extras package (conda install switch_model_extras / pip install switch_model[extras])
- Is conda install -c defaults -c conda-forge switch_model or pip install switch_model (+ find glpk somewhere) the right recommendation for most users?
- Permanently adding channels (conda config --add channels new_channel). Should we recommend this for all users, or would that be too much meddling in people's system configuration?
- Developer install:
  git clone https://github.com/switch-model/switch.git && cd switch
  conda install --only-deps switch_model
  pip install --editable . or python setup.py develop
  (note: conda develop doesn't install command-line scripts or dependencies; see conda/conda-build#1992 (comment))
- switch find <module>: report the file path to the specified module (possibly just a submodule within switch_model), e.g., atom `switch find switch_model`, mate `switch find pyomo` or maybe atom `switch find discrete_commit`.
- switch install examples: copy the examples directory to the local directory
- switch solve --trace [[<module>|<file>[:<function>|<line>]], [<module>[:<function>]], ...]: invoke the debugger (a) when particular callbacks are called, (b) when any callback in the specified module is called (if no function specified), or (c) whenever any callback is called (if no modules specified).
Hey @josiahjohnston
I was wondering if we could add support for more input file extensions, such as *.csv and *.tsv. I think this would give some users of Switch more flexibility. I can open the pull request for this; it is an easy feature to implement.
I was wondering if there is any way that Switch simulations could be parallelized to speed up runs. This has more to do with Pyomo than with Switch itself, but I thought this would be the ideal place to ask first.
I've found that two main processes take up most of the time of my simulations: model instantiation (the load_inputs function) and the writing of the LP file that is passed to Gurobi, the solver. Actual optimization time is around the same order of magnitude as these processes, but it's already parallelized by Gurobi, so no speed-ups seem possible in that area.
Both model instantiation and pre-solving use only one core of the server and take significant time to complete. Do you know if it's possible to speed them up? For the pre-solving step, I tried using a direct Python interface to Gurobi (loading the model directly from Pyomo components instead of writing and parsing an LP file), but it was even a bit slower.
What could be the source of this error?
Deterministic concurrent LP optimizer: primal simplex, dual simplex, and barrier
Showing barrier log only...
Root barrier log...
Elapsed ordering time = 5s
Ordering time: 139.02s
Barrier performed 0 iterations in 568.30 seconds
Error termination
Explored 0 nodes (0 simplex iterations) in 570.04 seconds
Thread count was 1 (of 12 available processors)
Solution count 0
Solve interrupted (error code 10001)
Best objective -, best bound -, gap -
Traceback (most recent call last):
File "<stdin>", line 5, in <module>
File "C:\Users\Public\Software\lib\site-packages\pyomo\solvers\plugins\solvers\GUROBI_RUN.py", line 114, in gurobi_run
model.optimize()
File "model.pxi", line 833, in gurobipy.Model.optimize
gurobipy.GurobiError: Out of memory
Traceback (most recent call last):
File "C:\Users\Public\Software\Scripts\switch-script.py", line 11, in
load_entry_point('switch-model', 'console_scripts', 'switch')()
File "c:\users\puneet chitkara\switch\switch_model\main.py", line 39, in main
main()
File "c:\users\puneet chitkara\switch\switch_model\solve.py", line 161, in main
results = solve(instance)
File "c:\users\puneet chitkara\switch\switch_model\solve.py", line 731, in solve
results = model.solver_manager.solve(model, opt=model.solver, **solver_args)
File "C:\Users\Public\Software\lib\site-packages\pyomo\opt\parallel\async_solver.py", line 28, in solve
return self.execute(*args, **kwds)
File "C:\Users\Public\Software\lib\site-packages\pyomo\opt\parallel\manager.py", line 107, in execute
ah = self.queue(*args, **kwds)
File "C:\Users\Public\Software\lib\site-packages\pyomo\opt\parallel\manager.py", line 122, in queue
return self._perform_queue(ah, *args, **kwds)
File "C:\Users\Public\Software\lib\site-packages\pyomo\opt\parallel\local.py", line 58, in _perform_queue
results = opt.solve(*args, **kwds)
File "C:\Users\Public\Software\lib\site-packages\pyomo\opt\base\solvers.py", line 600, in solve
"Solver (%s) did not exit normally" % self.name)
I tried removing the DumpPower variable, but some examples increased their total cost. The geothermal projects have to lower their baseload power output to exactly match demand in the first timepoints, so more fuel has to be burned in later timepoints.
Rodrigo and I believe the variable should be eliminated, even if it changes the outputs of some examples (we would update them). One way to reduce the cost increases would be to allow free power curtailment for baseload projects. Variable O&M costs for operating a baseload plant at a given dispatch level would need to be accounted for, but less fuel would be burned over the whole horizon of the simulation.
It would be nice to have a standardized system for creating tabular output files, where users can call a helper function to register any expressions that they want reported, along with the indexes and (optional) filename to use for them. Then a standard reporting module would create a single output file for each specified filename and/or indexing set, and would automatically create a column for each expression that's been assigned to that file (there would probably also be some standard expressions registered automatically). Tables like this are very handy for reviewing results in Excel or Pandas.
Tables like this are already created by switch_mod.hawaii.util.write_table(), but it would be nice to generalize this approach as described above. This would reduce the amount of nesting that is often needed in calls to write_table() and also make it so that all the code related to one output file doesn't have to be in one place.
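A minimal sketch of what such a registration helper could look like (the names register_output and write_tables are hypothetical, not the actual Switch API, and plain callables stand in for Pyomo expressions):

```python
# Hypothetical sketch of the output-registration idea described above.
# register_output and write_tables are invented names; plain callables
# stand in for Pyomo expressions.
import csv
import os
import tempfile
from collections import defaultdict

_registry = defaultdict(list)   # filename -> list of (column_name, index_set, fn)

def register_output(filename, column, index_set, fn):
    """Register a per-index expression to be reported as a column of filename."""
    _registry[filename].append((column, index_set, fn))

def write_tables(output_dir):
    """Write one CSV per registered filename, with a column per expression."""
    for filename, columns in _registry.items():
        index_set = columns[0][1]    # all columns in a file share one index set
        with open(os.path.join(output_dir, filename), "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["index"] + [name for name, _, _ in columns])
            for idx in index_set:
                writer.writerow([idx] + [fn(idx) for _, _, fn in columns])

# Toy usage: two "expressions" indexed over the same timepoints end up as
# two columns of the same output file.
timepoints = [1, 2, 3]
register_output("dispatch.csv", "load_mw", timepoints, lambda t: 100 * t)
register_output("dispatch.csv", "cost", timepoints, lambda t: 5.0 * t)

outdir = tempfile.mkdtemp()
write_tables(outdir)
print(open(os.path.join(outdir, "dispatch.csv")).read())
```

With this pattern, each module registers its own columns wherever it is defined, and a single reporting step assembles the files at the end.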
I just noticed that transmission sunk costs (the capital and fixed O&M costs of existing capacity) are not considered in the objective function. This is because the TX_BUILDS_IN_PERIOD set does not include builds indexed by 'Legacy'.
This is not a problem by itself, but the model does include generation sunk costs (the capital and fixed O&M costs of predetermined capacity). If the model allowed early decommissioning or mothballing, these costs would not be sunk, but we haven't enabled those features yet.
So, for consistency, either Tx sunk costs should be included in the objective function or Gen sunk costs should be excluded. Either way, both total costs and (total - sunk) costs should be reported on model exit.
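A toy illustration of the reporting suggestion (all numbers invented): sunk costs add the same constant to every candidate plan, so they don't change the optimal choice, only the reported objective value.

```python
# Toy illustration (numbers invented): sunk costs shift every alternative's
# total by the same constant, so the optimal plan is unchanged, but the
# reported objective differs. Hence the suggestion to report both totals.
sunk = 1000.0                                   # capital + fixed O&M of existing capacity
variable_cost = {"plan_a": 250.0, "plan_b": 300.0}

total_cost = {plan: cost + sunk for plan, cost in variable_cost.items()}
best = min(total_cost, key=total_cost.get)
print(best)                                     # same winner with or without sunk costs
print(total_cost[best], total_cost[best] - sunk)   # total vs (total - sunk)
```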
This issue will serve as a central point to discuss merging the issues from the SWITCH WECC repo.
As of now the plan is as follows.
1. Create a wecc branch in this repo.
2. Move wecc-specific files into a wecc package.
3. Create master-with-black-formatted-history and wecc-with-black-formatted-history, which are the master and wecc branches with their history rewritten so that every commit is formatted according to black. This reduces merge conflicts in the next step (see notes on how this was done below).
4. Merge master into wecc.
5. Merge the wecc branch into the main branch as small features. These merge requests will have to be merged more or less in order. See chart below.
Features to merge into master:
- switch drop tool (status: not started)
- wecc folder for others to see / use (status: not started)
- switch compare (status: not started)
master features to include in WECC:
- dimen in set definitions
- ordered=False in wecc but unique_list in master
Other differences to reconcile:
- wecc-specific examples, e.g. ca_policies or the stochastic example
- the config.yaml approach to generating scenarios
Creating master-with-black-formatted-history and wecc-with-black-formatted-history:
This is achieved with the following commands in PowerShell.
1. Run git merge-base wecc master to find the commit where the two branches first diverged.
2. Check out that commit on a new branch called common-wecc-parent.
3. Reformat the code with black --fast -t py310 --extend-exclude 'switch_wecc/wecc' ./switch_model and make a new commit.
4. Check out master to a new branch master-with-black-formatted-history.
5. Run git rebase --rebase-merges --exec "black --fast -t py310 --extend-exclude 'switch_wecc/wecc' ./switch_model; git commit -a --amend --allow-empty" common-wecc-parent
   a. The --rebase-merges flag ensures we keep the topology of the merges.
   b. The --exec "black --fast -t py310 --extend-exclude 'switch_wecc/wecc' ./switch_model; git commit -a --amend --allow-empty" part means that after every commit, we amend the commit so that the switch_model files are reformatted.
6. This will generate significant merge conflicts. For each one run:
   a. git restore --theirs -s REBASE_HEAD . to update the local files to the commit that is being applied.
   b. black --fast -t py310 --extend-exclude 'switch_wecc/wecc' ./switch_model to reformat the files.
   c. git add . to mark any conflicts as resolved.
   d. git rebase --continue to continue with the rebase. To avoid the popup to edit the commit message, see instructions here.
   To run these commands automatically in a loop (10 iterations at a time), in PowerShell run:
   1..10 | % {git restore --theirs -s REBASE_HEAD . ; black --fast -t py310 --extend-exclude 'switch_wecc/wecc' ./switch_model ; git add . ; git rebase --continue}
7. Check that there are no differences between master-with-black-formatted-history and master (you might need to run black on master).
8. Redo steps 4 to 7 but for the wecc branch.
I think we need to standardize how SWITCH is installed and run, because we seem to have two different methods in use. I'll describe below how I do it, which is my recommended approach, but I would like to hear if people have other suggestions. I would then like to rewrite the instructions so users can get up and running pretty easily.
I install SWITCH by cloning the repository, then cd'ing into the switch directory and running python setup.py develop or python setup.py develop --user. This installs the package in-place, so I can edit it and use it at the same time (and if I want to change to another installation, I just go there and run python setup.py develop again). Users who don't want to edit the package can instead run python setup.py install or python setup.py install --user. Once we put the package on PyPI and conda, they can just do pip install switch or conda install -c switch-model switch. These commands work well under Anaconda, and would probably work with Homebrew Python. With the standard system Python, users might need to use sudo with the system-wide versions.
These commands do a few useful things:
- place the switch_mod package in the Python system path
- install a switch command-line script (equivalent to python -m switch_mod.main)
I then set up models in various locations in my file system (not inside the switch folder) and solve them by running switch solve or switch solve-scenarios. If I edit the local copy of the switch package or use git pull, those changes are automatically reflected the next time I use the command-line script. This works well, and makes it easy to set up other users and share models with them (via separate repositories for each model).
As I understand it from the INSTALL file, other SWITCH users are editing their PYTHONPATH to point to the switch repository and then using pip install --user -r pip_requirements.txt to install the dependencies. I don't know how other users are activating the switch command-line script, if they use it at all. This approach has a few disadvantages compared to mine, which make it difficult to give general installation instructions for new users; among other things, it does not install the switch command-line script, which is very handy, especially for new users. All of these problems are addressed automatically by setuptools, which is used by setup.py. This takes care of all the cross-platform issues other than installing the solver.
Any objections to making setup.py the standard way to install switch, via the various commands listed in the second paragraph? Or to making switch solve ... the standard way to run it?
Our documentation needs a lot of improvement. Extensive documentation is embedded in the docstrings of most modules and functions, but the pydoc results are poor. When learning the model, the Switch-Mexico team chose to read the code directly instead of the pydoc output, and to re-write the documentation in LaTeX.
Our working plans for cleaning this up are:
- Move the documentation from the docstrings of define_components() to attributes of the components themselves. See pull request #57 for some discussion.
- Write some new code to construct a model, then introspect it to describe each component and its documentation (possibly including equations). The module- and component-level documentation could either be compiled into an rst file or otherwise passed into Sphinx.
Note, the component documentation could either be in the doc argument of its definition, or assigned to an arbitrary attribute immediately after the definition. I'm pretty sure we will have to write some custom code to inspect the Pyomo model, so assigning our extra documentation to our own extra attributes seems like a fairly clean solution.
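A rough sketch of the introspection idea, using plain Python classes as stand-ins for Pyomo components (the rst_doc attribute name is an invented example of "our own extra attributes"):

```python
# Sketch of the introspection idea using plain Python classes to stand in for
# Pyomo components; the rst_doc attribute name is invented for illustration.
class Component:
    def __init__(self, name, doc=None):
        self.name = name
        self.doc = doc        # short doc, analogous to Pyomo's doc argument

class Model:
    def __init__(self):
        self.components = []
    def add(self, component):
        self.components.append(component)
        return component

m = Model()
build_gen = m.add(Component("BuildGen", doc="Capacity built, by generator and period."))
# Longer documentation assigned to our own attribute right after the definition:
build_gen.rst_doc = "BuildGen[g, p] is the capacity (MW) of generator g built in period p."

def to_rst(model):
    """Compile component-level docs into a reStructuredText fragment."""
    lines = []
    for component in model.components:
        lines.append(component.name)
        lines.append("-" * len(component.name))   # rst section underline
        lines.append(getattr(component, "rst_doc", None) or component.doc or "")
        lines.append("")
    return "\n".join(lines)

print(to_rst(m))
```

The real version would walk the constructed Pyomo model instead of a toy Model class, but the shape of the generator is the same: iterate over components, prefer the long-form attribute, fall back to doc.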
Given recent issues with package installation ( #111 ) and the fact that Python 2 will cease to be maintained by January 1st, 2020, I propose we start planning for an upgrade to Python 3.
I haven't had experience with upgrading projects to be compatible with new versions of programming languages, so I'm not sure what the best approach for this work is:
-Should we tackle this by creating a new branch and upgrading one module at a time until everything is ready and then merge?
-Should we create one branch per module and merge one at a time (with backwards compatibility to avoid breaking Python 2 modules)?
-Should we divide tasks between us beforehand? Or just commit-as-much-as-you-can?
-Should we look for external help (maybe Mark or someone from the ERG)?
Happy to hear your thoughts on this. I can personally commit some hours per week to support this project (if this is something we want to embark upon).
This post assumes you have read Choosing a Versioning Scheme, which recommends Semantic Versioning.
Right now we are in the beta phase, 2.0.0bX. We'll shift to release candidates, 2.0.0rcX, around the time the paper is being reviewed, and to a full release, 2.0.0, sometime after the paper is published. Subsequently, we'll need to increment the version number every time we push to PyPI or Anaconda.
Given a version number MAJOR.MINOR.PATCH, increment the:
- MAJOR version when you make incompatible API changes,
- MINOR version when you add functionality in a backwards-compatible manner, and
- PATCH version when you make backwards-compatible bug fixes.
Additional labels for pre-release and build metadata are available as extensions to the MAJOR.MINOR.PATCH format.
Ideally, every validated pull request to the main branch would increment the version number, and there is probably software available to help automate that. Minimally, we will need to manually increment the version number whenever we introduce a change that requires reformatting/upgrading the input files. Typically, those changes would increment MINOR, but we are currently in the beta stage, so we are incrementing the beta suffix instead.
We also have the option to use developer versions, 2.0.0devX, for playing around with things that are primarily meant for other developers, but I don't know whether that offers any benefit over feature-specific git branches.
Does anyone have questions, comments, amendments or counter-proposals?
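To make the pre-release ordering concrete, here is a small hand-rolled sketch (not a full PEP 440 parser) showing that dev < b < rc < final for the same MAJOR.MINOR.PATCH:

```python
# Hand-rolled sketch of the pre-release ordering (not a full PEP 440 parser):
# for the same MAJOR.MINOR.PATCH, dev < a < b < rc < final release.
import re

def version_key(version):
    """Sort key for versions like '2.0.0', '2.0.0b2', '2.0.0rc1', '2.0.0.dev3'."""
    m = re.fullmatch(r"(\d+)\.(\d+)\.(\d+)(?:\.?(dev|a|b|rc)(\d+))?", version)
    if m is None:
        raise ValueError("unrecognized version: %r" % version)
    major, minor, patch = int(m.group(1)), int(m.group(2)), int(m.group(3))
    # Rank the pre-release tag; a final release (no tag) ranks highest.
    rank = {"dev": 0, "a": 1, "b": 2, "rc": 3, None: 4}[m.group(4)]
    return (major, minor, patch, rank, int(m.group(5) or 0))

releases = ["2.0.0", "2.0.0rc1", "2.0.0b2", "2.0.0.dev3", "2.0.0b1", "2.1.0"]
print(sorted(releases, key=version_key))
# -> ['2.0.0.dev3', '2.0.0b1', '2.0.0b2', '2.0.0rc1', '2.0.0', '2.1.0']
```

In practice pip implements this ordering for us; the sketch is just to show where 2.0.0bX, 2.0.0rcX and 2.0.0 fall relative to each other.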
As currently setup, the switch repository is designed to be used like this (I think):
switch/
switch_mod/
user_module.py
inputs/
outputs/
This has two problems:
- User-created files in the switch/ folder (e.g., user_module.py) need to be added to switch/.gitignore, and then .gitignore also has to ignore itself, or else collect a list of every module every user has added and ignored. Alternatively, users can leave these files untracked and live with git's warning messages.
- If you want user_module.py to be part of your own local repository, you are probably out of luck (since it falls within the area dedicated to the switch repository).
We have partially resolved these issues for switch-hawaii by using directory structures like this:
switch-hawaii-models/ (repo)
project1/
switch/ (repo)
switch_mod/
switch-hawaii/ (repo)
user_module.py
solve.py
In this setup, switch-hawaii-models/.gitignore is set to ignore the switch and switch-hawaii directories within it, so it only includes the project-specific code (as desired). However, this makes the python path requirements a little complicated. I find switch_mod by including code in project1/solve.py that adds the adjacent switch directory to the python path before running. Then I can just execute cd switch-hawaii-models/project1; python solve.py.
This works fine, but I would now like to start using the standard switch_mod.solve module. To do that, I need to have both project1/switch and project1 in my python path. It's easy to add project1 to the path simply by cd'ing into that directory before starting. And if switch_mod resided at the top level instead of the second level, it would also be in the path at that point (along with switch-hawaii). i.e., I would like to have a directory structure like this:
switch-hawaii-models/ (repo)
project1/
switch_mod/ (repo)
switch-hawaii/ (repo)
user_module.py
(solve.py no longer needed)
This would have a couple of additional advantages as well.
Along with this, I would like to suggest renaming switch_mod to simply switch. I know it's a pain, but it would be a much more natural name for this module.
solve.py defines solver_options_string to set the maximum number of iterations, mipgap and other options (see the screenshot of the lines). What is the format in which I have to write, for example, solver_options_string mipgap=0.001?
I have tried many ways but it doesn't work for me.
Thank you very much.
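For readers hitting the same question: in Switch 2.0 the solver options string is normally passed on the command line or in options.txt rather than edited in solve.py; check switch solve --help in your version, since the exact flag name may differ. For example, on the command line:

```
switch solve --solver-options-string "mipgap=0.001"
```

or as a line in options.txt in the model directory:

```
--solver-options-string "mipgap=0.001"
```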
Dear @mfripp and @josiahjohnston,
I was updating the WECC branch of switch to the most recent stable release from the upstream, but found some errors.
First, it looks like the master branch of switch, as currently coded, only works with Pyomo==5.6.5 and PyUtilib==5.8.0. If we upgrade to Pyomo 5.7, we can no longer use PyUtilib==5.8.0, due to deprecation of the enum module. setup.py does not pin these versions, so by default it installs the most recent ones and does not work properly. An easy fix would be to specify these versions in setup.py. However, I think in the long run we want to update to the most recent version of Pyomo. Some people have also reported this issue: #130
Here is the error when trying to run switch with the most recent version of Pyomo:
ValueError: Unexpected keyword options found while constructing 'AbstractOrderedSimpleSet': rule
Pyomo deprecated the rule argument for abstract Sets; I think we could simply replace that argument with initialize or validate, depending on the set.
I am going to create a pull request for fixing this, but want to know if you are already working on this.
cc @PatyHidalgo
I have been pondering on the SWITCH git workflow and have some questions/comments:
Why are we always doing pull requests from branches in our own forks of the repo, instead of pushing our changes to a new branch in the same SWITCH repo and doing the pull request from there?
I think it would simplify the work flow, since it would be:
(Option 1) When your changes consist of several commits that make it easier to follow the history:
-Write changes / new features into a new branch of your local SWITCH repo
-Push them to a new branch in the switch_model repo
-Do a Pull Request from that new branch to the master branch
-Make changes/additions according to the peer review
Once everything is approved, then:
-If the master branch received new commits from someone else in the meantime:
----Pull them into your local repo
----Rebase your new_branch onto the local master branch, so those commits appear in the branch's earlier history
-Fast-forward the local master branch to your new_branch (e.g. git merge --ff-only new_branch), putting your commits after master's HEAD (this way you avoid the extra merge commit that only says "this is a merge" and you get a linear history)
-Push your local master branch to the remote. I tested it, and GitHub even recognizes that you included all the changes from the Pull Request and automatically closes it, tagging it as "merged". The master branch's history becomes linear: the original commits, then the commits someone else made, then your new features.
-Delete the remote branch to avoid accumulating branches
(Option 2) When you are doing some small or very condensed changes, so only 1 commit is necessary to understand them:
-Write changes / new features into a new branch of your local SWITCH repo
-Push them to a new branch in the switch_model repo
-Do a Pull Request from that new branch to the master branch
-Make changes/additions according to the peer review
Once everything is approved, then:
-Squash and merge the changes into a single commit (the "Squash and merge" button on GitHub). This keeps the history linear, makes it really easy for inexperienced Git users to include changes (I count myself in that group), and is really quick (only one click is needed in the GitHub interface). I did this for my latest commits; the history remains linear, and it works well for small changes.
What do you think about this workflow? I find this option very convenient, since I like experimenting with my Fork and then just manually writing the features I want to add into the main SWITCH repo files. It would be simple if I could just push that new branch and then directly do a Pull Request.
Hello there!
I'm opening this issue so that we don't forget about something we recently discovered in the Switch code. We don't have time to fix it right now; however, I'd like to keep track of it.
Currently, if we have a period named 2020 and we have generation being pre-built in the year 2020, the following code behaves unexpectedly.
switch/switch_model/generators/core/build.py
Lines 328 to 332 in 60a5953
When this function is called for generation being pre-built in the year 2020, build_year in m.PERIODS is true despite the project not being a period build. This means the project will retire sooner than normal (i.e. generation pre-built in 2020 will retire before generation pre-built in 2019).
I don't think there's an easy fix, because gen_build_costs.csv currently doesn't differentiate between pre-built generation and investment builds, so there's no way to tell the two apart. The best course of action is likely to display a warning or error if a pre-build year conflicts with a period label.
I wonder if the switch code wasn't designed to have pre-build years after a period starts.
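A sketch of the suggested warning (the function and variable names here are hypothetical, not the actual Switch API):

```python
# Sketch of the suggested validation; names are invented, not the actual
# Switch API. The idea: warn whenever a predetermined build year matches a
# period label, since the "build_year in m.PERIODS" test cannot tell a
# pre-build apart from an investment build in that case.
def ambiguous_build_years(periods, predetermined_build_years):
    """Return the build years that collide with a period label."""
    return sorted(set(predetermined_build_years) & set(periods))

periods = [2020, 2030, 2040]
prebuilt_years = [2015, 2019, 2020]

conflicts = ambiguous_build_years(periods, prebuilt_years)
if conflicts:
    print("WARNING: pre-build years conflict with period labels:", conflicts)
# -> WARNING: pre-build years conflict with period labels: [2020]
```

A check like this could run during input loading, so the ambiguity is reported before it silently changes retirement dates.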
Thank you!
Martin
Dear all,
thanks for your work on Switch. It seems really interesting.
However, I failed to install Switch and run the examples. I have installed it both from conda and in dev mode with pip install, with the same results. When running most tests or examples, I get the error:
Traceback (most recent call last):
  File "C:\Users\TGi\AppData\Local\Continuum\anaconda3\envs\switch\Scripts\switch-script.py", line 9, in <module>
    sys.exit(main())
  File "C:\Users\TGi\AppData\Local\Continuum\anaconda3\envs\switch\lib\site-packages\switch_model\main.py", line 39, in main
    main()
  File "C:\Users\TGi\AppData\Local\Continuum\anaconda3\envs\switch\lib\site-packages\switch_model\solve.py", line 90, in main
    model = create_model(modules, args=args)
  File "C:\Users\TGi\AppData\Local\Continuum\anaconda3\envs\switch\lib\site-packages\switch_model\utilities.py", line 100, in create_model
    module.define_components(model)
  File "C:\Users\TGi\AppData\Local\Continuum\anaconda3\envs\switch\lib\site-packages\switch_model\generators\core\no_commit.py", line 83, in define_components
    rule=lambda m:
  File "C:\Users\TGi\AppData\Local\Continuum\anaconda3\envs\switch\lib\site-packages\pyomo\core\base\set.py", line 2208, in __init__
    Set.__init__(self, **kwds)
  File "C:\Users\TGi\AppData\Local\Continuum\anaconda3\envs\switch\lib\site-packages\pyomo\core\base\set.py", line 1934, in __init__
    IndexedComponent.__init__(self, *args, **kwds)
  File "C:\Users\TGi\AppData\Local\Continuum\anaconda3\envs\switch\lib\site-packages\pyomo\core\base\indexed_component.py", line 182, in __init__
    Component.__init__(self, **kwds)
  File "C:\Users\TGi\AppData\Local\Continuum\anaconda3\envs\switch\lib\site-packages\pyomo\core\base\component.py", line 402, in __init__
    % ( type(self).__name__, ','.join(sorted(kwds.keys())) ))
ValueError: Unexpected keyword options found while constructing 'AbstractOrderedSimpleSet': rule
It seems this has to do with Pyomo. I have tried installing older versions of Pyomo instead (4.4.1 and 5.6.4) but then there was another error:
Traceback (most recent call last):
  File "C:\Code\switch\switch-model\tests\utilities_test.py", line 26, in test_save_inputs_as_dat
    return_model=True, return_instance=True
  File "C:\Code\switch\switch-model\switch_model\solve.py", line 90, in main
    model = create_model(modules, args=args)
  File "C:\Code\switch\switch-model\switch_model\utilities.py", line 100, in create_model
    module.define_components(model)
  File "C:\Code\switch\switch-model\switch_model\generators\core\no_commit.py", line 83, in define_components
    rule=lambda m:
  File "C:\Users\TGi\AppData\Local\Continuum\anaconda3\envs\switch\lib\site-packages\pyomo\core\base\set.py", line 2208, in __init__
    Set.__init__(self, **kwds)
  File "C:\Users\TGi\AppData\Local\Continuum\anaconda3\envs\switch\lib\site-packages\pyomo\core\base\set.py", line 1934, in __init__
    IndexedComponent.__init__(self, *args, **kwds)
  File "C:\Users\TGi\AppData\Local\Continuum\anaconda3\envs\switch\lib\site-packages\pyomo\core\base\indexed_component.py", line 182, in __init__
    Component.__init__(self, **kwds)
  File "C:\Users\TGi\AppData\Local\Continuum\anaconda3\envs\switch\lib\site-packages\pyomo\core\base\component.py", line 402, in __init__
    % ( type(self).__name__, ','.join(sorted(kwds.keys())) ))
ValueError: Unexpected keyword options found while constructing 'AbstractOrderedSimpleSet': rule
I am using Python 3.7.
Thank you for your help.
T
Dear sir,
I want to use certain advanced features of SWITCH. When I install the advanced dependencies via "pip install --upgrade --editable .[advanced]", the following occurs:
(base) C:\ProgramData\Anaconda2\Lib\site-packages\switch-2.0.2>pip install --upgrade --editable .[advanced]
DEPRECATION: Python 2.7 will reach the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 won't be maintained after that date. A future version of pip will drop support for Python 2.7.
Obtaining file:///C:/ProgramData/Anaconda2/Lib/site-packages/switch-2.0.2
Requirement already satisfied, skipping upgrade: Pyomo>=4.4.1 in c:\programdata\anaconda2\lib\site-packages (from switch-model==2.0.2) (5.6.5)
Requirement already satisfied, skipping upgrade: testfixtures in c:\programdata\anaconda2\lib\site-packages (from switch-model==2.0.2) (6.9.0)
Requirement already satisfied, skipping upgrade: pandas in c:\programdata\anaconda2\lib\site-packages (from switch-model==2.0.2) (0.24.2)
Requirement already satisfied, skipping upgrade: numpy in c:\programdata\anaconda2\lib\site-packages (from switch-model==2.0.2) (1.16.2)
Requirement already satisfied, skipping upgrade: scipy in c:\programdata\anaconda2\lib\site-packages (from switch-model==2.0.2) (1.2.1)
Collecting rpy2 (from switch-model==2.0.2)
Using cached https://files.pythonhosted.org/packages/8d/7c/826eb74dee57e54608346966ed931674b521cf098759647ed1a103ccfa79/rpy2-3.0.4.tar.gz
Complete output from command python setup.py egg_info:
rpy2 is no longer supporting Python < 3. Consider using an older rpy2 release when using an older Python release.
----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in c:\users\kevin_xo\appdata\local\temp\pip-install-3etqdj\rpy2\
So what should I do? Thanks a lot!