spf-ost / pytrnsys
Package that provides functionality to run, process, plot and report TRNSYS simulations
Home Page: https://pytrnsys.readthedocs.io
License: GNU General Public License v3.0
Heat balance variables are only considered for the SPF if they end in "Demand" or "D". This is not documented; we should add it, probably also to the heat balance comment of each ddck.
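A minimal sketch of the rule to document (the helper name and variable names are made up for illustration):

```python
def is_considered_for_spf(name: str) -> bool:
    """Hypothetical helper mirroring the undocumented rule: only heat
    balance variables ending in "Demand" or "D" count towards the SPF."""
    return name.endswith("Demand") or name.endswith("D")

print(is_considered_for_spf("qHeatDemand"))  # True
print(is_considered_for_spf("qSysOut_D"))    # True
print(is_considered_for_spf("qHeatSupply"))  # False
```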
As a developer I'd like to run the pytrnsys examples as integration tests on GitHub during continuous integration to guard better against introducing breaking changes to pytrnsys.
We can have Windows runners for GitHub actions, so this shouldn't be a problem technically. However, this would require setting aside a TRNSYS license for this purpose (Are there/do we have floating licenses?).
Include pytrnsys_spf/utils/dckAnalyzer.py into the standard dckCheck and raise a warning when the standard TRNSYS constants values (e.g., maxFileWidth or nMaxCardValues) are exceeded.
Instead of trial and error with TRNSYS, let pytrnsys check this.
Implement meaningful error messages for when TRNSYS itself has problems (not when a simulation aborts because of a simulation error; that is already covered), e.g. when the TRNSYS exe path is incorrect or the DLLs were not added.
At the moment pytrnsys just breaks at seemingly random places, returning unhelpful exceptions.
If no data was loaded (potentially because the respective folder does not contain a lst-file) an exception with a proper error message should be raised. At the moment an error only occurs at a later stage, e.g. in the form of a key error or as a problem of empty axes when plotting.
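A sketch of how the loader could fail early instead (function name and return type are hypothetical; the real loader differs):

```python
import glob
import os

def load_result_files(folder: str) -> list[str]:
    """Collect .lst result files, failing loudly when none are found (sketch)."""
    lst_files = glob.glob(os.path.join(folder, "*.lst"))
    if not lst_files:
        raise FileNotFoundError(
            f"No .lst files found in '{folder}'; nothing to process. "
            "Is the path correct, and did the simulation run?"
        )
    return lst_files
```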
As a user I'd like to get a descriptive error message when the TRNSYS executable is not found or when it is not of the correct version. Currently, I'm just told that the simulation has failed.
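A sketch of an early existence check (the function name and message are illustrative, not pytrnsys' actual API; a version check would additionally need to interrogate the executable):

```python
import os

def check_trnsys_executable(exe_path: str) -> None:
    """Fail early with a descriptive message instead of a cryptic crash (sketch)."""
    if not os.path.isfile(exe_path):
        raise FileNotFoundError(
            f"TRNSYS executable not found at '{exe_path}'. "
            "Check the path configured in your run.config."
        )
```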
Doing some tests, I had some TRNSYS errors with Type 951: pytrnsys_ddck/ground_heat_exchanger/type951/type951.ddck
Lines 90-91: two constants are declared, but only one is specified afterwards.
Line 26: this equation gives an error. It might be related to the previous point, if one variable has not been declared and is missing.
As a user I want to be sure pytrnsys runs TRNSYS with the DLLs released with pytrnsys. Hence, at startup pytrnsys should always check that:
- if a foo.dll in the TRNSYS DLL folder exports a function (e.g. func7) of the same name as a function exported by a DLL x in the pytrnsys folder, then x = foo.dll and the hashes of both foo.dlls must be the same (i.e. they must [almost certainly ;)] be binary equal)
- every DLL in the pytrnsys DLL folder should also be in the TRNSYS DLL folder (looking at the names). If this is not the case, tell the user to run the copy-dll script and re-run pytrnsys.
pytrnsys should refuse to run unless all of the above conditions are satisfied (possibly we can add a command line argument to override this behavior).
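The binary-equality check could be sketched with standard hashing (function names are placeholders):

```python
import hashlib

def file_sha256(path: str) -> str:
    """Hash a file in chunks so two DLLs can be compared for binary equality."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def dlls_match(first_dll: str, second_dll: str) -> bool:
    """True iff both files are (almost certainly) binary equal."""
    return file_sha256(first_dll) == file_sha256(second_dll)
```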
Set up a way to import old projects to the folder structure of the GUI. This should copy the ddck-files given in the run.config of the old config to the ddck-folder of the GUI project. (Need to find a stable way to connect the ddck-files mentioned in the run.config to the components in a diagram.)
We want to be able to capture some of the uncertainty of projecting costs within our cost calculation by adding variance to our variable costs.
As a user I would like changeDDckFile to behave like variation with respect to combinations. In particular, changeDDckFile should be sensitive to combineAllCases.
If we don't do weird stuff like
string scaleHP "1.0"
changeDDckFile combi combi
(even for systems without a heat pump), scaling leads to an error.
@martin-neugebauer or @danielcarbonell know more.
We need a test that checks that dck files that were created from ddck-files with the help of port information json-files (from the GUI) are correct. For this the respective test files created for pytrnsys_gui should be migrated to pytrnsys.
The default names of components should be short (e.g. Coll instead of Collector) to avoid exceeding TRNSYS's 15-character limit.
We need a new key in the run.config to tell pytrnsys to use the connection.json coming from the GUI for the automatic connection. My suggestion:
string pathToConnectionInfo "C:\Testing\test\connection.json"
It seems that the first variable is not read correctly. For string plotHourly "TGHXout" I get
raise KeyError(key) from err
KeyError: 'T'
i.e. only the first letter is taken as the string.
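The likely cause is that iterating over a bare string yields single characters. A sketch of a normalization guard (the helper name is hypothetical):

```python
def as_variable_list(value):
    """Normalize a config value to a list of variable names (hypothetical helper).

    Iterating over a bare string yields single characters -- which would
    explain the KeyError on 'T' -- so wrap a lone string in a list first.
    """
    if isinstance(value, str):
        return [value]
    return list(value)

print(list("TGHXout")[0])           # 'T' -- the observed symptom
print(as_variable_list("TGHXout"))  # ['TGHXout']
```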
Running docs/buildDoc.py I get the following warnings:
[...]
preparing documents... done
C:\Users\damian.birchler\src\pytrnsys\docs\guide\getting_started.rst:169: WARNING: toctree contains reference to nonexisting document 'guide/ :maxdepth: 3'
C:\Users\damian.birchler\src\pytrnsys\docs\guide\getting_started.rst:175: WARNING: toctree contains reference to nonexisting document 'guide/ :maxdepth: 3'
C:\Users\damian.birchler\src\pytrnsys\docs\guide\process_data.rst:135: WARNING: Explicit markup ends without a blank line; unexpected unindent.
C:\Users\damian.birchler\src\pytrnsys\docs\guide\run_simulation.rst:4: WARNING: duplicate label config_file, other instance in C:\Users\damian.birchler\src\pytrnsys\docs\guide\config_file.rst
WARNING: autodoc: failed to import module 'physProp' from module 'pytrnsys.physprop'; the following exception was raised:
No module named 'CoolProp'
WARNING: autodoc: failed to import module 'plotGround' from module 'pytrnsys.plot'; the following exception was raised:
No module named 'interpolation'
writing output... [100%] pytrnsys.utils
[...]
Building the documentation is the only kind of static analysis we currently have. And it finds that something is wrong with the CoolProp module (probably it is a package that needs to be added to the requirements) and with interpolation (maybe it now lives at models/interpolation.py?). Something to look into in a spare moment.
When creating a diagram from scratch the side of the direct ports of the Tes (left or right) is missing in the blackbox component outlet temperature names, which entails errors when running the mass flow solver.
Currently, if we have calc A = B and then calc C = A, A is not recognized by the system. As a workaround we need to use a new calculation method such as calcTest C = A. What we actually need is a sequential calculation that recognizes what has already been calculated earlier in the config file.
Only json files with the *-results.json naming structure are loaded. Either we don't enforce that structure when processing json files, or we issue a warning that no files with the structure *-results.json were found in this path.
boxPlot and comparePlot cannot handle strings as x-values anymore, while this was possible before the refactoring. A list of strings as the x-value should be plotted equally spaced with the strings as the labels (this is how pyplot normally handles this). It looks like the changes made to be able to plot uncertainties inhibit this.
In the file: pytrnsys/rsim/runParallel.py
There are only cases up to 11 CPUs. Having 16 CPUs on my PC, I got some errors. Adding the following cases solved the problem:

elif cpu == 12: return 800
elif cpu == 13: return 1000
elif cpu == 14: return 2000
elif cpu == 15: return 4000
elif cpu == 16: return 8000
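Instead of extending the if/elif chain, a table lookup might be cleaner (values for 12-16 CPUs are taken from this issue; the function name is made up, and entries for lower counts would be copied from the existing chain in runParallel.py):

```python
# Values for 12-16 CPUs from this issue; lower counts omitted here.
_VALUES_BY_CPU_COUNT = {12: 800, 13: 1000, 14: 2000, 15: 4000, 16: 8000}

def value_for_cpu_count(cpu: int) -> int:
    """Table lookup replacing the long if/elif chain (hypothetical name)."""
    try:
        return _VALUES_BY_CPU_COUNT[cpu]
    except KeyError:
        raise ValueError(
            f"No value configured for {cpu} CPUs; please extend the table."
        ) from None
```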
Make a unit test that is delivered with the package to test for a successful installation (run an example, or at least build a dck and compare).
The code should issue a warning in case the path to be processed does not exist.
Currently no error or warning is raised and it is difficult to see what goes wrong.
The same applies when one uses wrong syntax for plotting, i.e. string plotHourly instead of stringArray plotHourly.
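A sketch of the missing guard (names and message are illustrative):

```python
import os
import warnings

def check_processing_path(path: str) -> None:
    """Warn loudly when the path to be processed does not exist (sketch)."""
    if not os.path.isdir(path):
        warnings.warn(
            f"Processing path '{path}' does not exist; nothing will be processed."
        )
```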
As a developer I want to have a way to know when the dependencies of pytrnsys have changed and I need to install new packages.
Add three requirements.txt files:

1. requirements.txt: libraries needed for an end-user/to run the software
2. requirements\test\requirements.txt: ...for a "tester"/continuous integration (includes 1.)
3. requirements\dev\requirements.txt: ...for a developer (includes 2.)

The requirements.txt files will not be maintained by hand but will be compiled using pip-tools' pip-compile from their respective requirements.in files stored right next to them. A Python script will be added at dev-tools\compile-requirements-txts.py to compile all the requirements.in files to their corresponding requirements.txt files.

Virtual environments can then be made to contain those and only those packages which are specified in a requirements.txt file by invoking pip-sync <path\to\the\requirements.txt>.

If you want to add a dependency, add it to the relevant requirements.in file(s) (e.g. if it's a package only needed within your unit tests, add it to requirements\test\requirements.in), then run

python dev-tools\compile-requirements-txts.py -P <your-package>

sanity-check the generated requirements.txt file(s) and, if happy, push both the requirements.in and the requirements.txt files.
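The compile script could be sketched roughly like this (paths and the `python -m piptools compile` invocation are assumptions to verify against the pip-tools documentation):

```python
import pathlib
import subprocess
import sys

def find_requirements_ins(root: str) -> list[pathlib.Path]:
    """Discover every requirements.in below the given root."""
    return sorted(pathlib.Path(root).rglob("requirements.in"))

def compile_all(root: str, extra_args: list[str]) -> None:
    """Run pip-compile on each requirements.in (pip-tools must be installed)."""
    for in_file in find_requirements_ins(root):
        subprocess.run(
            [sys.executable, "-m", "piptools", "compile", str(in_file), *extra_args],
            check=True,
        )
```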
The hydraulic inputs of type 977 look like this at the moment:
** inputs from hydraulic solver
EQUATIONS 4
ThpEvapIn = TPiHpEvapIn !@connector
MfrEvapIn = ABS(MfrPuHpEvap)
THpCondIn = TPiHpCondIn
MfrCondIn = ABS(MfrPiHpCondIn)
@danielcarbonell Shouldn't we set MfrEvapIn = ABS(MfrPiHpEvapIn) to be consistent? We established that we expect certain pipe names, but this expects a pump name.
As a developer I want to unify the way .ddck files are connected to each other via the hydraulic. We currently/will soon have the possibility to access, e.g., the input mass flow rate through a port of a given component with the following syntax in a .ddck file (see SPF-OST/pytrnsys_gui#233)
MfrCondIn = @mfr(condensorInput, MfrPiHpCondIn)
The black box component temperatures are currently read out from a .ddck file by looking at a block like
***********************************
** outputs to hydraulic solver
***********************************
EQUATIONS 2
THpEvapOut = [162,1] !
THpCondOut = [162,3] ! Temperature of the condenser (heat sink) outlet
in the ddck file (we're searching for the string "outputs to hydraulic"). In order to be consistent we should change this to

@temp(evaporatorOutput) = ...

where the right-hand side could either be THpEvapOut, if needed, which would be defined further above, or [162,1] directly (i.e. both ways should be supported).
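A sketch of how the proposed placeholder could be parsed (the regex and function name are hypothetical; the final syntax may differ):

```python
import re

# Hypothetical parser for the proposed "@temp(portName) = ..." form.
_TEMP_PATTERN = re.compile(r"@temp\((?P<port>\w+)\)\s*=\s*(?P<rhs>.+)")

def parse_temp_output(line: str):
    """Return (portName, rightHandSide), or None if the line doesn't match."""
    match = _TEMP_PATTERN.match(line.strip())
    if match is None:
        return None
    return match.group("port"), match.group("rhs").strip()

print(parse_temp_output("@temp(evaporatorOutput) = [162,1]"))
# ('evaporatorOutput', '[162,1]')
```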
At the moment comparePlotConditional can only handle exact conditions specified through ":", but not ranges like ">"; we should extend it to include this functionality.
Our energy bar plots and balances are usually in kWh and so are the printed variables. The only one in MWh is the electric heat bar plot. Is that on purpose, @danielcarbonell?
Update the documentation about cumsum, _Tot, _Max, etc.
When variation runs for a simulation are started and there is an overwrite warning, the variations are still started even when the process is cancelled; this bears the risk of overwriting large chunks of already completed simulations.
As a developer I would like to have a unit test that compares an automatically generated ddck-file with the original one to later build a function that generates ddck-files based on templates. This is in preparation to generate ddck-files from the graphical user interface automatically.
The starting point is pytrnsys\tests\pytrnsys\ddckGeneration on the ddck-generation-all-items branch.
@zuckerruebe Feel free to add your thoughts. :-)
When combineAllCases = False, I thought it would sequentially pair the ddck files in the "changeDDckFile" command with the parameters in the "variation" command, but it still does all the combinations. I think the combineAllCases boolean only applies to the defined "variations", not to changeDDckFile, for which all possible combinations of cases are always produced.
In my case:
changeDDckFile building\type5998\database\mfb30_2020 building\type5998\database\mfb30_2020 building\type5998\database\mfb90_2020
variation AHU useAHU 1 0
resulted in 4 cases, while I only wanted the first ddck file with useAHU = 1 and the second ddck file with useAHU = 0 (only 2 options). This is not very critical; I can just ignore the simulations that do not interest me, but it would still be nice to implement.
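The two semantics can be illustrated with itertools (names are taken from the config snippet above):

```python
import itertools

# combineAllCases = True -> Cartesian product (the current behaviour for
# changeDDckFile); False should pair the entries position by position.
ddcks = ["mfb30_2020", "mfb90_2020"]
use_ahu = [1, 0]

all_cases = list(itertools.product(ddcks, use_ahu))  # 4 cases
paired_cases = list(zip(ddcks, use_ahu))             # the 2 desired cases

print(len(all_cases), len(paired_cases))  # 4 2
```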
In addition to/instead of the pytrnsys config files, we want to look into driving pytrnsys from a Python API.
The following snippet doesn't currently work
calcHourly cumsum_qLatkW_fromIce = (MassIce-iceBlockIni)*3.325/36.0
...
calcHourly cumsum_errorQLatkW = cumsum_sumQLatkW - cumsum_qLatkW_fromIce
because all the calcHourly lines (just an example; this holds for all "commands") are processed together, i.e., cumsum_qLatkW_fromIce will not be known in the last line.
This is limiting and confusing. It should be changed to line-by-line semantics more akin to, e.g., Python.
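A minimal sketch of the desired line-by-line semantics (real calc expressions are richer than plain Python arithmetic; eval() is just the simplest stand-in for an expression evaluator):

```python
def evaluate_calcs(calcs: dict[str, str]) -> dict[str, float]:
    """Evaluate calc lines top to bottom; each sees all names defined above."""
    known: dict[str, float] = {}
    for name, expression in calcs.items():
        known[name] = eval(expression, {"__builtins__": {}}, known)
    return known

results = evaluate_calcs({
    "a": "2.0",
    "b": "a * 3",  # refers to a name defined one line earlier
    "c": "a + b",
})
print(results)  # {'a': 2.0, 'b': 6.0, 'c': 8.0}
```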
Restructure the processing and add documentation (maybe flowchart or something). Currently there are 6 different processing classes, it is unclear why there are so many different ones and how they are connected. Probably it would be good to just have one processing class - to be discussed.
Make providing the correct number after EQUATIONS and CONSTANTS in a ddck-file obsolete by counting the entries automatically while parsing the ddck-files when building a dck.
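A sketch of emitting the count automatically while building the dck (the function name is made up; equations are from the type 977 block above):

```python
def render_equations_block(equations: list[str]) -> str:
    """Emit an EQUATIONS block with the count filled in automatically.

    Hypothetical helper: during dck building the parser already knows how
    many equations a block contains, so the number need not be hand-written.
    """
    return "\n".join([f"EQUATIONS {len(equations)}", *equations])

print(render_equations_block(["THpEvapOut = [162,1]", "THpCondOut = [162,3]"]))
```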
Make sure pump powers are only present in the exported hydraulic.ddck
...because there is no -results.json file in the .gle folder. We get the following error:
FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Daten\\OngoingProject\\SolTherm2050\\Simulations\\MySimulations\\System3_SolarDHW_MFH\\.gle\\.gle-results.json'
We need a function that generates dck-files with hydraulic information from a json-file created from the GUI.
When running pytrnsys in parallel, errors are not always propagated to the user as they are when running as a single process.
As a user, I would like pytrnsys to work as before in spite of changes to ddck-files, in light of the automatic ddck generation capabilities to be implemented in pytrnsys_gui. For this we need a function, to be called from pytrnsys\pytrnsys\trnsys_util\buildTrnsysDeck.py, that can handle expressions like @temp(portName, defaultName).