
Comments (4)

yalinli2 commented on July 25, 2024

Yeah, I think what you proposed works. Monte Carlo analysis is sufficient to answer my question; I just need to process and interpret the data and come up with a way to define the probability of each scenario. Many thanks!!!

from biosteam.

yoelcortes commented on July 25, 2024

As for your first analysis, we could use scipy's minimization or root-solving methods. They are pretty easy to use. You can pass the bounds for all parameters and solve for the values that give the minimum or the target IRR/price, or whatever metric you pass to the objective function. Here is an example of how this could be done:

Single variable solution for cut-off conversion

from scipy.optimize import brentq
from biorefineries.cornstover.system import cornstover_sys, cornstover_tea, R301, R302

def f(x): # objective function; returns 0 where IRR = 0.15
    R301.saccharification[2].X = x # Set glucan-to-glucose conversion of R301
    cornstover_sys.simulate()
    return 0.15 - cornstover_tea.solve_IRR()
x = brentq(f, 0, 1) # Done!

Multivariate solution to maximize IRR

from scipy.optimize import minimize
from biorefineries.cornstover.model import cornstover_model

def f(x): # objective function
    # p1, p2, p3, ... = x
    # We can set variables any way we want here.
    # We could also use BioSTEAM's Model object to do this.
    results = cornstover_model(x)
    return -results['IRR'] # We are minimizing negative IRR

# x0 is an initial guess; bounds is a sequence of (lower, upper) pairs
solution = minimize(f, x0, bounds=bounds)
p1, p2, p3, ... = solution.x

I don't believe we should add these methods inside biosteam objects. We would still be requesting the same info (e.g. an objective function, bounds, and all the other optional arguments in scipy functions). However, once someone gets this done, we can add their code as an "example recipe" in the docs.

As for your second analysis (the Medium one): running Monte Carlo again and again for each parameter we change would take several weeks. Wouldn't all the data required to find this already be in your first Monte Carlo analysis? I would suggest preprocessing the data, then using sklearn's SVR to build a machine learning (ML) model to perform the calculations (to vary whatever parameters you need). We can add a method in biosteam to preprocess data and return an ML model, for those people who are timid about ML.
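The surrogate-model idea can be sketched with scikit-learn. This is only an illustration: `mc_samples` and `mc_irr` are synthetic stand-ins for the parameter samples and IRR results from the first Monte Carlo run, and the RBF kernel and scaling step are my assumptions, not anything built into biosteam.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Stand-ins for Monte Carlo data: rows are scenarios, columns
# are parameters; mc_irr holds the corresponding simulated IRRs.
rng = np.random.default_rng(0)
mc_samples = rng.uniform(0, 1, size=(500, 3))
mc_irr = 0.10 + 0.05 * mc_samples[:, 0] - 0.02 * mc_samples[:, 1]

# Scale the parameters, then fit a support vector regressor.
surrogate = make_pipeline(StandardScaler(), SVR(kernel='rbf', C=10.0))
surrogate.fit(mc_samples, mc_irr)

# IRR can now be estimated for new parameter values without
# rerunning the (slow) process simulation.
new_points = rng.uniform(0, 1, size=(5, 3))
irr_estimates = surrogate.predict(new_points)
```

Once trained on the existing Monte Carlo results, the surrogate makes sweeping individual parameters essentially free.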

As for the last analysis (the Hardest one): I don't think this is all that hard. You have the distributions of all parameters in the Model object, and each chaospy Distribution object has a method to get the cumulative distribution. All Monte Carlo results are saved in a data frame; you just need to pass these values to each parameter.
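To illustrate the idea with scipy.stats (chaospy distributions expose an analogous `cdf`), here is a sketch that converts the Monte Carlo samples of one parameter into their cumulative probabilities; the triangular distribution and its bounds are made-up stand-ins for whatever is actually stored in the Model object.

```python
import numpy as np
from scipy import stats

# Made-up stand-in for one parameter's distribution:
# triangular on [0.8, 1.2] with mode 1.0.
lower, mode, upper = 0.8, 1.0, 1.2
dist = stats.triang(c=(mode - lower) / (upper - lower),
                    loc=lower, scale=upper - lower)

# Column of Monte Carlo samples for this parameter
# (in practice, taken from the results data frame).
samples = dist.rvs(size=1000, random_state=np.random.default_rng(1))

# Cumulative probability of each sampled value, i.e. the
# percentile of each scenario with respect to this parameter.
probabilities = dist.cdf(samples)
```

Repeating this per parameter column gives the probability of each scenario's assumptions directly from the existing Monte Carlo results.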


yalinli2 commented on July 25, 2024

Thanks for the explanations and helpful links, I'll give the last one a try and see how that goes.

My proposed approaches aside, what would you do if you were to solve the question I tried to address: what unit/process assumptions are needed to meet a certain TEA target? Are there simpler or more robust ways? Thanks!


yoelcortes commented on July 25, 2024

Hmm, if what we want is the worst each parameter can perform while still meeting a certain TEA target, then, after we know the relationship between all parameters and the target (e.g. through Spearman coefficients), we could brute-force test a grid of the parameter values that matter the most. You can make a grid of values using numpy, pass it to the model using the load_samples method, evaluate, then filter the scenarios. This should give us all the results we need to conduct further analysis. For example, we can get all scenarios where a parameter is at its minimum:

import numpy as np
# Let's say results is an array of filtered results where
# each row is a scenario and each column is a parameter.
p0s = results[:, 0]
p0_min = p0s.min() # minimum possible value for parameter 0
index, = np.where(p0s == p0_min)
scenarios_with_p0_min = results[index]

This is just one analysis you could do, but the grid should give you all the data you need.
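Building the grid itself is a one-liner with numpy. The parameter ranges below are made-up placeholders; the flattened array is what you would pass to the model (e.g. via load_samples) before evaluating and filtering.

```python
import numpy as np

# Made-up ranges for the two parameters that matter most.
p0_values = np.linspace(0.8, 1.0, 5)   # e.g. a conversion
p1_values = np.linspace(50, 70, 5)     # e.g. a feedstock price

# Cartesian product: one row per scenario, one column per parameter.
grid = np.array(np.meshgrid(p0_values, p1_values)).T.reshape(-1, 2)
# grid has 5 * 5 = 25 rows, one per parameter combination
```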

