Comments (10)
Hi,
Thanks for bringing this to our attention. You're absolutely right: something is definitely off in that function.
I think the `if` statement is not necessary here: it was decided to only keep the d=1 results on file and calculate the rest (d != 1) on the fly, since that is relatively fast.
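That design (persist only d=1, derive the other Doppler weights on demand) can be sketched roughly as follows. This is a hypothetical illustration, not cosmoboost's actual internals: the function names, the cache, and especially `shift_d` are stand-ins, and the real d-shift uses the Dai & Chluba 2014 recursions, not a simple scaling.

```python
import numpy as np

# Hypothetical sketch: only the d=1 kernel is stored on disk;
# kernels for other Doppler weights d are derived from it on demand.
_CACHE = {}

def load_d1_kernel():
    """Stand-in for reading the precomputed d=1 kernel from file."""
    rng = np.random.default_rng(0)
    return rng.standard_normal((4, 4))

def shift_d(K1, d):
    """Placeholder for the Dai & Chluba (2014) recursive d-shift.
    The real relation is more involved; this is NOT the actual physics."""
    return K1 * d

def get_kernel(d):
    # d=1 is the only case that is expensive enough to cache/store
    if d == 1:
        if "d1" not in _CACHE:
            _CACHE["d1"] = load_d1_kernel()
        return _CACHE["d1"]
    # d != 1 is cheap to compute, so it is never written to file
    return shift_d(get_kernel(1), d)
```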
I just pushed a fix to the `hotfix/d0` branch. Could you please pull the changes and test it? You should be able to do that with `git checkout -t origin/hotfix/d0`
from cosmoboost.
Ok, I can check. I've found another problem: when I estimate the beta value for aberration, Doppler, and aberration+Doppler, the aberration and Doppler betas come out a little bigger than the input beta value, and the aberration+Doppler beta comes out smaller. The boost is only in the z direction; beta_x and beta_y are 0 on average. I already tested my estimators using boosts in pixel space and via the kernel (with another code). Did you already check the recovered beta value with some estimator? There seems to be some bias.
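As a toy illustration of this kind of pixel-space check (not the actual estimator used above): pure aberration along z maps mu = cos(theta) to (mu + beta) / (1 + beta*mu), and for directions uniform in mu the mean shift is (2/3)*beta to first order, so a simple rescaled mean recovers beta.

```python
import numpy as np

beta = 1.23e-3  # CMB-like boost, purely along z

# directions uniform in mu = cos(theta)
mu = np.linspace(-1.0, 1.0, 100_001)

# aberration along z: mu -> (mu + beta) / (1 + beta * mu)
mu_ab = (mu + beta) / (1.0 + beta * mu)

# to first order in beta, <mu_ab - mu> = (2/3) * beta for uniform mu,
# so 1.5 * <mu_ab - mu> is a simple (toy) estimator of beta_z
beta_hat = 1.5 * np.mean(mu_ab - mu)
print(beta_hat)
```

For a realistic beta of order 1e-3 the neglected higher-order terms are tiny, so this toy recovers the input value to well below a percent.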
Sorry, I'm a noob with GitHub; what do I have to do to install this fix?
No worries, I would be happy to help.
If you installed the package by using `git clone ...` followed by `pip install -e .`, then you can just type the command that I wrote earlier in the CosmoBoost directory and that's it. It'll fetch the fix and put you on the branch where the code is fixed.
If you didn't use the `-e` argument in your installation, you will have to run `pip install -e . --force-reinstall` to make sure the code is updated with the new fix.
I hope this is clear but feel free to ask for clarification.
Regarding the bias in beta: that's a very interesting observation. We have checked the code against the analytical formula in Jeong 2014 and it seems to follow the expectation pretty well (you can see figure 2 here, for example). The results for the individual elements are also consistent with the values of Chluba 2011 and Dai & Chluba 2014, but I haven't done a one-by-one comparison with the exact values from their code, which I'm assuming is the other kernel code you mentioned?
Nevertheless, we recently updated one of the internal methods to improve memory usage and noticed an unexpected offset in the power spectrum when using large masks. This is probably due to a bug that we haven't been able to pin down yet, so it's certainly possible that what you're describing is related to it and has to be looked into.
I apologize for the lack of proper maintenance of this codebase. I wrote the main library over two years ago (with way less experience than now) and haven't included proper tests.
Hi @pferreiraov. I merged the fix to master, so if you install a new copy, the d=0 case should now work.
I think you only need to run `git pull origin master` in the CosmoBoost directory. Let me know if it's not working so I can look into it.
By the way, we've noticed that in the new memory-optimized version, d=0 is producing what is expected from d=1, which might be related to the bias you encountered earlier. We're looking into it, and I'll keep you updated as soon as we catch the bug.
Just a quick update on this issue: it seems like what we call the `analytic` method (which uses Bessel function approximations instead of solving the ODE in Dai & Chluba 2014) introduces a bias with respect to the `numerical` method that was originally implemented.
You can use the following to initialize the kernel with the `numerical` method:
```python
import cosmoboost as cb

pars = cb.DEFAULT_PARS
pars["method"] = "numerical"  # ODE-based method instead of "analytic"
kernel_a = cb.Kernel(pars, overwrite=True)
```
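As a generic toy of how a Bessel-function approximation can bias results (this is not cosmoboost's actual computation), here is the large-argument asymptotic form of the spherical Bessel function, j_l(x) ≈ sin(x - l*pi/2)/x, compared against the exact value: the approximation drifts away systematically as x decreases.

```python
import numpy as np
from scipy.special import spherical_jn

def jl_asymptotic(l, x):
    """Large-argument approximation: j_l(x) ~ sin(x - l*pi/2) / x."""
    return np.sin(x - l * np.pi / 2.0) / x

l = 2
for x in (10.0, 50.0):
    exact = spherical_jn(l, x)   # exact spherical Bessel function
    approx = jl_asymptotic(l, x)
    print(f"x={x}: exact={exact:+.5f}  approx={approx:+.5f}  "
          f"abs. error={abs(exact - approx):.2e}")
```

The error is not random scatter but a systematic offset that grows toward small arguments, which is the qualitative behavior that could turn into a bias in derived quantities.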
I'm trying to figure out where the bias is coming from. I'll post updates here if I find a solution.
Hi, it's the same Pedro da S. Ferreira with another username! I recently checked again and compared your kernel matrix elements with the kernel integral for d=0 and d=1, and I'm still recovering the d=1 values (it also prints "Solving kernel ODE for d=1") when I set d=0. How can I test using the old version without the optimized RAM usage?
It seems to be possible to convert the kernel to d=0 using `calc_K_d_arr(K, d, s)`, but I didn't understand the new array with the extra columns given by the value of `beta_exp_order`; could you please explain it to me? The first column seems to give the correct values for the kernel with d=0:
```python
import cosmoboost as cb

pars = cb.DEFAULT_PARS
pars["method"] = "numerical"
lmax = pars["lmax"] = 2000
delta_ell = pars["delta_ell"] = 8
pars["d"] = 1
beta = pars["beta"]
T_0 = pars["T_0"]

kernel_a = cb.Kernel(pars, overwrite=True)
# calc_K_d_arr is defined in KernelRecursive.py
kernel_a_d0 = calc_K_d_arr(kernel_a, d=0, s=0)
```
The function `calc_K_d_arr` is defined in `KernelRecursive.py`.
Hi Pedro. Unfortunately, I'm no longer maintaining this repo, and I apologize for the lack of documentation and terrible code quality that's leading to confusion about the workflow of the boosting mechanism.
I'm going to tag @maamari who I believe is developing an improved version of the code and most likely will be able to answer your questions better than me.
The general idea of the kernel calculations is to calculate the d=1 elements; for all the other Doppler weights, the recursive equations introduced in the Dai & Chluba 2014 paper are used to shift the d.
So if you wanted to calculate the d=0 case, you would just set d in the parameters dictionary and pass that to the Kernel object:
```python
pars = cb.DEFAULT_PARS
pars["d"] = 0
beta = pars["beta"]
T_0 = pars["T_0"]
kernel = cb.Kernel(pars, overwrite=True)
```
which you can then use in the boosting functions. This will internally calculate the d=1 kernel and use the recursive relations to calculate the d=0 elements. You don't have to do the d shifts yourself.
I hope this helps.
Regarding the version without the RAM-usage optimization: since @maamari worked on that piece, he would probably be able to answer your question.