
flavio's Introduction

flavio website

This is the repository for the flavio website.

The website uses Jekyll and is served by GitHub Pages at https://flav-io.github.io/.

Feel free to submit a pull request if you spot mistakes or want to improve the documentation.

flavio's People

Contributors

afalkows017, alekssmolkovic, apuignav, bednya, broko55, christophniehoff, felicia56, girishky, gurler, hoodyn, jackypheno, jasonaebischergit, jorge-alda, langenbruch, markusprim, micha-a-schmidt, mjkirk, mreboud, nsahoo, olcyr, peterstangl, whereforebound, wzeren


flavio's Issues

Refactoring of `Parameter` and `ParameterConstraints` classes

This issue is based on the discussion in PR #123:

@MJKirk:

Yes, something exactly like AwareDict would work - a ParameterCitationsAwareDict as it were (a snappier name might be in order).

Regarding the inspire info for the parameters, currently the parameter data is split over 3 files: parameters_uncorrelated, parameters_correlated, and parameter_metadata. We could just put the inspire key into the metadata file, but this seems bad since the references could easily not be updated when the numbers get changed.
It seems like it would be better (if more complicated) to merge all these into a single parameter file, much like the single measurements file, and then it will be obvious to update the inspire at the same time as the numerics.
(Actually, it looks like very early on it used to be this way and then the file was split in b30aa73, do you remember why you did this @DavidMStraub?)
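As a rough illustration of the merge idea (a hypothetical schema, not actual flavio syntax; all field names and the inspire key below are made up), a single per-parameter entry could bundle metadata, constraint, and citation so that they can only be edited together:

```yaml
# hypothetical merged entry, combining what is now spread over three files
m_b:
  tex: $m_b$
  description: b quark mass
  constraint:
    distribution: normal
    central_value: 4.18
    standard_deviation: 0.03
  inspire: SomeLatticePaper:2020abc  # sits right next to the numbers it justifies
```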

@DavidMStraub:

Wow, I don't remember that it has ever been a single file.

Splitting them, I guess, is motivated by the fact that metadata populates the attributes of Parameter instances (which are meant not to change), while values populate the ParameterConstraints instances, which can be swapped.

From this angle, it makes sense for citations not to be part of the metadata, but of the ParameterConstraints.

Unfortunately, it will not be entirely trivial to refactor ParameterConstraints to have citations associated with constraints. The implementation with the _constraints list and _parameters dict is suboptimal. If I were implementing it today, I would probably add a Constraint class that associates a single ProbabilityDistribution with one or more parameters. It could then have a citation attribute. The Constraints class would then basically be a container holding a list of Constraints, plus various helper methods, e.g. to make sure the collection of constraints is consistent.

I actually find the implementation with the two classes Parameter and ParameterConstraints a bit confusing for the user, and I think it might be more intuitive if the Parameter class were analogous to Measurement, i.e. if it were itself a subclass of Constraints and the ParameterConstraints class were dropped completely.
Even though the value of a parameter can change while its metadata stays the same, and this is not true for a measurement, this different treatment of Measurement and Parameter could be implemented at the level of these two classes.
Then one could again merge the three parameter YAML files into a single file analogous to measurements.yaml, which I think would also be more intuitive.

I think it is actually a very good idea to use a Constraint class instead of the _parameters dict and the _constraints list. This might also make it easier to access the ProbabilityDistribution, e.g. for plotting experimental data.

Apart from this, I was also wondering whether the functions in flavio.measurements and flavio.parameters could be made class methods of the Measurement and Parameter classes. This might also be more intuitive, since at least I always got confused by the fact that there is both flavio.Measurement and flavio.measurements, as well as flavio.Parameter and flavio.parameters.

@DavidMStraub what do you think?

<dBR/dq2>(B0->K*ee) near threshold

Dear all,
I am witnessing some strange behaviour of the observable <dBR/dq2>(B0->K*ee) near the threshold.
I understand that binned differential branching ratios in flavio are normalized to the bin width and thus have units of GeV^(-2).

Very naively, I thought that if I want the "unnormalized" differential branching fraction, I just have to multiply the result by the bin width. This seems to work fine as long as I stay away from the threshold:

BR_0004_1 = flavio.sm_prediction("<dBR/dq2>(B0->K*ee)", .0004, 1.)*(1. - .0004)
BR_1_3 = flavio.sm_prediction("<dBR/dq2>(B0->K*ee)", 1., 3.)*(3. - 1.)
BR_0004_3 = flavio.sm_prediction("<dBR/dq2>(B0->K*ee)", .0004, 3.)*(3. - .0004)

gives:

In [81]: BR_0004_3
Out[81]: 3.790997798081069e-07

In [82]: BR_0004_1 + BR_1_3
Out[82]: 3.7912788564241227e-07

That seems reasonably close.
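The additivity being checked here can be reproduced with a stdlib-only toy (the integrand below is made up and has nothing to do with flavio; it only illustrates that a bin average times the bin width is additive across adjacent bins for a smooth integrand):

```python
def bin_avg(f, a, b, n=50000):
    # midpoint-rule average of f over [a, b], mimicking a binned <dBR/dq2>
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) / n

def toy_dbr(q2):
    # smooth, made-up stand-in for a differential branching ratio
    return 1.0 / (q2 + 0.1)

total = bin_avg(toy_dbr, 0.0004, 3.0) * (3.0 - 0.0004)
split = (bin_avg(toy_dbr, 0.0004, 1.0) * (1.0 - 0.0004)
         + bin_avg(toy_dbr, 1.0, 3.0) * (3.0 - 1.0))
# total and split agree up to integration error
```

Near the dilepton threshold the physical integrand varies extremely rapidly, so a discrepancy there may reflect how the endpoint region is sampled rather than a problem with the bin-width logic itself.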

But near the threshold:

BR_th_1 = flavio.sm_prediction("<dBR/dq2>(B0->K*ee)", (2*.511/1000.)**2, 1.)*(1. - (2*.511/1000.)**2)
BR_1_3 = flavio.sm_prediction("<dBR/dq2>(B0->K*ee)", 1., 3.)*(3. - 1.)
BR_th_3 = flavio.sm_prediction("<dBR/dq2>(B0->K*ee)", (2*.511/1000.)**2, 3.)*(3. - (2*.511/1000.)**2)

gives:

In [86]: BR_th_3
Out[86]: 3.642236984062088e-07

In [87]: BR_th_1 + BR_1_3
Out[87]: 4.093330290405161e-07

I am probably doing something very silly, but I don't understand what. Thanks in advance!

NP Wilson Coefficients in Various EFTs

Hi,

I have been working on a project where so far we have two separate fits in flavio: one using the WET for flavour observables and another using SMEFT for Higgs signal strengths. So far we have not seen a way to combine the two sets of observables into a single fit, due to the two different EFTs being used for the different Wilson coefficients.
Is it possible in flavio to introduce Wilson coefficients from two separate EFTs like this into one FastLikelihood fit?

Thanks, Matthew

Likelihood Contour Plots differing based on y-axis range

Hi,

I'm working on a project using flavio for fits of the 2HDM; among other observables, I have been working on R(K) and R(K*). For these, I produce the attached plots, which show significantly different results depending on the range of values I allow mH+ (y-axis) to take: either log(mH+) in (0, 3.5) or in (1, 3.5). There is no other difference between the two plots. Both take steps=60 in the likelihood_contour_data function, and increasing steps does not significantly change either plot.
I am looking for any guidance you could give on how the likelihood contours can give such different results purely based on the range, and on whether there is a way to tell which of the two plots, if either, is likely the realistic fit.

Thanks in advance. I look forward to hearing from you.

rks_plot
rks_plot2

Flavio with python 3.7: YAMLLoadWarnings

Hi David,

I just ran flavio in a Python 3.7 environment and got the following YAMLLoadWarnings; thought I'd let you know. (This is just a warning, and fixing it is trivial: replace yaml.load(f) with yaml.load(f, Loader=yaml.FullLoader).)

/srv/conda/lib/python3.7/site-packages/flavio/config.py:5: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
  config = yaml.load(f)
/srv/conda/lib/python3.7/site-packages/flavio/measurements.py:14: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
  measurements = yaml.load(obj)
/srv/conda/lib/python3.7/site-packages/flavio/parameters.py:15: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
  parameters = yaml.load(obj)
/srv/conda/lib/python3.7/site-packages/flavio/parameters.py:29: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
  parameters = yaml.load(obj)
/srv/conda/lib/python3.7/site-packages/flavio/parameters.py:42: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
  list_ = yaml.load(obj)

Or are there other issues with Python 3.7 (seeing that it's not tested on Travis)?

Cheers,
Kilian

`SafeIncludeLoader` doesn't catch `FileNotFoundError` on certain systems

On at least some Windows systems, open(filename, 'r') produces neither a FileNotFoundError nor a TypeError, but

OSError: [Errno 22] Invalid argument: /path/to/unknown/file

In this case, the include_merge_list method of SafeIncludeLoader doesn't work properly, since the error is not caught:

flavio/flavio/io/yaml.py

Lines 38 to 43 in 201a4aa

try:
    filename = os.path.join(self._root, file)
    with open(filename, 'r') as f:
        a += yaml.load(f, self.__class__)
except (FileNotFoundError, TypeError):
    a.append(file)
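Since FileNotFoundError is a subclass of OSError, one possible fix is simply to widen the except clause. A self-contained sketch (load_or_keep is a made-up stand-in for include_merge_list, reading plain text instead of YAML):

```python
import os

def load_or_keep(root, file):
    # widened except: (OSError, TypeError) also covers FileNotFoundError
    # and the Windows "[Errno 22] Invalid argument" case described above
    a = []
    try:
        filename = os.path.join(root, file)
        with open(filename, 'r') as f:
            a.append(f.read())
    except (OSError, TypeError):
        # fall back to keeping the raw entry, as include_merge_list does
        a.append(file)
    return a
```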

Idea for function that gives a citation list for theory predictions

So the basic idea is to allow people to easily cite the original theory papers for each observable they use in flavio.
My rough idea is to copy how all the experimental measurements have an inspire key, and add the same thing to the Observable class.

Roughly, the function would look like

def observable_citations(obs_list):
    return (flavio.Observable[o].inspire for o in obs_list)

and be defined as a top level function in flavio/functions.py
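A self-contained sketch of the idea (the Obs class, observable names, and inspire keys below are all made up; flavio.Observable would play the role of the dict). Returning a set rather than a generator would also de-duplicate references shared between observables:

```python
class Obs:
    def __init__(self, inspire):
        self.inspire = inspire

# hypothetical stand-in for flavio.Observable
OBSERVABLES = {
    'BR(Bs->mumu)': Obs('TheoryPaperA:2019abc'),
    'BR(Bd->mumu)': Obs('TheoryPaperA:2019abc'),  # shares a theory paper
    'R(K)': Obs('TheoryPaperB:2016xyz'),
}

def observable_citations(obs_list):
    # a set de-duplicates inspire keys shared between observables
    return {OBSERVABLES[o].inspire for o in obs_list}
```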

The couple of questions I'm thinking about are:

  1. Would it be nice to also allow passing a Likelihood instance instead of a list of observables, for convenience? I'm thinking particularly of a smelli GlobalLikelihood, where you don't give a list of observables directly, so the user would have to do extra work to get a list to feed to this function. Or is this too much complexity?
  2. Should it be extended to measurements as well, so that for an observable you get both the theory papers it was calculated in and the experimental papers where measurements were reported?

problem importing flavio

Hi, I have a problem importing flavio and get the errors below. How can I fix this?

Python 3.6.8 (default, Oct 7 2019, 12:59:55)
[GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.

import flavio
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/protyush18/.local/lib/python3.6/site-packages/flavio/__init__.py", line 2, in <module>
    from . import physics
  File "/home/protyush18/.local/lib/python3.6/site-packages/flavio/physics/__init__.py", line 15, in <module>
    from . import eft
  File "/home/protyush18/.local/lib/python3.6/site-packages/flavio/physics/eft.py", line 3, in <module>
    import wcxf
ModuleNotFoundError: No module named 'wcxf'

Problem with matrix elements of meson mixing operators

I think the matrix elements of the meson mixing operators (defined in
flavio/physics/mesonmixing/amplitude.py) have incorrect factors.

Compare that function to Eqs. 2.5-2.7 in arXiv:1602.03560 (the Fermilab/MILC paper, which, at least for B and Bs mixing, is where the bag parameters are taken from).
flavio's SLR operator corresponds to their O4, but flavio misses the d_4 constant.
Similarly, flavio's VLR = -2 * O5, and it again misses the d_5 constant, which here is a large correction compared to the mass ratio.
(The factors as currently given by flavio match older lattice papers, e.g. this ETMC paper, but I agree with F/MILC that their new coefficients are correct.)

And finally, I believe the overall sign of the TLL operator is wrong, as by Fierz TLL = -4 O2 -8 O3, and O2 has a negative overall coefficient while O3 has a positive one (c_2 = -5/12, c_3 = +1/12 from F/MILC).

Possible bug when calculating SM covariance

In the _get_covariance_sm function (flavio/statistics/fits.py#L534), comparing

pred_arr = np.array(pool.map(self.get_predictions_array, X)).T
(the code for multiple threads)
to
pred_arr[:, j] = self.get_predictions_array(x, par=False, nuisance=True, wc=False)
(code for a single thread)
the get_predictions_array function is called with different arguments, since the default is True for all three parameters.
In particular, it looks like the single-threaded calculation just uses the SM Wilson coefficients, while the multi-threaded one uses our Wilson coefficient function.
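If the two code paths are indeed meant to use the same arguments, one way to fix the multi-threaded branch is to bind the keywords with functools.partial, since pool.map only passes positional arguments. A toy sketch (the function below just records its arguments; it is not the flavio implementation):

```python
from functools import partial

def get_predictions_array(x, par=True, nuisance=True, wc=True):
    # toy stand-in that records which arguments it was called with
    return (x, par, nuisance, wc)

# bind the same keywords the single-threaded loop uses,
# so pool.map(f, X) would match it; plain map() is shown for brevity
f = partial(get_predictions_array, par=False, nuisance=True, wc=False)
results = list(map(f, [0.1, 0.2]))
```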

Running issue with operator 'phil3'

I'm running this code:

wcphil3_13 = Wilson({'phil3_13': 5.8785e-8}, scale=1e3, eft='SMEFT', basis='Warsaw')
flavio.np_prediction('BR(Z->etau)', wcphil3_13)

but am getting a huge error message, ending with
"......ValueError: No solution for m^2 and Lambda found. This problem can be caused by very large values for one or several Wilson coefficients."
For other WCs, such as 'phil1', it is not a problem and I get results. Please help.

Wolfenstein parametrization of CKM

I am trying to read the VCKMIN block from an SLHA file.
The problem is that I cannot make flavio use the Wolfenstein parametrization via

# flavio.config['CKM matrix'] = 'Wolfenstein'

(also referenced here - https://flav-io.github.io/docs/customize.html)

However, the statement (taken from closed issue #11, which seems obsolete/abandoned)

flavio.config['implementation']['CKM matrix'] = 'Wolfenstein'

is (still) working, so

import flavio
print('flavio ver:', flavio.__version__)
old_pars = flavio.default_parameters.get_central_all()
old_ckm_tree = flavio.physics.ckm.get_ckm(old_pars)
print('default ckm:', old_ckm_tree)

flavio.default_parameters.set_constraint('laC',0.22535)
flavio.default_parameters.set_constraint('A', 0.811)
flavio.default_parameters.set_constraint('rhobar', 0.131)
flavio.default_parameters.set_constraint('etabar', 0.345)

pars = flavio.default_parameters.get_central_all()
flavio.config['CKM matrix'] = 'Wolfenstein'

new_ckm_still_tree = flavio.physics.ckm.get_ckm(pars)
print('the same ckm:', new_ckm_still_tree)

flavio.config['implementation']['CKM matrix'] = 'Wolfenstein'
new_ckm_wolfenstein = flavio.physics.ckm.get_ckm(pars)
print('now new ckm:', new_ckm_wolfenstein)

gives:

flavio ver: 2.0.0
default ckm: [[ 0.97439779+0.00000000e+00j  0.2248    +0.00000000e+00j
   0.00110513-3.56252619e-03j]
 [-0.22464666-1.46526356e-04j  0.97352564-3.38045973e-05j
   0.04221   +0.00000000e+00j]
 [ 0.00841306-3.46824795e-03j -0.04137812-8.00147690e-04j
   0.9991018 +0.00000000e+00j]]
the same ckm: [[ 0.97439779+0.00000000e+00j  0.2248    +0.00000000e+00j
   0.00110513-3.56252619e-03j]
 [-0.22464666-1.46526356e-04j  0.97352564-3.38045973e-05j
   0.04221   +0.00000000e+00j]
 [ 0.00841306-3.46824795e-03j -0.04137812-8.00147690e-04j
   0.9991018 +0.00000000e+00j]]
now new ckm: [[ 0.97427186+0.00000000e+00j  0.22534861+0.00000000e+00j
   0.00124748-3.28535540e-03j]
 [-0.22520886-1.31826031e-04j  0.97343967-3.04912971e-05j
   0.04118445+0.00000000e+00j]
 [ 0.00806661-3.19813332e-03j -0.04040623-7.39726686e-04j
   0.99914538+0.00000000e+00j]]

Implementation of B(s) -> ll gamma

LHCb will measure the branching ratio of the decay B(s) -> mu mu gamma, and I would like to add it to flavio.
I will create a new file bllgamma.py based on bdecays/bpll.py and implement the SM prediction as computed in hep-ph/0410146. Concerning the form factors, should I create a new directory in bdecays/formfactors/?

Addition of new measurements

Dear Peter,

I'm trying to add a new measurement to the list available in the measurements.yml file, namely the LHCb/CMS Bs->mumu combination from 2014.

On the dedicated documentation page I couldn't find a description of the parameters that need to be passed to a measurement entry. In particular, is xi the error on the best-fit value for a DLL contour plot? If so, why is it set to

xi:
  - [0.0, 1.0e-08]
  - [0.0, 1.0e-09]

for LHCb Bs->mumu 2017?

Thanks,
Davide

Slow multi-thread calculation of SM covariance for fit with SMEFT WCs

After defining a FastFit instance fit with SMEFT WCs, calling

fit.get_sm_covariance()

is considerably faster than the parallelized version

fit.get_sm_covariance(threads=4)

This seems to be due to the fact that the single thread version calls self.get_predictions_array
with arguments par=False, nuisance=True, wc=False,

pred_arr[:, j] = self.get_predictions_array(x, par=False, nuisance=True, wc=False)

while the multi-thread version calls self.get_predictions_array without arguments, thus using the default values par=True, nuisance=True, wc=True,

pred_arr = np.array(pool.map(self.get_predictions_array, X)).T

def get_predictions(self, x, par=True, nuisance=True, wc=True):

where the crucial argument is wc=True. Due to this argument, the running in SMEFT is performed for each calculation of the observable predictions, which can increase the calculation time by several orders of magnitude.

Problem with observables included in "Pseudo-measurement"

When performing several fits with different combinations of observables, a spurious warning occurs if one first defines e.g. a global fit and then fits individual observables. The reason is the pseudo-measurement added to the list of measurements when defining the global fit. This pseudo-measurement includes all the observables and thus also constrains all of them.
Afterwards, trying to fit an individual observable results in the Warning

("{} of the measurements in the fit '{}' "
"constrain both '{}' and '{}', but only the "
"latter is included among the fit "
"observables. This can lead to inconsistent "
"results as the former is profiled over."
).format(count, self.name, obs1, obs2))

Explicitly specifying the measurements with include_measurement avoids this problem by not including the pseudo-measurements.

ValueError in FastFit.make_measurement

The code (from https://github.com/DavidMStraub/flavio-tutorial/blob/master/4%20Fits.ipynb)

import flavio
from flavio.statistics import fits
fit = fits.FastFit(name="My First Fast Fit",
                   fit_parameters=['Vcb'],
                   observables=['BR(B+->Denu)', 'BR(B+->Dmunu)'])

%time fit.make_measurement(N=100, force=True)                    

produces the following output with flavio v1.0:

---------------------------------------------------------------------------

ValueError                                Traceback (most recent call last)

~/.local/lib/python3.5/site-packages/flavio/statistics/fits.py in make_measurement(self, N, Nexp, threads, force, force_exp)
    714         if np.asarray(central_exp).ndim == 0 or len(central_exp) <= 1: # for a 1D (or 0D) array
    715             m.add_constraint(self.observables,
--> 716                     NormalDistribution(central_exp, np.sqrt(covariance)))
    717         else:
    718             m.add_constraint(self.observables,


~/.local/lib/python3.5/site-packages/flavio/statistics/probability.py in __init__(self, central_value, standard_deviation)
    225                          support=(central_value - 6 * standard_deviation,
    226                                   central_value + 6 * standard_deviation))
--> 227         if standard_deviation <= 0:
    228             raise ValueError("Standard deviation must be positive number")
    229         self.standard_deviation = standard_deviation


ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()

Issues importing flavio

Hi,

I was trying to install flavio, but I'm not an experienced user of this platform. I followed the prerequisites and the step-by-step installation guide, but I'm having several issues:

  1. First of all, while checking the prerequisites, I found this:
    joydeep@:~$ python3 --version
    Python 3.8.5
    joydeep@:~$ pip3 --version
    pip 20.2.2 from /home/joydeep/.local/lib/python3.5/site-packages/pip (python 3.5)
    Is this okay?
  2. Secondly, I went ahead and installed flavio in this environment, and it appeared to go through without any trouble. But the apt list --installed command does not show flavio!
  3. Then, when I do import flavio, it shows me this error:
    import: not authorized `flavio' @ error/constitute.c/WriteImage/1028.

Please help.
Thanks,
Joydeep

Plotting produces warnings if no font named "Computer Modern Roman" is present

rc('font',**{'family':'serif','serif':['Computer Modern Roman']})

The above line can lead to multiple warnings during plotting if no font called "Computer Modern Roman" is installed. This might for example happen if Computer Modern Unicode (CMU) fonts are installed that use a different naming scheme such that Computer Modern Roman is called "cmu serif". Or of course if Computer Modern Roman is not installed at all and other fonts like DejaVu are used as default fonts.

I would suggest changing this line to

rc('font',**{'family':'serif',
             'serif':['Computer Modern Roman','cmu serif']+rcParams['font.serif']})

This would set the first serif font to try to 'Computer Modern Roman' and the second to 'cmu serif', and it would keep all other serif fonts in the list, such that they are used if the first two are not found.

Differences between versions 0.28 and 2.0.0 when modifying form factors in RK*

Dear all,

I have noticed a difference in the distribution of RK* when modifying form factors between v0.28 and v2.0.0.

I plotted the distribution of RK* with the default form factors. I got a flat distribution for the SM, and less suppression in the low-q^2 region when introducing a C_9 or C_10, or a combination of the two, in both versions, as expected.

However, when I increase the value of A_12 to about 1.5 (I understand it should not be this large according to the LCSR calculation), I get a flat distribution even when introducing C_9 or C_10 in v0.28. This was not true for v2.0.0. In contrast, in the current version, when I increase the value of A_0 to some large value, I get a flat distribution (again, I understand that A_0 should not be this large according to the LCSR calculation).

I have attached the plots. I have looked through the changes and did not spot anything that would lead to this, and I also tried to compare some relevant files without finding a clear answer. Could you please help me with this?

Thanks!
modifiedA0_2.0.pdf
modifiedA2_0.28.pdf

Constraining two Wilson coefficients in a Likelihood fit

Hi,

I was wondering whether it would be possible to constrain the values of two Wilson coefficients
in the log-likelihood function

def FLL(x):
    Re_C9, Re_C10 = x
    w = Wilson({'C9_bsmumu': Re_C9, 'C10_bsmumu': Re_C10},
               scale=4.8,
               eft='WET', basis='flavio')
    return L.log_likelihood(par, w)

i.e. to fit with the constraint C9NP = -C10NP.

Thanks,
Davide

How to use 'flavio.np_prediction' after using 'match_run'

Hi,

I'm trying to predict an observable at a particular scale. To be specific, these are my steps:

  1. Initialize the instance:
    wc1 = Wilson({'phil1_13': 1e-10}, scale=1e3, eft='SMEFT', basis='Warsaw')
  2. Run it down to the desired scale:
    wc13 = wc1.match_run(scale=2, eft='WET-3', basis='flavio')
    Now, my question is: if I want to predict some NP observable such as BR(tau->phimu) after step 2, how would I do that? Nothing seems to have been attributed to wc13, so it appears I can't use flavio.np_prediction with wc13.

Please help. Thanks.

Missing obs_err_dict in flavio.plots.q2_plot_th_bin

Hi all,

I noticed that calling flavio.plots.q2_plot_th_bin with Wilson coefficients defined (i.e. wc != None) always results in a crash, since obs_err_dict does not exist for the assignment

err = obs_err_dict[bin_]

four lines later. Is there any way of fixing this by pulling up uncertainties, or does there perhaps even exist an equivalent of flavio.sm_uncertainty for new physics?

Best,

Fabian

mue conversion too small?

Dear Flavio team,

We are trying to compute several LFV observables, however the numbers we are getting for mu-e conversion in nuclei seem too small. For example,

wc  = Wilson({'phil1_12': 1e-10},scale=1e3, eft='SMEFT', basis='Warsaw')
flavio.np_prediction('BR(mu->eee)',wc)
flavio.np_prediction('CR(mu->e, Al)',wc)

5.912821433496535e-12
1.1867886618408577e-19

Nevertheless, using wilson for the RGEs and plugging the WCs at low energies into available expressions (e.g. 1702.03020), we expected CR ~ 6e-11.

Trying to understand the difference, we checked the mueconversion.py file and found an apparent inconsistency in the definition of gRV and the similar couplings (line 30 onwards). Reference hep-ph/0203110 includes a normalization of GF/sqrt(2) for the 2l2q operators, but the WET.flavio.pdf document doesn't (it does for the flavour-conserving ones, but not for the LFV ones). Therefore, we understand that some factors are missing in mueconversion.py and one should replace

30    gRV = {'u': 4*(wc['CVRR_mueuu'] + wc['CVLR_mueuu']),...

by 

30    gRV = {'u': 1/(np.sqrt(2)*par['GF'])*(wc['CVRR_mueuu'] + wc['CVLR_mueuu']),...

and so on for gLV... This new normalization enhances the flavio prediction for CR up to 3e-11, consistent with what we were expecting.

Does this make sense? Is it possible that this factor is missing in the code? Or is it the normalization in WET.flavio.pdf that needs to be modified?

Many thanks in advance!
Best regards,
Xabi

FlavorKit interface

Hi,
I am currently trying to calculate the BR of Bs->mumu in the MRSSM with the help of SPheno,
and I cannot get past the reading of the Wilson coefficients: it gets stuck there forever.
Same problem with the MSSM, although your test file from 2016 works.
The input file for the MRSSM can be found here: https://pastebin.com/raw/H9vkaK0y
and for the MSSM (new): https://pastebin.com/raw/hYuhK2mQ
The files were created with SARAH-4.12.2 and SPheno v4.0.3.

clerical error

flavio/flavio/physics/bdecays/ball/qcdf.py

Line 151: (ubar*mB + u*q2) should maybe be changed to (ubar*mB**2 + u*q2).

Parameters in bsz.py

In physics/bdecays/formfactors/b_v/bsz.py, the resonance parameters are hardcoded:

mres_bsz['b->d'] = [5.279, 5.324, 5.716];
mres_bsz['b->s'] = [5.367, 5.415, 5.830];

which are slightly different from those listed in BSZ2015 [1503.05534]. Of course, the values change slightly whenever the PDG releases new averages. Anyway, it would be better to put these values in a parameter file instead of hardcoding them in bsz.py.

Method to count SM discrepancy in terms of sigmas

Dear Maintainer(s),

I am new to the flavio package and am currently performing fast likelihood fits of WCs. As far as I understand, the WCs that one calls via:

from wilson import Wilson
par = flavio.parameters.default_parameters.get_central_all()

def FLL(x):
    Re_C9, Re_C10 = x
    w = Wilson({'C9_bsmumu': Re_C9, 'C10_bsmumu': Re_C10},
               scale=4.8,
               eft='WET', basis='flavio')
    return FL.log_likelihood(par, w)

are the NP ones; is that correct? After having performed the fit and drawn the fpl.likelihood_contour_data plot,
is there a predefined method to extract the number of sigmas the best-fit point is away from [0, 0] in the (Re C9NP, Re C10NP) plane?
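There may not be a ready-made flavio method for this, but the standard Wilks-theorem conversion is short. A stdlib-only sketch (assuming a two-parameter fit, where 2*(logL_best - logL_SM) is asymptotically chi-squared with 2 degrees of freedom):

```python
from math import exp
from statistics import NormalDist

def n_sigma_2dof(ll_best, ll_sm):
    # Wilks: Delta chi^2 = 2 * (logL at the best-fit point - logL at [0, 0])
    delta_chi2 = 2.0 * (ll_best - ll_sm)
    # the survival function of a chi^2 distribution with 2 dof is exp(-x / 2)
    p_value = exp(-delta_chi2 / 2.0)
    # convert the p-value to a two-sided Gaussian significance
    return NormalDist().inv_cdf(1.0 - p_value / 2.0)
```

For example, a log-likelihood difference of 1.15 (Delta chi^2 = 2.30, the familiar 1-sigma contour for 2 dof) gives roughly 1.0.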

Thanks in advance,
Davide

Observables and relevant NP operators

Is there a way to check which NP operators contribute to each observable? For example, is there a method to check which operators are relevant for a given observable, such as 'BR(B0->K*nunu)'? And is the reverse possible, i.e. a list of all observables that a given NP operator would affect?
Finally, does flavio consider only tree-level NP contributions, or also loop effects generated by the NP operators?

Looking forward to your reply.

plotting the error budget

From the documentation

errors = flavio.sm_error_budget('BR(Bs->mumu)')
import flavio.plots
flavio.plots.error_budget_pie(errors)
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-6-4a69de14ce56> in <module>()
----> 1 flavio.plots.error_budget_pie(_4)

/home/beaujean/.local/lib/python3.4/site-packages/flavio/plots/plotfunctions.py in error_budget_pie(err_dict, other_cutoff)
     44         return r'{p:.1f}\%'.format(p=pct*err_tot)
     45     plt.axis('equal')
---> 46     return plt.pie(fracs, labels=labels, autopct=my_autopct, wedgeprops = {'linewidth':0.5}, colors=flavio.plots.colors.pastel)
     47 
     48 

TypeError: pie() got an unexpected keyword argument 'wedgeprops'

I'm still using Ubuntu 14.04 with matplotlib 1.3.1. Could that be the reason? If so, don't worry, I want to upgrade to 16.04 soon anyway.

Mention smelli on flavio website

As noted in issue #136, smelli is currently not mentioned on the flavio website but would be interesting for many people using likelihoods in flavio.

Problem with likelihood_contour

@peterstangl @DavidMStraub

Hi,

I'm trying to convert the example notebook in

https://github.com/flav-io/flavio-examples/blob/master/FastFit_WilsonCoefficients_C7p.ipynb

to the new FastLikelihood instances of flavio 2.
I built the FastLikelihood and called the make_measurement() method.

And with

par = flavio.default_parameters.get_central_all()

and

mywilson = wilson.Wilson({'C7p_bs': 2e-6 + 1j*1e-6},
                         scale=4.8, eft='WET', basis='flavio')

I can run:

global_fastfit.log_likelihood(par, mywilson)

obtaining a number.

Now I'm stuck when running:

flavio.plots.likelihood_contour(global_fastfit.log_likelihood,
-x_max, x_max, -x_max, x_max, n_sigma=(1, 2), col=0,
interpolation_factor=10, steps=30, label='global')

The docstring says log_likelihood must be a function, but the log_likelihood method of FastLikelihood returns a float, which is clearly not callable.

What should I pass as the first argument to flavio.plots.likelihood_contour? And do you by any chance have a version of the example notebooks for flavio 1.6 or later?
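The usual pattern is to pass a small wrapper that rebuilds the Wilson coefficients from the plotted coordinates and returns the float. A toy, self-contained illustration (the class and the two-argument callable convention here are assumptions for the sketch, not flavio code):

```python
class ToyLikelihood:
    # stand-in for a FastLikelihood-like object with a log_likelihood method
    def log_likelihood(self, par, w):
        # quadratic toy likelihood, maximal at w == (0.0, 0.0)
        return -(w[0] ** 2 + w[1] ** 2)

toy = ToyLikelihood()
par = {}  # stands in for flavio.default_parameters.get_central_all()

def log_likelihood_2d(x, y):
    # rebuild the Wilson-coefficient point (a plain tuple here) and delegate;
    # a callable like this, not the bare method, is what a contour plotter needs
    return toy.log_likelihood(par, (x, y))
```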

Thanks,
Davide

Strange side effect of calling Observable.get_measurements()

Hi @DavidMStraub,
I was trying to play with the handy get_measurements(), but encountered a very strange side effect.

The code:

import flavio
print('Flavio version:', flavio.__version__)
print('SM_prediction before get_measurements()', flavio.sm_prediction('BR(Bs->mumu)'))
print('Measurements:', flavio.Observable('BR(Bs->mumu)').get_measurements())
print('SM_prediction after get_measurements()', flavio.sm_prediction('BR(Bs->mumu)'))

produces the following error on the second call to sm_prediction():

Flavio version: 1.5.0
SM_prediction before get_measurements() 3.66775536885e-09
Measurements: ['CMS Bs->mumu 2013', 'LHCb Bs->mumu 2017', 'ATLAS Bs->mumu 2018', 'inofficial combination Bs->mumu 2019']
Traceback (most recent call last):
  File "bug_get_measurements.py", line 5, in <module>
    print('SM_prediction after get_measurements()', flavio.sm_prediction('BR(Bs->mumu)'))
  File "~/.local/lib/python3.6/site-packages/flavio/functions.py", line 40, in sm_prediction
    return obs.prediction_central(flavio.default_parameters, wc_sm, *args, **kwargs)
  File "~/.local/lib/python3.6/site-packages/flavio/classes.py", line 559, in prediction_central
    return self.prediction.get_central(constraints_obj, wc_obj, *args, **kwargs)
AttributeError: 'NoneType' object has no attribute 'get_central'

Am I doing something wrong?

All the best,
Alexander

Possible bug in C7, C7' flavio->EOS conversion

According to (outdated?)

https://wcxf.github.io/assets/pdf/WET.EOS.pdf

and

https://wcxf.github.io/assets/pdf/WET.flavio.pdf

C7_bs (C7p_bs) in Flavio corresponds to the same operator as b->s::c7 (b->s::c7') in EOS.

However, when I run the code

import flavio
import wilson
wcF = flavio.WilsonCoefficients()
wcF.set_initial(scale=3,wc_dict={'C7_bs':1})
wcF.wc.translate('JMS').translate('EOS')

I've got a non-zero c7', i.e,

'b->s::c7': (1.00036571+0j),
"b->s::c7'": (-0.01912714+0j)

I would be grateful, if you help me with this issue...

All the best,
Alexander
P.S. Just tested with flavio-1.3.1, wilson-1.7, wcxf-1.6 from PyPi.

Calculation of SM Wilson Coefficients

Could you please clarify what the parameters of the method flavio.physics.running.running.get_wilson() are?
I want to run an SM coefficient, e.g. C9p_bsee, from the electroweak scale down to the mass of the B meson. How does one do this?

Thanks in advance.

Errors in implementation of K0->ll'

Hi,

I think I've found two problems in the implementation of K0->ll' in flavio/physics/kdecays/kll.py.

Lepton flavour violating K-short decay results in errors

The case K == 'KS' together with l1 != l2 is not handled in the definition of the function amplitudes_eff:

   ...
    if K == 'KS' and l1 == l2:
        Peff = P.imag
        Seff = S.real + SLD
    if K == 'KL':
        Peff = P.real + PLD
        Seff = S.imag
    return Peff, Seff

Correspondingly, asking flavio for a prediction on BR(KS->emu,mue) leads to
UnboundLocalError: local variable 'Peff' referenced before assignment.

This would have been trivial to fix, be there not the second problem.

Lepton flavour violating K-long decay calculated incorrectly

This is a physics-related issue: the formulas using imaginary or real parts of the Wilson coefficients (or of the P, S parts of the amplitudes in the current flavio implementation) hold only in the lepton flavour conserving case.

Generally, KL and KS are superpositions of K0 and antiK0: K_L,S = (sbar d +- dbar s) / sqrt(2).
Thus, naively, the amplitudes consist of the combinations (CX_sdl1l2 +- CX_dsl1l2)/sqrt(2) with X=9,9p,10,10p,S,Sp,P,Pp.
However, in flavio, the _dsl1l2 Wilson coefficients are not to be specified as they are obtained from complex-conjugated _sdl2l1 ones. In particular:
[image: formulas for the effective P, S amplitude combinations in terms of the _sdl1l2 and _sdl2l1 Wilson coefficients]
(The extra minus sign for CS-CSp comes from the CP transformation of the corresponding operator - I would appreciate it if someone could check the above formulas independently. The CX + CXp combinations are irrelevant for pseudoscalar meson decays.)
If l1==l2, one gets terms like ( (CX_sdll-CXp_sdll) +- (CX_sdll-CXp_sdll).conj() ) / sqrt(2) which simplify to real or i*imaginary parts indeed.
However, the LFV cases lead to expressions like ( (CX_sdemu-CXp_sdemu) +- (CX_sdmue-CXp_sdmue).conj() ) / sqrt(2) which obviously cannot be simplified any further.

Regards, Matej Hudec
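To make the lepton flavour violating case explicit, here is one hedged reconstruction of the combinations described above, in the notation of the issue; as the report itself asks, this should be cross-checked independently:

```latex
% Hedged reconstruction of the combinations described above.
% With $K_{L,S} \sim (\bar{s}d \pm \bar{d}s)/\sqrt{2}$ and
% $C_X^{ds\,\ell_1\ell_2} = \big(C_X^{sd\,\ell_2\ell_1}\big)^*$, schematically
\begin{align}
A^{K_{L,S}}_{X}
&\propto \left(C_X^{sd\,\ell_1\ell_2} - C_{X'}^{sd\,\ell_1\ell_2}\right)
 \pm \left(C_X^{sd\,\ell_2\ell_1} - C_{X'}^{sd\,\ell_2\ell_1}\right)^{\!*} .
\end{align}
% For $\ell_1 = \ell_2$ this reduces to (twice) the real or
% $i\times$imaginary part of $C_X - C_{X'}$, matching the flavour-conserving
% code, but for $\ell_1 \neq \ell_2$ no such simplification is possible.
```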

Different `wcxf.WC` instances with same hash

There is an issue with the hash of WilsonCoefficient as implemented in 48347a1:

def __hash__(self):
    """Return a hash of the `WilsonCoefficient` instance.
    The hash changes when Wilson coefficient values or options are modified.
    It assumes that `wcxf.WC` instances are not modified after instantiation."""
    return hash((self.wc, frozenset(self._options)))

The problem is that different wcxf.WC instances based on different Wilson coefficients can have the same hash. This can be demonstrated as follows:

from wcxf import WC
for i in range(10):
    x = [0,0.1*i]
    wc_dict = {'C9_bsmumu': x[0], 'C10_bsmumu': x[1]}
    wc = WC(eft='WET', basis='flavio', scale=4.8,
            values=WC.dict2values(wc_dict))
    print(hash(wc))

e.g. returns

8768634703673
8768634703806
8768634703673
8768634703806
8768634703673
8768634703806
8768634703673
8768634703806
8768634703673
8768634703806

I don't know where this behaviour is coming from, but to use the caching as implemented in 48347a1, it will be necessary to make sure that different wcxf.WC instances have different hashes.

This also applies to wilson, from which, using

from wilson import Wilson
for i in range(10):
    x = [0,0.1*i]
    wc_dict = {'C9_bsmumu': x[0], 'C10_bsmumu': x[1]}
    w = Wilson(wc_dict, 4.8, eft='WET', basis='flavio')
    print(hash(w))

i get

8768634700692
8768634700790
8768634700692
8768634700790
8768634700692
8768634700790
8768634700692
8768634700790
8768634700692
8768634700790
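One hypothesis for the alternating pattern above (a pure-Python sketch, to be confirmed against the actual wcxf/wilson code): if a class does not define __hash__, CPython falls back to an id-based hash that ignores the object's contents entirely; and since objects created in a loop are freed and their memory addresses reused, the same id, and hence the same hash, reappears for different data.

```python
# Default object hashes in CPython are derived from id(), not from the
# object's contents: two objects with identical data hash differently,
# and the hash says nothing about the stored values.
class WCLike:
    def __init__(self, values):
        self.values = values  # no __hash__/__eq__ defined

a = WCLike({'C9_bsmumu': 0.0, 'C10_bsmumu': 0.1})
b = WCLike({'C9_bsmumu': 0.0, 'C10_bsmumu': 0.1})
print(hash(a) == hash(b))  # False: same data, different id-based hashes

# In a loop, freed objects' addresses get reused, so id-based hashes
# can repeat for *different* data -- consistent with the alternating
# pattern reported above.
```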

concerning contour regions in a two-parameter case for a single observable

Hello colleagues,
I have a query not so much directly related to flavio code, but more towards understanding statistical methods used for doing chi-square analysis.
I understand that in a global fit analysis where several measurements (say, N observables) are combined, one uses $\delta\chi^2$ = 2.3 and 6 to obtain the 68% and 95% C.L. regions of a two-parameter fit. But when there is only a single observable in the chi-square definition (N=1), what values of $\delta\chi^2$ should one use to obtain the same regions? Should I still use $\delta\chi^2$ = 2.3 and 6 for plotting 68% and 95% C.L.?
I did a quick review of the flavio code regarding this, and I think flavio always uses the values 2.3 and 6 irrespective of the number of measurements involved. For example, in the flavio plots here, the function flavio.plots.likelihood_contour is used. Checking this function's definition, I see that dof is set to 2, and hence $\delta\chi^2$ = 2.3 and 6 both for the combined global analysis and for a single observable, e.g. $S_{K^{*}\gamma}$.

Sorry if this is very elementary stuff that I should know, but I got confused because in this particular case one is trying to fit two parameters to one measurement (a single data point), and the chi-square language is generally used when combining several measurements. Normally, I would take the central value of the measurement, add and subtract the 1 sigma error to get an experimental range, and plot the values of the two parameters for which the theory value lies in this range. But this essentially corresponds to plotting $\delta\chi^2$ = 1 (and not 2.3). I hope someone can help me learn the correct approach in this case.

Thanks in advance!
Girish
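For what it's worth, the $\delta\chi^2$ thresholds are set by the number of fitted parameters (the dimension of the confidence region), not by the number of measurements entering the chi-square, so flavio's dof=2 is appropriate even for a single observable. The familiar numbers can be checked directly, since for two degrees of freedom the $\chi^2$ CDF is $1 - e^{-x/2}$:

```python
import math

# Delta chi^2 thresholds depend on the number of *fit parameters*
# (the dimensionality of the confidence region), not on the number of
# measurements entering the chi^2.  For 2 parameters the chi^2(df=2)
# CDF is 1 - exp(-x/2), so the threshold for a given CL is -2 ln(1 - CL):
for cl in (0.6827, 0.9545):
    print(round(-2 * math.log(1 - cl), 2))
# -> 2.3 and 6.18: the usual two-parameter values, valid even when the
#    chi^2 contains a single measurement.  (For a one-parameter
#    *interval* one would instead use 1 and 4.)
```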

Participate in iminuit v2.0 beta?

Dear flavio team,

I am about to finish a major rewrite of iminuit, version 2.0, that replaces Cython with pybind11 as the tool to wrap C++ Minuit2, which is going to solve several issues that the legacy code had. All the good things that this will bring are listed on top of this PR:
scikit-hep/iminuit#502

Switching to the new version of iminuit should be completely transparent to you, since the new version passes the comprehensive suite of unit tests of iminuit-v1.x. However, I would like to use this opportunity to finally remove interfaces that have been successively marked as deprecated in versions 1.3 to 1.5.

Therefore my two question to you:

  • Did you take note of the deprecation warnings in iminuit and did you keep up with the interface changes so far?
  • Are you interested in trying out a Beta release of v2.0 to work out any possible bugs in the new version before the release?

Best regards,
Hans, iminuit maintainer

wcxf dependency problem

@DavidMStraub, I just noticed that due to the old commit 7e3d7e9, flavio does not depend explicitly on wcxf anymore. However, before commit 13759cc flavio imports wcxf. This leads to the problem that if wilson v2.0 is installed during the installation of flavio v1.1-v1.6 (i.e. it affects the current flavio version on PyPI), the wcxf wrapper package will not be installed (since wilson 2.0 doesn't depend on wcxf anymore). In this case, flavio is unusable until wcxf is installed manually.

Possible solutions:

  • release a new (minor) flavio version such that at least the newest version includes commits 13759cc and bc55b3c
  • release a new (minor) wilson version that requires the wcxf wrapper package such that a flavio installation that installs the newest wilson version will also install the wcxf wrapper

Probably the first solution is the preferable one. In any case, this is only a problem if one tries to use flavio v1.1-v1.6 together with wilson 2.0 without having wcxf installed.

emcee sample generator creates duplicates?

I found that the emcee wrapper in flavio does not really generate 10 * (dimension) * steps distinct samples, because between 2 and 3 duplicates of the walker positions are generated. I checked this with the simple data generated in section "4. Fits" of the flavio tutorial on Bayesian fits, where I also found that for some points with the same position the scan.mc.lnprobability instance returns different probabilities.
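For context, a toy sketch (not flavio's emcee wrapper): duplicates are expected in any Metropolis-type chain, because a walker whose proposal is rejected stays put, and the repeated position is a genuine, correctly weighted sample. A stand-alone Metropolis sampler on a 1D Gaussian shows a sizeable duplicate fraction:

```python
import math
import random

# Toy Metropolis chain on a standard normal: whenever a proposal is
# rejected, the current position is recorded again, so duplicated
# samples are a feature of the algorithm, not a bug.
random.seed(0)
x, chain = 0.0, []
for _ in range(2000):
    prop = x + random.gauss(0.0, 1.0)
    # accept with probability min(1, p(prop)/p(x)) for p(x) ~ exp(-x^2/2)
    if random.random() < math.exp(-0.5 * (prop ** 2 - x ** 2)):
        x = prop
    chain.append(x)

dup_fraction = 1 - len(set(chain)) / len(chain)
print(0.0 < dup_fraction < 1.0)  # True: a sizeable fraction of repeats
```

The reported mismatch in lnprobability for identical positions is a separate question that this sketch does not address.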

Measurements for RK*.

@peterstangl @DavidMStraub

my_obs = ['(<Rmue>(B0->K*ll), q2min=1.1, q2max=6)']
FL = FastLikelihood(name='test', observables=my_obs)
FL.make_measurement(threads=4)

fails to find measurements for RK*.

line 74, in __init__ assert missing_obs == set(), "No measurement found for the observables: " + str(missing_obs) AssertionError: No measurement found for the observables: {'(<Rmue>(B0->K*ll), q2min=1.1, q2max=6)'}

Please have a look.
Thanks
