Comments (9)
Hi @armoutihansen! My understanding is that your goal is to use xlogit to estimate a Mixed Logit model and select random parameters for some of the coefficients in your utility specification. Let me know if this utility specification is valid, and I can further guide you on how to estimate it in xlogit.
Hi @arteagac! Thanks a lot for the quick response.
That is indeed correct. The goal would be to estimate the parameters in the first specification as well as those in the second. Furthermore, for both of these, the goal would also be to estimate the choice sensitivity/scale parameter.
Hi @armoutihansen. Great! It seems that xlogit can indeed help you with this estimation. You simply need to prepare a dataset that pre-computes x1, x2, x3, and x4, and then use xlogit to estimate the model (see the estimation examples here). If the examples are not clear or if you need any further help, please do not hesitate to let me know.

For the sensitivity/scale parameter, xlogit also supports this type of estimation. For this, you simply need to pass scale_factor as an argument to the fit function.
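A minimal sketch of such a call might look like the following; the DataFrame df_long and its columns (choice, alt, id, x1 to x4, and pi for the scale term) are assumptions based on names that appear later in this thread, not part of xlogit itself.

```python
# Minimal sketch, not verbatim from this thread: df_long and its columns
# (choice, alt, id, x1..x4, pi) are assumptions based on later comments.
import pandas as pd
from xlogit import MultinomialLogit

df_long = pd.read_csv("choices_long.csv")   # hypothetical long-format dataset
varnames = ["x1", "x2", "x3", "x4"]         # pre-computed utility components

model = MultinomialLogit()
model.fit(X=df_long[varnames],              # design matrix in long format
          y=df_long["choice"],              # 1 for the chosen alternative, else 0
          varnames=varnames,
          alts=df_long["alt"],              # alternative id of each row
          ids=df_long["id"],                # choice-situation id
          scale_factor=df_long["pi"])       # variable entering the scale term
model.summary()
```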
Hi @arteagac. Thank you very much for the help!
For the standard (multinomial) logit, I managed to reproduce the results of Bruhin et al. (2019):

[estimation output screenshot omitted]

In the output above, I passed scale_factor=-df_long['pi']. I also tried scale_factor=df_long['pi'] to add instead of subtract, but this returned different parameter estimates and a much lower loglik. Do you know how to deal with this? Also: it is not (yet) possible to cluster the standard errors, right?
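For reference, a minimal sketch of this comparison, under the same df_long and varnames assumptions as above:

```python
# Sketch: refit with both sign conventions for the scale term and compare.
from xlogit import MultinomialLogit

varnames = ["x1", "x2", "x3", "x4"]         # as in the sketch above
for label, sign in [("subtract (-pi)", -1), ("add (+pi)", 1)]:
    model = MultinomialLogit()
    model.fit(X=df_long[varnames], y=df_long["choice"], varnames=varnames,
              alts=df_long["alt"], ids=df_long["id"],
              scale_factor=sign * df_long["pi"])
    print(label)
    model.summary()                         # compare the reported log-likelihoods
```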
For the mixed logit, I did not manage to obtain such encouraging results:

[estimation output screenshot omitted]

It is not clear to me why the loglik is so much lower than that of the multinomial logit. Also, if I omit the scale factor, I obtain a loglik that is almost half as low. I am wondering whether this is due to the scale factor not being random in the current estimation. I saw that this point was discussed in your WTP issue, but I couldn't find any reference to randomising the scale factor in the documentation. Is it correct that it is not (yet) possible to let the scale factor be random?
Again, thank you very much for the help!
Hi @armoutihansen, I am not sure what might be causing the flip in signs compared to Bruhin et al., but I assume it might be some minor issue when you pre-processed x1, x2, x3, and x4. Make sure you did not flip any operations during the data preparation. Regarding clustered errors, unfortunately xlogit still does not support those.
Regarding your mixed logit model, I think something is off because the log-likelihood is extremely low. This is a potential symptom of non-convergence, which is unsurprising for this type of non-linear, WTP-like model. You are right, xlogit still does not support a random scale factor, but I think this might not be the cause of the issue. I would advise running multiple estimations from different starting points by passing them to the init_coeff argument of the fit function. Also, if a random scale_factor is critical for your case, perhaps you can take a look at the logitr R package (https://jhelvy.github.io/logitr/), which supports this feature.
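A hedged sketch of such a multi-start loop, reusing the assumed df_long from before, with a placeholder random-parameter specification and assuming the fitted model exposes its final log-likelihood as model.loglikelihood:

```python
# Sketch of a multi-start search over init_coeff; the random specification
# and coefficient count are placeholders and must match the actual model.
import numpy as np
from xlogit import MixedLogit

varnames = ["x1", "x2", "x3", "x4"]
randvars = {"x1": "n", "x2": "n"}           # placeholder random-parameter spec
n_coeff = len(varnames) + len(randvars)     # one coefficient per variable plus
                                            # one spread per random variable
best = None
for seed in range(10):
    start = np.random.default_rng(seed).normal(0, 0.5, size=n_coeff)
    model = MixedLogit()
    model.fit(X=df_long[varnames], y=df_long["choice"], varnames=varnames,
              alts=df_long["alt"], ids=df_long["id"],
              randvars=randvars, n_draws=600,
              init_coeff=start)
    # model.loglikelihood is assumed to hold the final log-likelihood
    if best is None or model.loglikelihood > best.loglikelihood:
        best = model
best.summary()
```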
Okay. Thanks a lot! I will give logitr a try for the clustered standard errors and random scale factor.
Sure, please let me know how it goes.
Hi @arteagac, I just wanted to quickly update you on my progress.
First of all, I realised that I do not need to specify my model in WTP space in order to estimate the scale parameter. In particular, I can just estimate the model in preference space and then multiply the resulting coefficients to recover the parameters of interest. With logitr I obtain:

Log-likelihood: -3,819.24

In the outputs above, self is specified as tn (truncated normal), but perhaps I have misunderstood how the truncated normal is used? If I specify it as log-normally distributed, I am far from convergence with a log-likelihood of approximately -50,000.
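A sketch of how that truncated-normal specification might be written in xlogit; self and other are assumed variable names (payoffs to oneself and to the other person, following Bruhin et al.), and the panel column is a placeholder:

```python
# Sketch: coefficient on 'self' drawn from a truncated normal ("tn");
# 'self'/'other' and the 'person' panel column are assumed names.
from xlogit import MixedLogit

pref_vars = ["self", "other"]
model = MixedLogit()
model.fit(X=df_long[pref_vars], y=df_long["choice"], varnames=pref_vars,
          alts=df_long["alt"], ids=df_long["id"],
          panels=df_long["person"],         # repeated choices per person
          randvars={"self": "tn"},          # switch to "ln" for log-normal
          n_draws=1000)
model.summary()
```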
Hi @armoutihansen, Thanks a lot for the update. Your feedback helps me to keep improving xlogit. I took a look at logitr's censored normal implementation and it seems that it is the same as xlogit's truncated normal (I will double-check whether it is better to rename tn as cn in xlogit to keep consistency with logitr). Technically, both should provide the same results, but I am not sure why xlogit fails to converge. I know logitr uses the L-BFGS-B optimization routine by default, whereas xlogit uses BFGS by default. Perhaps you can try setting xlogit to use L-BFGS-B by using the optim_method='L-BFGS-B' parameter in the fit method. L-BFGS-B is more robust, so perhaps this can help with the convergence issue.
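Concretely, the optimizer switch might look like this sketch, reusing the assumed names from the previous snippet:

```python
# Same sketch as before, but forcing the suggested optimizer.
from xlogit import MixedLogit

pref_vars = ["self", "other"]               # assumed names, as above
model = MixedLogit()
model.fit(X=df_long[pref_vars], y=df_long["choice"], varnames=pref_vars,
          alts=df_long["alt"], ids=df_long["id"],
          panels=df_long["person"],
          randvars={"self": "tn"}, n_draws=1000,
          optim_method="L-BFGS-B")          # xlogit's default is "BFGS"
model.summary()
```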