

genproductions's People

Contributors

agrohsje, bendavid, carolhungwt, ckmackay, covarell, danbarto, davidsheffield, dnash86, efeyazgan, farrah-simpson, gourangakole, govoni, helee, hroskes, irenezoi, jacobchesslo, jaylawhorn, jfernan2, kdlong, khurtado, menglu21, mlizzo, mseidel42, perrozzi, qliphy, saptaparna, sihyunjeon, smdogra, syuvivida, vciulli


genproductions's Issues

CMSConnect in MadGraph v2.5.5

Port the changes in the master branch that allow gridpack generation with CMSConnect to the 2.5 branch.

@khurtado, any projections on this? I think it would be very useful to have for testing the Z/Wgamma samples which are very resource intensive.

[2017 MC v2] Finalize pdf weights for 2017

These would be useful for alpha_s extraction studies.

The PDF sets are:

322500 | NNPDF31_nnlo_as_0108
322700 | NNPDF31_nnlo_as_0110
322900 | NNPDF31_nnlo_as_0112
323100 | NNPDF31_nnlo_as_0114
323300 | NNPDF31_nnlo_as_0117
323500 | NNPDF31_nnlo_as_0119
323700 | NNPDF31_nnlo_as_0122
323900 | NNPDF31_nnlo_as_0124

Each set includes the full uncertainties (101 replicas), but it would be sufficient to have just the central member of each (i.e. 8 additional weights).

[mg] Condor support at lxplus

The condor cluster interface in MadGraph should be updated to work with condor on lxplus. This is currently under investigation; a few known issues are documented here for better bookkeeping.

@vciulli @bendavid @qili @perrozzi may have further comments or suggestions.

  1. In version 2.5.4, there seems to be a bug where the command "set cluster_type condor" is not properly applied and the variable is left unset. The reason for this should be investigated more carefully.

  2. MadGraph manages jobs by text-parsing the output of condor_q. This causes problems with the newest version of condor, which is installed on lxplus, because the output format of condor_q has changed. The old output can be obtained with "condor_q -nobatch"; however, that option does not exist in the old version, so the condor version would have to be detected first in order to decide which command to use. This is the simplest solution, but a bit dirty, since condor_q is really designed to give human-readable output. A better solution is to query specific ClassAd attributes, for example "condor_q -af JobStatus" to get the status, but this involves breaking into the core code a bit more. It could possibly be implemented as a plugin (see the sketch after this list).

  3. lxplus asks jobs to advertise their runtime for a queue-like implementation. This requires runtime-specific information to be passed to the condor configuration files, which are embedded in the code and not normally accessible. The easiest way I see around this is to use the "cluster_queue" parameter in the run card, though it will require modifying the way this parameter is used. It should be fairly straightforward, however.
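
For reference, a minimal Python sketch of the ClassAd-based approach (the helper names are made up; this is not MadGraph's actual cluster interface):

# Sketch: query HTCondor through ClassAd attributes instead of parsing the
# human-readable condor_q table; the version check decides which path to take.
import subprocess

def condor_version():
    """Return the HTCondor version as a tuple, e.g. (8, 5, 8)."""
    out = subprocess.check_output(["condor_version"]).decode()
    # the first line looks like: "$CondorVersion: 8.5.8 Dec 13 2016 ..."
    return tuple(int(x) for x in out.split()[1].split("."))

def job_statuses(cluster_id):
    """Return the integer JobStatus codes of all jobs in the given cluster."""
    out = subprocess.check_output(
        ["condor_q", str(cluster_id), "-af", "JobStatus"]).decode()
    return [int(s) for s in out.split()]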

[mg] best way to distribute the condor python bindings for mg5_amc@nlo

Hello,
During @LoRusso's tt012j_5f_ckm_NLO_FXFX test, I noticed that most madgraph jobs in that test run ran for less than 8 hours, with a very few going up to 24-35 hours.

This can be problematic when submitting to the Global Pool, because jobs are matched to currently running pilot jobs in the pool (pilots live for up to 48 hours and multiple real jobs run inside them), and you need to say in advance how long a job is going to last at most (otherwise it can be evicted when the pilot it is running in dies). The default in CMS Connect for this is 8 hours, which means a job can match pilots that have been alive for anywhere between 0 and 40 hours. Increasing the number to e.g. 47 hours would mean a job can only match very new pilots, i.e. fewer resources. For the tt012j_5f_ckm_NLO_FXFX test runs, one of the jobs ran 5 times on 5 different machines (the first 4 attempts were evicted after 8 to 24 hours) before completing, which translated into more than 120 hours of wall-clock time for that job since it started.

To fix this, I am thinking of implementing a list of “promised times” for a job, something like: MaxWallTimeMins = [ 8*60, 24*60, 45*60 ]. This means that if a job gets evicted after 8 hours, the next attempt asks for 24 hours and the one after that for 45 hours. The numbers in the list could be changed by the user, with the above as the defaults.
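
A minimal Python sketch of the idea (the attribute handling is simplified; the real implementation would live in the CMS Connect submission tools):

# Sketch: promised walltimes (in minutes) to advertise, attempt by attempt.
PROMISED_WALLTIMES_MINS = [8 * 60, 24 * 60, 45 * 60]

def next_walltime(previous_starts):
    """Return the MaxWallTimeMins to advertise for the next attempt."""
    idx = min(previous_starts, len(PROMISED_WALLTIMES_MINS) - 1)
    return PROMISED_WALLTIMES_MINS[idx]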

Do you think that is reasonable? Otherwise, is there a way to tell in advance whether some particular madgraph jobs are going to take much longer than the others, so that we can set bigger times for those?

[mg26x] cmsgrid_final.lhe can exist for madgraph gridpacks even if some steps fail at runtime

Observed in mg26x branch with systematics weights step, but possibly affecting other additional steps as well (madspin, model reweighting, etc).

This could lead to silent failures of part of the chain, leaving an inhomogeneous set of events in the dataset.

The gridpack script must not produce cmsgrid_final.lhe if ANY configured processing step fails, in order to guarantee that the CMSSW job fails in this case.
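
A minimal sketch of the intended behaviour (Python pseudo-driver; the step commands are placeholders, and the real logic lives in the runcmsgrid scripts):

# Sketch: only keep cmsgrid_final.lhe if every configured step succeeded.
import os, subprocess, sys

STEPS = [["./run_madspin.sh"], ["./run_systematics.sh"]]  # placeholder step commands

def run_chain(final_lhe="cmsgrid_final.lhe"):
    for cmd in STEPS:
        if subprocess.call(cmd) != 0:
            if os.path.exists(final_lhe):
                os.remove(final_lhe)  # never leave a partial final LHE behind
            sys.exit(1)  # make the CMSSW job fail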

merge powheg_gcc530 branch into master

With the new workflow explicitly sourcing scram_arch and cmssw from runcmsgrid.sh
we can move the production to powheg gcc530 without problems.
We can clone the current master branch to keep a branch, for instance named powheg_gcc48x, compatible with gcc480 for backward compatibility.

[mg] Madgraph jobs longer than 48 hours and Global Pool limits

Hello,

Making this issue to keep it in mind for the near future.
I am not aware of any workflow with single jobs going over 48 hours on average at this point, but I am creating this issue to remind people that this is the hard limit of the Global Pool, and hence of CMS Connect.

If this is an issue now, let me know. If you think this will become an issue soon, let me know. There are two approaches in case this becomes a problem:

  • Reduce the walltime by decreasing the number of events per job in the madgraph cards
  • Coordinate with the Submission Infrastructure team to provide special glideins with lifetimes longer than 48 hours (e.g. 72 hours)

If the first is not hard to do, I would rather go with that option, but I will need to know if the second one becomes necessary.

Adding a new administrator

Is it possible to add Josh as an administrator of genproductions?
Maybe I can do it myself, but I don't know how...

Cheers,
Vitaliano

Possible problem with gridpack due to tarring

There was another issue reported with the Dalitz gridpacks.
See the HN post. There seem to be errors from tar, which cannot find some files.
I put one log file from the gridpack here: /afs/cern.ch/user/a/andrey/public/HiggsDalitz/gridpack_generation.log for you to have a look (update: search the file for the word "fail").

In principle this does not seem to affect the event production with runcmsgrid.sh, so perhaps it is not a problem. We just need confirmation.

Thanks

Madgraph issues with condor @CMSConnect

Hello,

@lorusso7 has been testing running gridpacks at CMS Connect and noticed an issue related to the cluster.f NLO instance in madgraph:

The issue is triggered when running tt012j_5f_ckm_NLO_FXFX with the Madgraph version used on the master branch. The gridpack gets stuck on the condor worker node in process P1_gg_ttxg/GF14, with the following output:

nFKSprocess: 10. Lower bound for tau is (taking resonances into account) 0.79264E-03  0.36600E+03  0.13000E+05
 bpower is   3.0000000000000000
 cluster.f: Error. Invalid combination.
 error
 ERROR in setclscales izero
Time in seconds: 0

That error appears to be triggered in this line:

https://github.com/bendavid/MadGraph5_aMC-NLO/blob/07621fd7bc806503cf0cbb197762aac421233956/Template/NLO/SubProcesses/cluster.f#L688

He was able to run this workflow fine on LSF, so this might be related to condor (in which case it should also fail on lxplus using condor).
@lorusso7 pointed out that there have been issues in the past related to cluster_queue that were already worked around. Looking into his tt012j_5f_ckm_NLO_FXFX_gridpack/work/gridpack/mgbasedir/input/mg5_configuration.txt file, the cluster queue is properly set to None for condor.

Any ideas of what the issue is or how to better debug this?

# MG5 MAIN DIRECTORY
mg5_path = /home/lorusso/cms_connect_kenyi/genproductions/bin/MadGraph5_aMCatNLO/tt012j_5f_ckm_NLO_FXFX/tt012j_5f_ckm_NLO_FXFX_gridpack/work/MG5_aMC_v2_4_2
cluster_queue = None
cluster_temp_path = /home/lorusso/cms_connect_kenyi/genproductions/bin/MadGraph5_aMCatNLO

Best,
Kenyi

[mg and LHEExternalProducer] Number of LHE events produced is lower than expected

According to this thread https://hypernews.cern.ch/HyperNews/CMS/get/prep-ops/4328/1/1.html
there are mg5_amc workflows that produce LHE files with fewer events than they are required to produce and than CMSSW expects to find.
The issue is not flagged with a custom error, so the exit code is not distinguishable from, for instance, generator crashes. It seems to be process- and random-seed-dependent, which makes it difficult to reproduce.
The current handling by computing is to patiently chase these cases and resubmit them, hoping that they will not fail again.

After talking to @vlimant a few action items/options can be foreseen to (try to) overcome this (according to my recollection, please correct me if something is not right):

  • from the logs of the failed jobs, collect the distribution of the number of events actually produced
  • it would also be interesting to see what fraction of jobs is affected by this problem
  • in case the jobs fail with a very large number of events (for instance >10k), use the nevents passed as a parameter to the job as a maximum number of events
  • since mg5_amc@nlo doesn't crash but simply ends, check at the end of runcmsgrid.sh whether 100% of the requested events have been produced, and otherwise raise an exception with a dedicated error code (see the sketch after this list). Jobs failing with this error code can either be resubmitted a limited number of times (e.g. 5) or not resubmitted
  • another possibility is to allow GS to create a lower number of events without raising an exception. This option should be evaluated carefully because it could introduce unwanted consequences
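
A minimal sketch of the check proposed in the fourth bullet (assuming the produced events can be counted from the <event> blocks of the LHE file; the dedicated exit code is a placeholder):

# Sketch: exit with a dedicated code if fewer events than requested were written.
import sys

def check_event_count(lhe_path, nevents_requested, exit_code=42):  # placeholder exit code
    with open(lhe_path) as lhe:
        produced = sum(1 for line in lhe if line.lstrip().startswith("<event"))
    if produced < nevents_requested:
        print("ERROR: produced %d of %d requested events" % (produced, nevents_requested))
        sys.exit(exit_code)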

Let me know if I missed something or you have other ideas.

We can plan another dedicated discussion in one of the following pre-MCCM meetings inviting representatives from computing
(@anorkus fyi)

Problem for gridpack W+j production in Madgraph V2_5_1

Dear experts, @kdlong, @qliphy,
for the Madgraph validation tests I have tried to produce a W+jet gridpack with version 2.5.1.
The run and proc cards for the process are shown below [1].

To produce the gridpack I used the "standard" recipe (for lsf submission) described in [2], with "memoryInMBytes=30000 MB" and the 2nw queue. The CMSSW release is CMSSW_7_1_23 with ARCH=slc6_amd64_gcc481.

The gridpack output is shown in [3]: in particular, all jobs finish but the cross section is not evaluated, and so the LHE event production does not work [4].

I have produced the same gridpack with "use_syst=False", but the same problems appear.
I understand that Qiang has encountered the same issue in the gridpack generation.
Do you have any suggestions to fix the problem?
Should I produce the same gridpack with v2.5.4 from the current mg25x branch [5]?

[1]
WJetsToLNu_HT-incl_proc_card.txt
WJetsToLNu_HT-incl_run_card.txt

[2] https://twiki.cern.ch/twiki/bin/view/CMS/QuickGuideMadGraph5aMCatNLO#Quick_tutorial_on_how_to_produce

[3] STDOUT.txt

[4] OUT_lhe.txt

[5] https://github.com/cms-sw/genproductions/blob/mg25x/bin/MadGraph5_aMCatNLO/gridpack_generation.sh#L114

[mg26x] Upgrade mg5_amc to 2.6.0

The new 2.6.0 release is out, containing, most importantly, the bias weighting functionality at NLO.
(Also interesting are the finite quark mass corrections for gluon fusion production.)

Since this is closely related to developments in the 2.5 branch (and was originally supposed to be called 2.5.6) I think we should just update the mg25x genproductions branch rather than creating a new one.

Automated testing of pull requests

In my opinion we critically need some automated tests to be run before a pull request can be accepted, so we don't repeat situations where release versions trivially fail.

Did anyone ever look at this before ( @bendavid maybe)? Could we involve the computing people to understand if it's possible to set something simple up under CMS-bot or jenkins, for example?

[mg] remove proton and jet definitions from mg_amc proc cards

This is because the proton and jet definitions are taken automatically from the model, and the PDF handles them (as the log shows).
The parsing script still checks for the proton and jet definitions; we could enforce not specifying them.

Also, the proton and jet definitions could be stripped off automatically in the import-from-pre2017 script.
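
A minimal sketch of such a stripping step (plain-text filtering; this is not the actual parsing or import script):

# Sketch: drop explicit proton/jet multiparticle definitions from a proc card.
def strip_p_j_definitions(card_text):
    kept = []
    for line in card_text.splitlines():
        token = " ".join(line.split()).lower()  # normalize whitespace
        if token.startswith("define p =") or token.startswith("define j ="):
            continue  # let the model (and PDF) provide these definitions
        kept.append(line)
    return "\n".join(kept)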

Update MadGraph examples to 2017 PDF recommendation

Implement list of PDF sets in [1] into example cards and test performance.

For NLO MadGraph: Change this line to appropriate LHAPDF numbers

https://github.com/cms-sw/genproductions/blob/mg25x/bin/MadGraph5_aMCatNLO/cards/examples/wplustest_5f_NLO/wplustest_5f_NLO_run_card.dat#L65

For LO MadGraph: Change the calls in Syscalc/update to the new "systematics" tool.

https://github.com/cms-sw/genproductions/blob/mg25x/bin/MadGraph5_aMCatNLO/runcmsgrid_LO.sh#L94-L120

https://cp3.irmp.ucl.ac.be/projects/madgraph/wiki/Systematics

Update MadGraph v2.5.5 systematics (PDFs and scales including functional form)

Implement the list of PDFs [1] in MadGraph v2.5.5 using the "systematics" Python module.

This is in principle already tested and working (pull request to come shortly), but there are a few issues worth discussing: @perrozzi @efeyazgan @vciulli @qliphy @bendavid

  1. It's also possible to include weights for different functional forms of the dynamic scale. The default MadGraph scale choice is not simple, so having values like H_{T}/2 would be useful in some cases. Do we want to include these?

  2. In the current version, the weight ids start from 1001 and count up by one, without differentiating between different types of weights. In the old samples, PDF weights started at 2000, and we extended this to the 3000s and 4000s for the POWHEG samples. This has the nice feature that you can easily tell where one type of weight ends and the next begins. I asked about such a feature [3] and the authors are receptive. Do others think it is worth trying to add this as a patch once it is implemented? (A sketch of such a renumbering is given after the references below.)

[1] https://docs.google.com/spreadsheets/d/1YjhruLPvmwqoA1YZPgmU5H9T3ELnYnriHgO9DrJQoxk/edit#gid=0
[2] https://cp3.irmp.ucl.ac.be/projects/madgraph/wiki/Systematics
[3] https://answers.launchpad.net/mg5amcnlo/+question/654417
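
On point 2 above, a minimal sketch of what a post-processing renumbering could look like (the offsets, the id pattern, and the grouping input are assumptions, not what MadGraph writes today):

# Sketch: renumber LHE weight ids so that each group of weights starts at its own offset.
import re

GROUP_OFFSETS = {"scale": 1000, "pdf": 2000}  # assumed convention

def renumber_weight_ids(header_text, group_of_id):
    """group_of_id maps an original id string (e.g. '1001') to a group name."""
    counters = dict.fromkeys(GROUP_OFFSETS, 0)
    def replace(match):
        group = group_of_id[match.group(1)]
        counters[group] += 1
        return 'id="%d"' % (GROUP_OFFSETS[group] + counters[group])
    return re.sub(r'id="(\d+)"', replace, header_text)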

Possible bug in gridpack maker script.

Hi,
Could someone have a look at the problem reported in [1, 2]?
It looks like a bug in the LHE file creation inside gridpacks, which prevents the file from being read in CMSSW.

Could you confirm whether it is in fact a bug or not? If it is, please suggest a patch that would allow producing the samples via McM (without re-creating the gridpack).

Thanks,
Andrey

Make an examples directory

I would like to suggest creating another sub-directory with examples for a few of the most popular cases. It is very difficult to navigate the existing directories because there are too many fragments in there.
I think that if you just copy the most relevant configs (generic LHE hadronization with and without jet matching, pythia8 process production, etc.) and link them from the tutorial pages, it would be easier to follow.

Best,
Andrey

[powheg]LHAPDF::RangeError

Concerning diboson (ZZ, WZ) production (ZZTo2L2Nu/ZZTo4L, WZTo3LNu) with powheg, has anyone already tried the gridpack creation with the NNPDF31 set?

When I tested the 5 TeV cards [1] with the 260000 --> 306000 substitution, I got the following exception in step 2:

"
terminate called after throwing an instance of 'LHAPDF::RangeError'
what(): Unphysical Q2 given: -nan
"

Can someone reproduce the error? Can this somehow be related to the mllmin cut?

Strangely (?) for WW it works.

Regards and thanks

[1] https://github.com/cms-sw/genproductions/tree/powheg_gcc48x/bin/Powheg/production/5TeV

pull request #503 for run_pwg.sh

Hi, we've made some modifications to fix some bugs and support the latest powheg source. Please accept the pull request #503. Thank you very much.

Wrong gridpack directory path on eos?

While trying to upload some new gridpacks to cvmfs I noticed that there is a directory
/eos/cms/store/group/phys_generator/gridpacks
which is not mapped to cvmfs.
Is that of any use?
If not maybe it should be removed to avoid confusion.
The right directory for uploading to cvmfs is
/eos/cms/store/group/phys_generator/cvmfs/gridpacks

include all the datacards inside the mg5_amc@nlo gridpack

All the datacards used to create a mg5_amc@nlo gridpack should be shipped in the final gridpack in a clear and easy-to-identify way, for instance in the main directory.
So far this is not the case: some cards are included in process/Cards, but they are not easy to retrieve or understand.

Madgraph 2.5.x open issues

  • we need to move to either 2.5.6 once it's available, or else 2.5.5 plus one additional patch affecting Vgamma type processes at NLO

  • LO systematic weights can now be handled directly by madgraph, so that syscalc is no longer needed. We should remove it completely from the scripts and make sure the built-in functionality is working correctly (consistent order of weights between LO and NLO, reasonable naming of weights in the header, reasonable values/behaviour for the weights)

  • once condor at CERN is working correctly (without relying on either AFS or EOS) we can also deprecate LSF support and the related patches.

  • add example for LO weight biasing (SUSY group has started looking at this)

cleangridmore.sh breaks MadSpin

Hello,

I'm producing gridpacks using MadGraph+MadSpin. After I produce a gridpack, if I try to generate events with it, MadSpin complains about missing fortran source files in mgbasedir/models/template_files/fortran/, mgbasedir/aloha/template_files/, and possibly other folders.
If I remove the line "find ./ -name "*.f" | xargs -r rm" in cleangridmore.sh and re-produce the gridpack, then the generation works correctly. However, this might negatively impact the size of the gridpack when a large number of diagrams needs to be computed.
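
For reference, a possible narrower cleanup sketched in Python (the directories to keep are an assumption based on the missing files above; the real script uses find/xargs):

# Sketch: remove generated Fortran sources but keep the template files MadSpin needs.
import os

KEEP_DIRS = ("template_files",)  # assumption: folders whose *.f files must survive

for root, dirs, files in os.walk("."):
    if any(part in KEEP_DIRS for part in root.split(os.sep)):
        continue
    for name in files:
        if name.endswith(".f"):
            os.remove(os.path.join(root, name))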

Cheers,
Emanuele

Running gridpack_generation.sh in interactive mode

What is currently the recommended way of using the gridpack_generation.sh script locally to produce a gridpack? For initial tests it is nice to just run it in a local area so that you don't have to wait for lxbatch to run your job (which can take days lately).

When I try to run with

./gridpack_generation.sh processname cardsdir

it does not work because there is a check for unused variables.

When I remove that line, the generation proceeds as expected.
It would be good if there was a supported way to create the gridpack locally without having to modify the script itself.

gridpack generation problem while using CMSConnect

Hello experts,
The card I am using:
https://github.com/cms-sw/genproductions/tree/master/bin/MadGraph5_aMCatNLO/cards/production/pre2017/14TeV/vh012j_5f_NLO_FXFX_M125_14TeV_VToAll

I see the following error:
++ tee /srv/vh012j_5f_NLO_FXFX_M125_14TeV_VToAll.log
gridpack_generation.sh: line 105: /dev/fd/62: No such file or directory
++ tee /srv/vh012j_5f_NLO_FXFX_M125_14TeV_VToAll.log

genproductions master up to PR #1405

I have used a command like the one below:
nohup ./submit_cmsconnect_gridpack_generation.sh > somefile.debug 2>&1 &

[mg] default lhaid list too long for NLO mode

In the NLO 5F case, we put a list of 37 IDs into the run_card by default. It seems madgraph cannot handle more than 30 (error message below). In the NLO 4F case we only give 27, so there is no problem there.
Does this work for anyone else?

Error message:
InvalidRunCard : Length of list for 'lhaid' too long: max is 30.

sed "s/\$DEFAULT_PDF_SETS/306000,322500,322700,322900,323100,323300,323500,323700,323900,305800,13000,13065,13069,13100,13163,13167,13200,25200,25300,25000,42780,90200,91200,90400,91400,61100,61130,61200,61230,13400,82200,292200,292600,315000,315200,262000,263000/" $CARDSDIR/${name}_run_card.dat > ./Cards/run_card.dat

[mg] Increasing priorities on your T2, when submitting from CMS Connect

Hello,

Just so you know, if you have a local allocation on a CMS Tier 2/3 and you would like to give priority to your jobs over other CMS users when using CMS Connect, this is already possible within the Global Pool, but it requires coordinating with your Site administrator.

The steps are the following:

  1. Look for your Site in the DB and contact your Site Admin (or Site executive): https://cmsweb.cern.ch/sitedb/prod/sites/
  2. Look for your grid/CERN username: https://cmsweb.cern.ch/sitedb/prod/people
  3. Ask your Site admin to add your username to the SITECONF local priority group
    https://twiki.cern.ch/twiki/bin/view/CMSPublic/CompOpsCustomizedGlideins#local_group

The last step requires your admin to be able to modify the SITECONF in the cern gitlab: https://gitlab.cern.ch/SITECONF , as mentioned in the twiki.

Once this is done, allow a few hours for the changes to propagate into CVMFS. That's it, no changes to genproductions are needed. Additionally, this will give you extra priority on CRAB jobs too.

@kdlong : Can you try these instructions out and give some feedback on the new performance?
For the test, you can type:

source /etc/ciconnect/set_condor_sites.sh T2_US_Wisconsin

before submitting your jobs on CMS connect to submit only to your T2 Site.
After that, we can add the instructions to the twiki if desired.

Verify correct installation/usage of ninja and collier for mg5_amc in 25x branch

Since the Collier library has been added to recent madgraph versions, we should make sure it is being correctly installed and used (it should be done automatically, but there were some technical issues which prevented this from working with Ninja in our scripts in the past). There should be some performance improvement associated with this.

Validate Zg patch from 2.5.6 on top of 2.5.5

This depends on #1218, but can be worked on simultaneously. See initial discussion in [1]. Outline of steps necessary:

  1. Start from the mg25x branch [2]. Currently this is version 2.5.4; #1218 will move it to 2.5.5, and then this should be added on top of it.
  2. Additional patches in [3] should be added into the "MadGraph5_aMC@NLO/patches" folder following the other examples [4].
  3. Test that the patches are applied correctly. If you look at the generation script, you'll see that the tarball is downloaded and the patches are applied in one of the first steps. I'd put an exit command after this, then open up the files you tried to patch and look around to make sure things are working [6]. There will also be an output message saying whether the patches succeeded or not.
  4. Generate the new gridpack in the usual way. Consult with the Generators Validation group about generating events for validation and how to modify the usual workflow to test the specific issues of this request.

@vciulli @qliphy

[1] #1200
[2] https://github.com/cms-sw/genproductions/tree/mg25x
[3] http://bazaar.launchpad.net/~mg5core1/mg5amcnlo/2.5.6/revision/276
[4] https://github.com/cms-sw/genproductions/tree/mg25x/bin/MadGraph5_aMCatNLO/patches
[5] https://github.com/cms-sw/genproductions/blob/mg25x/bin/MadGraph5_aMCatNLO/gridpack_generation.sh#L173
[6] https://github.com/cms-sw/genproductions/blob/mg25x/bin/MadGraph5_aMCatNLO/gridpack_generation.sh#L173

Slow uscms.org server when using CMSConnect

@khurtado The one frustration I have with CMSConnect is that the uscms.org server is incredibly slow. If I'm just running a single job that I already have the cards for, it's not so bad, but if I want to do any sort of development it's prohibitively slow. Is this only a problem because I'm accessing it from Europe, or is the machine always like this? Is there anything that can be done, or any way to submit from another machine?

[mg] Request for upload of model file

Hi,
could you please upload this model file to the central page:

/afs/cern.ch/work/a/aalbert/public/share/2017-11-13_dmwg_model/Pseudoscalar_2HDM.tgz

It is the 2HDM + pseudoscalar model that is currently being studied in the LHC DMWG.

Thank you

Andreas

Update POWHEG scripts to include all recommended 2017 PDFs

Update the POWHEG scripts to include weights for all PDF sets recommended for 2017. The list can be seen in [1]. The lines in [2] should be modified to call the full list of LHAPDF set numbers. Since the list is quite long, I would strongly recommend defining a function which takes the central set ID and the number of entries, rather than copying the code 30 times.
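
A minimal sketch of the kind of helper meant here (in Python for illustration; the real script is bash, and the weight-id convention shown is only an assumption):

# Sketch: expand a central LHAPDF ID plus member count into (weight id, lhaid) pairs.
def pdf_weight_block(central_lhaid, n_members, first_weight_id):
    """e.g. pdf_weight_block(306000, 103, 2000) -> [(2000, 306000), (2001, 306001), ...]"""
    return [(first_weight_id + i, central_lhaid + i) for i in range(n_members)]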

@covarell @yuanchao have you started to look at this yet? Do you think it can be addressed in a short time scale?

[1] https://docs.google.com/spreadsheets/d/1YjhruLPvmwqoA1YZPgmU5H9T3ELnYnriHgO9DrJQoxk/edit#gid=0
[2] https://github.com/cms-sw/genproductions/blob/master/bin/Powheg/runcmsgrid_powheg.sh#L193-L366

small bug when running with madspin in gridpack_generation.sh

[mg] gridpack production on condor, T2 UCSD

Hi,
we are encountering a problem when trying to produce gridpacks using the condor cluster on T2 UCSD. The condor jobs of the SubProcesses fail immediately, and the master job fails with the error:

Error when reading /home/users/dspitzba/gen/genproductions/bin/MadGraph5_aMCatNLO/wplustest_4f_LO/wplustest_4f_LO_gridpack/work/processtmp/SubProcesses/P0_qq_wp_wp_lvl/G1/results.dat
Command "generate_events pilotrun" interrupted with error:
ValueError : need more than 5 values to unpack

The log of the subprocesses (e.g. SubProcesses/P0_qq_wp_wp_lvl/G1/log.txt) reads

../madevent: error while loading shared libraries: libgfortran.so.3: cannot open shared object file: No such file or directory

I am using CMSSW 7_1_28 and the scram arch slc6_amd64_gcc481.
Is there a way to solve this problem within the gridpack submission script, or should we rather contact the T2 support?

[upload of datacards] upload only example datacard and script to produce others

I am wondering whether we need to fill genproductions with hundreds (thousands) of datacards that often differ by a single number. This makes any real check before merging the PR basically impossible.
What about asking to upload, whenever possible, only one example datacard and the script to produce the others?
In the end we don't link genproductions; it's just for reference/checks.

MadGraph LO ttZ sample includes ttG and ttH (to leptons)

The process lines here:

https://github.com/cms-sw/genproductions/blob/master/bin/MadGraph5_aMCatNLO/cards/production/13TeV/TTZJets/TTZJets_5f_LO_MLM/ttZ01j_5f_proc_card.dat#L10

and here:

https://github.com/cms-sw/genproductions/blob/master/bin/MadGraph5_aMCatNLO/cards/production/13TeV/TTZJets/TTZJets_5f_LO_MLM/ttZ01j_5f_proc_card.dat#L13

should force Z production using the "t t~ z, z > ell+ ell-" syntax. As written, gamma and Higgs production are included in the sample.
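
For illustration (not the exact card content), the LO process definition would then look something like

generate p p > t t~ z, z > ell+ ell- @0
add process p p > t t~ z j, z > ell+ ell- @1

using the card's existing multiparticle definitions for ell+ and ell-.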

The NLO proc card excludes the Higgs but not photons, so the same issue may be present there as well.

Madgraph MLM W+jets example cards

Hi,
To be consistent with the outcome of the validation for madgraph 2.4.x, the example cards for MLM W+jets should be modified to

  1. Replace the p p > w, w > l vl syntax with direct "p p > l vl" in the proc card

  2. Set the bwcutoff in the run card back to the default of 15

This should in principle be done for both the master and 25x branches; the intended card lines are illustrated below.
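
Concretely, the two changes would look something like this (illustrative lines only):

proc card:  generate p p > l vl @0        (instead of: generate p p > w, w > l vl @0)
run card:   15.0 = bwcutoff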

check on MG26x bias reweight using GEN level events

Hi @bendavid @kdlong, I have got some results on the mg26x bias reweighting. The process is
generate p p > ell+ ell- j [QCD] @0 for both no_bias and with_bias,
while the pythia settings are
processParameters = cms.vstring('JetMatching:setMad = off',
'JetMatching:scheme = 1',
'JetMatching:merge = on',
'JetMatching:jetAlgorithm = 2',
'JetMatching:etaJetMax = 999.',
'JetMatching:coneRadius = 1.',
'JetMatching:slowJetPower = 1',
'JetMatching:qCut = 30.',
'JetMatching:doFxFx = on',
'JetMatching:qCutME = 10.',
'JetMatching:nQmatch = 5',
'JetMatching:nJetMax = 2',
'TimeShower:mMaxGamma = 4.0')

The bias function is "bias_wgt = (max_ptj/1000)**3".

More detail is in the attachments.

screenshot from 2017-11-03 20-46-48
ptjet_log

dyellell1j_5f_NLO_FXFX_26_simple_bias_run_card.txt
dyellell1j_5f_NLO_FXFX_26_simple_bias_proc_card.txt
dyellell1j_5f_NLO_FXFX_26_simple_bias_cutsf.txt
SMP-RunIISummer15GS-00625_1_cfgpy.txt
