
llp_sleptons_rpv_susy's Introduction

After running the docker image with mg5_aMC (MadGraph) and following all the steps (including setting up and importing the required lhapdf sets):

1- Download the UFO model directory (using wget, curl, svn, etc...)

2- run mg5_aMC

3- import model ./RPVMSSM_UFO/RPVMSSM_UFO/

4- define q = g u c d s u~ c~ d~ s~ b t b~ t~ h01 h2 h3 h+ h-

This multiparticle will be used (via /q) to prevent mg5 from creating processes with quarks as mediators; we are interested in processes with the photon (a) and Z-boson (z) as mediators.

5- define p = g u c d s u~ c~ d~ s~ b b~

Our proton is a multiparticle consisting of gluons and up, down, strange, charm, and bottom quarks/anti-quarks.

6- generate p p > mur- mur+ /q

7- display processes

8- output [foldername]

9- For the time being, please edit this param_card.dat:

The mass (in GeV) is on line 51:

2000013 4.000000e+02 # msl2 (400 GeV)

The decay width is on line 758, which begins the decay table:

DECAY  2000013   6.500000e-15
#  BR             NDA  ID1    ID2   ...
   1               2    13  -12 

Here mur- and mur+ (PDG IDs 2000013 and -2000013) decay with a decay width of 6.5e-15 GeV (proper lifetime of 0.1 ns; see the conversion sketch after these steps) to two daughters: a muon (anti-muon for mur+) and an electron anti-neutrino (electron neutrino for mur+).

10- Edit the parameters to generate smuons with a mass of 500 GeV and a lifetime of 0.1 ns. When you change the mass of the right-handed smuons (2000013), MadGraph expects you to give the left-handed smuons (1000013) the same mass, even if we are not generating any left-handed smuons. After changing the parameters in param_card.dat, put it in the folder "Cards"; it will replace the one created by the program.

11- Use this run_card.dat by putting it in "Cards", replacing the one generated by the program.

12- launch

13- shower=Pythia8

To apply the selection cuts (after you validate your sample), go to Selection Cuts
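For reference, the conversion behind step 9 is tau = hbar / Gamma. A minimal sketch in plain Python (illustration only, independent of MadGraph):

# Convert between a decay width in GeV and a proper lifetime, via tau = hbar / Gamma
HBAR_GEV_S = 6.582119569e-25  # hbar in GeV*s

def width_to_lifetime_ns(gamma_gev):
    """Proper lifetime in ns for a total width in GeV."""
    return HBAR_GEV_S / gamma_gev * 1e9

def lifetime_ns_to_width_gev(tau_ns):
    """Total width in GeV for a proper lifetime in ns."""
    return HBAR_GEV_S / (tau_ns * 1e-9)

print(width_to_lifetime_ns(6.5e-15))   # ~0.101 ns, matching the param_card above
print(lifetime_ns_to_width_gev(0.1))   # ~6.58e-15 GeV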


llp_sleptons_rpv_susy's Issues

Getting started

(These are the same steps as in the README above.)

Turning on Pythia via TXT Problem Solved BUT New Storage Problem

Although I kept getting this screen showing shower to be off:

Screenshot 2023-10-17 at 4 18 30 PM

It turned out that Pythia8 was actually turned on, but MG5 does not show that. You have to wait, and then you will see Pythia8 running:

Screenshot 2023-10-17 at 4 19 10 PM

I ran mg5_aMC text.txt

with text.txt:

import ./RPVMSSM_UFO/RPVMSSM_UFO
 define q = g u c d s u~ c~ d~ s~ b t b~ t~ h01 h2 h3 h+ h-
 define p = g u c d s u~ c~ d~ s~ b b~
 generate p p > mur- mur+ /q
 output
 launch
 shower=Pythia8
 set nevents 20000

So everything is fine now, and has been fine all along. When I saw shower=off I would end the process (because that seemed like the reasonable thing to do, so as not to waste time and storage), but this time I let it run to the end to see what happens, and it turns out it is actually working.

Anyway, I am getting a new error about available storage:

[Errno 28] No space left on device: '/home/alaa/PROC_RPVMSSM_UFO_1/run_01_tag_1_debug.log'

Screenshot 2023-10-17 at 4 31 30 PM

I deleted all the previous runs, but I can see that the available disk space is only about 12 GB:

Screenshot 2023-10-17 at 4 32 53 PM

Johhny Lawless provided a solution to this issue: simply create a working directory at "/userdata/", which has more space available.

August 29th Meeting: Efficiency Histogram & Expected Events

The binning of the histogram hist = ROOT.TH2F("hist", "hist", 10, 65, 765, 8, 0, 400) is taken from the original code used in the paper. To make the same histogram with almost the same color gradient, I used a color-picker tool:

import ROOT
import json
from array import array

# Number of points in gradient
nRGBs = 5

# Stop points at 0, 0.2, 0.3, 0.4, and 0.6
stops = array('d', [0.0, 0.2, 0.3, 0.4, 0.6])

# RGB values for the colors at the stops (scaled between 0 and 1)
red   = array('d', [236/255.0, 97/255.0, 33/255.0, 30/255.0, 21/255.0])
green = array('d', [228/255.0, 203/255.0, 191/255.0, 172/255.0, 121/255.0])
blue  = array('d', [241/255.0, 221/255.0, 212/255.0, 190/255.0, 133/255.0])

# Create the gradient color table
ROOT.TColor.CreateGradientColorTable(nRGBs, stops, red, green, blue, 255)
ROOT.gStyle.SetNumberContours(255)
hist = ROOT.TH2F("hist", "hist", 10, 65, 765, 8, 0, 400)
hist.SetStats(0)

Pt = []
d0 = []
eff = []

# Read the JSON file
file_path = "2.json"
with open(file_path, 'r') as f:
    json_data = json.load(f)
    
values_data = json_data.get('values', None)

for entry in values_data:
    Pt.append(float(entry['x'][0]['value']))
    d0.append(float(entry['x'][1]['value']))
    eff.append(float(entry['y'][0]['value']))

# Fill the histogram
for i in range(len(Pt)):
    # Adding 1 to ensure we start from bin 1
    x_bin = hist.GetXaxis().FindBin(Pt[i])
    y_bin = hist.GetYaxis().FindBin(d0[i])
    hist.SetBinContent(x_bin, y_bin, eff[i])

# Draw the histogram
c = ROOT.TCanvas("canvas", "Pt-d0 eff", 800, 600)
hist.GetXaxis().SetTitle("Pt (GeV)")
hist.GetYaxis().SetTitle("d0 (mm)")
hist.SetTitle("Muon reconstruction efficiency")
hist.Draw("COLZ")
c.Update()

19b

This is an assurance that we have populated the histogram correctly.

Applying the efficiency selection code (did not have time to make it cleaner):

Number of passed events is:  2275  event
Number of expected events:  8.255053624999999
image

Where Ntotal is the total number of generated events we started with in our MC simulation (i.e. in MadGraph).

BUT, there is an issue here.

Any muon with a momentum of ~765 GeV or more has no efficiency in the table or the histogram: the histogram has 10 x-axis (Pt) bins, and FindBin returns bin number 11 (the overflow bin) for any muon above 765 GeV.

The same happens for any empty bin in the histogram, e.g. a Pt of 592.6 GeV and a d0 of 166.5 mm.
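One way to guard the lookup is to clamp FindBin's result into the valid bin range. A minimal sketch (my own suggestion, not from the paper; whether giving >765 GeV muons the edge-bin efficiency is physically justified is a separate question):

def clamped_eff(hist, pt, d0):
    """Look up an efficiency, clamping underflow/overflow into the edge bins."""
    x_bin = min(max(hist.GetXaxis().FindBin(pt), 1), hist.GetNbinsX())
    y_bin = min(max(hist.GetYaxis().FindBin(d0), 1), hist.GetNbinsY())
    return hist.GetBinContent(x_bin, y_bin)  # still 0.0 for genuinely empty bins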

Acceptance plot/table

Hi @trholmes,

I feel dumb asking this but I am not sure I understand the smuon acceptance table/graph here:

https://www.hepdata.net/record/ins1831504?version=2&table=Smuon%20acceptance

Screenshot 2023-08-10 at 2 10 31 PM
Screenshot 2023-08-10 at 2 26 01 PM

They only sample 4 points in lifetime (log10(τ/ns) of -1.5, -0.5, 0.5, and 1.5) and only one point within each roughly 100 GeV mass range (for example they go from 325 GeV directly to 425 GeV, then to 525 GeV).

There is nothing special about those lifetimes except that they make the log graph look nice: 10^-1.5 gives a lifetime of 0.0316 ns, and 10^1.5 gives 31.62 ns.

The search is done over 0.01 ns to 100 ns, which corresponds to log lifetimes of -2 to 2. So now I think that these few points are somehow "representative" of the entire samples, i.e. I could "interpolate" the rest of the acceptance data points from them, but this is not how things usually work. I opened the csv and JSON files and they have the same data points as the table above. Are these the only lifetimes and masses they tested for acceptance?

Maybe my brain is still frying from the quals :)

Thanks!

Modifying the UFO Model Parameters. Tuesday 09/26

Last week Larry @lawrenceleejr suggested editing the UFO model parameters to automatically create the appropriate decay tables for our desired models.

I looked into write_param_card.py:


        # put at the beginning SMINPUT + MASS + DECAY
        for name in ['DECAY', 'MASS','SMINPUTS']:
            if name in all_lhablock:
                all_lhablock.remove(name)
                all_lhablock.insert(0, name)
        
        for lhablock in all_lhablock:
            self.write_block(lhablock)
            need_writing = [ param for param in all_ext_param if \
                                                     param.lhablock == lhablock]
            from functools import cmp_to_key
            need_writing.sort(key=cmp_to_key(self.order_param))
            [self.write_param(param, lhablock) for param in need_writing]
            
            if self.generic_output:
                if lhablock in ['MASS', 'DECAY']:
                    self.write_dep_param_block(lhablock)

        if self.generic_output:
            self.write_qnumber()
                               
    def write_block(self, name):
        """ write a comment for a block"""
        
        self.fsock.writelines(
        """\n###################################""" + \
        """\n## INFORMATION FOR %s""" % name.upper() +\
        """\n###################################\n"""
         )
        if name!='DECAY':
            self.fsock.write("""Block %s \n""" % name)

    def write_param(self, param, lhablock):
        
        lhacode=' '.join(['%3s' % key for key in param.lhacode])
        if lhablock != 'DECAY':
            text = """  %s %e # %s \n""" % (lhacode, complex(param.value).real, param.name ) 
        else:
            text = '''DECAY %s %e \n''' % (lhacode, complex(param.value).real)
        self.fsock.write(text) 
                    


    
    def write_dep_param_block(self, lhablock):
        import cmath
        from parameters import all_parameters
        for parameter in all_parameters:
            try:
                exec("%s = %s" % (parameter.name, parameter.value))
            except Exception:
                pass
       

which looks like it gets the decay chains from another script, decays.py:

# This file was automatically created by FeynRules 2.3.2
# Mathematica version: 10.0 for Linux x86 (64-bit) (September 9, 2014)
......

Anyway, using this model MadGraph creates param cards with empty decay tables.

The best course of action seemed to be to write a few lines of code at the end of write_param_card.py to create the decay table. I looked up the syntax for decay tables, and this paper had the standard FORTRAN format:
('DECAY',1x,I9,3x,1P,E16.8,0P,3x,'#',1x,A), (3x,1P,E16.8,0P,3x,I2,3x,N (I9,1x),2x,'#',1x,A).

With some help, I tried to follow this format and write the gluino decay from the same paper as a test, just to see if it works:

def write_decay_table(self, decay_entries):
    """Writes the decay table to the param card file."""
    for entry in decay_entries:
        pdg_number = entry['pdg']
        total_width = entry['width']
        human_readable = entry.get('human_readable', '')

        # Write the DECAY line
        decay_line = f"DECAY {pdg_number:<9} {total_width:16.8E} # {human_readable}\n"
        self.fsock.write(decay_line)

        for channel in entry['channels']:
            br = channel['br']
            nda = channel['nda']
            daughter_pdgs = channel['daughters']
            human_readable_channel = channel.get('human_readable', '')

            # Write the decay channel line
            channel_line = f"   {br:16.8E} {nda:<2} {' '.join(map(str, daughter_pdgs))}  # {human_readable_channel}\n"
            self.fsock.write(channel_line)


decay_entries = [
    {
        'pdg': 1000021,
        'width': 1.01752300e+00,
        'human_readable': 'gluino decays',
        'channels': [
            {'br': 4.18313300e-02, 'nda': 2, 'daughters': [1000001, -1], 'human_readable': '~g -> ~d_L dbar'},
            {'br': 1.55587600e-02, 'nda': 2, 'daughters': [2000001, -1], 'human_readable': '~g -> ~d_R dbar'}
        ]
    },
    # Add more entries as needed
]

# Write the decay table to the file
writer.write_decay_table(decay_entries)

But the generated MadGraph param_card.dat still had empty decay tables.

I am not sure if manipulating the decays.py script is the way to go. However, there is a way to tell MadGraph to use a specific param_card: we can easily manipulate our param card (e.g. via a script) and change/create the decay tables according to our models, then prompt MG5 to read it and generate events based on its parameters. This was my first attempt, but MG5 kept ignoring this part. This "readme" might have the right syntax to do so (a sketch of the card-editing idea follows the link):

https://github.com/A-A-Abdelhamid/DVMuReint/blob/main/RPVMSSM_UFO/example_monophi.md
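For the script route, here is a minimal sketch in Python of pre-editing the card before generation (the PDG ID, values, and file name are assumptions matching the examples above; a real version should verify the substitutions actually happened):

import re

PDG = "2000013"        # right-handed smuon (assumed target particle)
MASS_GEV = 4.0e2       # 400 GeV
WIDTH_GEV = 6.5e-15    # ~0.1 ns proper lifetime

with open("param_card.dat") as f:
    card = f.read()

# Replace the mass entry (a line starting with the PDG ID) and the total
# width on the corresponding DECAY line
card = re.sub(r"(\n\s*%s\s+)\S+" % PDG, r"\g<1>%e" % MASS_GEV, card, count=1)
card = re.sub(r"(DECAY\s+%s\s+)\S+" % PDG, r"\g<1>%e" % WIDTH_GEV, card, count=1)

with open("param_card.dat", "w") as f:
    f.write(card)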

Validating the impact parameter plot

I realized that I did not take the absolute value of d0, so I should have included negative d0 values in the plot. This is for the muons coming directly out of smuon vertices:

d0_parity

For validating d0, I plotted it vs. L_xy, along with the line: |x| = y

Figure_5

The scatter points do not cross the line, even if they look like they do. This is just a visual artifact that disappears when zooming in on the matplotlib graph:

Figure_6

Reversing the axes (because why not)

Figure_7

And zooming in to see if the d0 data points cross x = |y|:
Figure_9

Figure_10

I'm checking that L_xy > d0 by plotting only the d0 values that satisfy this condition and looking at the number of entries. (A standalone check of the geometry follows.)
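Since d0 is the component of the transverse vertex vector perpendicular to the transverse momentum direction, |d0| <= L_xy should hold exactly. A standalone sketch of the check (toy numbers, not from the sample):

import math

def d0_and_lxy(vx, vy, px, py):
    """Transverse impact parameter (straight tracks, no B field) and transverse decay length."""
    pt = math.hypot(px, py)
    d0 = vx * (py / pt) - vy * (px / pt)
    lxy = math.hypot(vx, vy)
    return d0, lxy

d0, lxy = d0_and_lxy(30.0, -12.0, 150.0, 80.0)
assert abs(d0) <= lxy  # holds for any vertex/momentum, up to floating point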

Complete decay chain issue

Using this code

import pyhepmc
import ROOT
from collections import Counter

def get_decay_products(particle, event_num):
    """
    Get the decay products of a given particle.
    """
    proper_decay_products = []
    other_particles = []

    # Check if the particle itself is a final state muon or neutrino
    if particle.status == 1 and abs(particle.pid) in [13, 12, -13, -12]:
        proper_decay_products.append((particle.pid, particle.id))
    # If the particle has an end vertex, check its descendants recursively
    elif particle.end_vertex:
        for p in particle.end_vertex.particles_out:
            if abs(p.pid) in [13, 12, -13, -12]:
                proper, other = get_decay_products(p, event_num)
                proper_decay_products.extend(proper)
                other_particles.extend(other)
            elif p.pid!=22:
                other_particles.append((p.pid, p.id, event_num))
    return proper_decay_products, other_particles

def track_decay_chain(hepmc_file):
    """
    Track the decay chain of muons and neutrinos that originate from a particle with abs pid 2000013.
    """
    proper_decay_products = []
    other_particles = []

    # Open the HepMC file
    with pyhepmc.open(hepmc_file) as f:
        # Loop over events in the file
        for i, event in enumerate(f):
            # Loop over particles in the event
            for particle in event.particles:
                # Check if the particle is a muon or neutrino produced by a decaying 2000013 or -2000013
                if abs(particle.pid) in [13, 12, -13, -12] and particle.production_vertex and any(p.pid in [2000013, -2000013] for p in particle.production_vertex.particles_in):
                    # Get the decay products of the particle
                    proper, other = get_decay_products(particle, i)
                    proper_decay_products.extend(proper)
                    other_particles.extend(other)

    # Count the frequencies of each particle PID
    proper_decay_counter = Counter([pid for pid, _ in proper_decay_products])
    other_particles_counter = Counter([pid for pid, _, _ in other_particles])

    print("Proper decay products:", proper_decay_counter)
    print("Other particles:", other_particles_counter)

track_decay_chain('tag_1_pythia8_events.hepmc')

I got the output:

Proper decay products: Counter({13: 20000, 12: 20000, -13: 20000, -12: 20000})
Other particles: Counter()

So, by excluding the photons from the chain, everything works as expected.

Now by changing elif p.pid!=22: to an else:

Proper decay products: Counter({13: 20000, 12: 20000, -13: 20000, -12: 20000})
Other particles: Counter({22: 24498})

Thus it is clear that the other unexpected particles come from photons, since the code only checks muons and neutrinos recursively but stops at any other particle after appending it to "other particles".

Events with Muons with Pt of +760 GeV

The number of events with muons with a pt of 760 GeV or more is 613 out of 20,000 events, i.e. ~3%.
After applying the cuts (before applying efficiency), the number became 209 out of 7856 events, i.e. 2.66%.

Cutflow: pass trigger and at least 2 baseline leptons selection

I have been investigating why my acceptance value of 0.5492 is off compared to the paper's 0.682 for the 400 GeV, τ = 1 ns sample by plotting distributions of pt, eta, etc., but nothing stood out as strange, so I looked into the selection-cut process itself by creating a cutflow.

I originally did NOT use a trigger selection criterion in any of the previous samples (including the very first one that I worked on for a while, with 400 GeV and a proper τ of 0.1 ns).

With NO trigger selection cut, the acceptance is 0.4115, compared to the paper value of 0.386, which makes sense (relaxing a selection cut corresponds to a higher acceptance).

Let's apply the trigger selection cut:

I generated 20,000 events with mass of 500 GeV and proper lifetime of 0.1 ns

image

By applying the first selection using this "trigger" function:

def triggered(event):
    has_valid_electron = False
    has_valid_muon = False
    electron_count = 0
    lepton_eta_count = 0  # Count of leptons with |eta| < 2.5

    for particle in event.particles:
        if particle.status != 1:  # Consider only final state particles
            continue

        eta = CalcEta(particle)
        if abs(particle.pid) == 11:  # Electron
            if particle.momentum.pt() > 160:
                has_valid_electron = True
            elif particle.momentum.pt() > 60:
                electron_count += 1

        elif abs(particle.pid) == 13:  # Muon
            if particle.momentum.pt() > 60 and abs(eta) < 1.05:
                has_valid_muon = True

        if abs(particle.pid) in [11, 13] and abs(eta) < 2.5:
            lepton_eta_count += 1

    # Check if the event satisfies the original conditions
    if has_valid_electron or electron_count >= 2 or has_valid_muon:
        # If original conditions are met, check for the additional eta condition
        if lepton_eta_count >= 2:
            return True

    return False

And using it as follows:

for event in f:

***************
***************
      cond=triggered(event)
      if cond == False:
        nottriggered = nottriggered +1
        continue
        
      for particle in event.particles:
**************
*************

And here is what I got:

Fraction of events that passed in the generated sample: 0.8277
Error_Low: 0.0027118284088818445
Error_high: 0.0026787525426837933
Fraction of events that passed in the paper: 0.7083      #(66.3 out of 93.6)
Error_Low: 0.0545094960581447
Error_high: 0.04958628972360113

The "error" here is the statical uncertainty assuming a Poisson distribution

In my sample the fraction of events that passed the trigger and 2-baseline-leptons cut is 0.8277 +/- 0.0027,
while it is 0.7083 +0.0496/-0.0545 in the paper.
image

The sample acceptance after applying all the selection cuts (on top of the already-applied trigger cut) is 0.3599 ≈ 0.360, compared to 0.386 in the paper. With a statistical error of +/- 0.0034, the acceptance has an upper estimate of 0.363, compared to the paper's 0.386.

By applying the trigger condition in the selection process, the acceptance went from 0.412 to 0.360, which agrees better with the paper. I need to go back and impose this trigger condition to get a more accurate acceptance, yield, and expected number of events. The main sample of 400 GeV and τ = 0.1 ns has a slightly higher acceptance than the paper, so I expect this to be fixed after applying the trigger cut. However, I am still looking into my 400 GeV, τ = 1 ns sample, as its acceptance is lower than the paper's. Although that sample is what prompted me to look into the cutflow and trigger conditions, applying the trigger selection cut can only lower the acceptance, creating more discrepancy.

Next steps:

So my next steps are to:

1) complete the cutflow to make sure there is no more discrepancy between what we are doing and what the paper did
2) Continue looking into the 400GeV tau = 1ns sample

Automation

I created a text file to import our model (RPV_SUSY) and launch the process p p > mur- mur+:

import ./RPVMSSM_UFO/RPVMSSM_UFO
define q = g u c d s u~ c~ d~ s~ b t b~ t~ h01 h2 h3 h+ h-
define p = g u c d s u~ c~ d~ s~ b b~
generate p p > mur- mur+ /q
output auto
launch
shower=Pythia8
./param_card.dat

When trying to run ./bin/mg5_aMC filename.txt, it starts MadGraph and creates an output, but it does not do anything beyond "launch".

Screenshot 2023-09-19 at 12 21 55 PM

My understanding is that running MadGraph like this starts the user interface, which expects command-line input, but I am not sure why it does not at least accept "shower=Pythia8"; it literally ignores it and does not even print an error message like it did with the parameter card.

Screenshot 2023-09-19 at 11 23 53 AM

I tried writing a bash script:

#!/bin/bash
./bin/mg5_aMC ./run_01_tag_1_banner.txt
echo -e "1" 

But it still prompts me to manually choose the shower program and param card options.

Monday August 14th

From Friday:
The acceptance for 400GeV and 0.1 ns smuons is 0.3928

Acceptance from the paper: 0.360

I tried to use "TEfficiency" to get the uncertainty :

Efficiency: 0.39297683957781
Uncertainty (Up): 0.43299669773397065
Uncertainty (Down): 0.39297683957781

Using this code:

histogram1 = TH1F()
histogram2= TH1F()

# Fill the histogram with single value
n = good_event #accepted events
d = 12135.  #unaccepted events

histogram1.Fill(0.5,n)  # Fill the center of the single bin with your value
histogram2.Fill(0.5,n+d)

# Create a TEfficiency object using the histogram
eff = TEfficiency(histogram1, histogram2)

efficiency = eff.GetEfficiency(1)
uncertainty_up = eff.GetEfficiencyErrorUp(1)  
uncertainty_down = eff.GetEfficiencyErrorLow(1)  

My understanding is that after applying the acceptance we apply the efficiency to get the expected number of signal events:

effi=(0.35621+0.35519) /2 #eff at log -1.5 ns and log -0.5 ns from the paper
final= n*effi
print(final)

output:

2794.3792000000003 #events

Is this good so far? I tried to validate what I got, but I got stuck.

TEfficiency and Uncertainty

Using this code:

total_histogram = TH1F()
passed_histogram = TH1F()

# Fill the histograms with total events and passed events
total_events = 20000  # Total number of events (denominator)
passed_events = 7856     # Number of passed events (numerator)

total_histogram.SetBinContent(1, total_events)
    

passed_histogram.SetBinContent(1, passed_events)
   

# Set uncertainties for total and passed events
etot= math.sqrt(total_events)
epass= math.sqrt(passed_events)

total_histogram.SetBinError(1, etot)
passed_histogram.SetBinError(1, epass)

eff = TEfficiency(passed_histogram, total_histogram)

print("Acceptance is ", eff.GetEfficiency(1))
print("Error_Low: ",eff.GetEfficiencyErrorLow(1))
print("Error_high: ", eff.GetEfficiencyErrorUp(1))

I got

Acceptance is  0.3928
Error_Low:  0.003472834950754622
Error high:  0.0034836322326174396

I also did it manually in the next comment, and they reasonably agree.
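For a quick cross-check, the normal (binomial) approximation gives nearly the same uncertainty; a small sketch:

import math

passed, total = 7856, 20000
p = passed / total
err = math.sqrt(p * (1 - p) / total)
print(p, err)  # 0.3928, ~0.00345 (vs ~0.0035 from TEfficiency)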

For my own reference

ε1 = 0.2
ε2 = 0.25
ε3 = 0.3

A naive sum

P(at least two leptons) = {ε1*ε2*ε3} + {ε1*ε2} + {ε1*ε3} + {ε2*ε3}
= {0.2*0.25*0.3} + {0.2*0.25} + {0.2*0.3} + {0.25*0.3}
= 0.2

overcounts: the pairwise terms are not multiplied by the probability that the remaining lepton fails, so the all-three case is counted several times. The complement formula, evaluated manually and in code, gives 0.155:

1 - P(none passing) - P(exactly one passing)
= 1 - [(1-ε1)(1-ε2)(1-ε3)] - [ε1(1-ε2)(1-ε3) + ε2(1-ε1)(1-ε3) + ε3(1-ε1)(1-ε2)]
= 1 - [(1-0.2)(1-0.25)(1-0.3)] - [(0.2*0.75*0.7) + (0.25*0.8*0.7) + (0.3*0.8*0.75)]

= 0.155

the code gives the result as 0.15500000000000003

ε1 = 0.2
ε2 = 0.25
ε3 = 0.3
ε4 = 0.25

With four leptons the same complement formula applies: P(at least two leptons) = 1 - Π_i(1-εi) - Σ_i [εi * Π_{j≠i}(1-εj)].
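A brute-force check of the "at least two pass" probability, summing P(outcome) over all pass/fail patterns; this reproduces 0.155 for (0.2, 0.25, 0.3) and generalizes to any number of leptons:

from itertools import product

def p_at_least_two(effs):
    """Sum the probability of every pass/fail pattern with >= 2 passes."""
    total = 0.0
    for pattern in product([0, 1], repeat=len(effs)):
        if sum(pattern) >= 2:
            prob = 1.0
            for e, passed in zip(effs, pattern):
                prob *= e if passed else (1 - e)
            total += prob
    return total

print(p_at_least_two([0.2, 0.25, 0.3]))        # 0.155
print(p_at_least_two([0.2, 0.25, 0.3, 0.25]))  # the four-lepton case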

Immortal Smuons

After running generate.sh for the first time, I ran the output through our current selection cuts and got a surprise: no events passed.

I backed up a step and tried to run plot_signal_particle_properties.py, which threw an error.
image

So, I backed up further, pulled an event diagram, and found the problem: smuons, once produced, do not decay.

I have a sneaking suspicion that this is a Pythia issue.

image

Expected Events (Monday 8/21/23)

This is the function that uses a random number generator and efficiencies from the paper to select or reject events that passed the acceptance cut:

def eff_func(lepton1, lepton2):

  x1 = lepton1.momentum.pt()
  y1 = abs(CalcD0(lepton1))

  x2 = lepton2.momentum.pt()
  y2 = abs(CalcD0(lepton2))

  binX1 = hist.GetXaxis().FindBin(x1)
  binY1 = hist.GetYaxis().FindBin(y1)
  if binY1 == 0:  # d0 underflow: use the first bin
    binY1 = 1

  binX2 = hist.GetXaxis().FindBin(x2)
  binY2 = hist.GetYaxis().FindBin(y2)
  if binY2 == 0:
    binY2 = 1

  eff_value1 = hist.GetBinContent(binX1, binY1)
  eff_value2 = hist.GetBinContent(binX2, binY2)

  # pass1/pass2 are globals so the caller can see which lepton failed
  global pass1, pass2
  pass1 = random.random() < eff_value1
  pass2 = random.random() < eff_value2

  return pass1 and pass2
    

For events with only 2 muons satisfying the acceptance cuts, using the function is straightforward.

In case there are more than 2 muons, the code calls the function with the two leading (highest-Pt) muons; if one or both of them are not selected, it removes whichever did not pass the efficiency selection and looks at the next two leading muons:

if cond == False:

    # Remove the failing lepton(s); drop index 1 before index 0 so the
    # indices do not shift under us
    if pass2 == False:
        leptons.remove(leptons[1])
    if pass1 == False:
        leptons.remove(leptons[0])

    while len(leptons) >= 2:
        cond = eff_func(leptons[0], leptons[1])
        if cond == True:
            good_event = good_event + 1
            break
        if pass2 == False:
            leptons.remove(leptons[1])
        if pass1 == False:
            leptons.remove(leptons[0])

The entire code for the whole selection process is here

Right now I'm getting surviving event counts of 1696, 1753, and 1756. Of course the number changes every time I run the code.

I probably should run the code 10,000 times or so and then average the number of surviving events.

Assuming that the surviving events are ~1600-1700, we need to convert (normalize) them to the actual expected number of observed events based on the cross-section and luminosity:

N_expected = σ × L × (N_MC / N_total)

Taking the number of surviving events (N_MC) to be 1700 for now, the cross-section for 400 GeV right-handed smuon pair production is 0.0005221 pb, and the luminosity from the paper is 139 fb^-1.

Putting these together, I calculated the number of expected observed events to be 6.169.
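The normalization arithmetic as a sketch (numbers from above):

sigma_fb = 0.0005221 * 1000  # 0.0005221 pb converted to fb
lumi_fb = 139.0              # integrated luminosity, fb^-1
n_mc, n_total = 1700, 20000  # surviving / generated MC events

expected = sigma_fb * lumi_fb * (n_mc / n_total)
print(expected)  # ~6.17 expected events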

If I did not make a mistake somewhere, then this means that 400 GeV right-handed smuons with a 0.1 ns lifetime **would be** excluded by this search.

I still need to get the uncertainties on the efficiencies and use them in my calculation, along with the uncertainty on the cross-section.


Selection cuts and efficiency code using weights

import matplotlib.pyplot as plt
import numpy as np
import pyhepmc as hep
import pyhepmc.io as ihep
import pyhepmc.view as vhep
import uproot
import ROOT
from collections import Counter
import math
from ROOT import TLorentzVector
from ROOT import TEfficiency, TFile, TH1F, TGraph2DErrors, gStyle, gROOT, TColor, TLatex
import random
from functools import reduce
from operator import mul

def create_vector(particle):
    vector = TLorentzVector()
    pt = particle.momentum.pt()
    eta = CalcEta(particle)
    phi = CalcPhi(particle)
    mass = particle.generated_mass
    vector.SetPtEtaPhiM(pt, eta, phi, mass)
    return vector


def weight(e_list):
    # Calculate the first term: Product of (1 - e_i) for i=1 to n
    first_term = reduce(mul, [(1 - e) for e in e_list], 1)
    
    # Calculate the second term: Summation for i=1 to n of (e_i * Product of (1 - e_j) for j!=i)
    second_term = 0
    n = len(e_list)
    for i in range(n):
        e_i = e_list[i]
        remaining_elements = e_list[:i] + e_list[i+1:]
        prod_remaining = reduce(mul, [(1 - e) for e in remaining_elements], 1)
        second_term += e_i * prod_remaining
    
    # Combine both terms and subtract from 1
    result = 1 - (first_term + second_term)
    
    return result
def CalcEta(particle):
  """
  Calculate eta of a particle
  """
  momentum =particle.momentum
  pt = momentum.pt()
  px=momentum.px
  py=momentum.py
  pz=momentum.pz
  p=momentum.length()
  if pz == p:
    etaC = np.inf  # or a large number
  elif pz == -p:
    etaC = -np.inf  # or a large negative number
  else:
    etaC = np.arctanh(pz/p)
  return etaC
  

def CalcD0(particle):

  """
  Calculates d0 of a particle's track
  Assuming lieanr tracks since there is no mgentic field
  d0 = production vertex vector X produced particle momentum vector
  d0 is in the transverse plane
  d0 = [vertex x spatial component * (Py/Pt)] - [vertex y spatial component * (Px/Pt)]
  """
  momentum =particle.momentum
  pt = momentum.pt()
  px=momentum.px
  py=momentum.py
  
  ver= particle.production_vertex.position
  xver= ver.x
  yver= ver.y

  d0= (xver* (py/pt)) - (yver* (px/pt))
  return d0

def CalcPhi(particle):

  momentum =particle.momentum
  px=momentum.px
  py=momentum.py
  phi = np.arctan2(py, px)
  return phi
 
 
file_path = "HEPData-ins1831504-v2-pt-d0_muon_efficiency.root"
root_file = TFile.Open(file_path)
directory = root_file.GetDirectory("pt-d0 muon efficiency")
graph = directory.Get("Graph2D_y1")

hist= graph.GetHistogram()
histo = ROOT.TH1F("hist", "Acceptance x Efficiency", 1, 0, 1)
def eff_func (lepton):

  x= lepton.momentum.pt()
  y= abs(CalcD0(lepton))
  binX = hist.GetXaxis().FindBin(x)
  binY = hist.GetYaxis().FindBin(y)
    
  eff_value= hist.GetBinContent(binX, binY)
  return eff_value

good_event=0

def process_pairs(lepton1, lepton2):
    lead, sub = lepton1, lepton2

    particle1_vector = create_vector(lead)
    particle2_vector = create_vector(sub)

    delta_R = particle1_vector.DeltaR(particle2_vector)

    # The pair passes only if the two leptons are separated by delta R >= 0.2
    return delta_R >= 0.2
  
hepmc_file = "tag_1_pythia8_events.hepmc"


#hist =  ROOT.TH2F("hist", "hist" ,10, 65, 765, 8, 0, 400)

with hep.open(hepmc_file) as f:
    # Loop over events in the file
    
    for event in f:
      particles=[]
      leptons=[]
      signal_leptons=[]
      pt_sub = 0
      pt_leading = 0
      for particle in event.particles:
        
        #if particle.status == 1:
         # particles.append(particle)
          
        if abs(particle.pid) == 13 and particle.status == 1 and particle.momentum.pt()> 65 and CalcEta(particle)> -2.5 and CalcEta(particle) < 2.5 and abs(CalcD0(particle))>3 and abs(CalcD0(particle)) <300 :
        
          leptons.append(particle)
      
          
      leptons.sort(key=lambda lepton: -lepton.momentum.pt())

        # Select the top two leptons (if there are at least two)
      #if len(leptons) >= 2:
      
      pt=[]
      eta=[]
      phi=[]
      mass=[]
      acc=[]
      weights= []
      if len(leptons) >= 2:
      
        n = len(leptons)
        for i in range(n):
          for j in range(n):
            if j > i:
              check_R = process_pairs(leptons[i], leptons[j])
              if check_R == True:

                if leptons[i] not in acc:
                  acc.append(leptons[i])

                if leptons[j] not in acc:
                  acc.append(leptons[j])

        for k in range(len(acc)):
          eff = eff_func(acc[k])
          weights.append(eff)
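The block above stops after collecting the per-lepton efficiencies into weights. A sketch of the assumed final step (my reading of the later "Expected Events from Weights" issue, not the original code): fill the one-bin histogram with the P(at least two reconstructed) weight for each accepted event, so its integral accumulates the expected number of surviving events.

# Assumed continuation, still inside the event loop:
if len(acc) >= 2:
    good_event = good_event + 1        # event passes the acceptance cuts
    histo.Fill(0.5, weight(weights))   # accumulate the per-event survival probability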

For my own reference: getting muons with pt <= 20 GeV

This is the code used to get muons with pt<= 20 GeV:

import matplotlib.pyplot as plt
import numpy as np
import pyhepmc as hep
import pyhepmc.io as ihep
import pyhepmc.view as vhep
import uproot
import ROOT
from collections import Counter

hepmc_file = "tag_1_pythia8_events.hepmc"



muons = []

with hep.open(hepmc_file) as f:
    # Loop over events in the file
    for event in f:
      for particle in event.particles:
        momentum =particle.momentum
        pt = momentum.pt()
        #All final status muons
        if particle.status == 1 and abs(particle.pid) == 13 and pt <=20:
          muons.append((particle.pid, particle.id, event.event_number))
          
          
print(muons)
          

"Interpolating" Efficiencies

I accessed the ROOT file of the efficiency graph:

And I tried to use the Interpolate() function to match a muon with a given pt and d0 to its reconstruction efficiency. The graph has data points for a few d0 and pt values, with intervals/gaps between them. The issue is that when we have a point that does not exactly match a data point's pt or d0, the Interpolate function returns zero:

For pt = 150 GeV and d0 = 50 mm, the interpolated efficiency is 0.5208171428571429.

For pt = 100 GeV and d0 = 25 mm, the interpolated efficiency is 0.54118.

For pt = 90 GeV and d0 = 25 mm, the interpolated efficiency is 0.0.

For pt = 105 GeV and d0 = 25 mm, the interpolated efficiency is 0.0.


There are two ways I suggest to get such efficiencies:

  1. The easiest route is to locate the data point in the graph closest to the given pt and d0, i.e.

‎Untitled ‎001

In this case, for example, we give the muon with this pt and d0 the efficiency of point A. Distances here are Euclidean distances in the x-y plane, which is the pt-d0 plane.

  2. A probably more accurate and consistent way is to manually interpolate the efficiency as a function of pt and d0.

‎2 ‎001

(equation images: linear interpolation of the efficiency between points A and B in the pt-d0 plane)

Here I'm assuming that pt, d0, and the efficiency are approximately linear in the space between A and B. The pt and d0 in green are our given particle's; eff1 is the efficiency of point A at d0_1 and pt_1. (A sketch of option 1 follows below.)
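A minimal sketch of option 1 (nearest data point), assuming the TGraph2D loaded from the HEPData ROOT file as elsewhere in this repo; note pt (GeV) and d0 (mm) live on different scales, so a real version may want to normalize each axis before computing distances:

def nearest_point_eff(graph, pt, d0):
    """Return the efficiency of the (pt, d0) data point closest in the pt-d0 plane."""
    x, y, z = graph.GetX(), graph.GetY(), graph.GetZ()
    best_eff, best_dist2 = 0.0, float("inf")
    for i in range(graph.GetN()):
        dist2 = (x[i] - pt) ** 2 + (y[i] - d0) ** 2
        if dist2 < best_dist2:
            best_dist2, best_eff = dist2, z[i]
    return best_eff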

Please let me know what you think, thanks!
@trholmes

Signal Muons Plot Issue

Here is the plot of signal muon Pts (status == 1 and produced at a smuon vertex):

histogram

And the plot of final-status muon Pts:

pt_plot

Acceptance_Weights_Expected Gives Inaccurate Output

The output from Acceptance_Weights_Expected.py consists of a bunch of zero values.

Troubleshooting steps:

  • Review code line-by-line to ensure I understand what I'm doing :octocat:
  • Check outputs at various stages of event/particle loops

(As of writing this issue, I've already composed a solution, ergo the back-to-back time stamps. Future me will be better about creating Issues as issues present themselves.)

Weights

I fixed my calculation for 3 leptons, and it agrees with the formula:

image

I have an issue with my weights: most of the accepted events (94%) have a weight of 0.0. The distribution peaks around zero, but here is the distribution where I plot only the nonzero weights.
image

Wrong Units in .HEPMC File

In the initial generation process, Alaa and I have ended up with .HEPMC files that record data in different units.

Specifically, Alaa's seem to be in GeV, while mine are in MeV.

Troubleshooting steps:

  • Identify what part of the generation process produces the .HEPMC file
  • Compare raw data sets (this ended up being unnecessary)

High Pt Muons and Final Status Muons

As discussed in our meeting, the code that goes down the decay chain to get muons with status == 1 was not structured to follow only a muon line and not a photon line. Thus it might have ignored a few muons coming from smuon decays and instead picked up muons from photon pair production.

def get_final_muon_descendant(particle):
    """
    Get the final state muon descendant of a given particle.

    """
    # Check if the particle itself is a final state muon
    if particle.status == 1 and abs(particle.pid) == 13:
        return particle
    # If the particle has an end vertex, check its descendants recursively
    elif particle.end_vertex:
        for p in particle.end_vertex.particles_out:
            final_muon = get_final_muon_descendant(p)
            if final_muon is not None:
                return final_muon
    # If the particle is not a final state muon and does not have an end vertex, return None
    return None

And it is then called in:

# Check if the particle is a muon produced by a decaying 2000013 or -2000013
        if abs(particle.pid) == 13 and particle.production_vertex and any(p.pid in [2000013, -2000013] for p in particle.production_vertex.particles_in):
            # Get the final muon descendant of the muon
            final_muon = get_final_muon_descendant(particle)
            if final_muon is not None:
                muons.append(final_muon)
                if final_muon.momentum.pt()> 65:
                  histSignalPt.Fill(final_muon.momentum.pt())
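A sketch of one possible fix (an assumption about the intended behavior: descend only through intermediate muons, never through photons or other species):

def get_final_muon_descendant_muon_line(particle):
    """Follow only muon -> muon links down the chain; return the final-state muon."""
    if abs(particle.pid) != 13:
        return None  # left the muon line (e.g. a radiated photon)
    if particle.status == 1:
        return particle
    if particle.end_vertex:
        for p in particle.end_vertex.particles_out:
            final_muon = get_final_muon_descendant_muon_line(p)
            if final_muon is not None:
                return final_muon
    return None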

Expected Events from Weights: Nov 21st

By running this script (instructions here in the README), we get:

Area under the histogram (number of surviving events):  2241.720458984375
Bin content:  2241.720458984375  error =  26.004912373817632
Expected events:  8.134295648868406  +/-  0.09436129501507277 events

And we get this efficiency histogram, which is identical to the one in the paper:

image

image

And the passed number of events histogram:

image

From the histogram we have 7879 accepted events out of 20,000, i.e. an acceptance of 0.394 (the paper had 0.36 for the acceptance).

The yield is 2241.720458984375 / 20000 = 0.112086, to be compared with the yield from the paper.

In the paper, the fraction of surviving events for the 0.1 ns 400-GeV smuons is:

acceptance X efficiency = 0.36 * 0.355 = 0.1278

Generation via CVMFS/LCG Source

MadGraph, as sourced from LCG on LXPlus, is not set up to allow for showering via Pythia.

image

The process I used to produce this result:

  1. Connect to LXPlus via SSH using Termius.
  2. Ensure Kerberos tickets are valid.
    klist and, if necessary, kinit
  3. Authenticate grid proxy with CMS VOMS.
    voms-proxy-init --rfc --voms cms -valid 192:00
  4. Source the view.
    source /cvmfs/sft.cern.ch/lcg/views/LCG_105/x86_64-el9-gcc13-opt/setup.sh (for example)
  5. Run MadGraph.
    mg5_aMC
  6. Manual input of commands contained in this proc card up to shower=Pythia8.
  7. Glare at error.

When I try to get MadGraph to install Pythia for showering (by entering the command install pythia8 after step 6), it yells at me about the file system being read-only.

image

I have tried the following releases: 105, 105a, 104, 104swan

Acceptance, Friday March 1st

The acceptance of the 400 GeV 1 ns smuon events is 0.682 in the paper, but I am getting 0.360 (after I redefined the d0 function)
image

Here is how I calculate d0:

Screenshot 2024-03-01 at 2 05 53 PM

def CalcD0(particle):
  """3D impact parameter |v x p| / |p|: the distance of closest approach
  of a straight-line track to the origin (no magnetic field)."""
  ver = particle.production_vertex.position
  vx= ver.x
  vy= ver.y
  vz= ver.z
  
  momentum =particle.momentum
  pt = momentum.pt()
  px=momentum.px
  py=momentum.py
  pz=momentum.pz
  v0 = ROOT.TVector3(vx, vy, vz)  # Position vector from origin to production vertex
  p = ROOT.TVector3(px, py, pz)   # Momentum vector
  cross_product = v0.Cross(p)
 
  d0 = cross_product.Mag() / p.Mag()
  
  return d0

image

Fri Apr. 26 - 10 ns

For the 10 ns, 400 GeV sample, I have an acceptance of 0.686, which is almost double the value in the paper, 0.380.

image
Screenshot 2024-04-26 at 2 57 16 PM

A decay width of 6.582120E-17 GeV corresponds to a lifetime of 10 ns.

Fraction of surviving events

In the paper, the fraction of surviving events for the 0.1 ns 400-GeV smuons is:

acceptance X efficiency = 0.36 * 0.355 = 0.1278

In our simulated events, the fraction of surviving events is 2275 out of 20,000 = 0.11375

Efficiencies as weights. October 3rd

The efficiency of reconstructing a lepton with a given Pt and d0 represents the probability that said lepton will be detected; call this efficiency e. The probability that two given leptons are both reconstructed is P = e1*e2.

So, per event, the weight is the probability that at least two of its leptons are reconstructed:

P(at least two reconstructed) = 1 - Π_i (1 - e_i) - Σ_i [e_i * Π_{j≠i} (1 - e_j)]

I'm implementing this expression in code so that we can assign weights to individual events

Concise "getting started":

please see #16

Automation issue

I tried running this bash script:

#!/bin/bash
./bin/generate_events
shower=ON

but it ignores the shower=ON part, and anything else, once the program is running.
Also

#!/bin/bash
./generate_events > text.txt

with text.txt:

shower=Pythia8
analysis=OFF

It ignores the txt file input. It runs generate_events and expects input through the terminal prompts; anything after the program starts is not being passed in.

Tuesday Dec 5th

I found an error in my earlier code regarding the number of accepted events: it applied the ΔR cut only to the leading and subleading leptons, ignoring other pairs. In the current version of the code this issue has been fixed. The presented numbers and ratios are accurate (and have been for the last two weeks). The difference was ~20-30 events.

I generated another sample with the same parameters (400 GeV mass and τ = 0.1 ns):

expected

Area under the histogram (number of surviving events):  2241.99365234375
Bin content:  2241.99365234375  error = +/-  26.01434167400309
Expected events:  8.135286956926269   +/-  0.09439551012657924  events

Thus we have an acceptance of 0.393 and a yield of 11.2%; my first sample had an acceptance of 0.394 and a yield of 11.2%.
In the paper, the fraction of surviving events for the 0.1 ns, 400 GeV smuons is:

acceptance × efficiency = 0.36 × 0.355 = 0.1278, i.e. a yield of 12.78%

I generated another sample with the same mass but a different lifetime, τ = 1 ns:

image
 
with an acceptance of 0.5492; the paper had 0.682.

I will calculate the uncertainty in my calculation; this is a fresh result.

image
