
PyBERT

PyBERT is a serial communication link bit error rate tester (BERT) simulator with a graphical user interface (GUI).

It uses the Traits/UI package of the Enthought Python Distribution (EPD) (http://www.enthought.com/products/epd.php), as well as the NumPy and SciPy packages.

Notice: Before using this package for any purpose, you MUST read and understand the terms put forward in the accompanying "LICENSE" file.

User Installation

Developer Installation

Wiki

FAQ

Email List

Testing

Tox is used as the test runner and documentation builder. By default, it will run the unit tests against all installed, supported Python versions and skip any versions that are missing.

  • pip install tox
  • tox -p all

To run a single environment, such as "docs", run: tox run -e docs

Documentation

PyBERT documentation exists in two separate forms:

Acknowledgments

I would like to thank the following individuals for their contributions to the PyBERT project:

David Patterson for being my main co-author and for his countless hours driving the PyBERT project across the Python2<=>Python3 divide, as well as, more recently, completely updating its build infrastructure to be more in sync with modern Python package building/testing/distribution philosophy.

Peter Pupalaikis for sharing his expertise w/ both Fourier transform and S-parameter subtleties. The PyBERT source code wouldn't have nearly the mathematical/theoretical fidelity that it does had Peter not contributed.

Yuri Shlepnev for his rock solid understanding of RF fundamentals, as well as his infinite patience in helping me understand them, too. ;-)

Dennis Han for thoroughly beating the snot out of PyBERT w/ nothing but love in his heart and determination in his mind to drive PyBERT further towards a professional level of quality. Dennis has made perhaps the most significant contributions towards making PyBERT a serious tool for the working professional serial communications link designer.

Todd Westerhoff for helping me better understand which tool features really matter to working professional link designers and which are just in the way. Also, for some very helpful feedback re: improving the real-world PyBERT experience for the user.

Mark Marlett for first introducing me to Python/NumPy/SciPy as an alternative to MATLAB for numerical computing and signal processing, as well as his countless hours of tutelage regarding the finer points of serial communication link simulation technique. Mark is also the one who insisted that I take a break from development and finally write some documentation, so that others could understand what I intended and, hopefully, contribute. Thanks, Mark!

Low Kian Seong for straightening out my understanding of the real purpose of the description field in the setup.py script.

Jason Ellison for many cathartic chats on the topic of quality open source software creation, maintenance, and distribution.

Michael Gielda & Denz Choe for their contributions to the PyBERT code base.

The entire SciKit-RF team for creating and supporting an absolutely wonderful Python package for working with RF models and simulations.

pybert's People

Contributors

capn-freako, dependabot[bot], jdpatt, jjendryka, lowks, markmarlett, mgielda, pre-commit-ci[bot]


pybert's Issues

PyBERT 3.1.1 Configuration Loading and Saving Problems

Using Anaconda3 2019.07 (64-bit) and PyBERT 3.1.1, loading a configuration file created with PyBERT 2.4.4 causes the following error pop-up:

The following error occured:
a bytes-like object is required, not 'str'
The configuration was NOT loaded.
Please, check terminal for more information.

There is nothing in the terminal to report.

A similar error occurs when trying to save a new configuration:

The following error occured:
write() argument must be str, not bytes
The configuration as NOT saved.
Please check terminal for more information.

There is nothing in the terminal to report.

Most of the time, PyBERT then crashes.

Complete PAM-4 Jitter Calculation

Currently, only the zero crossings are used to calculate PAM-4 jitter. I did this because enabling the +/-0.67 threshold crossings was causing multiple ideal crossings to occur at the same time. (Imagine a transition from -1 to +1.) It would be nice to include these other two thresholds in the total jitter calculation for PAM-4.

This doesn't affect duo-binary, because the two segments of a full transition are separated by one UI in the ideal case.

Cannot load S-parameter file

Using Anaconda3 2019.07 (64-bit) and PyBERT 3.1.1, opening a channel description file (S-parameters, pulse, or impulse) produces a save window rather than an open window. Hence, the file cannot be loaded.

Post-DFE jitter distribution and bathtub curve incorrect for marginal channels.

I'm noticing that in cases where the post-CTLE eye is closed, but the post-DFE eye is open, the post-DFE jitter distribution and bathtub curve are wrong.
The jitter distribution shows large spikes at both ends of the horizontal plot range.
And the bathtub curve shows no opening, which is consistent with the (erroneous) jitter distribution.

Look into this and fix it.

Options for EQ TUNE tab not being saved

Options set on the EQ TUNE tab are not being saved as part of the configuration.

Options not being saved include: pre- and post-taps set false, CTLE mode off, and Use DFE unchecked (note: a period appears after "DFE", not a colon).

PAM4 Simulation Issue: 'float' cannot be interpreted as an integer.

Found while trying to add test coverage in my develop branch. Switching to a PAM4 simulation causes an exception. Trying to run that simulation causes PyBERT to crash.

In Python 2, / is integer division (for ints), while in Python 3, / is always float division. // denotes integer division in both 2 and 3. Here is at least one instance where it needs to be changed:

pybert.py", line 723, in _get_t return array([i * t0 for i in range(npts)])
TypeError: 'float' object cannot be interpreted as an integer

Anywhere division is used, it should be checked to see whether the output needs to be an int or a float. Since the divisions haven't really been touched since the port, it's probably safe to assume that it's all integer division and a find/replace is sufficient. NRZ/DUO seem to be working, so maybe there is a happy medium, or maybe just the checks for PAM4 need updating.
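To make the Python 2 vs. 3 difference concrete, here is a minimal sketch (the variable names and values are illustrative, not PyBERT's actual code):

```python
import numpy as np

npts = 16 / 2        # Python 3: true division always yields a float (8.0);
                     # range(npts) would then raise:
                     # TypeError: 'float' object cannot be interpreted as an integer

npts = 16 // 2       # floor division yields an int, in both Python 2 and 3
t0 = 25e-12          # hypothetical sample interval, in seconds
t = np.array([i * t0 for i in range(npts)])   # now works
```

An int() cast works too, but // is the drop-in replacement when the Python 2 code relied on integer division.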

RuntimeError: Invalid Qt API 'pyqt5', valid values are: 'pyqt' or 'pyside'

I've had trouble running PyBERT on my Win64 machine lately. I tried it a month back and didn't have any issues. Not sure what changed. I tried the instructions from https://github.com/capn-freako/PyBERT/wiki/instant_gratification

Output log:

(pybert64) C:\Users\denzchoe>pybert
Traceback (most recent call last):
  File "C:\Users\denzchoe\AppData\Local\conda\conda\envs\pybert64\Scripts\pybert-script.py", line 5, in <module>
    from pybert.pybert import main
  File "C:\Users\denzchoe\AppData\Local\conda\conda\envs\pybert64\lib\site-packages\pybert\pybert.py", line 38, in <module>
    from pyface.api import FileDialog, OK
  File "C:\Users\denzchoe\AppData\Local\conda\conda\envs\pybert64\lib\site-packages\pyface\api.py", line 16, in <module>
    from about_dialog import AboutDialog
  File "C:\Users\denzchoe\AppData\Local\conda\conda\envs\pybert64\lib\site-packages\pyface\about_dialog.py", line 18, in <module>
    from toolkit import toolkit_object
  File "C:\Users\denzchoe\AppData\Local\conda\conda\envs\pybert64\lib\site-packages\pyface\toolkit.py", line 77, in <module>
    _init_toolkit()
  File "C:\Users\denzchoe\AppData\Local\conda\conda\envs\pybert64\lib\site-packages\pyface\toolkit.py", line 38, in _init_toolkit
    be = import_toolkit(ETSConfig.toolkit)
  File "C:\Users\denzchoe\AppData\Local\conda\conda\envs\pybert64\lib\site-packages\pyface\toolkit.py", line 31, in import_toolkit
    __import__(be + 'init')
  File "C:\Users\denzchoe\AppData\Local\conda\conda\envs\pybert64\lib\site-packages\pyface\ui\qt4\__init__.py", line 18, in <module>
    from pyface.qt import QtCore, QtGui, qt_api
  File "C:\Users\denzchoe\AppData\Local\conda\conda\envs\pybert64\lib\site-packages\pyface\qt\__init__.py", line 39, in <module>
    % qt_api)
RuntimeError: Invalid Qt API 'pyqt5', valid values are: 'pyqt' or 'pyside'

But I found a temporary fix for now, as shown in another GitHub project: insert a few lines into the pybert-script.py file:

import os
os.environ["QT_API"] = "pyqt"

Add Tx Pre-emphasis.

Add some transmitter pre-emphasis capability.

Suggested approach:

  • Add several independent variables to the PyBERT class defined in pybert.py, which hold the pre-emphasis tap weight values.
  • Add these new variables to the Tx Parameters VGroup in pybert_view.py.
    (Alternatively, change the existing VGroup to an HGroup containing the existing VGroup and a new VGroup, which contains these new variables. Note that this may require some major reallocation of screen real estate, as we are at the limit of what the "race car" interface will support. I dread the thought of moving all the configuration items into their own tab, but it may prove unavoidable.)
  • Add a my_run_tx() function to pybert_cntrl.py, which convolves its input with the impulse response of the pre-emphasis block defined by the coefficients, above.
  • Insert the call to my_run_tx() in between my_run_channel() and my_run_dfe(), in the my_run_simulation() function in the pybert_cntrl.py file.
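A minimal sketch of the proposed my_run_tx(), assuming symbol-spaced taps and a plain direct convolution (the function and argument names follow the suggestion above, but the details are hypothetical):

```python
import numpy as np

def my_run_tx(x, tap_weights):
    """Convolve the Tx input with the pre-emphasis FIR defined by
    tap_weights. Sketch only: a real implementation would space the
    taps by samples-per-UI rather than by one sample."""
    h = np.asarray(tap_weights, dtype=float)
    return np.convolve(x, h)[: len(x)]     # trim back to input length

# Example: one pre-tap, a main cursor, and one post-tap (de-emphasis).
x = np.array([0.0, 1.0, 1.0, 1.0, 0.0])
y = my_run_tx(x, [-0.1, 1.0, -0.2])
```

Note how the output overshoots at the transitions and settles to the de-emphasized level (sum of the taps) in the middle of the run of 1s.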

Automated Parameter Sweeps

I would like to add a new tab to the GUI, which allows the user to specify a "parameter sweep plan" for subsequent simulation. In this way, a user could scan a certain section of the model parameter space, looking for optimum link performance.

Add the ability to upload figures to plot.ly.

Issue Description

It would be very nice to be able to instantly upload eye diagrams, bathtub curves, jitter spectrum plots, etc. to a Web location, from which geographically dispersed groups could collaborate on link design.

Suggested Approach

Plot.ly offers just such graphical collaboration. And they offer a very easy-to-use Python/matplotlib interface. Add this interface to the PyBERT code.

Update Instant Gratification for PyBERT 3

Update the Instant Gratification page to reflect that PyBERT 3 only supports 64-bit IBIS-AMI models, so the pybert32 environment will no longer work and is no longer needed.

More efficient CDR behavioral model.

Currently, the CDR behavioral model is the performance bottleneck. Its performance is only about 1/4 that of the next slowest element. See if I can use my simplified version of SiSoft/Telian's "Hula Hoop" algorithm to speed things up.

Relax the 100 Gbps maximum bit rate.

The bit rate seems to be capped at 100 Gbps.
This is unfortunate, as 112 Gbps PAM4 is of extreme interest right now.

Relax the current max. bit rate limit, so that 112 Gbps PAM4 data streams can be accommodated.

Implement true GetWave().

Currently, I'm cheating, as far as the Tx GetWave() implementation goes. That is, I'm just sending in a step, getting the step response back, differentiating to get the equivalent impulse response, and continuing on from there, assuming LTI. This is not the intent of GetWave().
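The "cheat" described above amounts to something like this sketch (toy first-order channel, NumPy only; the time constants are arbitrary):

```python
import numpy as np

t = np.linspace(0.0, 5e-9, 500)
tau = 5e-10
step = 1.0 - np.exp(-t / tau)      # step response as returned by GetWave()
h = np.diff(step, prepend=0.0)     # differentiate -> per-sample impulse response

# Then, assuming LTI, any stimulus is processed by convolution with h,
# rather than by actually passing the stimulus through GetWave():
x = np.where(np.random.default_rng(0).random(500) > 0.5, 1.0, -1.0)
y = np.convolve(x, h)[: len(x)]
```

This is exactly what breaks for nonlinear or adaptive models, which is why passing the real stimulus through GetWave() matters.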

Action: Implement true GetWave() processing, in which the stimulus is actually passed into the GetWave() function, and the signal returned by GetWave() is used directly in subsequent processing.

Thanks to Todd Westerhoff, of SiSoft, for pointing out my error, in this regard.

Errors and warnings when running python -m pybert

I followed the instructions below, using Anaconda2 (64-bit):
conda install enable
conda install traitsui
conda install chaco
conda install kiwisolver
conda install scipy
conda install Sphinx
conda install matplotlib
pip install PyBERT
python -m pybert

The installs look almost good, except for the last step, running "python -m pybert".
There are still several errors or warnings. Not sure if they are critical.

C:\python\Anaconda2\lib\site-packages\pybert-2.0.2-py2.7.egg\pybert\pybert.py:1139: VisibleDeprecationWarning: using a non-integer number instead of an integer will result in an error in the future
C:\python\Anaconda2\lib\site-packages\chaco\array_data_source.py:123: FutureWarning: comparison to None will result in an elementwise object comparison in the future.
  self._data = newdata
C:\python\Anaconda2\lib\site-packages\traits\trait_handlers.py:1599: FutureWarning: comparison to None will result in an elementwise object comparison in the future.
  if value in self.values:
C:\python\Anaconda2\lib\site-packages\chaco\image_data.py:190: FutureWarning: comparison to None will result in an elementwise object comparison in the future.
  self._data = newdata
Traceback (most recent call last):
  File "C:\python\Anaconda2\lib\runpy.py", line 162, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "C:\python\Anaconda2\lib\runpy.py", line 72, in _run_code
    exec code in run_globals
  File "build\bdist.win-amd64\egg\pybert\__main__.py", line 3, in <module>
  File "C:\python\Anaconda2\lib\site-packages\traits\has_traits.py", line 2156, in configure_traits
    kind, handler, id, scrollable, args )
  File "C:\python\Anaconda2\lib\site-packages\traitsui\toolkit.py", line 245, in view_application
    raise NotImplementedError
NotImplementedError

Add COM metric reporting.

The Channel Operating Margin (COM) method of measuring high speed serial communication channel performance is becoming more and more popular. I'd like to add this ability to PyBERT.

Add *.s4p importing/checking to channel "fromFile" alternatives.

Currently, the user is only allowed to use impulse/step responses for channel modeling. This requires him to perform external translation of Touchstone (i.e. - *.s4p) files. And I think a lot of tools still don't do this correctly. Use my S-parameter Checker IPython notebook as a template for adding this conversion capability to PyBERT.

Fix "front porch" sizing for impulse response.

I think the current leader for the impulse response is hard-wired to some numerical time value (5 ns?). This is resulting in a pretty ridiculous "front porch", in cases of high symbol rate. Fix this. Perhaps, make the IR front porch a percentage (20% ?) of the total IR allowed length.

Add trimmed channel impulse response spectrum to plot.

Currently, I plot the spectrum of the original untrimmed channel impulse response in the Frequency Response tab, but I use the trimmed version for the simulation.
Add the trimmed version's spectrum to the plot, for user reference.
(This can be very useful in detecting bogus original channel data.)

Add more interconnect modeling options, including S-parameter importing.

Issue Description

Currently, the only interconnect model supported is Dr. Howard Johnson's 24 gauge twisted pair model. It would be nice to offer more options to the user. It would be particularly nice to offer an S-parameter import option, so that users could simulate their own custom interconnects.

Suggested Approach

Have a look at the calc_gamma() function in the pybert_util.py file. This function is where Dr. Johnson's model is "hard-wired" into PyBERT. Therefore, this is the "control entry" point for adding new options.

Uniform eye plots.

Currently, the last eye plot, which includes the DFE effects, uses a much lower number of bits than the other three. This has the effect of exaggerating the helpful effect of the DFE.

Action: Change the eye plotting so that all 4 plots use the same number of bit intervals.

Add optional zero padding and windowing to *.s4p importing.

Currently, two well-known artifacts of frequency-to-time-domain conversion show up when importing a severely band-limited *.s4p file to represent nearly ideal channels:

  1. Staircasing - This shows up because the natural sample interval, t0 = 1/(2*fmax), is insufficient to represent the sharp rise of the step response of nearly ideal channels.

    This problem can be solved by zero-padding the Touchstone data before converting to the time domain, thereby artificially increasing fmax and decreasing t0.
    In fact, if we're using the numpy.fft.irfft() function, this padding can be done automatically, by specifying the desired length of the output.
    (See the documentation.)

  2. Non-causality - This occurs when the spectral energy of the channel is still significant at the highest frequency included in the original Touchstone data. It shows up as ringing just before the initial attack of the impulse response.

    This problem can be solved by windowing the Touchstone data before converting to the time domain.
    (See the documentation for the scipy.signal.get_window() function, as well as this paper from Keysight.)
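Both fixes can be sketched with NumPy/SciPy on a toy frequency response (the padding factor, window choice, and channel model below are illustrative assumptions, not PyBERT's actual settings):

```python
import numpy as np
from scipy.signal import get_window

# Toy response on a uniform grid from DC to fmax = 20 GHz: loss plus delay.
f = np.linspace(0.0, 20e9, 201)
H = np.exp(-f / 40e9) * np.exp(-2j * np.pi * f * 1e-10)

# Windowing (non-causality fix): roll the data off smoothly toward fmax,
# using the falling half of a Hann window.
w = get_window("hann", 2 * len(H))[len(H):]
Hw = H * w

# Zero padding (staircasing fix): irfft's `n` argument zero-pads the
# spectrum, raising the effective fmax and shrinking t0 = 1/(2*fmax).
n_nat = 2 * (len(H) - 1)           # natural output length (400 samples)
h = np.fft.irfft(Hw, n=4 * n_nat)  # 4x padding -> 4x finer time step
```

The padded result has the same total duration but four times the time resolution, which smooths out the staircase in the step response.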

Allow for saving of reference waveforms.

Action: Add the ability to store the various plots and eye diagrams as "reference" waveforms, which can be called up at a later time and added to the existing plots and eyes of a new simulation, for comparison purposes.

Indicate error in Pulse Response Zero Forcing approximation to DFE behavior.

During linear EQ optimization, the user has the option to engage a Pulse Response Zero Forcing (PRZF) approximation to the DFE's actual behavior. Add some indication of how erroneous this approximation is, given the final actual pulse response. One possibility: measure the sum of squared pulse response values at each of N post-cursor sampling instants, where N is the number of DFE taps, and report that sum as a percentage of total pulse response energy.
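The proposed metric might look like this sketch (the function name and the toy pulse response are hypothetical):

```python
import numpy as np

def przf_error_pct(pulse, cursor_idx, samples_per_ui, n_dfe_taps):
    """Sum of squared pulse-response samples at the N post-cursor
    sampling instants (N = number of DFE taps), reported as a
    percentage of total pulse-response energy."""
    idx = cursor_idx + samples_per_ui * np.arange(1, n_dfe_taps + 1)
    return 100.0 * np.sum(pulse[idx] ** 2) / np.sum(pulse ** 2)

# Toy pulse: unit cursor at sample 10, one residual post-cursor value.
pulse = np.zeros(100)
pulse[10], pulse[20] = 1.0, 0.5
err = przf_error_pct(pulse, cursor_idx=10, samples_per_ui=10, n_dfe_taps=2)
```

A perfectly zero-forced pulse would score 0%; larger values flag cases where the PRZF approximation diverges from the DFE's actual behavior.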

Non-zero rise/fall time

PyBERT launches a signal with zero rise/fall times. The only way to fix this is to adjust Tx_Cout when using the internal transmission line model. How about an option to input the rise/fall time and then apply Gaussian shaping, per Howard Johnson's second book?
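One way to implement the suggestion, sketched with SciPy's Gaussian filter (the 2.563*sigma relation between the Gaussian sigma and the 10-90% rise time of the filtered step is standard; the rates below are arbitrary examples):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

fs = 100e9                       # sample rate (arbitrary)
t_rise = 20e-12                  # desired 10-90% rise/fall time
sigma = t_rise * fs / 2.563      # Gaussian sigma, in samples

# Ideal (zero rise-time) NRZ waveform, 10 samples per UI:
bits = np.repeat([0.0, 1.0, 1.0, 0.0, 1.0], 10)
shaped = gaussian_filter1d(bits, sigma)   # Gaussian-shaped edges
```

Because the Gaussian kernel is positive and normalized, the shaped waveform never over- or undershoots the ideal levels.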

Add Rx CTLE.

Add some receiver continuous time linear equalization (CTLE) capability.

Suggested approach:

  • Add several independent variables to the PyBERT class defined in pybert.py, which hold the:
    • d.c. response,
    • a.c. peak frequency, and
    • peak-to-d.c. ratio (preferably, expressed in dB)

of the CTLE filter.

  • Add these new variables to the Rx Parameters VGroup in pybert_view.py.
    (Alternatively, change the existing VGroup to an HGroup containing the existing VGroup and a new VGroup, which contains these new variables. Note that this may require some major reallocation of screen real estate, as we are at the limit of what the "race car" interface will support. I dread the thought of moving all the configuration items into their own tab, but it may prove unavoidable.)
  • Add a my_run_rx() function to pybert_cntrl.py, which convolves its input with the impulse response of the CTLE filter defined by the coefficients, above. (You'll find some nice filter building/fitting functions in scipy.signal.)
  • Insert the call to my_run_rx() in between my_run_tx() (assuming it has been added before this issue gets addressed) and my_run_dfe(), in the my_run_simulation() function in the pybert_cntrl.py file.
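The three parameters in the first step could drive a toy single-zero, two-pole CTLE like this sketch (a hypothetical stand-in, not PyBERT's actual make_ctle(); the pole/zero placement is illustrative):

```python
import numpy as np
import scipy.signal as sig

def make_ctle_sketch(rx_bw, peak_freq, peak_mag_dB):
    """Return (f, H): frequency response of a toy CTLE whose gain rises
    ~peak_mag_dB above d.c. near peak_freq, then rolls off past rx_bw."""
    p1 = 2 * np.pi * peak_freq           # first pole at the peak frequency
    p2 = 2 * np.pi * rx_bw               # second pole at the Rx bandwidth
    z = p1 / 10 ** (peak_mag_dB / 20.0)  # zero below p1 sets the peaking
    num = [1.0 / z, 1.0]                                 # (s/z + 1)
    den = np.polymul([1.0 / p1, 1.0], [1.0 / p2, 1.0])   # (s/p1+1)(s/p2+1)
    w = 2 * np.pi * np.logspace(7, 10.5, 200)
    w, H = sig.freqs(num, den, worN=w)
    return w / (2 * np.pi), H

f, H = make_ctle_sketch(rx_bw=12e9, peak_freq=5e9, peak_mag_dB=6.0)
```

The d.c. gain is unity by construction; the gap between the zero and the first pole provides the mid-band boost.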

Add automated sweep capability.

Add a new tab to the UI, which allows the user to set up a series of simulation configurations, which will be run sequentially by the tool. Have certain metrics of the final eye quality and BER stored for later viewing/comparison by the user, so that he can determine the best configuration.

Fix equalization optimizer, so that CTLE peaking magnitude is properly tuned.

Currently, co-optimization doesn't really work: the CTLE peaking magnitude doesn't get changed noticeably, despite a much better setting existing (found via manual experimentation).

Fix this, perhaps, by employing a "nested optimizer", which sweeps the CTLE peaking magnitude in its outer loop, and the Tx pre-emphasis tap settings in its inner loop.
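The nested idea can be sketched with a toy cost function standing in for a full simulation run (the cost function, its minimum, and the sweep range are all hypothetical):

```python
import numpy as np
from scipy.optimize import minimize

def eye_cost(ctle_peak_dB, taps):
    """Stand-in for 'run the simulator, return eye closure'.
    Minimum at peaking = 6 dB, taps = (-0.1, -0.2)."""
    return (ctle_peak_dB - 6.0) ** 2 + np.sum((np.asarray(taps) - [-0.1, -0.2]) ** 2)

best_cost, best_peak, best_taps = np.inf, None, None
for peak in np.arange(0.0, 12.5, 0.5):     # outer loop: sweep CTLE peaking
    # inner loop: optimize the Tx pre-emphasis taps for this peaking value
    res = minimize(lambda w: eye_cost(peak, w), x0=[0.0, 0.0])
    if res.fun < best_cost:
        best_cost, best_peak, best_taps = res.fun, peak, res.x
```

The outer sweep guarantees the peaking magnitude is actually explored, instead of stalling at its initial value inside a joint gradient search.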

Can't install pybert package.

I've created pybert64 env and did this: conda install -c dbanas pybert

Anaconda gave the following errors...

Collecting package metadata (current_repodata.json): done
Solving environment: failed with current_repodata.json, will retry with next repodata source.
Initial quick solve with frozen env failed. Unfreezing env and trying again.
Solving environment: failed with current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: failed

PackagesNotFoundError: The following packages are not available from current channels:

How can I install it manually? It seems Anaconda can't find the pybert package in its cloud.

IBIS-AMI Model Compatibility

I would like to add the option to use existing IBIS-AMI models, in place of the built-in Rx and Tx models currently used by PyBERT.

PyBERT 3.1.1 Opens Beyond Windows Desktop Space

PyBERT 3.1.1 opens so large that it extends beyond the top and bottom of the Windows Desktop. The PyBERT window's banner can't be accessed to maximize, minimize, or resize the window unless I do that through Process Explorer.

Tx parasitics ignored when using an imported channel model.

I just realized that, when the user checks the UseFile option (in the Channel group) and chooses to model the channel not with PyBERT's own internal model, but rather via a Touchstone or step/impulse response file, the Tx source impedance and parasitic output capacitance are removed from the calculation. This is incorrect, as my intent is that such imported files model only the interconnect, not the complete channel, which I define as interconnect + Tx/Rx analog parasitics/non-idealities.

Fix this by ensuring that, no matter how the interconnect is being modeled:

  • PyBERT native (really, Howard Johnson's transmission line model),
  • imported file (Touchstone, or impulse/step response), or
  • new 3rd-party channel cross-sectional solver plug-in capability,

the analog parasitics of both Tx and Rx are still included in the final channel model.

Note: This issue is strongly related to, but may not completely resolve, issue #42.

trouble with installation

Hi

I am trying to follow your guide in Win10, but I am getting the following errors:

(C:\Program Files\Anaconda2) C:\WINDOWS\system32>python -m pybert
Traceback (most recent call last):
  File "C:\Program Files\Anaconda2\lib\runpy.py", line 174, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "C:\Program Files\Anaconda2\lib\runpy.py", line 72, in _run_code
    exec code in run_globals
  File "C:\Program Files\Anaconda2\lib\site-packages\pybert\__main__.py", line 1, in <module>
    from pybert import *
  File "C:\Program Files\Anaconda2\lib\site-packages\pybert\pybert.py", line 72, in <module>
    from pybert_view import traits_view
  File "C:\Program Files\Anaconda2\lib\site-packages\pybert\pybert_view.py", line 19, in <module>
    from pybert_cntrl import my_run_sweeps
  File "C:\Program Files\Anaconda2\lib\site-packages\pybert\pybert_cntrl.py", line 25, in <module>
    from pybert_util import find_crossings, make_ctle, calc_jitter, moving_average, calc_eye, import_qucs_csv
  File "C:\Program Files\Anaconda2\lib\site-packages\pybert\pybert_util.py", line 22, in <module>
    from pylab import plot, show, legend
  File "C:\Program Files\Anaconda2\lib\site-packages\pylab.py", line 1, in <module>
    from matplotlib.pylab import *
  File "C:\Program Files\Anaconda2\lib\site-packages\matplotlib\pylab.py", line 274, in <module>
    from matplotlib.pyplot import *
  File "C:\Program Files\Anaconda2\lib\site-packages\matplotlib\pyplot.py", line 114, in <module>
    _backend_mod, new_figure_manager, draw_if_interactive, show = pylab_setup()
  File "C:\Program Files\Anaconda2\lib\site-packages\matplotlib\backends\__init__.py", line 32, in pylab_setup
    globals(), locals(), [backend_name], 0)
  File "C:\Program Files\Anaconda2\lib\site-packages\matplotlib\backends\backend_qt5agg.py", line 16, in <module>
    from .backend_qt5 import QtCore
  File "C:\Program Files\Anaconda2\lib\site-packages\matplotlib\backends\backend_qt5.py", line 31, in <module>
    from .qt_compat import QtCore, QtGui, QtWidgets, _getSaveFileName, __version__
  File "C:\Program Files\Anaconda2\lib\site-packages\matplotlib\backends\qt_compat.py", line 175, in <module>
    "Matplotlib qt-based backends require an external PyQt4, PyQt5,\n"
ImportError: Matplotlib qt-based backends require an external PyQt4, PyQt5,

I am not sure what to do. Your help is greatly appreciated. Thank you.

Catalin

Allow manual override of impulse response length trimming.

Often, the impulse response trimming algorithm will cut the impulse response off, just after the first major reflection is completed. Then, after incurring the inherent delay of either the Tx pre-emphasis or CTLE filters, that reflection is lost. (It "slides out" the right edge of the impulse response trimming window.)

Action: Offer the user a manual override option for setting the impulse response length, so he can prevent this from happening.

Poor performance w/ low fmin Touchstone files.

Currently, PyBERT naively uses the minimum non-zero frequency point found as the frequency vector step, when importing Touchstone channel description files.
This can cause huge frequency vectors when using Touchstone files with very low minimum frequencies (even when the Touchstone file itself is fairly sparse, due to logarithmic spacing for instance).
And it's quite doubtful that any information below 10 MHz is going to have an appreciable effect on eye openings or jitter.

Add a GUI item specifying the frequency step to use, when importing and interpolating Touchstone data, and set its default value to 10 MHz.
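The proposed re-gridding can be sketched with NumPy (toy log-spaced data and a toy channel response; 10 MHz is the suggested default step):

```python
import numpy as np

f_step = 10e6                               # proposed default step: 10 MHz
f_raw = np.logspace(5, np.log10(20e9), 50)  # sparse log grid, fmin = 100 kHz
H_raw = 1.0 / (1.0 + 1j * f_raw / 5e9)      # toy single-pole channel response

# Naive approach: step = fmin = 100 kHz -> 200,001 points to reach 20 GHz.
# Fixed 10 MHz step -> only ~2,001 points:
f_uni = np.arange(0.0, f_raw[-1] + f_step / 2, f_step)
H_uni = (np.interp(f_uni, f_raw, H_raw.real)
         + 1j * np.interp(f_uni, f_raw, H_raw.imag))
```

Interpolating real and imaginary parts separately is the simplest approach; interpolating magnitude and unwrapped phase is an alternative when the delay is large.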

Performance Tuning

Currently, the DFE and Plotting are clear bottlenecks to overall system performance.
It would be nice to profile these two sections and see if we can't improve their performance.

Add bathtub curve and eye contour plotting.

Issue Description

Currently, there are no "bathtub" curves or eye probability contour plots shown in the GUI.
It would be very nice to add these. Actually, the bathtub curve is already defined. For instance, see:

  • plot12 of the PyBERT class defined in pybert.py, and
  • update_results() function in pybert_cntrl.py.

Suggested Approach

  • Add a new tab to the main GUI section by adding a new plot container to the PyBERT class (in pybert.py) and adding a new tabbed Group to the GUI definition (in pybert_view.py).
  • Add the existing bathtub plot (i.e. - plot12) to this new plot container.
  • Create a new plot, which contains the eye BER probability contours and add this plot to the new plot container, also.

Hints

  • Again, check the NumPy and SciPy libraries for functions, which might be helpful in generating the new contour plots.

Add dual Dirac model tail fitting.

Issue Description

Currently, the random jitter standard deviation is calculated by simply measuring the variance of the jitter remaining, after all other components have been removed, and taking the square root. This is not a particularly accurate way of estimating the random unbounded portion of the jitter and will cause extrapolation to low BER to be unreliable. We need to add fitting to a known analytical model, in order to improve this situation. I propose the "dual Dirac" model, just because it is so prevalent. Any other opinions?

Suggested Approach

  • In the calc_jitter() function in the pybert_util.py file, look at the code just above the return statement. Note the simplistic way in which the random jitter standard deviation is calculated.
  • Replace this code with dual Dirac fitting code, instead.

Hints

  • Peruse the NumPy and SciPy libraries for functions, which might be helpful in curve fitting applications such as this one.
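A sketch of dual-Dirac fitting with scipy.optimize.curve_fit, on synthetic jitter data (units in picoseconds; the DJ/RJ values and bin count are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
# Synthetic jitter, in ps: deterministic +/-5 ps plus 1 ps-sigma random jitter.
jitter = rng.choice([-5.0, 5.0], 20000) + rng.normal(0.0, 1.0, 20000)

hist, edges = np.histogram(jitter, bins=200, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

def dual_dirac(x, mu, sigma, a):
    """Two equal-weight Gaussians at +/-mu: the dual-Dirac deterministic
    jitter model convolved with Gaussian random jitter."""
    g = lambda m: np.exp(-((x - m) ** 2) / (2.0 * sigma ** 2))
    return a * (g(mu) + g(-mu))

(mu, sigma, a), _ = curve_fit(dual_dirac, centers, hist, p0=[4.0, 2.0, 0.2])
```

The fitted sigma is the RJ standard deviation used for low-BER extrapolation, which is far more reliable than taking the raw variance of the residual jitter.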
