
qcodes's Introduction



QCoDeS is a Python-based data acquisition framework developed by the Copenhagen / Delft / Sydney / Microsoft quantum computing consortium. While it has been developed to serve the needs of nanoelectronic device experiments, it is not inherently limited to such experiments, and can be used anywhere a system with many degrees of freedom is controllable by computer. To learn more about QCoDeS, browse our homepage.

To get a feeling for QCoDeS, read 15 minutes to QCoDeS and/or browse the Jupyter notebooks in docs/examples.

QCoDeS is compatible with Python 3.9+ (3.9 soon to be deprecated). It is primarily intended for use from Jupyter notebooks, but can be used from traditional terminal-based shells and in stand-alone scripts as well. The features in qcodes.utils.magic are exclusively for Jupyter notebooks.

Default branch is now main

The default branch in QCoDeS has been renamed to main. If you are working with a local clone of QCoDeS you should update it as follows:

  • Run git fetch origin and git checkout main
  • Run git symbolic-ref refs/remotes/origin/HEAD refs/remotes/origin/main to update your HEAD reference.

Install

In general, refer to the installation instructions here.

Docs

Read it here. Documentation is updated and deployed on every successful build on main.

We use Sphinx for documentation. Makefiles are provided for both Windows and *nix so that you can build the documentation locally.

Make sure that you have the extra dependencies required to build the docs:

pip install -r docs_requirements.txt

Go to the docs directory and run

make html

This generates a webpage, index.html, in docs/_build/html with the rendered HTML.

QCoDeS Loop

The modules qcodes.data, qcodes.plots, qcodes.actions, qcodes.loops, qcodes.measure, qcodes.extensions.slack and qcodes.utils.magic that were part of QCoDeS until version 0.37.0 have been moved into an independent package called qcodes_loop. Please see its repository and documentation for more information.

For the time being it is possible to automatically install the qcodes_loop package when installing qcodes by executing pip install qcodes[loop].

Code of Conduct

QCoDeS strictly adheres to the Microsoft Open Source Code of Conduct.

Contributing

Instrument drivers developed by members of the QCoDeS community, but not supported by the QCoDeS developers, are contained in

https://github.com/QCoDeS/Qcodes_contrib_drivers

See Contributing for general information about bug/issue reports, contributing code, style, and testing.

License

See License.

qcodes's Issues

default parameters of MockInstrument

server_name is an optional argument of MockInstrument, but the code does not work with the defaults. The same holds for the model argument. We should either make server_name required, or make the code work with the defaults.

from qcodes import MockModel, MockInstrument

model = MockModel(name='dummymodel')

class TestInstrument(MockInstrument):

    def __init__(self, name, model, server_name=None):
        super().__init__(name, model=model, server_name=None)
        self.xxx='hello!'
        print(self.xxx)

# good!        
tmp = TestInstrument(name='test', server_name=None, model=model)

class TestInstrument2(MockInstrument):

    def __init__(self, name, model):
        super().__init__(name, model=model)
        self.xxx='hello 2!'
        print(self.xxx)

# fail!
tmp = TestInstrument2(name='test2', model=model)

No Async data retrieval

@alexcjohnson I am using the #70 Inst Process branch with real instruments (#53).
This works fine so far: all the talking to the instruments works, but not asynchronously.

I am using a super simple DMM; with it you can:

  • Read a value (this starts a measurement in the instrument and then returns the value, blocking while doing so)
  • Initiate a measurement (non-blocking) and ask for the data (blocking, but fast) later

The second approach is what I'm used to doing for async work: tell all instruments that I want information and then ask for it later.

I add a parameter like this:

self.add_parameter('volt',
                   get_cmd='READ?',
                   get_parser=float)

I tried timing a few things here:

# create Instruments
k1 = keith.Keithley_2600('Keithley1', 'GPIB0::15::INSTR',channel='a')
k2 = keith.Keithley_2600('Keithley2', 'GPIB0::15::INSTR',channel='b')

a1 = agi.Agilent_34400A('Agilent1', 'GPIB0::11::INSTR')
a2 = agi.Agilent_34400A('Agilent2', 'GPIB0::6::INSTR')

# set integration time (number of line cycles)
a1.NPLC.set(10)
a2.NPLC.set(10)
station1 = qc.Station(a1,a2)
station1.set_measurement(a1.volt)
station2 = qc.Station(a1,a2)
station2.set_measurement(a1.volt, a2.volt)
# Time single readings
with Timer('Time s1'):
    station1.measure()
with Timer('Time s2'):
    station2.measure()
[Time s1]
Elapsed: 0.41602373123168945
[Time s2]
Elapsed: 0.8230471611022949
# Time single readings
with Timer('Time a1'):
    a1.volt.get()
with Timer('Time a2'):
    a2.volt.get()
[Time a1]
Elapsed: 0.4150238037109375
[Time a2]
Elapsed: 0.4080233573913574
with Timer('Time Loop 1'):
    data = qc.Loop(k1.volt[-5:5:1], 0).each(a1.volt).run(location='testsweep', overwrite=True,background=False)

with Timer('Time Loop 2'):
    data = qc.Loop(k1.volt[-5:5:1], 0).each(a1.volt, a2.volt).run(location='testsweep', overwrite=True,background=False)
DataSet: DataMode.PUSH_TO_SERVER, location='testsweep'
   volt: volt
   volt_set: volt
started at 2016-04-15 16:35:52
[Time Loop 1]
Elapsed: 4.65026593208313
DataSet: DataMode.PUSH_TO_SERVER, location='testsweep'
   volt_1: volt
   volt_0: volt
   volt_set: volt
started at 2016-04-15 16:36:00
[Time Loop 2]
Elapsed: 8.254472017288208
with Timer('Time Loop 1'):
    data = qc.Loop(k1.volt[-5:5:1], 0).each(a1.volt).run(location='testsweep', overwrite=True)
    while data.sync():
        time.sleep(0.1)
DataSet: DataMode.PULL_FROM_SERVER, location='testsweep'
   volt: volt
   volt_set: volt
started at 2016-04-15 16:36:03
[Time Loop 1]
Elapsed: 4.743271112442017
with Timer('Time Loop 2'):
    data = qc.Loop(k1.volt[-5:5:1], 0).each(a1.volt, a2.volt).run(location='testsweep', overwrite=True)
    while data.sync():
        time.sleep(0.1)
DataSet: DataMode.PULL_FROM_SERVER, location='testsweep'
   volt_1: volt
   volt_0: volt
   volt_set: volt
started at 2016-04-15 16:36:16
[Time Loop 2]
Elapsed: 8.797503232955933

Apart from the negative-delay warnings (QCoDeS/Qcodes_loop#13), I now also get flooded by additional Measurement timestamps(?):
[16:36:18.811 Measurement] [16:36:18.811 Measurement] [16:36:18.811 Measurement] [16:36:18.811 Measurement] [16:36:18.811 Measurement] [16:36:18.811 Measurement] [16:36:18.811 Measurement] [16:36:18.811 Measurement] [16:36:18.811 Measurement] [16:36:18.811 Measurement] [16:36:18.811 Measurement] [16:36:18.811 Measurement] [16:36:18.811 Measurement] [16:36:18.811 Measurement] [16:36:18.811 Measurement] [16:36:18.811 Measurement] [16:36:18.811 Measurement] [16:36:18.811 Measurement] [16:36:18.811 Measurement] [16:36:18.811 Measurement] [16:36:18.811 Measurement] [16:36:18.811 Measurement] [16:36:18.811 Measurement] [16:36:18.811 Measurement] [16:36:18.811 Measurement] [16:36:18.811 Measurement ERR] WARNING:root:negative delay -0.000027 sec [16:36:19.643 Measurement ERR] WARNING:root:negative delay -0.000024 sec [16:36:20.470 Measurement ERR] WARNING:root:negative delay -0.000027 sec

Error for unpicklable instruments

Similar to the situation described in QCoDeS/Qcodes_loop#13:
a numerical simulation is written in instrument form, and I try to run it with the following code

loop = qcodes.Loop(system.parameters["energy"][0:1:0.1], delay=0)
data = loop.run(background=True)

which gives the following error:

BrokenPipeError                           Traceback (most recent call last)
<ipython-input-8-996cbf03db06> in <module>()
      1 loop = qcodes.Loop(system.parameters["energy"][0:1:0.1], delay=0)
----> 2 data = loop.run(background=True)

d:\damaz\pycharm\qcodes\qcodes\loops.py in run(self, *args, **kwargs)
    171         '''
    172         default = Station.default.default_measurement
--> 173         return self.each(*default).run(*args, **kwargs)
    174 
    175 

d:\damaz\pycharm\qcodes\qcodes\loops.py in run(self, background, use_async, enqueue, quiet, data_manager, **kwargs)
    442             p.is_sweep = True
    443             p.signal_queue = self.signal_queue
--> 444             p.start()
    445             self.data_set.sync()
    446             self.data_set.mode = DataMode.PULL_FROM_SERVER

C:\Anaconda3\envs\kwant_env\lib\multiprocessing\process.py in start(self)
    103                'daemonic processes are not allowed to have children'
    104         _cleanup()
--> 105         self._popen = self._Popen(self)
    106         self._sentinel = self._popen.sentinel
    107         _children.add(self)

C:\Anaconda3\envs\kwant_env\lib\multiprocessing\context.py in _Popen(process_obj)
    210     @staticmethod
    211     def _Popen(process_obj):
--> 212         return _default_context.get_context().Process._Popen(process_obj)
    213 
    214 class DefaultContext(BaseContext):

C:\Anaconda3\envs\kwant_env\lib\multiprocessing\context.py in _Popen(process_obj)
    311         def _Popen(process_obj):
    312             from .popen_spawn_win32 import Popen
--> 313             return Popen(process_obj)
    314 
    315     class SpawnContext(BaseContext):

C:\Anaconda3\envs\kwant_env\lib\multiprocessing\popen_spawn_win32.py in __init__(self, process_obj)
     64             try:
     65                 reduction.dump(prep_data, to_child)
---> 66                 reduction.dump(process_obj, to_child)
     67             finally:
     68                 context.set_spawning_popen(None)

C:\Anaconda3\envs\kwant_env\lib\multiprocessing\reduction.py in dump(obj, file, protocol)
     57 def dump(obj, file, protocol=None):
     58     '''Replacement for pickle.dump() using ForkingPickler.'''
---> 59     ForkingPickler(file, protocol).dump(obj)
     60 
     61 #

BrokenPipeError: [Errno 32] Broken pipe

I feel that this error is not very helpful.
I assume the problem is that my instrument driver is not picklable, and I think the error should reflect this. Is it possible to check whether a driver is picklable and, if not, raise a very specific and clear error, so that it is obvious where the problem actually is?
The error I would like to get should contain the name or class of the instrument and the fact that it is unpicklable.
I don't know if it is possible to give more specific information on why it is not picklable; if it were, I would like to see that information in the error as well.

Finally, related to #53, I would like it a lot if it were still possible to run such a simulation in the background with an unpicklable instrument.
(The problem is that Kwant systems are unpicklable, so it is not possible to write my Instrument in a picklable way.)
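
A minimal sketch of the kind of check requested here (a hypothetical helper, not existing QCoDeS code): try to pickle the instrument before spawning the background process and raise an error that names it.

import pickle

def assert_picklable(instrument):
    """Raise a clear, instrument-specific error if pickling would fail."""
    try:
        pickle.dumps(instrument)
    except Exception as err:
        raise TypeError(
            "Instrument '{}' (class {}) cannot be pickled and therefore "
            "cannot be used in a background loop: {}".format(
                instrument.name, type(instrument).__name__, err)
        ) from err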

Overwriting parameters in inherited instruments

Currently parameters are added to an instrument in __init__. Once a parameter has been added, it is not possible to overwrite it with another add_parameter call. When inheriting from an instrument you generally want to override inherited functions as well as some parameters. This is currently not possible.

The current workaround is to override the entire __init__ and copy-paste all the add_parameter calls from the parent class. I think it makes sense to run super().__init__(...) and then only overwrite those parameters that need to change, as in the sketch below.
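
A hypothetical sketch of the desired pattern: call the parent __init__ and replace only the parameters that need to change. ParentInstrument and the parameter details are placeholders; the del is the workaround needed today because a second add_parameter with the same name is rejected.

from qcodes.utils import validators as vals

class ChildInstrument(ParentInstrument):
    def __init__(self, name, **kwargs):
        super().__init__(name, **kwargs)
        del self.parameters['frequency']          # drop the inherited version
        self.add_parameter('frequency',
                           get_cmd='FREQ?',
                           set_cmd='FREQ {}',
                           get_parser=float,
                           vals=vals.Numbers(1e6, 2e9))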

Status widget moved away

Sorry, I accidentally posted with Guen's account; @MerlinSmiles is the author!

Not really sure how I did this, but I detached the status area, moved it about, and now I cannot move it anymore because the title bar is behind the browser frame. The only thing I can do is save the notebook.


instruments in separate repo?

Not sure if this has been discussed before, but perhaps it's an idea to have a separate repo for instrument drivers?
This would keep the drivers nicely separate from the core code. Also, it would enable folks who use their own measurement code to use the drivers and contribute. I like the idea of having universal instrument driver code that is completely usable on its own (without QCoDeS dependencies).

See also: https://github.com/Galvant/InstrumentKit

Qcodes.Task

When I run my simulation with the following loop:

experiment = qcodes.Loop(system.L[10:100:1], delay=0).each(
    system.build, 
    qcodes.Loop(system.parameters["energy"][0:1:0.1], delay=0)
)

I get the error

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-25-ccd2d38d651b> in <module>()
----> 1 data = experiment.run(background=False, location='testsweep', overwrite=True, data_manager=False)

d:\damaz\pycharm\qcodes\qcodes\loops.py in run(self, background, use_async, enqueue, quiet, data_manager, **kwargs)
    446             self.data_set.mode = DataMode.PULL_FROM_SERVER
    447         else:
--> 448             loop_fn()
    449             self.data_set.read()
    450 

d:\damaz\pycharm\qcodes\qcodes\loops.py in _run_wrapper(self, *args, **kwargs)
    486     def _run_wrapper(self, *args, **kwargs):
    487         try:
--> 488             self._run_loop(*args, **kwargs)
    489         finally:
    490             if(hasattr(self, 'data_set') and hasattr(self.data_set, 'close')):

d:\damaz\pycharm\qcodes\qcodes\loops.py in _run_loop(self, first_delay, action_indices, loop_indices, current_values, **ignore_kwargs)
    526                 f(first_delay=delay,
    527                   loop_indices=new_indices,
--> 528                   current_values=new_values)
    529 
    530                 # after the first action, no delay is inherited

TypeError: build() got an unexpected keyword argument 'first_delay'

If I run the code with

experiment = qcodes.Loop(system.L[10:100:1], delay=0).each(
    qcodes.Task(system.build), 
    qcodes.Loop(system.parameters["energy"][0:1:0.1], delay=0)
)

it works fine. I feel that the syntax without the Task method is nicer, but it seems to require that my build is able to receive arbitrary arguments. Although this is easy to achieve, it also feels unnecessary. But I don't know how you all think about that.

positioning of plotting windows

I am trying to position the plot windows. Can we add a general setGeometry method to the BasePlot object?

The QtPlot object has a member plotQ.win, which is a pyqtgraph.GraphicsWindow. I can call plotQ.win.setGeometry to set the position (this works), but if I inspect the object I get the following error:

> plotQ.win
===== Remote process raised exception on request: =====
Traceback (most recent call last):
  File "/usr/local/lib/python3.4/dist-packages/pyqtgraph/multiprocess/remoteproxy.py", line 213, in handleRequest
    result = getattr(opts['obj'], opts['attr'])
AttributeError: 'GraphicsWindow' object has no attribute '_ipython_display_'

===== Local Traceback to request follows: =====
===== Remote process raised exception on request: =====
Traceback (most recent call last):
  File "/usr/local/lib/python3.4/dist-packages/pyqtgraph/multiprocess/remoteproxy.py", line 213, in handleRequest
    result = getattr(opts['obj'], opts['attr'])
AttributeError: 'GraphicsWindow' object has no attribute '_repr_png_'
...
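
A minimal sketch of the requested convenience method, assuming it would be added to QtPlot and simply forward to the pyqtgraph window mentioned above (not existing code):

# hypothetical method on QtPlot; self.win is the pyqtgraph.GraphicsWindow
def setGeometry(self, x, y, width, height):
    """Position and resize the plot window (in pixels)."""
    self.win.setGeometry(x, y, width, height)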

Parameter __call__ redirect to set or get

Currently we have to
param.set(11)
and
param.get()
This adds up to a lot of typing and could easily be handled in the parameter class to allow the following:
param(11)
param()

    def __call__(self, *args):
        if len(args) == 0:
            if self.has_get:
                return self.get()
            else:
                raise NoCommandError('no get cmd found in' +
                                     ' Parameter {}'.format(self.name))
        else:
            if self.has_set:
                self.set(*args)
            else:
                raise NoCommandError('no set cmd found in' +
                                     ' Parameter {}'.format(self.name))

and a few other modifications.

I was trying to do this, and it works fine as long as I am calling it on a local instrument; if I do this on a remote instrument I get the following error:

AttributeError: 'RemoteMethod' object has no attribute 'set'

The RemoteMethod has a __call__ already, which makes sense; the RemoteParameter does not, but somehow the parameter became a method...?

@alexcjohnson If I want someone (you) to be able to test this, do I have to make it a pull request? Is that the way to do it?

I would like to see this working, as it simplifies writing things a lot, but there might be reasons not to implement it.
What do people think about a feature like this?

PyQtGraph live plotting enhancements

I am implementing the pyqtgraph 2D live plotting in our setup and thought that instead of creating my own I'd try to improve the one @alexcjohnson made. So far I've found some features I'd like to add, some minor bugs, and some things I don't understand. I thought it would be good to create a single issue which I can close once my upcoming pull request is done. So here it goes.

  • What are the ways I can send data to the pyqtgraph plot? The docstring says '''Plot x/y lines or x/y/z heatmap data.''', and the example notebook sends a dataset.dataarray which has the right units on the x and y axes, the right labels, etc. I guess it is mostly my lack of understanding of what a dataarray/dataset is/contains, but I haven't yet succeeded in getting the same results by just sending arrays (and/or dicts) with data. I assume we also want our plotting to work with simple arrays, as you will be doing this a lot when debugging or running the notebook. (The same applies to add().)
  • I am not a big fan of *args and **kwargs. I've found myself following the trace from QtPlot to the base plot to add, add to plot, etc. and got lost somewhere. I understand that we want the function to be polymorphic over different input data shapes and at the same time compatible with multiple plotting backends, but surely there must be a more explicit way of achieving this. If anyone has explicit suggestions on how to do this I'd be happy to edit this and modify the docstrings.
  • Clearing a window. Currently a new pyqtgraph window is instantiated every time a plot is created. It makes sense to reuse the same pyqtgraph window, to prevent ending up with 20 open windows as well as not having to move/recreate the existing window all the time. pyqtgraph has a win.clear() function; using it in QCoDeS correctly clears the window but does not remove the traces. Should be easy to add.
  • Explicitly updating data. Currently the live plotting updates by calling the update() function once every "interval". This works great, but it assumes the underlying dataset changes (which is true for the QCoDeS dataset). I found a workaround by explicitly setting my_qcplot.traces[0]['z'] = z_data, but that is obviously less than ideal. I would suggest adding **kwargs to update() so that you can explicitly overwrite data in some traces. An alternative is to add a special function for this purpose.
  • Allow the add() function to either add a plot in a new row or in a new column. (The current behaviour seems a sensible default.)
  • There are also some minor bugs with data not appearing or having the wrong scales, but I am convinced that this is due to me not understanding how the plotting class works rather than a real bug, so I'll try to figure that out first.

@guenp

I made a repo with some additions to pyqtgraph, like buttons and widgets such that you can see a snapshot of a graph in-line in the ipython notebook, which could be useful for Qcodes. It's here: https://github.com/guenp/colorgraphs

Hey Guen, I'd like to take a look at your widgets, buttons, etc.; however, the link is broken. Could you post a new one?

There will probably be some more things that will come up later :)

instrument driver documentation

Hi,

At this moment I am writing a driver for the Keithley 2700. The add_parameter and add_function functionality of the Instrument class is really helpful and prevents a lot of duplicated code. I use for example:

k1.add_function('readnext', get_cmd=':DATA:FRESH?', return_parser=float)

Is there a method to add documentation for the generated functions and parameters? For normal functions I can use something like

keithley.readnext.__doc__ = 'read value from instrument'

but for some reason this does not work with the qcodes.instrument.function.Function class.
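
A sketch of what would be convenient instead, assuming add_function grew a docstring keyword (the keyword is an assumption, not the current API; the call otherwise mirrors the example above):

k1.add_function('readnext',
                get_cmd=':DATA:FRESH?',
                return_parser=float,
                docstring='Read the next fresh value from the instrument buffer.')
# help(k1.readnext) would then show the docstring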

Data structures

@alexcjohnson
Hi Alex, I noticed you mentioned that one of the next steps will be to improve data saving, so I thought I'd share my ideas on it before everything is made and fully integrated. One of the things I want to do after the data saving is to move over our analysis and/or build a framework for easily analysing data. Automation of the analysis of experiments is one of the things I have in mind when talking about data structures, but it is a subject large enough to warrant its own discussion.

With respect to data structures I think there are a few aspects that are relevant

  • Structure of data in the file
  • data format (JSON, HDF5, data-server or other)
  • file-handling and interfaces (default directories, search and load functions etc)

Structure of the data file

I like the current structure a lot: a dataset can be passed to a plot and contains all the metadata. However, when doing analysis I find that I require some more functionality.
To be specific, I require:

  • A snapshot of the settings at the time of the experiment
  • The ability to store additional data in the file (fit results, rotated and normalised data, etc.)
  • Other metadata (commit hash of our personal code, datetime, etc.)

The way we address this currently is by having a hierarchical data structure that makes a distinction between a dataset (arrays of values + labels) and the datafile (which contains different groups, of which the dataset is one). I would propose to use a similar hierarchical format for QCoDeS.

Data format

We have had a lot of discussions on our side with respect to the data format. Currently JSON is the default and we (in Delft) are more used to HDF5, let me just summarise the pros and cons as I see them with the goal of making our use of both better. I guess as long as there is a good data handler the underlying backend does not really matter (similar to how we are with plotting now) but I guess the default will significantly impact how we think about it.
@cdickel , I know you are pro HDF5 can you go over this and see if there is anything I missed/you want to add?

  • Pro JSON
    • Simple/text based/openable in any editor
    • Native support for dictionaries (we use a workaround in HDF5)
    • current QCodes default
  • Pro H5PY/HDF5
    • HDF5 Hierarchical data-viewer (allows browsing datafile and folding of folders, looked around but did not find equivalent for JSON)
    • File-sizes (this is a real problem when taking single shot data for long measurements)
    • Possible to write/extend a dataset that is open, prevents data loss when experiment crashes
    • Easily extract data in hierarchical fashion (no string parsing or custom import functions)
    • Can easily add new data groups to an existing file
    • Delft default (admittedly only a pro for us)
  • Alternatives
    • If anyone knows of any alternatives please let me know

I would be very willing to switch to JSON but to me it does not offer the same as HDF5 at this point. I would love to have some discussion on this.

Interface/file-handling

We currently use a format that saves any datafile as follows: user_datadirectory/YYMMDD/HHMMSS_label/HHMMSS_label.hdf5. The user_directory is a global variable that is set upon initialisation of the environment (similar to creating an instrument upon init). This directory typically does not change for several months. The label is a non-unique identifier for an experiment such as e.g. T1_qubit_3, we typically do the same run multiple times and do not want to worry about coming up with unique names, the timestamp ensures uniqueness. The file is nested in a folder with the same name to allow saving figures that come out of the analysis next to it.
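
For concreteness, a small sketch of the naming scheme just described (the directory layout and label come from the description above; the helper itself is hypothetical):

import os
import time

def new_data_path(user_datadirectory, label):
    """Build user_datadirectory/YYMMDD/HHMMSS_label/HHMMSS_label.hdf5."""
    day = time.strftime('%y%m%d')
    stamp = time.strftime('%H%M%S')
    folder = os.path.join(user_datadirectory, day,
                          '{}_{}'.format(stamp, label))
    os.makedirs(folder, exist_ok=True)
    return os.path.join(folder, '{}_{}.hdf5'.format(stamp, label))

# new_data_path('D:/data', 'T1_qubit_3')
# -> 'D:/data/160415/163552_T1_qubit_3/163552_T1_qubit_3.hdf5'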

On the analysis side we have functions that look for data files in one of three ways. If a label is specified it looks for the last file that has that label, e.g. T1 would find our T1_qubit_3 file. If a timestamp is specified it uses this as a unique identifier for a file. If nothing is specified it will default to the last generated file. On top of that we have a whole bunch of higher level functions.

This scheme is heavily qtlab inspired and works pretty well for us. Some of the drawbacks we have encountered are:

  • timestamp is not unique when doing very short experimental runs (sometimes multiple start within the same second)
  • Only possible to pick one user_datadirectory

I'd be interested in what your ( @alexcjohnson ) plans are for this kind of interface and file/folder handling. I understand that you currently have to specify all this when using the Loop but I have to admit I have not used that in a while.

@cdickel, Can you let me know what you think of all this? I tried to incorporate ideas from our discussions over the last year in it but I am sure I have missed some.
@damazter , what do you think? You guys do things slightly different so I am curious.

refactor Parameter out of Instrument

from @AdriaanRol's discussion in #38

I noticed that the parameter class is located inside the instrument folder. Previously this made sense, but I think that in recent PRs #39 and #40 we have made the notion of a parameter more abstract and independent of the instrument. Would it make sense to separate/refactor this?

Yes, yes it would make sense

Running setup.py gives an error

I tried to install qcodes using python setup.py develop. This correctly adds qcodes to my path (and all works fine); however, it raises a confusing error message which I think comes from trying to install coverage.


pip list tells me I have coverage (4.1b2) installed so I don't know what is wrong here.

As far as I can tell everything is still working fine for me but I don't think I should ignore the warnings. Also qcodes got correctly added to my path so that I can now import it.

I also tried running the test suite; this also fails. I have not attempted to diagnose this further, but figured that any bug reports on these things are useful.

PS D:\GitHubRepos\Qcodes> python setup.py nosetests
Detected build version: 14.0
running nosetests
running egg_info
writing requirements to qcodes.egg-info\requires.txt
writing qcodes.egg-info\PKG-INFO
writing dependency_links to qcodes.egg-info\dependency_links.txt
writing top-level names to qcodes.egg-info\top_level.txt
reading manifest file 'qcodes.egg-info\SOURCES.txt'
writing manifest file 'qcodes.egg-info\SOURCES.txt'
......E......................FFDetected build version: 14.0
running nosetests
running egg_info
writing qcodes.egg-info\PKG-INFO
writing requirements to qcodes.egg-info\requires.txt
writing top-level names to qcodes.egg-info\top_level.txt
writing dependency_links to qcodes.egg-info\dependency_links.txt
F.Detected build version: 14.0
Detected build version: 14.0
.reading manifest file 'qcodes.egg-info\SOURCES.txt'
writing manifest file 'qcodes.egg-info\SOURCES.txt'
running nosetests
running egg_info
....writing top-level names to qcodes.egg-info\top_level.txt
writing dependency_links to qcodes.egg-info\dependency_links.txt
writing qcodes.egg-info\PKG-INFO
writing requirements to qcodes.egg-info\requires.txt
running nosetests
running egg_info
.........................
======================================================================
ERROR: test_yes (qcodes.tests.test_helpers.TestIsSequence)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "D:\GitHubRepos\Qcodes\qcodes\tests\test_helpers.py", line 104, in test_yes
    f = open(os.listdir('.')[0], 'r')
PermissionError: [Errno 13] Permission denied: '.git'

======================================================================
FAIL: test_qcodes_process (qcodes.tests.test_multiprocessing.TestQcodesProcess)
----------------------------------------------------------------------
writing dependency_links to qcodes.egg-info\dependency_links.txt
Traceback (most recent call last):
  File "C:\Anaconda3\lib\unittest\mock.py", line 1157, in patched
    return func(*args, **keywargs)
  File "D:\GitHubRepos\Qcodes\qcodes\tests\test_multiprocessing.py", line 82, in test_qcodes_process
    self.assertEqual(len(reprs), 2, reprs)
AssertionError: 3 != 2 : ['<p1, started daemon>', '<p2, started daemon>', '<p0, started daemon>']

writing qcodes.egg-info\PKG-INFO
======================================================================
writing top-level names to qcodes.egg-info\top_level.txt
FAIL: test_await (qcodes.tests.test_sync_async.TestAsync)
writing requirements to qcodes.egg-info\requires.txt
----------------------------------------------------------------------
Traceback (most recent call last):
  File "D:\GitHubRepos\Qcodes\qcodes\tests\test_sync_async.py", line 55, in test_await
    self.check_time(async2, 3, 100, 9, 0.1, 0.2)
  File "D:\GitHubRepos\Qcodes\qcodes\tests\test_sync_async.py", line 52, in check_time
    self.assertLess(t2 - t1, tmax)
AssertionError: 0.20902085304260254 not less than 0.2

======================================================================
FAIL: test_chain (qcodes.tests.test_sync_async.TestAsync)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "D:\GitHubRepos\Qcodes\qcodes\tests\test_sync_async.py", line 60, in test_chain
    self.check_time(async3, 5, 100, 25, 0.1, 0.2)
  File "D:\GitHubRepos\Qcodes\qcodes\tests\test_sync_async.py", line 52, in check_time
    self.assertLess(t2 - t1, tmax)
AssertionError: 0.2010200023651123 not less than 0.2

Name                                         Stmts   Miss  Cover   Missing
--------------------------------------------------------------------------
qcodes.py                                       27      9    67%   15-25
qcodes\data\data_array.py                       98     78    20%   31-55, 59, 63-67, 80-105, 113-138, 141-142, 148, 158-
162, 165, 174, 177-178, 181-185, 191-195, 202-205, 208
qcodes\data\data_set.py                        179    140    22%   38-55, 79-94, 99-104, 123-135, 186-212, 215-221, 224-
244, 251-254, 257-264, 272, 280-285, 300-333, 347-353, 360-384, 395-399, 405-407, 414-420, 426-427, 430, 433-438
reading manifest file 'qcodes.egg-info\SOURCES.txt'
reading manifest file 'qcodes.egg-info\SOURCES.txt'
writing manifest file 'qcodes.egg-info\SOURCES.txt'
writing manifest file 'qcodes.egg-info\SOURCES.txt'
qcodes\data\format.py                          225    190    16%   44-55, 61-62, 70, 78-97, 100, 114-148, 157-188, 232,
238, 246, 258-349, 352, 355-358, 361-368, 375-424, 427-441, 444, 447-451
qcodes\data\io.py                               76     50    34%   71-84, 90-91, 94, 97, 103, 109-110, 119-148, 154-161,
 166, 169, 172, 175-179, 182, 185-186
qcodes\data\manager.py                         124     89    28%   15-20, 31, 34, 48-62, 65-67, 70, 76-77, 80-84, 90-102
, 108-110, 117-121, 142-151, 154-182, 185, 188, 207-208, 215-221, 227-229, 237, 243, 249
qcodes\instrument.py                             0      0   100%
qcodes\instrument\base.py                       61      4    93%   66-67, 154-155
qcodes\instrument\function.py                   43      2    95%   96-97
qcodes\instrument\ip.py                         20     12    40%   10-16, 19-20, 24-25, 29-38
qcodes\instrument\mock.py                       63      0   100%
qcodes\instrument\parameter.py                 156     11    93%   135-141, 144-151
qcodes\instrument\sweep_values.py              107     34    68%   67, 196-207, 220-227, 238-252, 255-277
qcodes\instrument\visa.py                       35     23    34%   10-25, 28-32, 35-36, 40-41, 47, 56-57, 61-63, 67
qcodes\instrument_drivers\QuTech.py              0      0   100%
qcodes\instrument_drivers.py                     0      0   100%
qcodes\instrument_drivers\rohde_schwarz.py       0      0   100%
qcodes\instrument_drivers\signal_hound.py        0      0   100%
qcodes\instrument_drivers\tektronix.py           0      0   100%
..............qcodes\loops.py                                285    239    16%   65-74, 81-91, 120-122, 133-141, 153-165
, 172-173, 188-207, 216-240, 243-313, 316-321, 324-350, 354-362, 371-375, 378-381, 414-454, 457-474, 477-484, 487-491, 5
10-534, 537-543, 559-561, 564, 577, 580, 590-608, 611-618, 627-628, 631
.qcodes\station.py                               35     24    31%   17-32, 35, 45-50, 60, 67-82, 88
qcodes\utils.py                                  0      0   100%
qcodes\utils\helpers.py                        119     55    54%   144, 151-166, 171-182, 208-218, 227-248, 257-271
qcodes\utils\metadata.py                        13      0   100%
qcodes\utils\multiprocessing.py                 85     45    47%   24-33, 63-72, 117-118, 121-122, 127-138, 148-150, 153
-175, 178
qcodes\utils\sync_async.py                     114      9    92%   166, 171-173, 176, 180, 184, 189-191, 218
.qcodes\utils\validators.py                     110      3    97%   58, 61, 64
qcodes\widgets.py                                0      0   100%
--------------------------------------------------------------------------
TOTAL                                         1975   1017    49%
----------------------------------------------------------------------
Ran 63 tests in 1.398s

FAILED (errors=1, failures=3)
PS D:\GitHubRepos\Qcodes>

.awg fileformat minimal working example

Hi guys,

The last day and a half I've been struggling with the Tektronix 5014 AWG driver. Most of it is working, except for writing .awg files. Sending over the data and creating the file works fine; however, when trying to load the file it gives me the very undescriptive error E11401: File C:\\Waveforms\test.awg wrong format.

I am quite sure the error has something to do with the difference in data formats between Python 2 and Python 3, but as the file that gets generated is quite massive it is rather hard to debug.

As most of the labs are actually using the Tektronix 5014, I was wondering if anyone has a minimal working example that sends and loads a file using Python 3.

From experience I know that sending a .awg file without some of the settings loads just fine, but it applies the defaults for those settings that are not specified. I tried reducing my minimal working example to a file with just the records "MAGIC", "VERSION" and one setting, but even that does not load correctly, possibly because this is too little or potentially because there is some error in my encoding.
I am using the struct.pack method for packing my bytes and the visa_handle.write_raw() method to write it.

For those interested in giving it a go:
The data format is described in the "online help" of the Tektronix under "File and Record Format of the AWG".

Any help would be much appreciated.
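
In case it helps others compare notes, here is a stripped-down sketch of the struct.pack approach described above. The record layout (two little-endian uint32 lengths, a null-terminated name, then the raw data) and the MAGIC/VERSION values are my reading of the format and should be checked against the Tektronix documentation rather than taken from here.

import struct

def pack_record(name, data):
    """Pack one .awg record: name length, data length, null-terminated name,
    then the raw data, all little-endian."""
    name_bytes = name.encode('ascii') + b'\x00'
    return (struct.pack('<II', len(name_bytes), len(data))
            + name_bytes + data)

header = (pack_record('MAGIC', struct.pack('<H', 5000))
          + pack_record('VERSION', struct.pack('<H', 1)))
# then send the bytes with visa_handle.write_raw(...) as described above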

Units, 1e9 vs 1G vs 1000000000

@alexcjohnson
Yesterday we had a quite intense discussion in the lab on units.
I am personally a big fan of using SI units for everything and then adding e9 (or e-9 or e6 etc) to it to convert it to whatever quantity we need.
There are several pros and cons for this way of doing it
Pros:

  • The same everywhere
  • Intuitive to remember/convert

Cons:

  • type eX every time you set a value/range/linspace, everything!
  • hard to read when printed or output to terminal unless explicit formatter is specified

Displaying problem

I looked a bit into the print problem, but it looks like there is no easy solution. I found the following related approaches, which don't cover it completely: numpy's set_printoptions and converting to a custom float class.
Other alternatives I see: add a very short conversion function to a "pretty float" for easy printing, i.e. print(pretty(x)), or modify the repr of the base Python float class (not possible, as it is immutable and would require recompiling Python).
Note that if we go down the pretty-float road we can also incorporate the G, M, k, m, u, n prefixes.

Specifying input problem

I do not know the best way to address this one; from what I understood, typing e9 every time is considered a pain. A valid alternative would be to use scientific prefixes after typing the number, i.e. x=1.24G would set x to 1.24e9.
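
A small sketch of both ideas (a "pretty" formatter for printing and a parser for the input shorthand); the prefix table and function names are illustrative only:

_PREFIXES = {'G': 1e9, 'M': 1e6, 'k': 1e3, '': 1,
             'm': 1e-3, 'u': 1e-6, 'n': 1e-9}

def pretty(x):
    """Format a number with an SI prefix for readable printing."""
    for prefix, factor in sorted(_PREFIXES.items(), key=lambda kv: -kv[1]):
        if abs(x) >= factor:
            return '{:g}{}'.format(x / factor, prefix)
    return '{:g}'.format(x)

def parse_si(text):
    """Turn input shorthand like '1.24G' into 1.24e9."""
    if text and text[-1] in _PREFIXES:
        return float(text[:-1]) * _PREFIXES[text[-1]]
    return float(text)

# pretty(1.24e9) -> '1.24G',  parse_si('1.24G') -> 1240000000.0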

Snapshot does not work for manual parameter

I have a qubit object which contains a bunch of (now manual) parameters.
Sadly the snapshot functionality does not work for these new types of parameters.

This issue should be quite trivial to reproduce: create an instrument with a manual parameter and call snapshot() on it.


getting a set-only parameter

Even though it is something we absolutely want to avoid, there sometimes are instruments that only have set commands. In this case I want the get command to return the value that was last set.

It is already possible to extract the last set value by using the snapshot; however, that does not give the parameter the same interface as any other parameter in a higher-level script.

To give an example, I ran into this problem while developing a driver for an instrument where the get functionality has not yet been implemented.

The syntax I would propose would be something along the lines of

self.add_parameter('name', set_cmd='set_string{}', get_cmd=soft_get)

where soft get is either a reserved keyword (bad idea IMO), an importable function or code or some other object.

For now I'll just settle for the dirty hack of adding it in by hand.
EDIT: making get return _last_value explicitly
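
A sketch of that "dirty hack", caching the last set value and returning it from get. The class, parameter name and command string are placeholders; it assumes the usual VisaInstrument base class and callable set_cmd/get_cmd.

from qcodes import VisaInstrument

class SoftGetInstrument(VisaInstrument):
    """Sketch: a parameter that remembers its last set value."""

    def __init__(self, name, address, **kwargs):
        super().__init__(name, address, **kwargs)
        self._last_level = None
        self.add_parameter('level',
                           set_cmd=self._set_level,
                           get_cmd=lambda: self._last_level)

    def _set_level(self, value):
        self.write('set_string{}'.format(value))  # the real set command
        self._last_level = value                  # remembered for the soft get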

Storage and scheduling separation.

Continued from the discussion in #2: how should the relation between "user-requested" measurements and information and "background" measurements and information be handled?

In the current implementation the user-requested part is defined in SweepValues and SweepStorage; the background part is in Monitor. SweepValues has priority, in that it tells Monitor how to behave, but cooperation from the Monitor is expected.

Upper limit of sweep range

At the moment, a sweep like Loop(parameter[1:5:1],1) goes through the values 1,2,3,4 as per standard Python slice behavior. I think it would be more intuitive (especially for people who aren't used to Python) if it included the upper limit, i.e. did the same as Loop(parameter[1,2,3,4,5],1).
Assuming this doesn't clash with something else, parameter.__getitem__ could be changed to

if keys.step is None:
    step = 1
else:
    step = keys.step
# extend the stop value by one step so the upper limit is included
outkeys = slice(keys.start, keys.stop + step, keys.step)
return SweepFixedValues(self, outkeys)

Docstring and argument should be available by shift-tab in the notebook

Related to my last comment in #78.

From the user perspective, this is an important issue and will determine how effective users actually use the code.

The docstrings of functions have to somehow magically go through the process machinery, but this is actually needed for all the important things in the notebook: at any point I, as a user, want to hit shift-tab and see what a function does.

The same goes for the available arguments. I only figured out that update=True exists for the snapshot by digging through the code on my way to figuring out where to implement it; we cannot expect this from a regular user.

test suite errors

On my system the tests (python setup.py nosetests) fail:

Traceback (most recent call last):
  File "/home/eendebakpt/develop/Qcodes/qcodes/tests/test_helpers.py", line 104, in test_yes
    f = open(os.listdir('.')[0], 'r')
IsADirectoryError: [Errno 21] Is a directory: 'qcodes'

I think the code is intended to check whether a file handle is a sequence. On my system the first element of os.listdir('.') is a directory and this generates an error. I suggest removing the file-handle check altogether.

A second error occurs in the multi processing tests:

======================================================================
FAIL: test_qcodes_process (qcodes.tests.test_multiprocessing.TestQcodesProcess)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python3.4/unittest/mock.py", line 1136, in patched
    return func(*args, **keywargs)
  File "/home/eendebakpt/develop/Qcodes/qcodes/tests/test_multiprocessing.py", line 82, in test_qcodes_process
    self.assertEqual(len(reprs), 2, reprs)
AssertionError: 3 != 2 : ['<p2, started daemon>', '<p1, started daemon>', '<ForkProcess(SyncManager-1, started)>']

Apparently on my system not only the daemons are started, but also a side process which gets counted accidentally. I think we should either replace the check self.assertEqual(len(reprs), 2, reprs) with something more sophisticated (e.g. check names or ids) or not do this kind of test at all, as they are very system dependent.

Conversion of input/output parameters

Related to issue #56 there is another problem: internal conversion of values.

Sometimes the data you get or set in a measurement is not what that data actually represents.
For instance:

  • One might have a voltage divider at the output of a voltage source. Here I would like a conversion factor somewhere in the instrument that takes care of that. At the end of the day I'd like to say bias.set(1e-3), but the instrument might have to output 1.213 V.
  • There could be an analog instrument in front of the instrument you measure, for instance a current amplifier that translates amperes into volts. When I measure the voltage, it represents the current. At the end of the day I'd like to say current.get() and get something in amperes.

Both cases deal with manual additions/changes to instruments, or rather to the representation of their values.
Should we have, is there, or is it planned to have a standardized conversion for this?
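
A sketch of one possible shape for this, folding a fixed conversion factor into the parameter via get_parser/set_parser (the command strings and the 1:1000 divider are made up for illustration):

# inside the instrument's __init__ (sketch)
divider = 1000  # e.g. a 1:1000 voltage divider on the source output

self.add_parameter('bias',
                   set_cmd='SOUR:VOLT {}',
                   get_cmd='SOUR:VOLT?',
                   set_parser=lambda v: v * divider,        # ask for 1e-3 V
                   get_parser=lambda v: float(v) / divider)  # read back 1e-3 V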

version information

Version information about the qcodes package is defined in setup.py, but does not appear in the package as qcodes.__version__. Such a version string is useful for other packages (for example, the qcodes setup.py uses these strings for version checking).

We should add qcodes.__version__. The only question is how to keep the version in setup.py and qcodes.__version__ the same. A method that is used by numpy is to extract the version in setup.py from a file numpy/version.py (see https://github.com/numpy/numpy/blob/master/setup.py line 103). If we agree this is a suitable method I can make a PR.
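
A sketch of that numpy-style approach (file names follow that convention; the exact details would be settled in the PR):

# qcodes/version.py -- single source of truth
version = '0.1.0'   # example value

# qcodes/__init__.py
from qcodes.version import version as __version__

# setup.py -- read the version without importing the package
def get_version(path='qcodes/version.py'):
    namespace = {}
    with open(path) as f:
        exec(f.read(), namespace)
    return namespace['version']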

Break loop based on measured value

For an experiment measuring the critical current of a superconductor I want to take an IV trace.
Easy enough, but I would also like to spend as little time as possible in the normal state, so I need some way to break the loop when some condition is fulfilled, ideally using the last measured data.

How would I go about this without reading the same value again?

    data = qc.Loop(k1.volt[0:5:0.1], 0).each(
                    a1.volt,
                    a2.volt,
                    # If a2.volt >= 1, kill this loop
    ).run()
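
One possible shape for such a feature: a hypothetical BreakIf-style action that reuses the latest measured value instead of re-reading it (nothing like this exists in the code under discussion):

data = qc.Loop(k1.volt[0:5:0.1], 0).each(
    a1.volt,
    a2.volt,
    qc.BreakIf(lambda: a2.volt.get_latest() >= 1),  # stop without re-reading
).run()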

Switching to QCodes alpha/beta version

@alexcjohnson
Hi Alex,
When you visited just before Christmas we had some good discussions on QCodes and we agreed that I would provide a list of features that would be required in order for us to start using QCodes.

Essentials for switching

  • Instrument drivers
  • QTLab instrument converter (optional)
  • Basic live-plotting monitor (refreshing the plot command is not very nice)
  • HDF5 backend (we'll probably make this ourselves)

Modifications to Loop() concepts we use that are currently not mappable to QCodes

  • Multi-parameter sweeps (as opposed to Loop.loop, which is fixed to a grid)
  • Adaptive measurements (passing the measurement function to an arbitrary adaptive function; different from your current implementation, but allows arbitrary pre-existing adaptive functions)
  • Nesting of loops (executing a Loop() object within an existing loop; not trivial from a data-saving standpoint)

In addition to these essentials there are quite a few modifications we need to make to our code to transfer over. For instance, we need to translate our sweep and detector functions into settable and gettable parameters, and we need to change our analysis to work with the new data format, etc.

Our plan for beta testing is to first transfer over the bare-essentials. This means transferring over basic instrument drivers but continue using our existing measurement control and analysis. The plans are to convert our sweep and detector functions to parameters such that they can be used both by QCodes and our measurement control before switching over completely.

When beta-testing those first features I will probably run into all the little details of features that are nice to have. I will also add those to this issue

Practical notes/features and small misc things

  • Some kind of config file to load settings on startup, containing e.g. a default data directory
  • As I am messing around and trying to understand the code, I find that it would be super useful to have some kind of example notebook that explains how to make a custom instrument and define custom parameters, and what the different ways to do that are. This could be similar to your example notebook but more of an advanced lesson.
  • With respect to the syntax, I think it makes sense to separate the parameter that is being looped from the points over which it is looped (this is also more natural when extending to N-D parameter sweeps or when doing adaptive loops).

Some random questions

Q: I saw some notes in the code about Python 3.3 vs 3.5; which version are you actually using and why (and can I just use 3.5, or do I need 3.3)?
Q: Do you prefer that I mess around on a branch of QCodes or that I make a fork when trying out new things (e.g. a demonstration notebook)?

P.S. I tried to keep this issue short to prevent the whole wall of text thing and to make it a nice short checklist. However it does rely a bit on the discussions we had before christmas. If you (or anyone else who is interested) has any questions, comments etc feel free

optional arguments to Functions

This is something @AdriaanRol and I have talked about before, and I just noticed his comment in the AWG5014 driver: "set function has optional args and therefore does not work with QCodes." I still think that for Parameters there should always be one and only one argument, because a Parameter is supposed to be a single degree of freedom and other factors, like how you set it to that value, are not supposed to matter, just the value you set it to. I can certainly see wanting to do this, when writing drivers, but I think if we can maintain this restriction, it will pay off for both our users (in terms of conceptual simplicity of what a Parameter is and how to use it) and for our core code (which then only ever has to handle a single value when setting a Parameter)

However, a Function does not represent a single degree of freedom, it represents some operation applied to the instrument, and here optional arguments make a lot of sense as they do with any Python function. You can of course already do this by just attaching a method to the instrument, rather than using an explicit Function object via the add_function method, but the advantages of add_function are:

  • you can use command strings, not just wrap other real functions
  • you get built-in parameter validation, and valid inputs are all collected in one place
  • the function is listed in self.functions in case a user wants to list all the instrument functionality, separate from the base instrument methods

So with that in mind, how do we implement optional arguments in Function? Right now the arguments are specified as a list of Validator objects. This already drops what each argument means - ie it's positional only, no name. It would be great if we could name all the args, and accept them the same ways regular functions accept them, as positional or keyword, and with default values where appropriate.

On the call side I think it's clear what to do, but how do we specify these args when creating the Function? We could make args be a list of (Validator, name[, default]) where an arg is required if default is not provided (and of course required args must come before optional). Or we could be more explicit and make args a list of dicts {'validator': Validator, 'name': name[, 'default': default]}
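
Side by side, the two proposals could look like this (illustrative only; neither form exists yet, and the function name and validators are just examples):

from qcodes.utils.validators import Enum, Numbers

# option 1: tuples of (Validator, name[, default])
self.add_function('set_output',
                  call_cmd='OUTP {},{}',
                  args=[(Enum('on', 'off'), 'state'),
                        (Numbers(0, 10), 'level', 1.0)])  # 'level' is optional

# option 2: a list of dicts
self.add_function('set_output',
                  call_cmd='OUTP {},{}',
                  args=[{'validator': Enum('on', 'off'), 'name': 'state'},
                        {'validator': Numbers(0, 10), 'name': 'level',
                         'default': 1.0}])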

(as a related note, I'm currently refactoring Parameters and Functions anyway per #42 and Function used to call these parameters but that's obviously confusing vs Parameter - so I changed them to args)

Instrument driver organization and access

Ported from the discussion here

Several related issues:

  • how do we organize our driver files
  • how do users access them
  • how to handle drivers that work with multiple instrument models

File organization

The options here seem to be:

  1. Flat, all files in one directory (downside: we'll soon have hundreds, searching this will be a pain)
  2. Company subdirectories (downside: companies change names sometimes)
  3. Functional subdirectories (downside: categories blur, hard to know where to put or find things - I think we can already rule this out)

User access

The explicit option will of course always work. Assuming we do company subdirectories:

from qcodes.instrument_drivers.tektronix.AWG5014 import Tektronix_AWG5014
awg = Tektronix_AWG5014(address='...')

But this is awfully verbose, and also doesn't help users find the driver they need, particularly if the company name is ambiguous. We could import drivers in __init__.py files to shorten this to:

from qcodes.instrument_drivers import Tektronix_AWG5014
awg = Tektronix_AWG5014(address='...')

but then we're importing all drivers all the time, which could be a big drag on the system. We could make a custom importer, so users could do:

awg = qcodes.instruments.Tektronix_AWG5014(address='...')

That would let us only import drivers we need, alias company names (so qcodes.instruments.HP_34401A would be the same driver as qcodes.instruments.Keysight_34401A, good for backward compatibility). Users could find drivers with dir(qcodes.instruments) or tab completion (we'd have to implement __dir__) but the correspondence between this and the directory structure would be implicit, particularly for aliased companies.
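
A rough sketch of what such a custom importer could look like, using a hand-maintained alias table and the module-level __getattr__/__dir__ hooks of later Python versions (PEP 562, Python 3.7+); all paths and names here are placeholders:

# qcodes/instruments.py (hypothetical)
import importlib

_DRIVERS = {
    'Tektronix_AWG5014': ('qcodes.instrument_drivers.tektronix.AWG5014',
                          'Tektronix_AWG5014'),
    # alias two company names onto the same driver module
    'HP_34401A': ('qcodes.instrument_drivers.keysight.Keysight_34401A',
                  'Keysight_34401A'),
    'Keysight_34401A': ('qcodes.instrument_drivers.keysight.Keysight_34401A',
                        'Keysight_34401A'),
}

def __getattr__(name):
    try:
        module_path, cls_name = _DRIVERS[name]
    except KeyError:
        raise AttributeError(name) from None
    return getattr(importlib.import_module(module_path), cls_name)

def __dir__():
    return sorted(_DRIVERS)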

Drivers that work with multiple instrument models

This should certainly be encouraged! Already, @AdriaanRol's RS_SGS100A driver works not only for the RS_SGS100A but also for the RS_SMB100A.

Do we name these drivers to indicate this (something like RS_S100A_series)? Do we create wrapper files? Then users wouldn't need to know that they were using the same driver, but maybe they should?

# SMB100A.py
from .SGS100A import RS_SGS100A as RS_SMB100A
# let users know that this is an alias?
print('RS_SMB100A loaded using the compatible RS_SGS100A driver')

error for running a loop with background=False

As described in #59, running a measurement in the background is not possible for numerical simulations.
Hence I run my loop as:

loop = qcodes.Loop(system.parameters["energy"][0:1:0.1], delay=0)
data = loop.run(background=False, location='testsweep', overwrite=True)

This gives me the error:

AttributeError                            Traceback (most recent call last)
<ipython-input-8-2396f66e8e80> in <module>()
      1 loop = qcodes.Loop(system.parameters["energy"][0:1:0.1], delay=0)
----> 2 data = loop.run(background=False, location='testsweep', overwrite=True)

d:\damaz\pycharm\qcodes\qcodes\loops.py in run(self, *args, **kwargs)
    171         '''
    172         default = Station.default.default_measurement
--> 173         return self.each(*default).run(*args, **kwargs)
    174 
    175 

d:\damaz\pycharm\qcodes\qcodes\loops.py in run(self, background, use_async, enqueue, quiet, data_manager, **kwargs)
    447         else:
    448             loop_fn()
--> 449             self.data_set.read()
    450 
    451         if not quiet:

d:\damaz\pycharm\qcodes\qcodes\data\data_set.py in read(self)
    405         if self.location is False:
    406             return
--> 407         self.formatter.read(self)
    408 
    409     def write(self):

d:\damaz\pycharm\qcodes\qcodes\data\format.py in read(self, data_set)
     85         # in case the DataArrays exist but haven't been initialized
     86         for array in data_set.arrays.values():
---> 87             if array.data is None:
     88                 array.init_data()
     89 

d:\damaz\pycharm\qcodes\qcodes\utils\helpers.py in __getattr__(self, key)
    186         raise AttributeError(
    187             "'{}' object and its delegates have no attribute '{}'".format(
--> 188                 self.__class__.__name__, key))
    189 
    190     def __dir__(self):

AttributeError: 'DataArray' object and its delegates have no attribute 'data'

Unfortunately, I do not know what causes this error. The data file gets created fine, but with this error I do not get a handle to it.

custom subprocess widget

To work outside the Jupyter notebook I am writing a simple widget similar to the subprocess widget. The widget is nearly complete, but I need to connect it to updates from the qcodes framework.

Currently I use a simple thread which calls station.snapshot() (see the code below). What would be the preferred way to do this in QCoDeS? I would like to update at least the values of the parameters of all instruments.

import logging
import threading
import time

class ParameterViewer(...):
    def __init__(self, station):
        self._station = station
        # create the viewer widget ...
        ...

    def updatedata(self):
        dd = self._station.snapshot()
        # update the viewer widget
        ...

class QCodesTimer(threading.Thread):
    def __init__(self, fn, dt=2):
        super().__init__()
        self.fn = fn
        self.dt = dt

    def run(self):
        while True:
            logging.debug('QCodesTimer: start sleep')
            time.sleep(self.dt)
            # poll for new data
            logging.debug('QCodesTimer: run!')
            self.fn()

# create a custom viewer which gathers data from a station object
w = ParameterViewer(station)

# enable updates
timer = QCodesTimer(fn=w.updatedata)
timer.start()

abort active measurement

How can I abort an active measurement? I tried

> qc.active_children()
[<DataServer, started daemon>, <Measurement, started daemon>]
> qc.active_children()[1].terminate()

But then when trying to start a new measurement I get
RuntimeError('Already executing a measurement')

As a side note: what would be a good place to gather answers to questions such as this one? I can place it either in qcodes/docs/objects.md or in qcodes/docs/examples/Qcodes example.ipynb

"Holder" parameters

Originally part of #8, making a separate issue because it is self contained and I am currently running into this problem a lot while rebuilding our "qubit-object" "meta-instrument".

simple creation of "parameter holding" parameters

When converting QTLab instruments I run into a lot of instruments that have simple "holder" parameters. They exist to ensure some parameter of an instrument is logged and viewable. They usually look something like:

def _do_set_paramname(self, value):
    self.paramname = value

def _do_get_paramname(self):
    return self.paramname

To get the same behaviour in QCodes I would have to do the following.
Add the parameter in the init:

self.add_parameter('paramname',
           get_cmd=self._do_get_paramname,
           set_cmd=self._do_set_paramname,
           vals=vals.Anything())

Then rewrite the set and get functions to store the value in an underscore-prefixed attribute:

def _do_set_paramname(self, value):
    self._paramname = value

def _do_get_paramname(self):
    return self._paramname

There is probably also another way by creating a parameter from scratch in the init and using that but I haven't looked into that yet.

As I very much like the idea of passing strings for instrument driver write commands I thought a similar shortcut for such 'holder' variables would be very nice, something along the lines of

self.add_parameter('paramname',
           get_cmd=holder_getfunc,
           set_cmd=holder_setfunc,
           vals=vals.Anything())

I do not know how best to implement this, as I am not familiar enough with all the under-the-hood parts of QCodes, but as it is such a common task I thought it might be a good idea to consider.

Is there some reason why you can't completely code-gen this (parameter-holding parameters)? Define an "add_simple_parameter" method that just takes the parameter name and vals, and that generates the getter and setter methods and adds them to the object directly?
I know you could do this back in old-school Python (I'm still vintage 1.4), but I don't know if 3.x has made it impossible.

@alan-geller This is exactly what I had in mind, though I am not sure yet on the implementation.

"Holder" parameters are an interesting case, and you're right that these will come up all the time. I'm guessing these are mainly for manual settings that the software otherwise has no way of knowing about, like what kind of attenuation, resistors, cables you have installed, or switches/dials that are invisible to software?

I'll make a Parameter subclass and an Instrument method for them; they'd be pretty simple. I'm not sure about the name though: on the one hand I want to call them SoftParameter because they exist only in software, but that clashes with the fact that they mirror physical objects that have no other presence in software. Perhaps we call them ManualParameter, because when you manually change them outside the software you also have to manually change them inside the software?

Which brings up the other challenge with these parameters: making sure they stay in sync with reality. The only way I see to promote this is visibility: making sure these parameters are very apparent (and easy to update) in whatever monitoring panel we make.

@alexcjohnson What do you think the best implementation is? I see several options with their own pros and cons; note that these relate mostly to how they will be used.

  • How do we name this?
    • soft(ware) parameter
    • holder parameter
    • simple/basic parameter
  • Do we want to add an option/keyword to the existing add_parameter, or do we want a separate function for adding them?

@damazter, I guess this is also tangentially related to the new types of parameters you were discussing in #28. What do you think of this (in relation to the add_parameter function)?
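
A minimal sketch of what such a holder parameter and an add_simple_parameter helper could look like; the class, function, and attribute names here are hypothetical and not the actual QCodes implementation:

# minimal sketch; all names are hypothetical, not the QCodes API
class ManualParameter:
    """A parameter whose value lives only in software; get/set mirror a cached value."""

    def __init__(self, name, initial_value=None, vals=None):
        self.name = name
        self._value = initial_value
        self._vals = vals  # optional validator with an is_valid() method

    def set(self, value):
        if self._vals is not None and not self._vals.is_valid(value):
            raise ValueError('{} is not a valid value for {}'.format(value, self.name))
        self._value = value

    def get(self):
        return self._value


def add_simple_parameter(instrument, name, **kwargs):
    """Code-gen style helper: attach a holder parameter directly to an instrument."""
    param = ManualParameter(name, **kwargs)
    setattr(instrument, name, param)  # where/how to attach it is an open question
    return param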

requirements check on packages

The current version of qcodes does not work with pyqtgraph 0.9.8, it does work with 0.9.10. Where can we add checks on the versions of packages? For a normal python package the place for this is setup.py, but this is not yet present in qcodes.
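
A minimal sketch of how such a requirement could be declared once a setup.py exists; the metadata below is illustrative only:

# illustrative setup.py fragment using setuptools' install_requires
from setuptools import setup, find_packages

setup(
    name='qcodes',
    version='0.0.0',          # placeholder
    packages=find_packages(),
    install_requires=[
        'pyqtgraph>=0.9.10',  # 0.9.8 is known not to work
    ],
)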

DiskIO discards absolute path information

> my_io = qcodes.DiskIO('/home/eendebakpt/tmp')
> print(my_io)
<DiskIO, base_location=/mounts/d3/home/eendebakpt/svn/qtt/home/eendebakpt/tmp>

The DiskIO object converts my absolute path to a relative one. The problem is in _normalize_slashes(self, location) in qcodes/data/io.py.
I am not sure what _normalize_slashes is supposed to do, so I am not sure how to fix this.

version requirement of dependencies

When I try installing Qcodes with python setup.py develop I get an error:

Traceback (most recent call last):
  File ".\setup.py", line 81, in <module>
    if StrictVersion(module.__version__) < StrictVersion(min_version):
  File "C:\anaconda\lib\distutils\version.py", line 40, in __init__
    self.parse(vstring)
  File "C:\anaconda\lib\distutils\version.py", line 137, in parse
    raise ValueError("invalid version number '%s'" % vstring)
ValueError: invalid version number 'pyqtgraph-0.9.10-100-gf224936'

So I have a specific branch of pyqtgraph installed with a somewhat unusual version string, but the version is newer than the required 0.9.10.

Is there any way to make this work without switching the pyqtgraph branch and then switching it back?
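
A minimal sketch of a more tolerant check that setup.py could use instead of StrictVersion; it compares only the leading numeric part of a version string such as the one above (a suggestion, not what setup.py currently does):

import re
from distutils.version import LooseVersion


def numeric_version(vstring):
    """Extract the first dotted-number group, e.g. '0.9.10' from 'pyqtgraph-0.9.10-100-gf224936'."""
    match = re.search(r'\d+(\.\d+)+', vstring)
    if match is None:
        raise ValueError('no numeric version found in {!r}'.format(vstring))
    return LooseVersion(match.group(0))


assert numeric_version('pyqtgraph-0.9.10-100-gf224936') >= LooseVersion('0.9.10')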

Hierarchy

This is what I'd suggest for the object hierarchy, based on previous discussions in #2.
See also #4 for discussions on the Monitor.

  • Station
    • BaseInstrument: IPInstrument, VisaInstrument, MockInstrument
      • Parameter
        • Validator: Anything, Strings, Numbers, Ints, Enum, MultiType
      • Function
        • Validator
    • Scheduler - delegates instrument communication for measurement processes by priority
      • Monitor - measurement process with software quench protection, user notifications, etc.
    • MeasurementSet
      • DataSet
      • .sweep
        • SweepValues: SweepFixedValues, AdaptiveSweep
        • Parameters - some subset of instrument parameters to measure at each datapoint
      • .monitor
        • Time interval
        • Parameters
      • .apply_pulse_scheme
      • ...
    • DataManager
      • DataServer
        • DataSet: CSVStorage, AzureStorage, TempStorage

sweep_monitor

Syntax

Continuing discussion from #2
Should we have 1 sweep function or multiple nested (user-defined) measurement functions?

E.g. for each B, apply my_pulse 10,000 times and monitor the system for 10 minutes:

measurement.sweep((B[-6:6:0.1], AWG.applier('my_pulse.dat', n=10000)), 60, repeat(600), 1)

or

measurement.sweep(B[-6:6:0.1],60).apply_pulse_scheme('my_pulse.dat', n=10000).monitor(t=600).run()

and e.g. 2D sweep measurement

measurement.sweep(B[-6:6:0.1], 60, c0[-20:20:0.1], 0.2)

or

measurement.sweep(B[-6:6:0.1], 60).sweep(c0[-20:20:0.1], 0.2).run()

I would prefer to chop the syntax up into parts:

  • single method calls often require a ton of input parameters, making the code hard to understand at first glance
  • .run() allows recycling of the measurement object
  • more flexibility for users, not fixed to using sweep but can define own routines

any thoughts/ideas?

validator enhancements

Whenever I enter a value that is not allowed for a parameter, I get the following (nondescript) exception:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-284-99639b8c76f7> in <module>()
----> 1 S1.power.set(30)

d:\githubrepos\qcodes\qcodes\instrument\parameter.py in _validate_and_set(self, value)
    350 
    351     def _validate_and_set(self, value):
--> 352         self.validate(value)
    353         self._set(value)
    354         self._save_val(value)

d:\githubrepos\qcodes\qcodes\instrument\parameter.py in validate(self, value)
    207         if not self._vals.is_valid(value):
    208             raise ValueError(
--> 209                 '{} is not a valid value for {}'.format(value, self.name))
    210 
    211     def __getitem__(self, keys):

ValueError: 30 is not a valid value for power

As this is an error end users will see quite often, I would propose the following enhancements:

  • Return the reason why an entered value is not valid (e.g. wrong datatype, value larger than the allowed maximum); strangely, when creating a validator the exceptions are quite explicit. See the sketch after this list.
  • If a parameter belongs to an instrument, reference the instrument in the exception, as the call is usually buried in some loop, which makes it hard to find which call failed.
  • Have a way to view and change the validator from outside of the parameter/instrument.
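
A minimal sketch of how the first two points could look; the Numbers-like class and the context argument are illustrative only, not the current QCodes implementation:

# illustrative validator that reports why a value is rejected
class Numbers:
    def __init__(self, min_value=float('-inf'), max_value=float('inf')):
        self.min_value = min_value
        self.max_value = max_value

    def validate(self, value, context=''):
        if isinstance(value, bool) or not isinstance(value, (int, float)):
            raise TypeError('{!r} is not a number; {}'.format(value, context))
        if not self.min_value <= value <= self.max_value:
            raise ValueError('{} is outside [{}, {}]; {}'.format(
                value, self.min_value, self.max_value, context))


# a parameter could pass its own and its instrument's name as context:
Numbers(max_value=25).validate(30, context='setting S1.power')
# raises ValueError: 30 is outside [-inf, 25]; setting S1.power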

I had some issues with integers not being accepted by a Numbers validator while floats worked fine. However, I do not seem to be able to reproduce this, so the point below can be omitted, although clearer exceptions would have helped me understand why it failed.

  • Make the Numbers validator accept integers (and optionally convert to float)

EDIT: see below for this problem.

qcodes usage without jupyter notebook

For developing code I prefer a dedicated IDE (such as spyder) over the notebook used by jupyter (or ipython). Will this be possible with qcodes?

For example, right now the qc.show_subprocess_widget() command does not work outside of the notebook environment.

Default validator

The parameter class currently uses Numbers() as its default validator.
I think it would be better to use Anything() as the default.

What do you all think?

Syntax suggestions from Matthias Troyer

After a presentation of Qcodes to Matthias and colleagues, they felt it could be made even more readable with a few enhancements.
His complaints:

  • delay isn't labeled in any way
  • including a gettable parameter in the action list doesn't explicitly say you're measuring it
  • setting a parameter is going to be a very common Task so make a special case for it. Anything else?

So let's say I have this Loop:

data = Loop(c1[-15:15:1], 0.1).each(
    Task(c0.set, -10),
    qubit1.t1,
    fridge.mc_temp,
    Loop(c0[-15:15:1], 0.01).each(meter.amplitude),
    Task(c0.set, -10),
    Wait(0.1),
    Loop(c2[-10:10:0.2], 0.01),
    Task(c2.set, 5)
).run()

If I make these changes, it would look like:

data = Loop(c1[-15:15:1], delay=0.1).each(
    Set(c0, -10),
    Measure(qubit1.t1, fridge.mc_temp),
    Loop(c0[-15:15:1], delay=0.01).each(Measure(meter.amplitude)),
    Set(c0, -10),
    Wait(0.1),
    Loop(c2[-10:10:0.2], delay=0.01),
    Set(c2, 5)
).run()

In general I think he's right, though I'm not sure we can actually convince people to use these if they aren't strictly necessary. The exception is Set, which would be trivial to implement and seems like a nice simplification; delay= you can already do, and Measure would basically be a no-op (other than potentially doing a little error checking).
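
A minimal sketch of what a Set action could look like, assuming the Loop simply invokes each action when it reaches it; this mirrors the existing Task(c0.set, -10) pattern and is not the actual implementation:

# illustrative Set action: a thin, readable wrapper around "call .set with a fixed value"
class Set:
    def __init__(self, parameter, value):
        self.parameter = parameter
        self.value = value

    def __call__(self):
        self.parameter.set(self.value)


# Set(c0, -10) would then behave like Task(c0.set, -10) inside a Loop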

PR #70 breaks parameter .get and .set functionality

I cannot debug the issue properly because all the objects are multiprocessing objects. A minimal example showing the issue:

%matplotlib nbagg
import matplotlib.pyplot as plt
import time
import numpy as np
import qcodes as qc

from toymodel import AModel, MockGates, MockSource, MockMeter, AverageGetter, AverageAndRaw

# now create this "experiment"
model = AModel()
gates = MockGates('gates', model=model)

c0, c1, c2 = gates.chan0, gates.chan1, gates.chan2
print('fine so far...')

print('error...')
c2.get()
print('no effect?')
c2.set(0.5)

Warning on negative delay in IVVI driver

@alexcjohnson
As we discussed previously when I was developing the IVVI driver, I get warnings when setting values on the IVVI rack.

These warnings relate to the fact that it takes more time to set the dac-value than the specified delay.

[screenshot of the negative-delay warning omitted]

I see several different solutions to this problem and would like to get input on what should be done

  • Explicitly ignore this warning
  • Set a longer delay such that the delay is indeed always shorter
  • Change the behaviour of the warnings/delay counter.

My current preference is to explicitly ignore the warning. This has obvious drawbacks, so I would like some input on what the best course of action is here.
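
If we do go for explicitly ignoring it, a minimal sketch using the standard library warnings filter could look like this; the message pattern and warning category are assumptions, since I do not know exactly how the driver emits the warning:

import warnings

# silence only this specific warning; 'message' is a regex matched against
# the start of the warning text (the pattern below is an assumption)
warnings.filterwarnings(
    'ignore',
    message='negative delay',
    category=UserWarning,
)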

example /Load-and-plot-old-data.ipynb is broken

The example fails with the error: ValueError: ('unrecognized DataSet mode', None)
This is due to recent changes in data_set.py

Maybe it would be good to run the example IPython notebooks as part of the testing procedure. For example, add something like the following to test.py:

    from runipy.notebook_runner import NotebookRunner
    from nbformat.current import read

    notebook = read(open("docs/examples/Load-and-plot-old-data.ipynb"), 'json')
    r = NotebookRunner(notebook)
    try:
        r.run_notebook()
    except Exception as ex:
        print('error in example notebook:', ex)

cannot Loop with a real instrument

The prize goes to @MerlinSmiles for being the first to actually try a data-taking loop with a real instrument... and discovering that instruments can't be pickled to send to the measurement process. In hindsight this makes sense; otherwise you'd be able to make competing connections where only one is allowed.

I was hoping to avoid making a separate process for every instrument, as every new process makes the system more brittle and makes it harder to recover from errors, but at this point that's the only solution I see. I don't think this will be that hard, actually. Here's what I'm thinking:

  • On instantiation, we don't make any hardware calls, but we start a new process that connects and then just sits there forever listening to a queue.
  • Other methods that do talk to the hardware get a decorator; actually I think we'll need two, something like @ask_instrument and @write_instrument, depending on whether we're supposed to wait for a return value or not. These decorators check an attribute like self._on_server: if the method is called on the server it executes the function directly, but if not it passes the call through the queue (see the sketch below).
  • For some specific cases we may want to short-circuit the decorator logic for a little performance boost. For example, I bet we can handle everything in VISA instruments by explicitly wiring .read, .write, and .ask to the queue. But that's an optimization we can work out later.
  • The server should also hold all saved instrument state (i.e. calls to .get_latest for any Parameter that's connected to an instrument) so that different processes agree on the state.

Sound reasonable? Anyone see a better option?
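
A minimal sketch of the two decorators described above, assuming each instrument carries an _on_server flag plus a query/response queue pair (all of these names are hypothetical):

import functools


def ask_instrument(func):
    """Execute directly on the server process; otherwise forward the call and wait for the reply."""
    @functools.wraps(func)
    def wrapper(self, *args, **kwargs):
        if self._on_server:
            return func(self, *args, **kwargs)
        self._query_queue.put((func.__name__, args, kwargs))
        return self._response_queue.get()  # block until the server answers
    return wrapper


def write_instrument(func):
    """Like ask_instrument, but fire-and-forget: no return value is expected."""
    @functools.wraps(func)
    def wrapper(self, *args, **kwargs):
        if self._on_server:
            func(self, *args, **kwargs)
        else:
            self._query_queue.put((func.__name__, args, kwargs))
    return wrapper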

unified commands for Instrument drivers

While we integrate more and more drivers into qcodes, we will have to make sure they can be used in a uniform fashion from the user's perspective.
We might already be running into this in @eendebakpt's comment in #74.
I guess all DMMs have basically the same main functions but different special functions; the same goes for SourceMeters, magnets, and others.
If individual users are working on a new driver, we can't realistically force them to go through potentially tens of other drivers to figure out the optimal syntax.

How can we unify this?

Also in line with this, most instruments have a model name, a manufacturer, a version and a software version. I even think the VISA *IDN? command is standardized; at least it looks the same for the instruments I checked.
In my drivers I did:

        vendor, model, serial, software = map(str.strip, self.IDN.split(','))

and put that data in a dict. However, I think this should be unified as well and saved in the snapshot.
How should we go about it?
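
A minimal sketch of a shared helper every driver could call; the function name and the idea of storing the result in the snapshot are suggestions, not an existing QCodes API:

def parse_idn(idn_string):
    """Split a '*IDN?' reply of the form 'vendor,model,serial,firmware' into a dict."""
    fields = ('vendor', 'model', 'serial', 'firmware')
    parts = [part.strip() for part in idn_string.split(',')]
    return dict(zip(fields, parts))


# usage inside a driver, where self.ask is whatever the driver uses to query the instrument:
# self.idn = parse_idn(self.ask('*IDN?'))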

[Inspiration] Zurich Instruments LabOne

Last week at the ScaleQIT conference there was a presentation by Zurich Instruments.
Afterwards I had a discussion with one of their employees about the software they created for controlling their instruments, LabOne. I think there are some good ideas in there that we could use when thinking about a GUI.

I think the central idea is to have one instrument server that all the specific GUIs talk to (which also shows the commands you clicked). We could use a similar approach where we start the instrument server (and other qcodes processes) in one notebook, to which we can then connect using other notebook instances and/or specific instrument GUIs.

This might already be similar to the ideas floating around here, but I thought it could be interesting to take a look at.

P.S. I'd say that strictly speaking this is not really an issue, but I thought it would be useful to share it here anyway; if anyone has a better suggestion for these kinds of discussions, I'm open to it.

Windows problems with pickling

I've already sent this in mail to Alex, but he requested that I post it as well.

I just got rid of all my changes, did a fresh pull and still get the following:

In [6]:

 # start a sweep (which by default runs in a separate process)
 # the sweep values are defined by slicing the parameter object
 # but more complicated sweeps (eg nonlinear, or adaptive) can
 # easily be used instead
swp = measurement.sweep(c0[-20:20:0.1], 0.2, location='testsweep')
swp
---------------------------------------------------------------------------
PicklingError                             Traceback (most recent call last)
<ipython-input-6-b8b21fba9fbb> in <module>()
      3 # but more complicated sweeps (eg nonlinear, or adaptive) can
      4 # easily be used instead
----> 5 swp = measurement.sweep(c0[-20:20:0.1], 0.2, location='testsweep')
      6 swp

C:\depot\Git\Qcodes\qcodes\station.py in sweep(self, location, storage_class, background, use_async, enqueue, *args)
    170             # (like log them somewhere, show in monitoring window)?
    171             p = mp.Process(target=sweep_fn, daemon=True)
--> 172             p.start()
    173 
    174             # flag this as a sweep process, and connect in its storage object

C:\Anaconda3\lib\multiprocessing\process.py in start(self)
    103                'daemonic processes are not allowed to have children'
    104         _cleanup()
--> 105         self._popen = self._Popen(self)
    106         self._sentinel = self._popen.sentinel
    107         _children.add(self)

C:\Anaconda3\lib\multiprocessing\context.py in _Popen(process_obj)
    210     @staticmethod
    211     def _Popen(process_obj):
--> 212         return _default_context.get_context().Process._Popen(process_obj)
    213 
    214 class DefaultContext(BaseContext):

C:\Anaconda3\lib\multiprocessing\context.py in _Popen(process_obj)
    311         def _Popen(process_obj):
    312             from .popen_spawn_win32 import Popen
--> 313             return Popen(process_obj)
    314 
    315     class SpawnContext(BaseContext):

C:\Anaconda3\lib\multiprocessing\popen_spawn_win32.py in __init__(self, process_obj)
     64             try:
     65                 reduction.dump(prep_data, to_child)
---> 66                 reduction.dump(process_obj, to_child)
     67             finally:
     68                 context.set_spawning_popen(None)

C:\Anaconda3\lib\multiprocessing\reduction.py in dump(obj, file, protocol)
     57 def dump(obj, file, protocol=None):
     58     '''Replacement for pickle.dump() using ForkingPickler.'''
---> 59     ForkingPickler(file, protocol).dump(obj)
     60 
     61 #

PicklingError: Can't pickle <function mock_sync.<locals>.<lambda> at 0x00000000078BAC80>: attribute lookup <lambda> on qcodes.utils.sync_async failed

parameter docstrings

@AdriaanRol re qdev-dk-archive#8 (comment) but taking it out of #8 because that's too long to find anything anymore!

Is there a way to add docstrings to parameters that are created through the add_parameter function?

I looked into this a bit: it would be quite difficult, as help(param) effectively looks at type(param).__doc__ (i.e. the class attribute), NOT at param.__doc__. I propose we make a property param.help and reference it in the (static) docstring for Parameter.
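
A minimal sketch of that proposal; the attribute names are illustrative and this is not the actual Parameter class:

class Parameter:
    """Generic parameter. Instance-specific documentation is available via the
    .help attribute, since help(param) only ever shows this class docstring."""

    def __init__(self, name, docstring=''):
        self.name = name
        self._docstring = docstring

    @property
    def help(self):
        return self._docstring or 'no documentation provided for {}'.format(self.name)


# usage: print(my_param.help) instead of help(my_param)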
