
qutip-tensorflow's Introduction

QuTiP: Quantum Toolbox in Python

A. Pitchford, C. Granade, A. Grimsmo, N. Shammah, S. Ahmed, N. Lambert, E. Giguère, B. Li, J. Lishman, S. Cross, A. Galicia, P. Menczel, P. Hopf, P. D. Nation, and J. R. Johansson


QuTiP is open-source software for simulating the dynamics of closed and open quantum systems. It uses the excellent Numpy, Scipy, and Cython packages as numerical backends, and graphical output is provided by Matplotlib. QuTiP aims to provide user-friendly and efficient numerical simulations of a wide variety of quantum mechanical problems, including those with Hamiltonians and/or collapse operators with arbitrary time-dependence, commonly found in a wide range of physics applications. QuTiP is freely available for use and/or modification, and it can be used on all Unix-based platforms and on Windows. Being free of any licensing fees, QuTiP is ideal for exploring quantum mechanics in research as well as in the classroom.

Support


We are proud to be affiliated with Unitary Fund and NumFOCUS.

We are grateful to Nori's lab at RIKEN and Blais' lab at the Institut Quantique for providing developer positions to work on QuTiP.

We also thank Google for supporting us by financing GSoC students to work on QuTiP, as well as the other organizations that have supported QuTiP over the years.

Installation


QuTiP is available on both pip and conda (the latter in the conda-forge channel). You can install QuTiP from pip by doing

pip install qutip

to get the minimal installation. You can instead use the target qutip[full] to install QuTiP with all its optional dependencies. For more details, including instructions on how to build from source, see the detailed installation guide in the documentation.

All previous releases are also available for download in the releases section of this repository, where you can also find per-version changelogs. For the most complete set of release notes and changelogs for historic versions, see the changelog section in the documentation.

The pre-release of QuTiP 5.0 is available on PyPI and can be installed using pip:

pip install --pre qutip

This version breaks compatibility with QuTiP 4.7 in many small ways. Please see the changelog for a list of changes, new features and deprecations. This version should be fully working. If you find any bugs, confusing documentation or missing features, please create a GitHub issue.

Documentation


The documentation for the latest stable release and the master branch is available for reading on Read The Docs.

The documentation for official releases, in HTML and PDF formats, can be found in the documentation section of the QuTiP website.

The latest development documentation is available in this repository in the doc folder.

A selection of demonstration notebooks is available, showcasing many of QuTiP's features. These are stored in the qutip/qutip-tutorials repository on GitHub.

Contribute

You are most welcome to contribute to QuTiP development by forking this repository and sending pull requests, or filing bug reports at the issues page. You can also help out with users' questions, or discuss proposed changes in the QuTiP discussion group. All code contributions are acknowledged in the contributors section in the documentation.

For more information, including technical advice, please see the "contributing to QuTiP development" section of the documentation.

Citing QuTiP

If you use QuTiP in your research, please cite the original QuTiP papers that are available here.

qutip-tensorflow's People

Contributors

agaliciamartinez, jakelishman, quantshah


qutip-tensorflow's Issues

Missing specialisations

This issue contains a list of the missing specialisations in qutip-tensorflow. These are sorted by priority.

  • reshape (PR #22)
  • norm (PR #24)
  • ptrace
  • permute:
    • dimensions
    • indices
  • eigs (written locally but tests missing)
  • make:
    • diag
    • one_element
  • constant:
    • zeros
    • identity
  • properties:
    • isherm
    • isdiag
    • iszero
  • tidyup (should not be needed after PR 1615 in QuTiP)

Note that although these operations do not have specialisations, they will still work, since QuTiP will default to the specialisation for one of the other data types. This is a problem for automatic differentiation, though realistically it only affects ptrace, permute and eigs.
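This fallback behaviour can be sketched with a minimal mock dispatcher. The registry and function names below are stand-ins for illustration only; QuTiP's real dispatch machinery is far more involved:

```python
import numpy as np

# Mock dispatch-with-fallback (illustrative only). Specialisations are
# keyed by the input's type; when one is missing, the input is silently
# converted to a type that does have one -- the behaviour described above.
SPECIALISATIONS = {np.ndarray: lambda a: a.trace()}

def to_ndarray(data):
    # stand-in for a data-layer conversion such as TfTensor -> Dense
    return np.asarray(data)

def dispatch_trace(data):
    func = SPECIALISATIONS.get(type(data))
    if func is None:
        # silent fallback: this hidden conversion is what breaks
        # automatic differentiation for ptrace, permute and eigs
        data = to_ndarray(data)
        func = SPECIALISATIONS[np.ndarray]
    return func(data)

print(dispatch_trace([[1, 0], [0, 2]]))  # a list has no specialisation, so it is converted first
```

The call still succeeds, but the gradient-carrying type is silently replaced, which is why the remaining specialisations above matter for autodifferentiation.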

Exposing hidden conversions between data types in qutip-tensorflow.

Description

One of the features that QuTiP brings with the new data layer is the ability to automatically convert between data layers to perform operations for which a specialisation does not exist. This is useful in some cases, as it means you do not need to define every specialisation when a new data type is added. However, it may pose some problems for qutip-tensorflow, as we do not want automatic conversion of data types to happen, for example, when using the following undefined specialisation:
add(TfTensor128, TfTensor64)
I am not sure whether TfTensor128 would be downcast to TfTensor64 or the other way around. In any case, if such an operation is performed I would prefer that a warning be raised. Similarly, converting from Dense to TfTensor is fine, but converting TfTensor to Dense is probably not desired in most (if not all) cases, because it can happen automatically and go unnoticed while using the automatic differentiation feature.

I anticipate that in most cases this issue will arise because the user forgot to convert one of their Qobj to the appropriate data type. Raising an error in some of these cases would be ideal to help debugging.

Note that we still want the to method to work when the user calls it explicitly.

Possible solution 1 - not ideal

We can create a specialisation for the cases where no conversion is wanted, for example add(TfTensor128, TfTensor64) or add(Dense, TfTensor64). This specialisation would raise a TypeError exception. However, specialisations added by other packages would still be converted automatically, so this is not ideal.
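A sketch of this approach, with stand-in classes and a mock registry (not QuTiP's real registration API): the registered specialisation's only job is to fail loudly instead of allowing an implicit conversion.

```python
# Stand-in data types (illustrative only, not the real qutip-tensorflow
# classes, which wrap TensorFlow tensors).
class TfTensor128:
    pass

class TfTensor64:
    pass

SPECIALISATIONS = {}

def register_add(type_left, type_right, func):
    # mock of registering an `add` specialisation with the dispatcher
    SPECIALISATIONS[(type_left, type_right)] = func

def _add_refuse(left, right):
    # solution 1: an explicit specialisation that refuses to convert
    raise TypeError(
        f"refusing implicit conversion between {type(left).__name__} "
        f"and {type(right).__name__}; call .to() explicitly instead"
    )

register_add(TfTensor128, TfTensor64, _add_refuse)

def add(left, right):
    return SPECIALISATIONS[(type(left), type(right))](left, right)
```

As noted above, the drawback is that every unwanted pair must be registered by hand, so data types added later by other packages would still fall back to conversion.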

Possible solution 2

Instead of adding a new specialisation, we can make the conversion specialisations (_tf_to_dense in this case) raise a custom warning, NotASafeConversion. This warning would then be caught by the to method in Qobj and ignored, on the assumption that the to method of Qobj is only called when the user is sure of what they are doing.

I would ideally like to raise an exception rather than a warning, but I am not sure how to do this, since raising an exception in the conversion specialisation would also break the to method at the Qobj level. Maybe modifying the dispatcher would allow this to work?

Qobj Backwards Compatibility Issues

Some issues arise when using basic QuTiP operations on a Qobj defined with qutip_tensorflow.

Example 1: Qobj.unit()

import tensorflow as tf
import qutip as qp
import numpy as np
import qutip_tensorflow as qtf

g = qp.basis(2,0).to('TfTensor128')
e = qp.basis(2,1).to('TfTensor128')

plus = (g+e).unit()
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
----> 9: plus = (g+e).unit()
File ....\lib\site-packages\qutip\core\qobj.py:1062, in Qobj.unit(self, inplace, norm, kwargs)
-> 1062     out = self / norm
File ...\lib\site-packages\tensorflow\python\util\traceback_utils.py:153, in filter_traceback.<locals>.error_handler(*args, **kwargs)
--> 153   raise e.with_traceback(filtered_tb) from None
File ...\lib\site-packages\tensorflow\python\framework\constant_op.py:102, in convert_to_eager_tensor(value, ctx, dtype)
--> 102 return ops.EagerTensor(value, ctx.device_name, dtype)
ValueError: TypeError: object of type 'Qobj' has no len()

Example 2: Trace Qobj.tr()

import tensorflow as tf
import qutip as qp
import numpy as np
import qutip_tensorflow as qtf

g = qp.basis(2,0).to('TfTensor128')
e = qp.basis(2,1).to('TfTensor128')

plus = (g+e)/np.sqrt(2)

plus_dm = qp.ket2dm(plus)

print(plus_dm.tr())
 ---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
---> 13 print(plus_dm.tr())
File ...\lib\site-packages\qutip\core\qobj.py:788, in Qobj.tr(self)
--> 788 return out.real if self.isherm else out
File ...\lib\site-packages\tensorflow\python\framework\ops.py:446, in Tensor.__getattr__(self, name)
--> 446 self.__getattribute__(name)
AttributeError: 'tensorflow.python.framework.ops.EagerTensor' object has no attribute 'real'
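The failure in Example 2 comes from Qobj.tr() assuming every scalar has a `.real` attribute. A possible guard, sketched here as a proposal rather than the actual fix, is to fall back to a backend-specific function when the attribute is missing (with TensorFlow, the fallback would be tf.math.real):

```python
def real_part(value, fallback=None):
    # Python and NumPy scalars expose a `.real` attribute, but
    # TensorFlow's EagerTensor does not, which is what Qobj.tr() trips
    # over above. `fallback` stands in for a backend hook such as
    # tf.math.real.
    if hasattr(value, "real"):
        return value.real
    if fallback is not None:
        return fallback(value)
    raise TypeError(f"cannot take the real part of {type(value).__name__}")

real_part(3 + 4j)  # uses the attribute path
```

In Qobj.tr() this would replace the bare `out.real` on the line shown in the traceback.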

Versions:

  • Tensorflow v2.10.0
  • Qutip v5.0.0.dev0+116dc88
  • Numpy v1.23.1
  • Qutip-Tensorflow v0.1.0.dev0+65faa45

Paths in the tracebacks have been redacted for privacy.

Functions that lack tests in QuTiP.

These are some modules in QuTiP for which no tests have been written yet. I will try to write them, since tests are very useful for defining what an operation should do, which facilitates the development of new data types.

  • Eigen
  • ptrace

Put all the convert.py hooks into __init__.py?

I was trying to find out where the initial hooking between qutip-tensorflow and qutip happens, and it turns out it is all done in convert.py as a side effect of it being imported in __init__.py. I feel it would be clearer to make this happen explicitly inside __init__.py, as qutip-cupy does. That makes it very easy to see and follow how qutip-tensorflow hooks into the data layer.

@AGaliciaMartinez what do you think? I will follow this for qutip-jax.
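The explicit style could look like this sketch. The registry and the function names (register_conversion, _tf_to_dense, _dense_to_tf) are stand-ins I am inventing for illustration, not QuTiP's real data-layer API:

```python
# Illustrative sketch of explicit registration in __init__.py; the
# registry and names here are stand-ins, not QuTiP's real API.
CONVERSIONS = {}

def register_conversion(src, dst, func):
    CONVERSIONS[(src, dst)] = func

# qutip-tensorflow's __init__.py would then read roughly:
#
#     from .convert import _tf_to_dense, _dense_to_tf
#     register_conversion("TfTensor", "Dense", _tf_to_dense)
#     register_conversion("Dense", "TfTensor", _dense_to_tf)
#
# instead of relying on `import convert` side effects.
register_conversion("Dense", "TfTensor", lambda data: data)  # demo entry
```

The point is only about visibility: the hooks run either way, but putting the calls in __init__.py makes the wiring greppable.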

Tf function support.

I have been able to make the following code work:

@tf.function
def fastfunction(tensor):
    print('hi')
    qobj = qutip.Qobj(tensor)
    qobj += qobj
    return qobj.data._tf

fastfunction(tf.constant([10]))  # prints hi
fastfunction(tf.constant([10]))  # Does not print hi since it only executes the graph.

It does not work with the current version of TfTensor, as the first step of the instantiation process is:

data = tf.constant(data)

I am not sure what the error message means, but removing this line of code makes the compiled function above work. tf.constant is meant to create "constant" nodes in the graph, so it makes sense that it complains when you make a constant value from a variable input. The fix is quite easy: just check whether the input is already a tensor, and skip the tf.constant call in that case. However, having fastfunction accept Qobj as input is a little more involved and will most likely require us to create our own qtf.function. I believe it is doable, though.
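The suggested check can be written as a generic guard. Here tensor_type and make_constant stand in for tf.Tensor and tf.constant so the sketch runs without TensorFlow installed:

```python
def wrap_input(data, tensor_type, make_constant):
    # Only wrap raw input. Inside a traced @tf.function the argument is
    # already a Tensor, and wrapping it again in a graph constant is
    # what triggers the error described above.
    if isinstance(data, tensor_type):
        return data
    return make_constant(data)
```

With TensorFlow this would be called as wrap_input(data, tf.Tensor, tf.constant) at the start of TfTensor's instantiation, replacing the unconditional tf.constant call.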

Functions to be written.

These are all the functions that the TensorFlow data layer could implement. Not all of them will be specialised, as some may not be necessary thanks to the to method. They are not sorted in any particular way.

Data-layer:

  • copy
  • to_array
  • conj
  • transpose
  • adjoint
  • trace

Specializations:

  • convert:
    • to
    • create
  • add:
    • add
    • sub
  • matmul
  • expm
  • adjoint:
    • adjoint
    • conj
    • transpose
  • expect:
    • expect
    • expect_super
  • inner:
    • inner
    • inner_op
  • mul:
    • mul
    • imul
    • neg
  • kron
  • norm:
    • l2
    • one
    • frobenius
    • max
    • trace
  • pow
  • trace

Remaining specialisations sorted by priority:

  • eigen:
    • eigs
  • reshape:
    • reshape
    • column_stack
    • column_unstack
    • split_columns
  • expect:
    • expect_super
  • project
  • ptrace
  • make:
    • diag
    • one_element
  • constant:
    • zeros
    • identity
  • properties:
    • isherm
    • isdiag
    • iszero
  • permute:
    • dimensions
    • indices
  • tidyup
