mit-han-lab / torchquantum

A PyTorch-based framework for Quantum Classical Simulation, Quantum Machine Learning, Quantum Neural Networks, Parameterized Quantum Circuits with support for easy deployments on real quantum computers.

Home Page: https://torchquantum.org

License: MIT License

Languages: Python 31.21%, Jupyter Notebook 68.77%, Shell 0.02%
Topics: pytorch-quantum, quantum, quantum-machine-learning, neural-network, machine-learning, quantum-computing, pytorch, deep-learning, system, ml-for-systems

torchquantum's Introduction

Quantum Computing in PyTorch

Faster, Scalable, Easy Debugging, Easy Deployment on Real Machine


👋 Welcome

What it does

TorchQuantum simulates quantum computations on classical hardware using PyTorch. It supports statevector simulation and pulse simulation on GPUs, and can scale to simulating 30+ qubits across multiple GPUs.

Who will benefit

Researchers working on quantum algorithm design, parameterized quantum circuit training, quantum optimal control, quantum machine learning, and quantum neural networks.

Differences from Qiskit/Pennylane

Dynamic computation graph, automatic gradient computation, fast GPU support, and batched tensorized processing.
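
As a minimal sketch of what this enables (using only the APIs shown in the Basic Usage section below), a batch of 32 circuits can be simulated and differentiated in one pass:

import torchquantum as tq
from torchquantum.measurement import expval_joint_analytical

qdev = tq.QuantumDevice(n_wires=2, bsz=32)    # 32 states simulated as one batched tensor
rx = tq.RX(has_params=True, trainable=True)   # trainable rotation angle
rx(qdev, wires=0)                             # applied to all 32 states at once
expval = expval_joint_analytical(qdev, 'ZZ')  # one expectation value per batch element
expval.sum().backward()                       # gradients via PyTorch autograd
print(rx.params.grad)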

News

  • v0.1.8 is available!
  • Check the dev branch for the latest features on quantum layers and quantum algorithms.
  • Join our Slack for real-time support!
  • Contributions are welcome! Please contact us or post in the GitHub Issues if you want new examples implemented in TorchQuantum or have any other questions.
  • The Qmlsys website is online: qmlsys.mit.edu and torchquantum.org

Features

  • Easy construction and simulation of quantum circuits in PyTorch
  • Dynamic computation graph for easy debugging
  • Gradient support via autograd
  • Batch mode inference and training on CPU/GPU
  • Easy deployment on real quantum devices such as IBMQ
  • Easy hybrid classical-quantum model construction
  • (coming soon) pulse-level simulation

Installation

git clone https://github.com/mit-han-lab/torchquantum.git
cd torchquantum
pip install --editable .

Basic Usage

import torchquantum as tq
import torchquantum.functional as tqf

qdev = tq.QuantumDevice(n_wires=2, bsz=5, device="cpu", record_op=True) # use device='cuda' for GPU

# use qdev.op
qdev.h(wires=0)
qdev.cnot(wires=[0, 1])

# use tqf
tqf.h(qdev, wires=1)
tqf.x(qdev, wires=1)

# use tq.Operator
op = tq.RX(has_params=True, trainable=True, init_params=0.5)
op(qdev, wires=0)

# print the current state (dynamic computation graph supported)
print(qdev)

# obtain the qasm string
from torchquantum.plugin import op_history2qasm
print(op_history2qasm(qdev.n_wires, qdev.op_history))

# measure the state in the z basis
print(tq.measure(qdev, n_shots=1024))

# obtain the expval of an observable by stochastic sampling (doable on simulators and real quantum hardware)
from torchquantum.measurement import expval_joint_sampling
expval_sampling = expval_joint_sampling(qdev, 'ZX', n_shots=1024)
print(expval_sampling)

# obtain the expval of an observable by analytical computation (only doable on classical simulators)
from torchquantum.measurement import expval_joint_analytical
expval = expval_joint_analytical(qdev, 'ZX')
print(expval)

# obtain gradients of expval w.r.t. trainable parameters
expval[0].backward()
print(op.params.grad)


# Apply gates to qdev with tq.QuantumModule
ops = [
    {'name': 'hadamard', 'wires': 0}, 
    {'name': 'cnot', 'wires': [0, 1]},
    {'name': 'rx', 'wires': 0, 'params': 0.5, 'trainable': True},
    {'name': 'u3', 'wires': 0, 'params': [0.1, 0.2, 0.3], 'trainable': True},
    {'name': 'h', 'wires': 1, 'inverse': True}
]

qmodule = tq.QuantumModule.from_op_history(ops)
qmodule(qdev)
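
A circuit can also be exported to Qiskit for deployment (the repo's own walkthrough is the "How to convert tq to Qiskit" example below). A short sketch, assuming the tq2qiskit helper in torchquantum.plugin accepts a device and a module:

from torchquantum.plugin import tq2qiskit

# assumed signature: tq2qiskit(q_device, q_module) -> qiskit.QuantumCircuit
circ = tq2qiskit(qdev, qmodule)
print(circ)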

Guide to the examples

We also provide many examples and tutorials using TorchQuantum.

For the beginner level, you may check QNN for MNIST, Quantum Convolution (Quanvolution), Quantum Kernel Method, and Quantum Regression.

For the intermediate level, you may check Amplitude Encoding for MNIST, Clifford gate QNN, Save and Load QNN models, PauliSum Operation, and How to convert tq to Qiskit.

For the expert level, you may check Parameter Shift on-chip Training, VQA Gradient Pruning, VQE, VQA for State Preparation, and QAOA (Quantum Approximate Optimization Algorithm).

Usage

Constructing parameterized quantum circuit models is as simple as constructing a normal PyTorch model.

import torch.nn as nn
import torch.nn.functional as F
import torchquantum as tq
import torchquantum.functional as tqf

class QFCModel(nn.Module):
  def __init__(self):
    super().__init__()
    self.n_wires = 4
    self.measure = tq.MeasureAll(tq.PauliZ)

    self.encoder_gates = [tqf.rx] * 4 + [tqf.ry] * 4 + \
                         [tqf.rz] * 4 + [tqf.rx] * 4
    self.rx0 = tq.RX(has_params=True, trainable=True)
    self.ry0 = tq.RY(has_params=True, trainable=True)
    self.rz0 = tq.RZ(has_params=True, trainable=True)
    self.crx0 = tq.CRX(has_params=True, trainable=True)

  def forward(self, x):
    bsz = x.shape[0]
    # down-sample the image
    x = F.avg_pool2d(x, 6).view(bsz, 16)

    # create a quantum device to run the gates
    qdev = tq.QuantumDevice(n_wires=self.n_wires, bsz=bsz, device=x.device)

    # encode the classical image to quantum domain
    for k, gate in enumerate(self.encoder_gates):
      gate(qdev, wires=k % self.n_wires, params=x[:, k])

    # add some trainable gates (need to instantiate ahead of time)
    self.rx0(qdev, wires=0)
    self.ry0(qdev, wires=1)
    self.rz0(qdev, wires=3)
    self.crx0(qdev, wires=[0, 2])

    # add some more non-parameterized gates (add on-the-fly)
    qdev.h(wires=3)
    qdev.sx(wires=2)
    qdev.cnot(wires=[3, 0])
    qdev.qubitunitary(wires=[1, 2], params=[[1, 0, 0, 0],
                                            [0, 1, 0, 0],
                                            [0, 0, 0, 1j],
                                            [0, 0, -1j, 0]])

    # perform measurement to get expectations (back to classical domain)
    x = self.measure(qdev).reshape(bsz, 2, 2)

    # classification
    x = x.sum(-1).squeeze()
    x = F.log_softmax(x, dim=1)

    return x
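
A model like this then trains as any other PyTorch module. A minimal sketch with a fake batch; the optimizer and loss choices here are illustrative, not prescribed by the repo:

import torch

model = QFCModel()
optimizer = torch.optim.Adam(model.parameters(), lr=5e-3)

x = torch.rand(8, 1, 28, 28)        # fake batch of MNIST-sized images
target = torch.randint(0, 2, (8,))  # labels for the two-logit output
output = model(x)                   # [8, 2] log-probabilities
loss = F.nll_loss(output, target)   # F is torch.nn.functional from the imports above
optimizer.zero_grad()
loss.backward()
optimizer.step()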

VQE Example

Train a quantum circuit to perform the VQE task and deploy it on the real IBM Quito quantum computer, as in the simple_vqe.py script:

cd examples/vqe
python vqe.py

MNIST Example

Train a quantum circuit to perform the MNIST classification task and deploy it on the real IBM Quito quantum computer, as in the mnist_example.py script:

cd examples/mnist
python mnist.py

Files

File Description
devices.py QuantumDevice class which stores the statevector
encoding.py Encoding layers to encode classical values to quantum domain
functional.py Quantum gate functions
operators.py Quantum gate classes
layers.py Layer templates such as RandomLayer
measure.py Measurement of quantum states to get classical values
graph.py Quantum gate graph used in static mode
super_layer.py Layer templates for SuperCircuits
plugins/qiskit* Converters and processors for easy deployment on IBMQ
examples/ More examples for training QML and VQE models

Coding Style

torchquantum uses pre-commit hooks to ensure Python style consistency and prevent common mistakes in its codebase.

To enable the pre-commit hooks, run:

pip install pre-commit
pre-commit install

Papers using TorchQuantum

Manuscripts

Dependencies

  • Python >= 3.7, <= 3.9 (Python 3.10 may have a concurrent package issue with Qiskit)
  • PyTorch >= 1.8.0
  • configargparse >= 0.14
  • GPU model training requires NVIDIA GPUs

Contact

TorchQuantum Forum

Hanrui Wang [email protected]

Contributors

Jiannan Cao, Jessica Ding, Jiai Gu, Song Han, Zhirui Hu, Zirui Li, Zhiding Liang, Pengyu Liu, Yilian Liu, Mohammadreza Tavasoli, Hanrui Wang, Zhepeng Wang, Zhuoyang Ye

Citation

@inproceedings{hanruiwang2022quantumnas,
    title     = {Quantumnas: Noise-adaptive search for robust quantum circuits},
    author    = {Wang, Hanrui and Ding, Yongshan and Gu, Jiaqi and Li, Zirui and Lin, Yujun and Pan, David Z and Chong, Frederic T and Han, Song},
    booktitle = {The 28th IEEE International Symposium on High-Performance Computer Architecture (HPCA-28)},
    year      = {2022}
}


torchquantum's Issues

GPU is not utilized during VQE training

I tried to use the code in the VQE examples but found that the GPU was not utilized, even though 2 GB of GPU memory is used.

My configuration:

[2022-05-31 13:54:52.758] /home/yuxuan/.julia/conda/3/envs/qtorch39/bin/python  examples/vqe/xxz_noncritical_configs.yml --gpu 0
[2022-05-31 13:54:52.758] Training started: "runs/vqe.xxz_noncritical_configs".
dataset:
  name: vqe
  input_name: input
  target_name: target
trainer:
  name: params_shift_trainer
run:
  steps_per_epoch: 10
  workers_per_gpu: 8
  n_epochs: 10
  bsz: 1
  device: gpu
model:
  transpile_before_run: False
  load_op_list: False
  hamil_filename: examples/vqe/h2.txt
  arch:
    n_wires: 6
    n_layers_per_block: 6
    q_layer_name: seth_0
    n_blocks: 6
  name: vqe_0
qiskit:
  use_qiskit: False
  use_qiskit_train: True
  use_qiskit_valid: True
  use_real_qc: False
  backend_name: ibmq_quito
  noise_model_name: None
  n_shots: 8192
  initial_layout: None
  optimization_level: 0
  max_jobs: 1
ckpt:
  load_ckpt: False
  load_trainer: False
  name: checkpoints/min-loss-valid.pt
debug:
  pdb: False
  set_seed: False
optimizer:
  name: adam
  lr: 0.05
  weight_decay: 0.0001
  lambda_lr: 0.01
criterion:
  name: minimize
scheduler:
  name: cosine
callbacks: [{'callback': 'InferenceRunner', 'split': 'valid', 'subcallbacks': [{'metrics': 'MinError', 'name': 'loss/valid'}]}, {'callback': 'MinSaver', 'name': 'loss/valid'}, {'callback': 'Saver', 'max_to_keep': 10}]
regularization:
  unitary_loss: False
legalization:
  legalize: False

GPU status by Nvitop (the last line is the VQE training process): [screenshot omitted]

Version information:

python                    3.9.11               h12debd9_2
tensorboard               2.9.0                    pypi_0    pypi
tensorboard-data-server   0.6.1                    pypi_0    pypi
tensorboard-plugin-wit    1.8.1                    pypi_0    pypi
tensorflow                2.9.1                    pypi_0    pypi
tensorflow-estimator      2.9.0                    pypi_0    pypi
tensorflow-io-gcs-filesystem 0.26.0                   pypi_0    pypi
tensorpack                0.11                     pypi_0    pypi
torch                     1.11.0+cu113             pypi_0    pypi
torchaudio                0.11.0+cu113             pypi_0    pypi
torchpack                 0.3.1                    pypi_0    pypi
torchquantum              0.1.0                     dev_0    <develop>
torchvision               0.12.0+cu113             pypi_0    pypi

Add expectation value of a pauli string as a function

Can we add a function that returns a single value when computing the expectation value of some observable Z1Z2? Right now it returns [ , ] as an array.

Can we also add a function that returns the expectation value by measuring the output state only once? For this, we can just utilize measure(q_device, n_shots=1) and post-process the outcome. This feature is very common in libraries like PennyLane, and we should adopt it for our users.

def exp_val(q_device, n_shots = 1):
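
A post-processing sketch fleshing out that proposed signature; it assumes tq.measure returns one {bitstring: count} dict per batch element and that wire i maps to character i of each bitstring — both assumptions about the return format:

import torchquantum as tq

def exp_val(q_device, wires=(0, 1), n_shots=1):
    # hypothetical helper: estimate <Z...Z> on `wires` from sampled
    # bitstrings; the parity of the measured bits gives the +/-1
    # eigenvalue of the observable for each shot
    expvals = []
    for counts in tq.measure(q_device, n_shots=n_shots):  # one dict per batch element
        total = sum(counts.values())
        ev = 0.0
        for bitstring, cnt in counts.items():
            parity = sum(int(bitstring[w]) for w in wires) % 2
            ev += (1 - 2 * parity) * cnt / total
        expvals.append(ev)
    return expvals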

Code for QuantumNAS missing

Hi Hanrui,

I tried hard but still couldn't find the code for QuantumNAS. Is it not yet open? Or did I find the wrong place?

Thanks in advance.

inconvenient to run VQE example

I find it not very convenient to run the VQE example. If I run python examples/vqe/train.py directly, I get an error message:

......
    from examples.vqe import builder
ModuleNotFoundError: No module named 'examples.vqe'

But python -c "from examples.vqe import builder" works fine, which is strange to me.

My current way to run the script is to open a Python REPL and run:

from examples.vqe import train
import sys
sys.argv.append('examples/vqe/vqe_configs.yml')
train.main()

I wonder whether I can do this in a simpler way, or whether the code needs to be modified.
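
(A general Python note rather than a repo-specific fix: running the script as a module from the repository root, e.g. python -m examples.vqe.train examples/vqe/vqe_configs.yml, puts the working directory on sys.path, which is likely why the python -c import and the REPL approach work while the direct script invocation does not.)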

Install fails on arm64 MacOS

System: MacOS Ventura 13.0
Python: 3.11.0 (inside Conda env)

  note: This error originates from a subprocess, and is likely not a problem with pip.
  ERROR: Failed building wheel for tweedledum
Failed to build tweedledum
ERROR: Could not build wheels for tweedledum, which is required to install pyproject.toml-based projects

The problem is in pip install tweedledum.

This seems to be a well-known issue in tweedledum and in qiskit-terra.

Raw probabilities for measurement results

When it comes to using measurement results in learning methods, is there a way to simply get the exact probability of each measurement result (as opposed to the expectation for each register) from the device? Xanadu's PennyLane has the qml.probs() function for that.
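
A workaround sketch for exact probabilities on the simulator, assuming QuantumDevice exposes its flattened statevector via get_states_1d() (the accessor used the same way in other code on this page):

import torchquantum as tq

qdev = tq.QuantumDevice(n_wires=2, bsz=1)
qdev.h(wires=0)
qdev.cnot(wires=[0, 1])
states = qdev.get_states_1d()   # [bsz, 2**n_wires] complex amplitudes
probs = states.abs() ** 2       # exact probability of each basis state
# basis index i corresponds to the bitstring format(i, f'0{qdev.n_wires}b')
print(probs)                    # Bell state: [0.5, 0, 0, 0.5]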

Train data label and image are different

Hi,
I tried testing your quantum neural network code in a Jupyter notebook.
I think there is some bug in the training data.

dataset = MNIST(root='../Data_Manu',
                train_valid_split_ratio=[0.9, 0.1],
                digits_of_interest=[3, 5],
                # n_train_samples=75,
                n_test_samples=75)

data_train = dataset['train'][0]

{'image': tensor([[[-0.4242, -0.4242, -0.4242,  ..., -0.4242, -0.4242, -0.4242],
          ...,
          [-0.4242, -0.4242, -0.4242,  ..., -0.4242, -0.4242, -0.4242]]]),
 'digit': 1}

[28x28 tensor of normalized pixel values truncated for brevity]
The tensor matrix contains 5 but the label shows 1?

Exponential of linear combination of Paulis

Hi, could we please add a function to compute the exponential of a linear combination of multi-qubit Paulis?

E.g., H = 0.1 * Z1 Z2 X3 Y4 + 0.2 * X1 Z2 X3 Z4 + 0.01 * Z1 Z2 X3 Z4; can we then implement exp(iHt)? Allow H to be arbitrary!
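
For a handful of qubits, an illustrative dense-matrix sketch (exponential in qubit count, and not an existing TorchQuantum API): build H with Kronecker products, then call torch.matrix_exp:

import torch

PAULI = {'I': torch.eye(2, dtype=torch.complex64),
         'X': torch.tensor([[0, 1], [1, 0]], dtype=torch.complex64),
         'Y': torch.tensor([[0, -1j], [1j, 0]], dtype=torch.complex64),
         'Z': torch.tensor([[1, 0], [0, -1]], dtype=torch.complex64)}

def pauli_string_matrix(s):
    # dense matrix of a Pauli string such as 'ZZXY'
    m = PAULI[s[0]]
    for c in s[1:]:
        m = torch.kron(m, PAULI[c])
    return m

def exp_i_ht(terms, t):
    # terms: list of (coefficient, pauli_string) pairs defining H
    h = sum(coef * pauli_string_matrix(s) for coef, s in terms)
    return torch.matrix_exp(1j * t * h)

u = exp_i_ht([(0.1, 'ZZXY'), (0.2, 'XZXZ'), (0.01, 'ZZXZ')], t=0.5)
# u could then be applied with tqf.qubitunitary(qdev, wires=[0, 1, 2, 3], params=u)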

Request to add grouping for pauli strings

Thanks for the fantastic codebase.

Can we add a new feature to group the Pauli strings when conducting measurements in VQE? Since some of the Pauli strings can be measured simultaneously, we may group them together to make measurements more efficient.
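
A minimal sketch of one common heuristic, greedily grouping qubit-wise commuting strings (illustrative, not an existing TorchQuantum API):

def qubitwise_commute(p1, p2):
    # two Pauli strings can be measured simultaneously (qubit-wise) if, on
    # every qubit, they are equal or at least one of them is the identity
    return all(a == b or a == 'I' or b == 'I' for a, b in zip(p1, p2))

def group_paulis(paulis):
    # first-fit greedy grouping
    groups = []
    for p in paulis:
        for g in groups:
            if all(qubitwise_commute(p, q) for q in g):
                g.append(p)
                break
        else:
            groups.append([p])
    return groups

print(group_paulis(['ZZII', 'ZIIZ', 'XXII', 'IIXX']))
# [['ZZII', 'ZIIZ'], ['XXII', 'IIXX']]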

Multi-GPU Training

Hi,

In the intro section on the TorchQuantum GitHub page, it is mentioned that TorchQuantum can simulate ~30 qubits with multiple GPUs. However, it looks like a q_device as currently defined in the code isn't easily split using PyTorch model parallelism (e.g., the state vector cannot be split?). Could you please provide some comments on multi-GPU use of TorchQuantum?

Thanks!

Using TorchQuantum with GPU

I've encountered a weird scenario where measurement seems to change the device of the tensor from cuda to cpu (using tq.MeasureAll(tq.PauliZ)). This bug only occurs when I run a network without first encoding the data using tq.AmplitudeEncoder(); notably, if I use tq.MultiPhaseEncoder(['u3', 'u3', 'u3', 'u3']) instead, I get the same issue. This is important for the code I'm currently working on, as it has to do with generating custom embeddings.

Is there a fix for this? I can't directly change the measured output to be a gpu tensor, so I was hoping there was some other way of fixing the problem.

A simple way to improve the regression example

loss = F.mse_loss(output_all[:, 1], target_all)

In this example, after the measurement, you get 3 numbers (output_all.shape = [bsz, 3]).
However, in the loss function, only the 2nd number is utilized, i.e., [:, 1]. This leads to poor performance.
A simple fix can significantly improve the performance (I have already tested it).

  1. Add res = self.linear(res) as the last step of the self.forward() function, where self.linear = torch.nn.Linear(self.n_wires, 1)
  2. The targets need an unsqueeze(dim=-1) so the dimensions of output_all and the targets match (see the sketch below)
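
Concretely, the suggested fix might look like the following fragments (names follow the issue's description; this is a sketch, not a patch from the repo):

# in __init__():
self.linear = torch.nn.Linear(self.n_wires, 1)

# at the end of forward(), where res is the [bsz, n_wires] measurement result:
res = self.linear(res)   # learn a weighted combination of all measured wires

# in the training loop, match the [bsz, 1] output shape:
loss = F.mse_loss(output_all, target_all.unsqueeze(dim=-1))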

BTW: I have been playing around with torchquantum recently. It is a very good tool.

The Advantage of quantum-lstm versus classical lstm

Thank you very much for creating and uploading valuable resources for Quantum Machine Learning.

Among the code you provided, I am interested in utilizing the quantum LSTM. However, I'm curious about its advantages compared to a classical LSTM. While the classical LSTM seems to perform better in terms of the attached performance metrics, are there any advantages in terms of computational complexity or other aspects?

Thank you so much for your hard work!

Got stuck while running .\artifact\example2

I tried to run torchquantum-master\artifact\example2\quantumnas\1_train_supercircuit.sh, but I got stuck.

The program seems stuck after it begins to train. After the message "0% 0/92 [00:00<?, ?it/s]" came out, I waited for hours but nothing happened. The output is in the file "errorlog.log".

The version information is as follows:

>>> import qiskit
>>> qiskit.version.QiskitVersion()
{'qiskit-terra': '0.19.2', 'qiskit-aer': '0.10.3', 'qiskit-ignis': '0.7.0', 'qiskit-ibmq-provider': '0.18.3', 'qiskit-aqua': '0.9.5', 'qiskit': '0.34.2', 'qiskit-nature': None, 'qiskit-finance': None, 'qiskit-optimization': None, 'qiskit-machine-learning': None}

and I'm running the code under python 3.9.

I wonder how to deal with it. Any help would be greatly appreciated.

errorlog.log

Testing of mnist_example_no_binding.py file produces error

Hi,
I tried testing the mnist_example_no_binding.py file. I keep getting the below error:

  File "C:\Users\manuc\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\__init__.py", line 126, in <module>
    raise err
OSError: [WinError 1455] The paging file is too small for this operation to complete. Error loading "C:\Users\manuc\AppData\Local\Programs\Python\Python39\
lib\site-packages\torch\lib\cusparse64_11.dll" or one of its dependencies.
Traceback (most recent call last):
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "<string>", line 1, in <module>

Please suggest a way to resolve this. Because of this error, my testing is not yet complete.

Code for reproducing QuantumNAS results missing

The .py and .yml files referenced in the shell scripts used to reproduce the results from the QuantumNAS paper seem to be missing - when running the Colab notebooks, I always run into this error:

can't open file 'examples/train.py': [Errno 2] No such file or directory

I tried searching for the files in the repo manually, but could not find the .py or the .yml files anywhere.

Building a 4-class classifier with a 2-qubit circuit

I am wondering about a possible alternative approach for using quantum measurement results to perform a 4-class classification task. Basically, I would like to carry out the classification with a 2-qubit architecture. Each class would be assigned to one of the computational basis states among the possible measurement results. I would perform the appropriate operations on the circuit, measure the state with a certain number of shots, and create a 4-element tensor whose elements count the shots producing each measurement result, so there is one value for the number of "00" results and so on. I would then take a softmax of those values to get the classification result. What I am wondering is whether such a method can work with TorchQuantum. Since I don't have a lot of experience with it, I would appreciate it if someone would be willing to show me how to do this without accidentally breaking PyTorch's computational graph and preventing the parameters of my model from being updated.
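
A sketch of that idea which stays inside the autograd graph: instead of a softmax over finite shot counts (sampling is not differentiable), it uses the exact basis-state probabilities from the simulator; get_states_1d() is assumed here, as elsewhere on this page:

import torch
import torch.nn.functional as F
import torchquantum as tq

class FourClassHead(tq.QuantumModule):
    # illustrative 2-qubit head; the gate choice is arbitrary
    def __init__(self):
        super().__init__()
        self.n_wires = 2
        self.rx0 = tq.RX(has_params=True, trainable=True)
        self.ry0 = tq.RY(has_params=True, trainable=True)

    def forward(self, bsz=1):
        qdev = tq.QuantumDevice(n_wires=self.n_wires, bsz=bsz)
        self.rx0(qdev, wires=0)
        self.ry0(qdev, wires=1)
        qdev.cnot(wires=[0, 1])
        probs = qdev.get_states_1d().abs() ** 2  # [bsz, 4]: P(00), P(01), P(10), P(11)
        return torch.log(probs + 1e-12)          # log-probabilities, autograd-friendly

model = FourClassHead()
log_probs = model(bsz=8)
loss = F.nll_loss(log_probs, torch.randint(0, 4, (8,)))
loss.backward()  # gradients reach rx0/ry0 because no sampling step breaks the graph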

Support for fake backends

First of all, I appreciate your effort! This framework is so helpful for new learners!

I think it would be great if this framework supported fake backends as well, for reproducibility!

Thank you.

Torchquantum reinforcement learning agent behaves randomly even after epsilon-greedy phase

First of all, thanks again so much for your help with the parameter updating problem that I reported a few weeks ago. I have now encountered another issue. What seems to be happening is that even after the epsilon-greedy phase is over, the agent still behaves randomly. Since I deactivated randomness for the linear and convolutional layers, I am wondering what the source of the problem is. My code as it is right now is provided below:

import math
import torch
import torch.optim as optim
from torch.optim.lr_scheduler import CosineAnnealingLR
import torch.nn as nn
import torch.nn.functional as F
import time
import datetime
import calendar
import random
from minigrid.wrappers import *
import logging
from torchpack.callbacks import (InferenceRunner, MeanAbsoluteError,
                                 MaxSaver, MinSaver,
                                 Saver, SaverRestore, CategoricalAccuracy)
from torchpack.environ import set_run_dir
from torchpack.utils.config import configs
from torchpack.utils.logging import logger
from torchtest import assert_vars_change
from torch.nn.parameter import Parameter
import torchquantum as tq
import torchquantum.functional as tqf
from torchquantum.measurement import *
import matplotlib.pyplot as plt
import pickle as pkl
from obs_wrappers import ImgObsFlatWrapper
import gymnasium as gym
from gymnasium.wrappers.record_video import RecordVideo
from collections import namedtuple, deque
#from gymnasium.wrappers.record_episode_statistics import RecordEpisodeStatistic
import numpy as np
import os
from gymnasium.envs.registration import *
##from ._utils import _import_dotted_name
##from ._six import string_classes as _string_classes
##from torch._sources import get_source_lines_and_file
##from torch.types import Storage
##from torch.storage import _get_dtype_from_pickle_storage_type
##from typing_extensions import TypeAlias
##import copyreg



    

class ReplayMemory(object):
    def __init__(self, capacity):
        self.capacity = capacity
        self.memory = []
        self.position = 0

    def push(self, *args):
        if len(self.memory) < self.capacity:
            self.memory.append(None)
        self.memory[self.position] = Transition(*args)
        self.position = (self.position + 1) % self.capacity

    def sample(self, batch_size):
        return random.sample(self.memory, batch_size)

    def output_all(self):
        return self.memory

    def __len__(self):
        return len(self.memory)


Transition = namedtuple('Transition',
                        ('state', 'action', 'reward', 'next_state', 'done'))

def SO4(q_device, RY, RZ, CNOT, wires, static=None, parent_graph=None):
##    rz_pi = np.asarray([[np.exp(-1j * (np.pi / 4)), 0],
##                        [0, np.exp(1j * (np.pi / 4))]])
##    rz_neg_pi = np.asarray([[np.exp(1j * (np.pi / 4)), 0],
##                        [0, np.exp(-1j * (np.pi / 4))]])
##    ry_pi = np.asarray([[np.cos(np.pi / 4), -1 * np.sin(np.pi / 4)],
##                        [np.sin(np.pi / 4), np.cos(np.pi / 4)]])
##    ry_neg_pi = np.asarray([[np.cos(-1 * np.pi / 4), -1 * np.sin(-1 * np.pi / 4)],
##                            [np.sin(-1 * np.pi / 4), np.cos(-1 * np.pi / 4)]])
##    tqf.qubitunitary(device, wires=wires[0], params=rz_pi)
##    tqf.qubitunitary(device, wires=wires[1], params=rz_pi)
##    tqf.qubitunitary(device, wires=wires[1], params=ry_pi)
    tqf.rz(q_device, wires=wires[0], params=torch.tensor([np.pi / 2]), static=static, parent_graph=parent_graph)
    tqf.rz(q_device, wires=wires[1], params=torch.tensor([np.pi / 2]), static=static, parent_graph=parent_graph)
    tqf.ry(q_device, wires=wires[1], params=torch.tensor([np.pi / 2]), static=static, parent_graph=parent_graph)
    tqf.cnot(q_device, wires=[wires[1], wires[0]], static=static, parent_graph=parent_graph)
    RZ[0](q_device, wires=wires[0])
    RZ[1](q_device, wires=wires[1])
    RY[0](q_device, wires=wires[0])
    RY[1](q_device, wires=wires[1])
    RZ[2](q_device, wires=wires[0])
    RZ[3](q_device, wires=wires[1])
    tqf.cnot(q_device, wires=[wires[1], wires[0]], static=static, parent_graph=parent_graph)
    tqf.ry(q_device, wires=wires[1], params=torch.tensor([-np.pi / 2]), static=static, parent_graph=parent_graph)
    tqf.rz(q_device, wires=wires[0], params=torch.tensor([-np.pi / 2]), static=static, parent_graph=parent_graph)
    tqf.rz(q_device, wires=wires[1], params=torch.tensor([-np.pi / 2]), static=static, parent_graph=parent_graph)
##    tqf.qubitunitary(device, wires=wires[1], params=ry_neg_pi)
##    tqf.qubitunitary(device, wires=wires[0], params=rz_neg_pi)
##    tqf.qubitunitary(device, wires=wires[1], params=rz_neg_pi)


    

class TreeTensorAgent(tq.QuantumModule):
    class QLayer(tq.QuantumModule):
        def __init__(self):
            super().__init__()
            self.n_wires = 8
            self.n_actions = 4
##                self.q_device = tq.QuantumDevice(n_wires=self.n_wires)
                
            
            #self.bias = torch.tensor(np.random.rand(4), requires_grad=True)
            self.rz_0_0 = tq.RZ(has_params=True, trainable=True)
            self.rz_0_1 = tq.RZ(has_params=True, trainable=True)
            self.ry_0_0 = tq.RY(has_params=True, trainable=True)
            self.ry_0_1 = tq.RY(has_params=True, trainable=True)
            self.rz_0_2 = tq.RZ(has_params=True, trainable=True)
            self.rz_0_3 = tq.RZ(has_params=True, trainable=True)
            self.rz_1_0 = tq.RZ(has_params=True, trainable=True)
            self.rz_1_1 = tq.RZ(has_params=True, trainable=True)
            self.ry_1_0 = tq.RY(has_params=True, trainable=True)
            self.ry_1_1 = tq.RY(has_params=True, trainable=True)
            self.rz_1_2 = tq.RZ(has_params=True, trainable=True)
            self.rz_1_3 = tq.RZ(has_params=True, trainable=True)
            self.rz_2_0 = tq.RZ(has_params=True, trainable=True)
            self.rz_2_1 = tq.RZ(has_params=True, trainable=True)
            self.ry_2_0 = tq.RY(has_params=True, trainable=True)
            self.ry_2_1 = tq.RY(has_params=True, trainable=True)
            self.rz_2_2 = tq.RZ(has_params=True, trainable=True)
            self.rz_2_3 = tq.RZ(has_params=True, trainable=True)
            self.rz_3_0 = tq.RZ(has_params=True, trainable=True)
            self.rz_3_1 = tq.RZ(has_params=True, trainable=True)
            self.ry_3_0 = tq.RY(has_params=True, trainable=True)
            self.ry_3_1 = tq.RY(has_params=True, trainable=True)
            self.rz_3_2 = tq.RZ(has_params=True, trainable=True)
            self.rz_3_3 = tq.RZ(has_params=True, trainable=True)
            self.rz_4_0 = tq.RZ(has_params=True, trainable=True)
            self.rz_4_1 = tq.RZ(has_params=True, trainable=True)
            self.ry_4_0 = tq.RY(has_params=True, trainable=True)
            self.ry_4_1 = tq.RY(has_params=True, trainable=True)
            self.rz_4_2 = tq.RZ(has_params=True, trainable=True)
            self.rz_4_3 = tq.RZ(has_params=True, trainable=True)
            self.rz_5_0 = tq.RZ(has_params=True, trainable=True)
            self.rz_5_1 = tq.RZ(has_params=True, trainable=True)
            self.ry_5_0 = tq.RY(has_params=True, trainable=True)
            self.ry_5_1 = tq.RY(has_params=True, trainable=True)
            self.rz_5_2 = tq.RZ(has_params=True, trainable=True)
            self.rz_5_3 = tq.RZ(has_params=True, trainable=True)
            self.rz_6_0 = tq.RZ(has_params=True, trainable=True)
            self.rz_6_1 = tq.RZ(has_params=True, trainable=True)
            self.ry_6_0 = tq.RY(has_params=True, trainable=True)
            self.ry_6_1 = tq.RY(has_params=True, trainable=True)
            self.rz_6_2 = tq.RZ(has_params=True, trainable=True)
            self.rz_6_3 = tq.RZ(has_params=True, trainable=True)
            self.cnot = tq.CNOT(has_params=False, trainable=False)
                
        def forward(self, q_device, static_mode, graph):
            self.q_device = q_device
            #SO4(self.q_device, [self.ry_0_0, self.ry_0_1], [self.rz_0_0, self.rz_0_1, self.rz_0_2, self.rz_0_3], self.cnot, [0, 1], static=static_mode_mode, parent_graph=graph)
            #Layer 1 Gate 1 Start
            tqf.rz(q_device, wires=0, params=torch.tensor([np.pi / 2]), static=static_mode)
            tqf.rz(q_device, wires=1, params=torch.tensor([np.pi / 2]), static=static_mode)
            tqf.ry(q_device, wires=1, params=torch.tensor([np.pi / 2]), static=static_mode)
            tqf.cnot(q_device, wires=[1, 0], static=static_mode)
            self.rz_0_0(q_device, wires=0)
            self.rz_0_1(q_device, wires=1)
            self.ry_0_0(q_device, wires=0)
            self.ry_0_1(q_device, wires=1)
            self.rz_0_2(q_device, wires=0)
            self.rz_0_3(q_device, wires=1)
            tqf.cnot(q_device, wires=[1, 0], static=static_mode)
            tqf.ry(q_device, wires=1, params=torch.tensor([-np.pi / 2]), static=static_mode)
            tqf.rz(q_device, wires=0, params=torch.tensor([-np.pi / 2]), static=static_mode)
            tqf.rz(q_device, wires=1, params=torch.tensor([-np.pi / 2]), static=static_mode)
            #Layer 1 Gate 1 End
            
            #SO4(self.q_device, [self.ry_1_0, self.ry_1_1], [self.rz_1_0, self.rz_1_1, self.rz_1_2, self.rz_1_3], self.cnot, [2, 3], static=static_mode_mode, parent_graph=graph)
            #Layer 1 Gate 2 Start
            tqf.rz(q_device, wires=2, params=torch.tensor([np.pi / 2]), static=static_mode)
            tqf.rz(q_device, wires=3, params=torch.tensor([np.pi / 2]), static=static_mode)
            tqf.ry(q_device, wires=3, params=torch.tensor([np.pi / 2]), static=static_mode)
            tqf.cnot(q_device, wires=[3, 2], static=static_mode)
            self.rz_1_0(q_device, wires=2)
            self.rz_1_1(q_device, wires=3)
            self.ry_1_0(q_device, wires=2)
            self.ry_1_1(q_device, wires=3)
            self.rz_1_2(q_device, wires=2)
            self.rz_1_3(q_device, wires=3)
            tqf.cnot(q_device, wires=[3, 2], static=static_mode)
            tqf.ry(q_device, wires=3, params=torch.tensor([-np.pi / 2]), static=static_mode)
            tqf.rz(q_device, wires=2, params=torch.tensor([-np.pi / 2]), static=static_mode)
            tqf.rz(q_device, wires=3, params=torch.tensor([-np.pi / 2]), static=static_mode)
            #Layer 1 Gate 2 End
            
            #SO4(self.q_device, [self.ry_2_0, self.ry_2_1], [self.rz_2_0, self.rz_2_1, self.rz_2_2, self.rz_2_3], self.cnot, [4, 5], static=static_mode_mode, parent_graph=graph)
            #Layer 1 Gate 3 Start
            tqf.rz(q_device, wires=4, params=torch.tensor([np.pi / 2]), static=static_mode)
            tqf.rz(q_device, wires=5, params=torch.tensor([np.pi / 2]), static=static_mode)
            tqf.ry(q_device, wires=5, params=torch.tensor([np.pi / 2]), static=static_mode)
            tqf.cnot(q_device, wires=[5, 4], static=static_mode)
            self.rz_2_0(q_device, wires=4)
            self.rz_2_1(q_device, wires=5)
            self.ry_2_0(q_device, wires=4)
            self.ry_2_1(q_device, wires=5)
            self.rz_2_2(q_device, wires=4)
            self.rz_2_3(q_device, wires=5)
            tqf.cnot(q_device, wires=[5, 4], static=static_mode)
            tqf.ry(q_device, wires=5, params=torch.tensor([-np.pi / 2]), static=static_mode)
            tqf.rz(q_device, wires=4, params=torch.tensor([-np.pi / 2]), static=static_mode)
            tqf.rz(q_device, wires=5, params=torch.tensor([-np.pi / 2]), static=static_mode)
            #Layer 1 Gate 3 End
            
            #SO4(self.q_device, [self.ry_3_0, self.ry_3_1], [self.rz_3_0, self.rz_3_1, self.rz_3_2, self.rz_3_3], self.cnot, [6, 7], static=static_mode_mode, parent_graph=graph)
            #Layer 1 Gate 4 Start
            tqf.rz(q_device, wires=6, params=torch.tensor([np.pi / 2]), static=static_mode)
            tqf.rz(q_device, wires=7, params=torch.tensor([np.pi / 2]), static=static_mode)
            tqf.ry(q_device, wires=7, params=torch.tensor([np.pi / 2]), static=static_mode)
            tqf.cnot(q_device, wires=[7, 6], static=static_mode)
            self.rz_3_0(q_device, wires=6)
            self.rz_3_1(q_device, wires=7)
            self.ry_3_0(q_device, wires=6)
            self.ry_3_1(q_device, wires=7)
            self.rz_3_2(q_device, wires=6)
            self.rz_3_3(q_device, wires=7)
            tqf.cnot(q_device, wires=[7, 6], static=static_mode)
            tqf.ry(q_device, wires=7, params=torch.tensor([-np.pi / 2]), static=static_mode)
            tqf.rz(q_device, wires=6, params=torch.tensor([-np.pi / 2]), static=static_mode)
            tqf.rz(q_device, wires=7, params=torch.tensor([-np.pi / 2]), static=static_mode)
            #Layer 1 Gate 4 End
            
            #SO4(self.q_device, [self.ry_4_0, self.ry_4_1], [self.rz_4_0, self.rz_4_1, self.rz_4_2, self.rz_4_3], self.cnot, [1, 2], static=static_mode_mode, parent_graph=graph)
            #Layer 2 Gate 1 Start
            tqf.rz(q_device, wires=1, params=torch.tensor([np.pi / 2]), static=static_mode)
            tqf.rz(q_device, wires=2, params=torch.tensor([np.pi / 2]), static=static_mode)
            tqf.ry(q_device, wires=2, params=torch.tensor([np.pi / 2]), static=static_mode)
            tqf.cnot(q_device, wires=[2, 1], static=static_mode)
            self.rz_4_0(q_device, wires=1)
            self.rz_4_1(q_device, wires=2)
            self.ry_4_0(q_device, wires=1)
            self.ry_4_1(q_device, wires=2)
            self.rz_4_2(q_device, wires=1)
            self.rz_4_3(q_device, wires=2)
            tqf.cnot(q_device, wires=[2, 1], static=static_mode)
            tqf.ry(q_device, wires=2, params=torch.tensor([-np.pi / 2]), static=static_mode)
            tqf.rz(q_device, wires=1, params=torch.tensor([-np.pi / 2]), static=static_mode)
            tqf.rz(q_device, wires=2, params=torch.tensor([-np.pi / 2]), static=static_mode)
            #Layer 2 Gate 1 End
            
            #SO4(self.q_device, [self.ry_5_0, self.ry_5_1], [self.rz_5_0, self.rz_5_1, self.rz_5_2, self.rz_5_3], self.cnot, [5, 6], static=static_mode_mode, parent_graph=graph)
            #Layer 2 Gate 2 Start
            tqf.rz(q_device, wires=5, params=torch.tensor([np.pi / 2]), static=static_mode)
            tqf.rz(q_device, wires=6, params=torch.tensor([np.pi / 2]), static=static_mode)
            tqf.ry(q_device, wires=6, params=torch.tensor([np.pi / 2]), static=static_mode)
            tqf.cnot(q_device, wires=[6, 5], static=static_mode)
            self.rz_5_0(q_device, wires=5)
            self.rz_5_1(q_device, wires=6)
            self.ry_5_0(q_device, wires=5)
            self.ry_5_1(q_device, wires=6)
            self.rz_5_2(q_device, wires=5)
            self.rz_5_3(q_device, wires=6)
            tqf.cnot(q_device, wires=[6, 5], static=static_mode)
            tqf.ry(q_device, wires=6, params=torch.tensor([-np.pi / 2]), static=static_mode)
            tqf.rz(q_device, wires=5, params=torch.tensor([-np.pi / 2]), static=static_mode)
            tqf.rz(q_device, wires=6, params=torch.tensor([-np.pi / 2]), static=static_mode)
            #Layer 2 Gate 2 End
            #SO4(self.q_device, [self.ry_6_0, self.ry_6_1], [self.rz_6_0, self.rz_6_1, self.rz_6_2, self.rz_6_3], self.cnot, [2, 5], static=static_mode_mode, parent_graph=graph)
            #Layer 3 Gate 1 Start
            tqf.rz(q_device, wires=2, params=torch.tensor([np.pi / 2]), static=static_mode)
            tqf.rz(q_device, wires=5, params=torch.tensor([np.pi / 2]), static=static_mode)
            tqf.ry(q_device, wires=5, params=torch.tensor([np.pi / 2]), static=static_mode)
            tqf.cnot(q_device, wires=[5, 2], static=static_mode)
            self.rz_6_0(q_device, wires=2)
            self.rz_6_1(q_device, wires=5)
            self.ry_6_0(q_device, wires=2)
            self.ry_6_1(q_device, wires=5)
            self.rz_6_2(q_device, wires=2)
            self.rz_6_3(q_device, wires=5)
            tqf.cnot(q_device, wires=[5, 2], static=static_mode, parent_graph=graph)
            tqf.ry(q_device, wires=5, params=torch.tensor([-np.pi / 2]), static=static_mode)
            tqf.rz(q_device, wires=2, params=torch.tensor([-np.pi / 2]), static=static_mode)
            tqf.rz(q_device, wires=5, params=torch.tensor([-np.pi / 2]), static=static_mode)
            #Layer 3 Gate 1 End
    def __init__(self, input_size):
        super().__init__()
        self.n_wires = 8
        self.n_actions = 4
        self.input_size = input_size
        self.q_layer = self.QLayer()
        #self.bias = Parameter(torch.zeros(self.n_actions))
        self.smx = nn.Softmax()
        self.bitstrings = gen_bitstrings(self.n_wires)
        self.q_device = tq.QuantumDevice(n_wires=self.n_wires)
        #self.mps = MPS(input_dim = 147, output_dim = 8, bond_dim = 2, feature_dim = 2, use_GPU = False, parallel = True, init_std=1e-2)
##        self.feature_map = nn.Sequential(nn.Linear(self.input_size, 64), nn.ReLU(),
##                                         nn.Conv1d(self.input_size, 64, kernel_size=2, stride=2), nn.ReLU(),
##                                         nn.Conv1d(64, 1, kernel_size=2, stride=3), nn.Tanh())
        self.layer_1 = nn.Linear(1,256)
        #147 x 128
        self.layer_2 = nn.ReLU()
        self.layer_3 = nn.Conv1d(self.input_size, 256, kernel_size=2, padding=1, dilation=2)
        self.layer_4 = nn.ReLU()
        self.layer_5 = nn.Conv1d(256, 1, kernel_size=4, stride=8, padding=1, dilation=3)
        self.layer_6 = nn.CELU()
##        for param in self.mps.parameters():
##            print("Gradient required: " + str(param.requires_grad))
            
            
##        self.measure = tq.MeasureMultiPauliSum(
##            obs_list=[{"wires" : [3, 4],
##                       "observables" : ['z', 'z'],
##                       "coefficient" : [1, 1]}])
        
##        self.encoder1 = tq.GeneralEncoder(
##            [{"input_idx" : [0], "func" : "ry", "wires" : [0]},
##             {"input_idx" : [1], "func" : "ry", "wires" : [1]},
##             {"input_idx" : [2], "func" : "ry", "wires" : [2]},
##             {"input_idx" : [3], "func" : "ry", "wires" : [3]},
##             {"input_idx" : [4], "func" : "ry", "wires" : [4]},
##             {"input_idx" : [5], "func" : "ry", "wires" : [5]},
##             {"input_idx" : [6], "func" : "ry", "wires" : [6]},
##             {"input_idx" : [7], "func" : "ry", "wires" : [7]}])
##
##        self.encoder2 = tq.GeneralEncoder(
##            [{"input_idx" : [0], "func" : "rz", "wires" : [0]},
##             {"input_idx" : [1], "func" : "rz", "wires" : [1]},
##             {"input_idx" : [2], "func" : "rz", "wires" : [2]},
##             {"input_idx" : [3], "func" : "rz", "wires" : [3]},
##             {"input_idx" : [4], "func" : "rz", "wires" : [4]},
##             {"input_idx" : [5], "func" : "rz", "wires" : [5]},
##             {"input_idx" : [6], "func" : "rz", "wires" : [6]},
##             {"input_idx" : [7], "func" : "rz", "wires" : [7]}])

        self.encoder=tq.GeneralEncoder(
            [{"input_idx" : [0], "func" : "ry", "wires" : [0]},
             {"input_idx" : [1], "func" : "ry", "wires" : [1]},
             {"input_idx" : [2], "func" : "ry", "wires" : [2]},
             {"input_idx" : [3], "func" : "ry", "wires" : [3]},
             {"input_idx" : [4], "func" : "ry", "wires" : [4]},
             {"input_idx" : [5], "func" : "ry", "wires" : [5]},
             {"input_idx" : [6], "func" : "ry", "wires" : [6]},
             {"input_idx" : [7], "func" : "ry", "wires" : [7]},
             {"input_idx" : [8], "func" : "rz", "wires" : [0]},
             {"input_idx" : [9], "func" : "rz", "wires" : [1]},
             {"input_idx" : [10], "func" : "rz", "wires" : [2]},
             {"input_idx" : [11], "func" : "rz", "wires" : [3]},
             {"input_idx" : [12], "func" : "rz", "wires" : [4]},
             {"input_idx" : [13], "func" : "rz", "wires" : [5]},
             {"input_idx" : [14], "func" : "rz", "wires" : [6]},
             {"input_idx" : [15], "func" : "rz", "wires" : [7]},
             {"input_idx" : [16], "func" : "rx", "wires" : [0]},
             {"input_idx" : [17], "func" : "rx", "wires" : [1]},
             {"input_idx" : [18], "func" : "rx", "wires" : [2]},
             {"input_idx" : [19], "func" : "rx", "wires" : [3]},
             {"input_idx" : [20], "func" : "rx", "wires" : [4]},
             {"input_idx" : [21], "func" : "rx", "wires" : [5]},
             {"input_idx" : [22], "func" : "rx", "wires" : [6]},
             {"input_idx" : [23], "func" : "rx", "wires" : [7]},
             {"input_idx" : [24], "func" : "rz", "wires" : [0]},
             {"input_idx" : [25], "func" : "rz", "wires" : [1]},
             {"input_idx" : [26], "func" : "rz", "wires" : [2]},
             {"input_idx" : [27], "func" : "rz", "wires" : [3]},
             {"input_idx" : [28], "func" : "rz", "wires" : [4]},
             {"input_idx" : [29], "func" : "rz", "wires" : [5]},
             {"input_idx" : [30], "func" : "rz", "wires" : [6]},
             {"input_idx" : [31], "func" : "rz", "wires" : [7]}])
         
             

##    def get_angles_atan(self, in_x):
##        angles = torch.stack([torch.stack([torch.atan(item), torch.atan(item**2)]) for item in in_x])
##        return angles

    def forward(self, input_data, check=False):
        #measure_counts = np.zeros(self.n_actions)
        prob_dict = {}
        #x = self.feature_map(input_data)
        x_1 = self.layer_1(input_data)
        x_2 = self.layer_2(x_1)
        #print("Stage one size: " + str(x_2.shape))
        x_3 = self.layer_3(x_2)
        x_4 = self.layer_4(x_3)
        #print("Stage two size: " + str(x_3.shape))
        x_5 = self.layer_5(x_4)
        x_6 = self.layer_6(x_5)
        #print("Stage three size " + str(x_6.shape))
        #print(type(x))
        #print(x.shape)
        #x_angles = self.get_angles_atan(x)
        #x_angles = torch.stack([torch.atan(x), torch.atan(x ** 2)])
##        print("Angle array shape: " + str(x_angles.shape))
##        print("Gradient preserved: " + str(x_angles.requires_grad))
        #torch.reshape(x_angles, (1, 16))
##        new_x_angles = x_angles.view(1, 16)
##        x_angles = new_x_angles
##        print("Input shape: " + str(x_angles.shape))
##        print("Gradient preserved: " + str(x_angles.requires_grad))
##        if check:
##            print(x_angles)
        #print(x_angles[0][0])
        x_angles = torch.atan(x_6)
        for i in range(self.n_wires):
            
            tqf.hadamard(self.q_device, wires=i, static=self.static_mode, parent_graph=self.graph)
        
##        self.encoder1(self.q_device, x_angles[0][0])
##        self.encoder2(self.q_device, x_angles[1][0])
        self.encoder(self.q_device, x_angles)
        #print("Parent graph: " + str(self.graph))
        self.q_layer.forward(self.q_device, self.static_mode, self.graph)
        
##        SO4(self.q_device, [self.ry_0_0, self.ry_0_1], [self.rz_0_0, self.rz_0_1, self.rz_0_2, self.rz_0_3], self.cnot, [0, 1])
##        SO4(self.q_device, [self.ry_1_0, self.ry_1_1], [self.rz_1_0, self.rz_1_1, self.rz_1_2, self.rz_1_3], self.cnot, [2, 3])
##        SO4(self.q_device, [self.ry_2_0, self.ry_2_1], [self.rz_2_0, self.rz_2_1, self.rz_2_2, self.rz_2_3], self.cnot, [4, 5])
##        SO4(self.q_device, [self.ry_3_0, self.ry_3_1], [self.rz_3_0, self.rz_3_1, self.rz_3_2, self.rz_3_3], self.cnot, [6, 7])
##        SO4(self.q_device, [self.ry_4_0, self.ry_4_1], [self.rz_4_0, self.rz_4_1, self.rz_4_2, self.rz_4_3], self.cnot, [1, 2])
##        SO4(self.q_device, [self.ry_5_0, self.ry_5_1], [self.rz_5_0, self.rz_5_1, self.rz_5_2, self.rz_5_3], self.cnot, [5, 6])
##        SO4(self.q_device, [self.ry_6_0, self.ry_6_1], [self.rz_6_0, self.rz_6_1, self.rz_6_2, self.rz_6_3], self.cnot, [2, 5])
        #print("ops done")
##        device_states = self.q_device.get_states_1d()
##        #print(device_states)
##        circuit_state = tq.QuantumState(n_wires=self.n_wires)
##        circuit_state.set_states(device_states)
        #state_vec = self.q_device.get_states_1d().abs().detach().cpu().numpy()
##        print("State vector: ")
##        print(state_vec)
##        print(state_vec.shape)
        #measures = tq.measure(self.q_device, n_shots=4096)
##        for i in range(len(self.bitstrings)):
##            prob_dict[self.bitstrings[i]] = np.abs(state_vec[0][i]) ** 2
##        qbit_states = list(prob_dict.keys())
        #print(qbit_states)
##        print(type(measure_results))
        #print(measure_results)
        #qbit_states = [result.keys() for result in measure_results]
        #print(qbit_states)
##        for bitkey in qbit_states:
##            if bitkey[3] == '0' and bitkey[4] == '0':
##                measure_counts[0] += prob_dict[bitkey]
##            elif bitkey[3] == '0' and bitkey[4] == '1':
##                measure_counts[1] += prob_dict[bitkey]
##            elif bitkey[3] == '1' and bitkey[4] == '0':
##                measure_counts[2] += prob_dict[bitkey]
##            else:
##                measure_counts[3] += prob_dict[bitkey]
##        measure_norm = np.linalg.norm(measure_counts)
##        measure_counts = measure_counts / measure_norm
##        if check:
##            print("Measure outcomes: ")
##            print(measure_counts)
        #measure_weights = torch.tensor(measure_counts, requires_grad=True)
        obs_1 = expval_joint_analytical(self.q_device, "ZZZXXZZZ")
        obs_2 = expval_joint_analytical(self.q_device, "ZZZYYZZZ")
        obs_3 = expval_joint_analytical(self.q_device, "ZZZYXZZZ")
        obs_4 = expval_joint_analytical(self.q_device, "ZZZXYZZZ")
        expectations = torch.stack([obs_1, obs_2, obs_3, obs_4], dim=1)
        #measure_weights = self.smx(measure_results)
        if check:
            print("Measure weights: ")
            print(expectations)
        #print("Gradient preserved: " + str(measure_weights.requires_grad))
        measure_weights = expectations.view(4)
        #print("Output shape: " + str(measure_weights.shape))
##        if check:
##            print(measure_weights)
        #print("Measure results")
        #print(measure_counts)
        return measure_weights

def square_loss(labels, predictions):
    loss = 0
    for l, p in zip(labels, predictions):
##        print(type(l))
##        print(type(p))
        loss = loss + ((l - p) ** 2)
    loss = loss / len(labels)
    #print(type(loss))
    return loss

def epsilon_greedy(TreeTensor, epsilon, s, n_actions, timestep, rgen, check=False, train=False):
    # Exploit with probability (1 - epsilon + epsilon / n_actions); otherwise
    # explore with a uniformly random action. train=True forces the greedy branch.
    if train or rgen.random() < ((epsilon / n_actions) + (1 - epsilon)):
        with torch.no_grad():
            measurements = TreeTensor(s, check=check)
            action = torch.argmax(measurements)
            if check:
                print("Argmax result: " + str(action))
            return action
    else:
        action = rgen.integers(0, high=n_actions)
        if check:
            print("Epsilon exploration, random action: " + str(action))
        return torch.tensor(action)

def cost(model, features, labels, dev):
    # Smooth L1 (Huber) loss between predicted Q-values and TD targets.
    # torch.stack keeps the computation graph intact; re-wrapping the
    # predictions in torch.tensor() would detach them from the model,
    # so no gradients would ever reach the circuit parameters.
    loss_func = nn.SmoothL1Loss()
    predictions = torch.stack([model(item.state)[item.action] for item in features]).to(dev)
    targets = torch.stack([torch.as_tensor(l) for l in labels]).to(dev).detach()
    return loss_func(predictions, targets)

def ttn_train(env_name, model, alpha, gamma, epsilon, episodes, max_steps, n_actions, top_dev, opt, sched, render=True):

    act_range = [0, 1, 2, 6]  # MiniGrid action ids: 0=left, 1=right, 2=forward, 6=done
    logging.basicConfig(filename="ExperimentDebug1.txt", level=logging.DEBUG)
    logging.captureWarnings(True)

    param_file = "TTN_params.bin"
    scores = []
    target_update = 20
    batch_size = 100
    optimize_steps = 5
    target_update_counter = 0
    iter_index = []
    iter_reward = []
    iter_total_steps = []
    cost_list = []
    timestep_reward = []
    random.seed(int(time.time()))
    seed = int(time.time())
    rng = np.random.default_rng(seed)
    memory = ReplayMemory(500)
    env = gym.make(env_name, max_episode_steps=max_steps, disable_env_checker=True, render_mode="human")
    env = ImgObsFlatWrapper(env)
    env_record = RecordVideo(env, f"video/TTNMinigridTraining")
    start_state = None
    start_time = time.asctime()
    for episode in range(episodes):
        env_record.reset()
        print("Episode: " + str(episode))
        t = 0
        total_reward = 0
        done = False
        seedVal = int(time.time())
        np.random.seed(seedVal)
        # Reuse the grid from the first episode so every episode starts
        # from the same layout
        if episode == 0:
            start_state = env_record.env.grid
        else:
            env_record.env.grid = start_state
        if render:
            env_record.render()
        observation = env_record.env.gen_obs()
        observation = torch.tensor(observation['image']).type('torch.FloatTensor').view(147, 1).to(top_dev)
        print("Observation shape: " + str(observation.shape))
        observation.requires_grad = True
        act = epsilon_greedy(model, epsilon, observation, n_actions, t, rng, check=True)
        action = act_range[act]
        while t < max_steps:
            print("Episode: " + str(episode) + " , " + "Timestep: " + str(t))
            if render:
                env_record.render()
            t += 1
            target_update_counter += 1
            seedVal = int(time.time())
            np.random.seed(seedVal)
            next_obs, reward, done, _, info = env_record.step(action)
            print("Step reward: " + str(reward))
            next_obs = torch.tensor(next_obs).type('torch.FloatTensor').view(147, 1).to(top_dev)
            next_obs.requires_grad = True
            total_reward += reward
            act_ = epsilon_greedy(model, epsilon, next_obs, n_actions, t, rng, check=True)
            action_ = act_range[act_]
            memory.push(observation, act, reward, next_obs, done)
            if len(memory) > batch_size and done:
                batch_sampled = memory.sample(batch_size)
                # One-step TD targets: r + gamma * max_a' Q(s', a'), zeroed for terminal states
                Qtarget = [item.reward + (1 - int(item.done)) * gamma * torch.max(model(item.next_state)) for item in batch_sampled]
                loss = cost(model, batch_sampled, Qtarget, top_dev)
                opt.zero_grad()
                # backward() must run before step(); without it the optimizer has no gradients
                loss.backward()
                opt.step()
            if target_update_counter >= target_update:
                target_update_counter = 0
            observation, action = next_obs, action_
            if done or t == max_steps:
                # Decay the exploration rate and learning rate between episodes
                epsilon = epsilon / ((episode / 750) + 1)
                alpha = 0.95 * alpha
                timestep_reward.append(total_reward)
                print("Reward data length: " + str(len(timestep_reward)))
                iter_index.append(episode)
                iter_total_steps.append(t)
                break
    stop_time = time.asctime()
    print("Start time: ")
    print(start_time)
    print("Stop time: ")
    print(stop_time)
    torch.save(model.state_dict(), param_file)
    return timestep_reward, iter_index, iter_reward, iter_total_steps


def test_agent(model, env_folder, epsilon, env_name, config_name, n_tests, max_steps, delay=1):
    act_range = [0, 1, 2, 6]
    n_successes = 0
    test_rewards = []
    rng = np.random.default_rng(int(time.time()))
    env = gym.make(env_name, max_episode_steps=max_steps, render_mode="human", height=64, width=64)
    env = SymbolicObsWrapper(env)
    env = ImgObsFlatWrapper(env)
    env_record = RecordVideo(env, f"video/TTNMinigridTraining")
    done = False
    for test in range(n_tests):
        reward_total = 0
        epsilon = 0  # act greedily during evaluation
        observation, _ = env_record.reset()
        # Restore the saved grid layout for this test case; the state file
        # is read, so it must be opened in "rb" mode rather than "wb"
        filename = env_folder + "/" + env_name + "_" + str(test)
        with open(filename, "rb") as statefile:
            state_data = pkl.load(statefile)
        new_grid = env_record.env.grid.decode(state_data)
        env_record.env.grid = new_grid
        t = 0
        while True:
            time.sleep(delay)
            s = torch.tensor(observation).type('torch.FloatTensor').view(1, -1)
            act = epsilon_greedy(model, epsilon, s, len(act_range), t, rng)
            a = act_range[act]
            next_obs, reward, done, _, info = env_record.step(a)
            observation = next_obs
            reward_total += reward
            t += 1
            if done:
                if reward > 0:
                    n_successes += 1
                    print("Goal Reached")
                else:
                    print("Task Failed")
                test_rewards.append(reward_total)
                time.sleep(3)
                break
    return test_rewards, n_successes
def main():
    env_name = "MiniGrid-Empty-8x8-v0"
    use_cuda = torch.cuda.is_available()
    device = torch.device("cuda" if use_cuda else "cpu")
    # cuBLAS needs this workspace setting once deterministic algorithms are enabled on GPU
    os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"
    torch.use_deterministic_algorithms(True)
    torch.backends.cudnn.benchmark = False
    alpha = 0.4
    gamma = 0.5
    epsilon = 1
    episodes = 1000
    max_steps = 100
    n_actions = 4
    seed = 42
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    model = TreeTensorAgent(147).to(device)
    print("Model graph: " + str(model.graph))

    tn_opt = optim.Adam(model.parameters(), lr=5e-3, weight_decay=1e-4)
    scheduler = CosineAnnealingLR(tn_opt, T_max=episodes)
    timestep_reward, iter_index, iter_reward, iter_total_steps = ttn_train(env_name, model, alpha, gamma, epsilon, episodes, max_steps, n_actions, device, tn_opt, scheduler)
    x_vals = np.arange(episodes)
    y_vals = np.asarray(timestep_reward)
    fig, ax = plt.subplots()
    ax.plot(x_vals, y_vals)
    ax.grid()
    ax.set(xlabel="Episode", ylabel="Total Score", title="Deep Quantum TTN Learning Training Process: 6-Site Random Lava Minigrid")
    fig.savefig("TTNTrain.png")
    plt.close(fig)

if __name__ == "__main__":
    main()
        

I would greatly appreciate your help.

Cannot use Qiskit simulation when running example1

I tried to run torchquantum-master\artifact\example1\mnist_example.py, but I ran into some trouble when doing Qiskit simulation, too.

Because the file "examples.core.datasets" is missing, I copied it from https://zenodo.org/record/5787244#.YbunmBPMJhE (\torchquantum-master\examples\core\datasets).
To avoid a BrokenPipeError, I reset "num_workers" in line 152 to 0.

After these messages:

Test with Qiskit Simulator
[2022-10-20 14:33:05.579] Before transpile: {'depth': 36, 'size': 77, 'width': 8, 'n_single_gates': 52, 'n_two_gates': 21, 'n_three_more_gates': 0, 'n_gates_dict': {'ry': 19, 'rz': 18, 'rx': 13, 'cx': 20, 'crx': 1, 'h': 1, 'sx': 1, 'measure': 4}}
[2022-10-20 14:33:06.257] After transpile: {'depth': 31, 'size': 61, 'width': 8, 'n_single_gates': 37, 'n_two_gates': 20, 'n_three_more_gates': 0, 'n_gates_dict': {'ry': 11, 'rz': 7, 'rx': 6, 'cx': 20, 'u3': 11, 'u1': 2, 'measure': 4}}

An error occurred, saying "need at least one array to stack". The details are in the file "errorlog1".

I also tried to add ", parallel=False" and modify the file qiskit/assembler/assemble_circuits.py as Issue #9 does, but another error occurred; the details are in the file "errorlog2".

The version information is as follows:

>>> import qiskit
>>> qiskit.version.QiskitVersion()
{'qiskit-terra': '0.19.2', 'qiskit-aer': '0.10.3', 'qiskit-ignis': '0.7.0', 'qiskit-ibmq-provider': '0.18.3', 'qiskit-aqua': '0.9.5', 'qiskit': '0.34.2', 'qiskit-nature': None, 'qiskit-finance': None, 'qiskit-optimization': None, 'qiskit-machine-learning': None}

and I'm running the code under python 3.9.

By the way, I tried the code from https://zenodo.org/record/5787244#.YbunmBPMJhE. The same problem occurred.
I wonder how to deal with it. Any help would be greatly appreciated.

errorlog1.txt
errorlog2.txt

How to save the QNN model like a normal pytorch model?

Hi,

How can I save the QNN model in such a way that it can be loaded back the same way we load a normal PyTorch model? Basically, I want to load it for this use case.

I did check the saving example from the examples section, but it doesn't save the entire model, just a checkpoint.
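
For reference, a tq.QuantumModule subclasses torch.nn.Module, so the usual state_dict round trip should work; a minimal sketch, assuming MyQFCModel is the same class that was used to build the saved model:

import torch

# Save: the trainable parameters round-trip through the standard
# torch.nn.Module state_dict mechanism.
torch.save(model.state_dict(), "qnn_model.pt")

# Load: re-instantiate the same architecture, then restore the weights.
model2 = MyQFCModel()  # hypothetical class; must match the saved model's definition
model2.load_state_dict(torch.load("qnn_model.pt"))
model2.eval()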

Regarding the support of circuit length

Hi,

I wonder whether you would consider adding a new feature to this library through which the user could conveniently get the circuit length (depth) of a given model before and after compilation.
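
In the meantime, a workaround sketch: convert the module to a Qiskit circuit and compare depths before and after transpilation. This assumes tq2qiskit from torchquantum.plugin is available and that model.q_layer is the tq.QuantumModule of interest:

import torchquantum as tq
from torchquantum.plugin import tq2qiskit
from qiskit import transpile

qdev = tq.QuantumDevice(n_wires=8)
circ = tq2qiskit(qdev, model.q_layer)  # convert the tq module to a Qiskit circuit
print("Depth before compilation:", circ.depth())
compiled = transpile(circ, basis_gates=["cx", "u3"], optimization_level=3)
print("Depth after compilation:", compiled.depth())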

Density matrix and mixed state

Hi,

We are currently using TorchQuantum to implement hybrid models, and we're wondering: does TorchQuantum plan to support mixed states and density-matrix simulation in the near future? We'd like to implement something like qiskit.quantum_info.partial_trace.

Without density matrices/mixed states, is something like https://quantumai.google/reference/python/cirq/partial_trace_of_state_vector_as_mixture currently doable with TorchQuantum?
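
Until density matrices land, a reduced density matrix can be computed directly from the statevector. A minimal sketch, not an official API, assuming get_states_1d() returns the batched state:

import torch
import torchquantum as tq

def partial_trace_from_statevector(state, n_wires, keep):
    # Reshape the flat statevector into one axis per qubit, move the kept
    # qubits to the front, and contract |psi><psi| over the traced-out axes.
    psi = state.reshape([2] * n_wires)
    keep = sorted(keep)
    traced = [w for w in range(n_wires) if w not in keep]
    psi = psi.permute(*keep, *traced).reshape(2 ** len(keep), -1)
    # rho[i, j] = sum_k psi[i, k] * conj(psi[j, k])
    return psi @ psi.conj().t()

qdev = tq.QuantumDevice(n_wires=3)
qdev.h(wires=0)
qdev.cnot(wires=[0, 1])
rho_01 = partial_trace_from_statevector(qdev.get_states_1d()[0], 3, keep=[0, 1])
print(rho_01)  # reduced state of wires 0 and 1 after tracing out wire 2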

Thanks for making such an awesome library available!

Potential bug

import torch
import torchquantum as tq
import torchquantum.functional as tqf

from qiskit import QuantumCircuit
import qiskit.quantum_info as qi

qc_AB = QuantumCircuit(2)
qc_AB.cx(0, 1)
qc_AB.cy(0, 1)
qc_AB.cz(1, 0)
qc_AB.h(1)
print(qi.Statevector(qc_AB).data)

x = tq.QuantumDevice(n_wires=2)
tqf.cx(x, wires=[0, 1])
tqf.cy(x, wires=[0, 1])
tqf.cz(x, wires=[1, 0])
tqf.h(x, wires=1)
print(x.get_states_1d())

This results in two different state vectors:

[0.70710678+0.j 0. +0.j 0.70710678+0.j 0. +0.j]
tensor([[0.7071+0.j, 0.7071+0.j, 0.0000+0.j, 0.0000+0.j]])
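
(A quick check, assuming the mismatch is Qiskit's little-endian qubit ordering rather than a simulation error: reversing the qubit significance of the TorchQuantum output reproduces the Qiskit vector.)

import numpy as np

tq_state = np.array([0.70710678, 0.70710678, 0.0, 0.0])
# For two qubits, transposing the (q0, q1) axes reverses the bit order of each index
reordered = tq_state.reshape(2, 2).T.flatten()
print(reordered)  # [0.70710678 0.         0.70710678 0.        ] -- matches Qiskit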

One question about noise in quantumnat.py

Hi Hanrui,

Code below from examples/quantumnat.py loads the noise model from IBM's ibmq_quito machine

    noise_model_tq = tq.NoiseModelTQ(
        noise_model_name="ibmq_quito",
        n_epochs=n_epochs,
        noise_total_prob=0.5,
        factor=0.1,
        add_thermal=True,
    )

Then I print the parsed_dict from noise_model_dict; however, I find that there is no noise for the u3 and cu3 gates. See below:

print(self.parsed_dict.keys())
--> dict_keys(['id', 'sx', 'x', 'cx', 'reset', 'measure'])

I also tried all the other freely available IBM quantum machines, such as ibm_perth, ibm_lagos, etc. (9 free in total), and found that none of them has a noise model for the u3 and cu3 gates.

Question: Is it true that these freely available IBM quantum machines don't have a noise model for the u3 and cu3 gates? My parameterized quantum circuit has u3 and cu3 gates, and I need a noise model for these two. Did I miss anything here?
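
One way to check this, assuming the qiskit-aer fake-backend snapshots mirror the real devices: the free IBM machines natively expose only a small basis-gate set, and device noise models are defined on those basis gates, so u3/cu3 only pick up noise after being transpiled into the basis gates.

from qiskit.providers.fake_provider import FakeQuito
from qiskit_aer.noise import NoiseModel

backend = FakeQuito()
print(backend.configuration().basis_gates)  # e.g. ['id', 'rz', 'sx', 'x', 'cx']
noise_model = NoiseModel.from_backend(backend)
print(noise_model.noise_instructions)       # no 'u3' or 'cu3' entries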

Thanks in advance!
Caitao

Input shape for 2-layer encoding operation

I am trying to write an encoding operation on an 8-qubit register with two layers of operators. The operation looks like this:

 self.encoder=tq.GeneralEncoder(
            [{"input_idx" : [0], "func" : "ry", "wires" : [0]},
             {"input_idx" : [1], "func" : "ry", "wires" : [1]},
             {"input_idx" : [2], "func" : "ry", "wires" : [2]},
             {"input_idx" : [3], "func" : "ry", "wires" : [3]},
             {"input_idx" : [4], "func" : "ry", "wires" : [4]},
             {"input_idx" : [5], "func" : "ry", "wires" : [5]},
             {"input_idx" : [6], "func" : "ry", "wires" : [6]},
             {"input_idx" : [7], "func" : "ry", "wires" : [7]},
             {"input_idx" : [8], "func" : "ry", "wires" : [0]},
             {"input_idx" : [9], "func" : "ry", "wires" : [1]},
             {"input_idx" : [10], "func" : "ry", "wires" : [2]},
             {"input_idx" : [11], "func" : "ry", "wires" : [3]},
             {"input_idx" : [12], "func" : "ry", "wires" : [4]},
             {"input_idx" : [13], "func" : "ry", "wires" : [5]},
             {"input_idx" : [14], "func" : "ry", "wires" : [6]},
             {"input_idx" : [15], "func" : "ry", "wires" : [7]}])

I am wondering what the correct shape for the input array would be. I have tried several input shapes with sixteen elements in total, but all of them raise errors.
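
For what it's worth, since the func_list above consumes input_idx 0 through 15, the encoder appears to expect one rotation angle per entry, i.e. a (batch_size, 16) tensor. A minimal sketch under that assumption:

import torch
import torchquantum as tq

qdev = tq.QuantumDevice(n_wires=8, bsz=1)
x = torch.rand(1, 16)  # (batch_size, n_entries): one angle per input_idx
encoder(qdev, x)       # `encoder`: the tq.GeneralEncoder defined above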

Graph attribute of model shows up as Nonetype

I have written a QuantumModule with the following properties, showing only the constructor:

class QPTModel(tq.QuantumModule):
    class QLayer(tq.QuantumModule):
        def __init__(self):
            super().__init__()
            self.n_wires = 8
            self.n_actions = 4
            self.rz_0_0 = tq.RZ(has_params=True, trainable=True)
            self.rz_0_1 = tq.RZ(has_params=True, trainable=True)
            self.ry_0_0 = tq.RY(has_params=True, trainable=True)
            self.ry_0_1 = tq.RY(has_params=True, trainable=True)
            self.rz_0_2 = tq.RZ(has_params=True, trainable=True)
            self.rz_0_3 = tq.RZ(has_params=True, trainable=True)
            self.rz_1_0 = tq.RZ(has_params=True, trainable=True)
            self.rz_1_1 = tq.RZ(has_params=True, trainable=True)
            self.ry_1_0 = tq.RY(has_params=True, trainable=True)
            self.ry_1_1 = tq.RY(has_params=True, trainable=True)
            self.rz_1_2 = tq.RZ(has_params=True, trainable=True)
            self.rz_1_3 = tq.RZ(has_params=True, trainable=True)
            self.rz_2_0 = tq.RZ(has_params=True, trainable=True)
            self.rz_2_1 = tq.RZ(has_params=True, trainable=True)
            self.ry_2_0 = tq.RY(has_params=True, trainable=True)
            self.ry_2_1 = tq.RY(has_params=True, trainable=True)
            self.rz_2_2 = tq.RZ(has_params=True, trainable=True)
            self.rz_2_3 = tq.RZ(has_params=True, trainable=True)
            self.rz_3_0 = tq.RZ(has_params=True, trainable=True)
            self.rz_3_1 = tq.RZ(has_params=True, trainable=True)
            self.ry_3_0 = tq.RY(has_params=True, trainable=True)
            self.ry_3_1 = tq.RY(has_params=True, trainable=True)
            self.rz_3_2 = tq.RZ(has_params=True, trainable=True)
            self.rz_3_3 = tq.RZ(has_params=True, trainable=True)
            self.rz_4_0 = tq.RZ(has_params=True, trainable=True)
            self.rz_4_1 = tq.RZ(has_params=True, trainable=True)
            self.ry_4_0 = tq.RY(has_params=True, trainable=True)
            self.ry_4_1 = tq.RY(has_params=True, trainable=True)
            self.rz_4_2 = tq.RZ(has_params=True, trainable=True)
            self.rz_4_3 = tq.RZ(has_params=True, trainable=True)
            self.rz_5_0 = tq.RZ(has_params=True, trainable=True)
            self.rz_5_1 = tq.RZ(has_params=True, trainable=True)
            self.ry_5_0 = tq.RY(has_params=True, trainable=True)
            self.ry_5_1 = tq.RY(has_params=True, trainable=True)
            self.rz_5_2 = tq.RZ(has_params=True, trainable=True)
            self.rz_5_3 = tq.RZ(has_params=True, trainable=True)
            self.rz_6_0 = tq.RZ(has_params=True, trainable=True)
            self.rz_6_1 = tq.RZ(has_params=True, trainable=True)
            self.ry_6_0 = tq.RY(has_params=True, trainable=True)
            self.ry_6_1 = tq.RY(has_params=True, trainable=True)
            self.rz_6_2 = tq.RZ(has_params=True, trainable=True)
            self.rz_6_3 = tq.RZ(has_params=True, trainable=True)
            self.cnot = tq.CNOT(has_params=False, trainable=False)
    def __init__(self, input_size):
        super().__init__()
        self.n_wires = 8
        self.n_actions = 4
        self.input_size = input_size
        self.q_layer = self.QLayer()
        self.smx = nn.Softmax()
        self.q_device = tq.QuantumDevice(n_wires=self.n_wires)
        self.layer_1 = nn.Linear(1, 64)
        self.layer_2 = nn.ReLU()
        self.layer_3 = nn.Conv1d(self.input_size, 64, kernel_size=2, stride=2)
        self.layer_4 = nn.ReLU()
        self.layer_5 = nn.Conv1d(64, 1, kernel_size=2, stride=2)
        self.layer_6 = nn.Tanh()

        self.encoder=tq.GeneralEncoder(
            [{"input_idx" : [0], "func" : "ry", "wires" : [0]},
             {"input_idx" : [1], "func" : "ry", "wires" : [1]},
             {"input_idx" : [2], "func" : "ry", "wires" : [2]},
             {"input_idx" : [3], "func" : "ry", "wires" : [3]},
             {"input_idx" : [4], "func" : "ry", "wires" : [4]},
             {"input_idx" : [5], "func" : "ry", "wires" : [5]},
             {"input_idx" : [6], "func" : "ry", "wires" : [6]},
             {"input_idx" : [7], "func" : "ry", "wires" : [7]},
             {"input_idx" : [8], "func" : "ry", "wires" : [0]},
             {"input_idx" : [9], "func" : "ry", "wires" : [1]},
             {"input_idx" : [10], "func" : "ry", "wires" : [2]},
             {"input_idx" : [11], "func" : "ry", "wires" : [3]},
             {"input_idx" : [12], "func" : "ry", "wires" : [4]},
             {"input_idx" : [13], "func" : "ry", "wires" : [5]},
             {"input_idx" : [14], "func" : "ry", "wires" : [6]},
             {"input_idx" : [15], "func" : "ry", "wires" : [7]}])

If I create an instance of this model with model = QPTModel(100)

and print the attribute model.graph, I get None.

I am not sure whether None is the expected default when the class is first instantiated, but if not, I would like to know how to fix it. In this class, aside from the scalar attributes, every attribute and function call comes from either torchquantum or standard PyTorch, so nothing should interfere with normal PyTorch functionality. Note that the class is instantiated in a different source file from the one in which it is defined; could that be a problem?

Add expectation value of a pauli string as a function

Can we add a function that returns a single value when computing the expectation value of an observable such as Z1Z2? Right now it returns [<Z1>, <Z2>] as an array.

We can simply utilize measure(q_device, n_shots=n_shots), measuring all the qubits in the computational basis and post-processing the outcomes. This feature is available in common libraries like PennyLane, and we should adopt it for our users. The function would be something like the following.

Let's also add n_shots as an input, because this allows the user to do unbiased estimation of the expectation value if they so desire.

def exp_val(self, q_device, measure_wires, n_shots=1):
    """
    Measure the specified wires in the z-basis and return the expectation value.
    When n_shots = 1, the function returns +1 or -1.

    Args:
        q_device (QuantumDevice): The quantum device to be used.
        measure_wires (list of ints): The wires to be measured.
        n_shots (int): The number of shots to be used, defaults to 1.
    """
    # measure all wires in the z-basis; tq.measure is assumed to return one
    # {bitstring: count} dict per batch element
    measure_bitstring = tq.measure(q_device, n_shots=n_shots)

    # each shot contributes the parity (+1 or -1) of the measured wires
    exp_val = 0
    for counts in measure_bitstring:
        for bitstring, count in counts.items():
            parity = (-1) ** sum(int(bitstring[w]) for w in measure_wires)
            exp_val += parity * count
    return exp_val / (n_shots * len(measure_bitstring))

Request to add expectation of weighted Joint Pauli.

Hi, could we add a feature to compute the expectation value of a weighted Pauli sum? It is crucial for VQE on larger systems. The function would look something like the following.

class MeasureMultiQubitPauliSum(tq.QuantumModule):
    """obs list:
    list of dict: example
    [{'coefficient': [0.5, 0.2]},
    {'wires': [0, 2, 3, 1],
    'observables': ['x', 'y', 'z', 'i'],
    },
    {'wires': [0, 2, 3, 1],
    'observables': ['y', 'x', 'z', 'i'],
    },
    ]
    Measures 0.5 * <x y z i> + 0.2 * <y x z i>
    """

    def __init__(self, obs_list, v_c_reg_mapping=None):
        super().__init__()
        self.obs_list = obs_list
        self.v_c_reg_mapping = v_c_reg_mapping
        self.measure_multiple_times = MeasureMultipleTimes(
            obs_list=obs_list[1:], v_c_reg_mapping=v_c_reg_mapping
        )

    def forward(self, q_device: tq.QuantumDevice):
        res_all = self.measure_multiple_times(q_device)
        # coerce the coefficient list to a tensor so it broadcasts over the results
        coefficients = torch.tensor(self.obs_list[0]["coefficient"], device=res_all.device)
        return (res_all * coefficients).sum(-1)
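
A hypothetical usage sketch for the proposal, measuring 0.5 * <XYZI> + 0.2 * <YXZI> on a 4-wire device:

obs_list = [
    {"coefficient": [0.5, 0.2]},
    {"wires": [0, 2, 3, 1], "observables": ["x", "y", "z", "i"]},
    {"wires": [0, 2, 3, 1], "observables": ["y", "x", "z", "i"]},
]
measure_sum = MeasureMultiQubitPauliSum(obs_list)
qdev = tq.QuantumDevice(n_wires=4, bsz=1)
print(measure_sum(qdev))  # 0.5 * <XYZI> + 0.2 * <YXZI>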

Error in mnist_example.py about the Qiskit

Hi Hanrui,

I am running mnist_example.py and I get the following error when running on the Qiskit simulator and on real quantum computers (everything before line 211 in mnist_example.py runs fine, but after line 211 it runs into errors). I am using the latest version of the code. Any clue why this happens?

[2023-03-27 23:21:22.055] Job failed because 'Number of input circuits does not match number of input parameter bind dictionaries', rerun now.

Thanks for your help!
Caitao

Cannot use qiskit simulation when running mnist_example.py

I tried to run mnist_example.py with my IBM Q token already set, but I ran into trouble when doing Qiskit simulation. The line is

valid_test(dataflow, 'test', model, device, qiskit=True)

I think the execution should be fast, but it got stuck after the following messages:

Test with Qiskit Simulator
[2022-03-22 22:36:14.573] Before transpile: {'depth': 32, 'size': 77, 'width': 8, 'n_single_gates': 62, 'n_two_gates': 11, 'n_three_more_gates': 0, 'n_gates_dict': {'ry': 19, 'rz': 24, 'rx': 17, 'cx': 10, 'crx': 1, 'h': 1, 'sx': 1, 'measure': 4}}
[2022-03-22 22:36:14.864] After transpile: {'depth': 23, 'size': 49, 'width': 8, 'n_single_gates': 33, 'n_two_gates': 12, 'n_three_more_gates': 0, 'n_gates_dict': {'ry': 8, 'rz': 7, 'rx': 4, 'cx': 12, 'u1': 2, 'u3': 11, 'u2': 1, 'measure': 4}}

I interrupted the program using Ctrl+C after 2-3 minutes, getting a very long error log from the interrupted multiprocessing. I want to know what causes this trouble and how to deal with it.

Thanks!


I am using torchquantum master branch, qiskit 0.19.2.

error_log.txt

How to measure with bitstring counts in current implementation

For the experiment I am working on right now, I am trying to do a multi-shot measurement so that I can get result counts by bitstring. This should be doable with the measure function in measurements.py, but that function takes a tq.QuantumState object as one of its parameters. From what I can see, the QuantumState class is no longer included as an attribute of torchquantum. Has it been deleted from the project, or moved to a different location within the class hierarchy?

StateEncoder object has no attribute 'func_list'

Hi all, thanks for looking at my question. I'm facing a problem when using Qiskit with AmplitudeEncoder (and likewise with StateEncoder).
When I set the use_qiskit parameter to False, it runs well on my own GPU. However, when I run my code with Qiskit, I get the error message "'StateEncoder' object has no attribute 'func_list'".
To investigate, I checked encoding.py and found that AmplitudeEncoder doesn't have a func_list, unlike GeneralEncoder. Should I create a func_list in my own code, or look elsewhere?

Thanks!

construction of a measurement function

Can we construct a measurement function that takes q_device as input and returns the distribution of classical measurement outcomes? It took me two days to figure out how to obtain this distribution when I have q_device as an attribute of my class but not q_state.

It would be something like the following; this might help future users!

def measure_device(q_device, n_shots=1024):
    """
    Measures the q_device and returns the classical bitstring distribution.
    """
    # copy the device state into a QuantumState, then sample from it
    q_state = tq.QuantumState(n_wires=q_device.n_wires)
    q_state.set_states(q_device.get_states_1d())
    return tq.measure(q_state, n_shots=n_shots)

For context, the class I was defining was MAXCUT for QAOA, whose attributes look like this:

class MAXCUT(tq.QuantumModule):
    def __init__(self, n_wires, input_graph):
        super().__init__()
        self.n_wires = n_wires
        self.input_graph = input_graph  # list of edges
        self.q_device = tq.QuantumDevice(n_wires=n_wires)
        self.rx0 = tq.RX(has_params=True, trainable=True)
        self.rz0 = tq.RZ(has_params=True, trainable=True)
        self.measure = tq.MeasureAll(tq.PauliZ)
