
quantum_neural_network_classifiers's Issues

Amplitude-encoding

I have a question about the amplitude-encoding demo. On a real quantum device, which parts of the computation would run on the quantum hardware and which on a classical machine? For example, where would the training loop be executed? Thanks.
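
For reference, amplitude encoding loads a normalized 2^n-dimensional feature vector into the amplitudes of an n-qubit state; that state preparation is the step that would have to happen on quantum hardware. A minimal sketch in Yao (the vector v below is a random stand-in, not data from the demo):

using Yao, LinearAlgebra

v = rand(ComplexF64, 256)  # stand-in for a 256-dimensional feature vector
v ./= norm(v)              # amplitudes must form a unit vector
reg = ArrayReg(v)          # 8-qubit register whose amplitudes are v
Yao.probs(reg)             # measurement probabilities |v_i|^2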

Working version of example_encode on Mac M1

Hi,

Your code has multiple issues that prevent it from running on a Mac M1 under Big Sur with Julia 1.8.2. Below is a version that runs correctly:
Issues:

  1. PyPlot cannot be installed. Use Plots.heatmap to plot an array.
  2. I added "../src" to LOAD_PATH to make the example work.
  3. If you have a dark background, change the line and text colors used by YaoPlots.plot:
CircuitStyles.textcolor[]="yellow"
CircuitStyles.linecolor[]="yellow"
Cheers,
Gordon

Working code

using Flux
using Yao, Zygote, YaoPlots #, CuYao, Yao.EasyBuild
using Yao.EasyBuild
using LinearAlgebra, Statistics, Random, StatsBase, ArgParse, Distributions
# PyPlot does not install on the Mac M1 (it may need an M1 build of Anaconda)
#using PyPlot
using Printf, BenchmarkTools, MAT, Plots
using Flux: batch  # batch is used below to stack the per-class gradient arrays

using YaoPlots: plot  # bring YaoPlots.plot into scope explicitly (its name clashes with Plots.plot)

push!(LOAD_PATH, "../src")   
using Quantum_Neural_Network_Classifiers: ent_cx, params_layer, acc_loss_evaluation

# import the FashionMNIST data
vars = matread("../dataset/FashionMNIST_1_2_wk.mat")
x_train = vars["x_train"]
y_train = vars["y_train"]
x_test = vars["x_test"]
y_test = vars["y_test"]

num_train = 1000
num_test = 200
x_train = x_train[:,1:num_train]
y_train = y_train[1:num_train,:]
x_test = x_test[:,1:num_test]
y_test = y_test[1:num_test,:];

i = 13
a = real(vars["x_train"][1:256,i])
c1 = reshape(a,(16,16))
i = 6
a = real(vars["x_train"][1:256,i])
c2 = reshape(a,(16,16))
# matshow(hcat(c1,c2))  # PyPlot version
heatmap(hcat(c1,c2))    # T-shirt and ankle boot

num_qubit = 8    # number of qubits
depth = 10       # number of parameterized composite_blocks
batch_size = 100 # batch size
lr = 0.01        # learning rate
niters = 100;     # number of iterations
optim = Flux.ADAM(lr); # Adam optimizer

# index of qubit that will be measured
pos_ = 8;       
op0 = put(num_qubit, pos_=>0.5*(I2+Z))
op1 = put(num_qubit, pos_=>0.5*(I2-Z));
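# op0 and op1 are the projectors |0><0| and |1><1| on the readout qubit;
# their expectation values give the probabilities used as the two class scores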

# if GPU resources are available, replace "|> cpu" with "|> cu"
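# assigning to .state amplitude-encodes the data: each column of x_train is a
# normalized 256-element vector that becomes the 2^8 amplitudes of one register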
x_train_yao = zero_state(num_qubit,nbatch=num_train)
x_train_yao.state = x_train;
cu_x_train_yao = copy(x_train_yao) |> cpu;

x_test_yao = zero_state(num_qubit,nbatch=num_test)
x_test_yao.state  = x_test;
cu_x_test_yao = copy(x_test_yao) |> cpu;

# define the QNN circuit, some functions have been defined before
ent_layer(nbit::Int64) = ent_cx(nbit)
parameterized_layer(nbit::Int64) = params_layer(nbit)
composite_block(nbit::Int64) = chain(nbit, parameterized_layer(nbit::Int64), ent_layer(nbit::Int64))
circuit = chain(composite_block(num_qubit) for _ in 1:depth)
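# each composite_block is one parameterized layer (params_layer) followed by a
# CX entangling layer (ent_cx); the circuit stacks depth such blocks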
CircuitStyles.textcolor[]="yellow"
CircuitStyles.linecolor[]="yellow"

# assign random initial parameters to the circuit
dispatch!(circuit, :random)
params = parameters(circuit);
YaoPlots.plot(circuit) # plotting a deep circuit can take a long time

# record the training history
loss_train_history = Float64[]
acc_train_history = Float64[]
loss_test_history = Float64[]
acc_test_history = Float64[];

for k in 0:niters
    # calculate the accuracy & loss for the training & test set
    train_acc, train_loss = acc_loss_evaluation(circuit,cu_x_train_yao,y_train,num_train, pos_)
    test_acc, test_loss = acc_loss_evaluation(circuit,cu_x_test_yao,y_test,num_test, pos_)
    push!(loss_train_history, train_loss)
    push!(loss_test_history, test_loss)
    push!(acc_train_history, train_acc)
    push!(acc_test_history, test_acc)
    if k % 5 == 0
        @printf("\nStep=%d, loss=%.3f, acc=%.3f, test_loss=%.3f, test_acc=%.3f\n",k,train_loss,train_acc,test_loss,test_acc)
    end
    
    # at each training epoch, randomly choose a batch of samples from the training set
    batch_index = randperm(size(x_train)[2])[1:batch_size]
    x_batch = x_train[:,batch_index]
    y_batch = y_train[batch_index,:];
    # prepare these samples into quantum states
    x_batch_1 = copy(x_batch)
    x_batch_yao = zero_state(num_qubit,nbatch=batch_size)
    x_batch_yao.state = x_batch_1;
    cu_x_batch_yao = copy(x_batch_yao) |> cpu;
    batc = [zero_state(num_qubit) for i in 1:batch_size]
    for i in 1:batch_size
        batc[i].state = x_batch_1[:,i:i]
    end
    
    # for all samples in the batch, repeatedly measure the qubit at position pos_
    # in the computational basis to obtain the outcome probabilities
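    # (in this simulation the reduced density matrix yields the exact outcome
    # probabilities; on real hardware they would be estimated from repeated shots)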
    q_ = zeros(batch_size,2);
    res = copy(cu_x_batch_yao) |> circuit  # run the current batch (not the full training set) through the circuit
    for i=1:batch_size
        rdm = density_matrix(viewbatch(res, i), (pos_,))
        q_[i,:] = Yao.probs(rdm)
    end
    
    # calculate the gradients w.r.t. the cross-entropy loss function
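    # for cross-entropy L = -sum_j y_j*log(q_j), the chain rule gives
    # dL/dtheta = -sum_j (y_j/q_j)*dq_j/dtheta; since q_0 + q_1 = 1 we have
    # dq_1/dtheta = -dq_0/dtheta, which is why C pairs Arr with -Arr below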
    Arr = Array{Float64}(zeros(batch_size,nparameters(circuit)))
    for i in 1:batch_size
        Arr[i,:] = expect'(op0, copy(batc[i])=>circuit)[2]
    end
    C = [Arr, -Arr]
    grads = collect(mean([-sum([y_batch[i,j]*((1 ./ q_)[i,j])*batch(C)[i,:,j] for j in 1:2]) for i=1:batch_size]))
    
    # update the parameters
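    # Adam consumes grads and updates params in place; dispatch! then writes
    # the updated parameter values back into the circuit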
    updates = Flux.Optimise.update!(optim, params, grads);
    dispatch!(circuit, updates) 
end

Plots.plot(acc_train_history,label="accuracy_train",legend = :bottomright)
Plots.plot!(acc_test_history,label="accuracy_test",legend = :bottomright)
# Plots.savefig("acc.pdf")

Plots.plot(loss_train_history,label="loss_train")
Plots.plot!(loss_test_history,label="loss_test")
# Plots.savefig("loss.pdf")
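
# run the full training set through the trained circuit and collect the
# readout-qubit probabilities for visualization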

res = copy(cu_x_train_yao) |> circuit
q_ = zeros(num_train,2);
for i=1:num_train
    q_[i,:] = Yao.probs(density_matrix(viewbatch(res, i), (pos_,)))
end
class1x = Int64[]
class2x = Int64[]
class1y = Float64[]
class2y = Float64[]
for i in 1:num_train
    if y_train[i,1] == 1.0
        push!(class1x,i)
        push!(class1y,q_[i,1])
    else
        push!(class2x,i)
        push!(class2y,q_[i,1])
    end
end
# predicted value (expectation value)
# lower loss leads to larger separation between the two classes of data points
Plots.plot(class1x, class1y, seriestype = :scatter)
Plots.plot!(class2x, class2y, seriestype = :scatter)
