
securenn-public's Introduction

SecureNN: 3-Party Secure Computation for Neural Network Training

Sameer Wagh, Divya Gupta, and Nishanth Chandran

Secure multi-party computation (SMC/MPC) provides a cryptographically secure framework for computations where data privacy is a requirement. MPC protocols enable computation over shared data while providing strong privacy guarantees: the parties learn only the output of the computation and nothing about the individual inputs. Here we develop a framework for efficient 3-party protocols tailored to state-of-the-art neural networks. SecureNN builds on novel modular arithmetic to implement exact non-linear functions while avoiding both interconversion protocols and general-purpose number-theoretic libraries.

We develop and implement efficient protocols for the above set of functionalities. This work was published at the Privacy Enhancing Technologies Symposium (PETS) 2019. Paper available here. Feel free to check out the follow-up work Falcon and its improved implementation. If you're looking to run neural network training, strongly consider using the GPU-based codebase Piranha.

Requirements


  • The code should work on any Linux distribution (it has been developed and tested with Ubuntu 16.04 and 18.04).

  • Required packages for SecureNN:

    Install these packages with your favorite package manager, e.g., sudo apt-get install <package-name>.

SecureNN Source Code


Repository Structure

  • files/ - Shared keys, IP addresses and data files.
  • lib_eigen/ - Eigen library for faster matrix multiplication.
  • mnist/ - Parsing code for converting MNIST data into SecureNN format data.
  • src/ - Source code for SecureNN.
  • utils/ - Dependencies for AES randomness.

Building SecureNN

To build SecureNN, run the following commands:

git clone https://github.com/snwagh/SecureNN.git
cd SecureNN
make

Running SecureNN

SecureNN can be run either as a single party (to verify correctness) or as a 3- (or 4-) party protocol. It can be run on a single machine (localhost) or over a network. Finally, the output can be written to the terminal or to a file (from party P_0). The makefile contains the targets for each. To run SecureNN, run the appropriate command after building (a few examples are given below).

make standalone
make abcTerminal
make abcFile

Additional Resources


Neural Networks

SecureNN currently supports three types of layers: fully connected, convolutional without padding, and convolutional with zero padding. The network can be specified in src/main.cpp. The core protocols from SecureNN are implemented in src/Functionalities.cpp. The code supports both training and testing.
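As a plaintext reference for what each layer computes, the following is a minimal sketch of a fully connected forward pass over the same 64-bit fixed-point encoding the protocols use. This is an illustration, not the repo's layer API: myType, FLOAT_PRECISION, and floatToMyType mirror names from the codebase, but fcForward and the exact definitions here are assumptions and the code is self-contained.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

using myType = uint64_t;
const int FLOAT_PRECISION = 13;   // the repo's default fixed-point scale (2^13)

myType floatToMyType(double a) { return (myType)(int64_t)(a * (1 << FLOAT_PRECISION)); }
double myTypeToFloat(myType a)  { return (double)(int64_t)a / (1 << FLOAT_PRECISION); }

// y = W * x over Z_{2^64}, truncating once per output element so the
// result stays at scale 2^13 (SecureNN performs this step with its
// truncation protocol; here it is a plain arithmetic shift).
std::vector<myType> fcForward(const std::vector<myType>& W,
                              const std::vector<myType>& x,
                              size_t rows, size_t cols) {
    std::vector<myType> y(rows, 0);
    for (size_t i = 0; i < rows; ++i) {
        for (size_t j = 0; j < cols; ++j)
            y[i] += W[i * cols + j] * x[j];                 // wraps mod 2^64
        y[i] = (myType)((int64_t)y[i] >> FLOAT_PRECISION);  // signed truncation
    }
    return y;
}
```

Negative weights ride along in two's complement, so a row like (1.0, -0.5) applied to (2.0, 4.0) correctly cancels to 0.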

Debugging

A number of debugging-friendly functions are implemented in the library. For memory bugs, use valgrind (install with sudo apt-get install valgrind). Then run a single party in debug mode:

  • Set makefile flags to -g -O0 (instead of -O3)
  • make clean; make
  • valgrind --tool=memcheck --leak-check=full --track-origins=yes --dsymutil=yes <executable-file-command>

libmiracl.a is compiled locally; if it throws errors, download the source files from https://github.com/miracl/MIRACL.git, compile miracl.a yourself, and copy it into this repo.

The matrix multiplication assembly code only works with the Intel C/C++ compiler. Use the non-assembly code from src/tools.cpp if needed (it might have correctness issues).

Citation

You can cite the paper using the following bibtex entry:

@article{wagh2019securenn,
  title={{S}ecure{NN}: 3-{P}arty {S}ecure {C}omputation for {N}eural {N}etwork {T}raining},
  author={Wagh, Sameer and Gupta, Divya and Chandran, Nishanth},
  journal={Proceedings on Privacy Enhancing Technologies},
  year={2019}
}

Report any bugs to [email protected]

securenn-public's People

Contributors

kw-xyz, snwagh, zzz130981


securenn-public's Issues

The test accuracy does not change with different training settings

Hi Snwagh,
In src/globals.h, I change #define MNIST false to #define MNIST true, set NUM_LAYERS to 4 and LOG_MINI_BATCH to 0 inside the #if MNIST block, and set NO_OF_EPOCHS to 1 and NUM_ITERATIONS to 1. Then, in secondary.cpp, I change for (int i = 0; i < NUM_ITERATIONS; ++i) in the void test(NeuralNetwork* net) function to for (int i = 0; i < 10000; ++i). Finally, I set the network to secureml in main.cpp, train it with make standalone, and run the test by changing

whichNetwork += " train";
train(network, config);

into

whichNetwork += " test";
test(network);

It gives me 99% accuracy, which seems incorrect since I use only 1 iteration (i.e., one piece of training data) to train the network. I changed the number of iterations but always get the same accuracy. Can you give some tips, or did I test it the wrong way?

Thanks very much!

About error Segmentation fault (core dumped)

Hello, @snwagh. When I run make, it goes well. But I am confused when I try to run make standalone: it gives me the following message. The other two commands also give me the same error. What's wrong with it? Thank you!

❯ make standalone
./BMRPassive.out STANDALONE 4 files/parties_localhost files/keyA files/keyAB files/data/mnist_data_8_samples files/data/mnist_labels_8_samples files/data/mnist_data_8_samples files/data/mnist_labels_8_samples
make: *** [makefile:36: standalone] Segmentation fault (core dumped)

Standalone (make mnistSA) segfaults on full parsed MNIST dataset

Hello!

First off big fan of your work!

I am having some issues running the standalone build (make mnistSA, where I set MNIST to true in globals); I always get a segfault.

GDB gives me the following

Program received signal SIGSEGV, Segmentation fault.
0x000055555555ea06 in CNNLayer::updateEquationsSA (this=0x5555557fc260, prevActivations=std::vector of length 100352, capacity 100352 = {...}) at src/CNNLayer.cpp:357
357							temp[i] += deltaRelu[loc];
(gdb) backtrace
#0  0x000055555555ea06 in CNNLayer::updateEquationsSA (this=0x5555557fc260, prevActivations=std::vector of length 100352, capacity 100352 = {...}) at src/CNNLayer.cpp:357
#1  0x000055555555d62a in CNNLayer::updateEquations (this=0x5555557fc260, prevActivations=std::vector of length 100352, capacity 100352 = {...}) at src/CNNLayer.cpp:187
#2  0x00005555555981a9 in NeuralNetwork::updateEquations (this=0x5555557fe0b0) at src/NeuralNetwork.cpp:125
#3  0x00005555555979d5 in NeuralNetwork::backward (this=0x5555557fe0b0) at src/NeuralNetwork.cpp:54
#4  0x000055555555965c in train (net=0x5555557fe0b0, config=0x5555557fe140) at src/secondary.cpp:177
#5  0x00005555555abe40 in main (argc=10, argv=0x7fffffffdfc8) at src/main.cpp:139

Do you have any idea what to do from here? I can run make standalone with MNIST false.

Thank you in advance!

*Edit: this is with batch size 128. If I set the batch size to 64 I don't get a segfault; likewise it works with batch size 256.

How to Configure mnist3PC

Hi, I am a graduate student who wants to train on the MNIST dataset across three hosts using your protocol. I have seen others ask about running it on localhost, but I still don't know how to run it on different hosts. Could you please give me some instructions? Thanks.

[BUG]: Question on the Share Convert protocol when input is -1?

Hi, @snwagh. I recently read the SecureNN paper and have a question about the proposed ReLU protocol.

In short, ReLU depends on the share convert protocol, which converts shares from Z_L to Z_{L-1}. What if the input is -1, which is encoded as L-1 over Z_L? It seems that -1 will be converted to 0 over Z_{L-1}. Consequently, MSB(-1) = MSB(0) = 0, which is wrong.
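The edge case being asked about can be made concrete with a plaintext toy using a small ring, L = 256, so the wraparound is visible at a glance. The actual protocol uses L = 2^64 and does not reduce values this naively; this sketch only illustrates the encoding question, not the protocol itself.

```cpp
#include <cassert>
#include <cstdint>

// Toy ring with L = 256.
const uint32_t L = 256;

// -1 is encoded in Z_L as L - 1.
const uint32_t minusOneInZL = L - 1;                  // 255

// Naively reducing that encoding into Z_{L-1} collapses it to 0.
const uint32_t naiveConvert = minusOneInZL % (L - 1); // 255 ≡ 0 (mod 255)

// MSB of an 8-bit ring element, used as the sign bit.
inline uint32_t msb8(uint32_t x) { return (x >> 7) & 1; }
```

Here MSB(255) = 1 ("negative") but MSB(0) = 0, which is the inconsistency the question points out.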

question about usage of negative numbers

Hi, I'm having trouble using this system to manipulate negative numbers. The scenario is that I want to recover a shared negative number, but I only get 0 instead of the correct answer (-1, -3).

My code is as follow:

int main(int argc, char** argv)
{
	 parseInputs(argc, argv);
/****************************** AES SETUP and SYNC ******************************/ 
	aes_indep = new AESObject(argv[4]);
	aes_common = new AESObject(argv[5]);
	aes_a_1 = new AESObject("files/keyD");
	aes_a_2 = new AESObject("files/keyD");
	aes_b_1 = new AESObject("files/keyD");
	aes_b_2 = new AESObject("files/keyD");
	aes_c_1 = new AESObject("files/keyD");
	aes_parallel = new ParallelAESObject(argv[5]);

	if (!STANDALONE)
	{
		initializeCommunication(argv[3], partyNum);
		synchronize(2000000);	
	}

	if (PARALLEL)
		aes_parallel->precompute();
/****************************** myWork*****************************/ 
	size_t size = 2;
	vector<myType> data(size);

	if(partyNum == PARTY_A) {
		data[0] =  floatToMyType(-0.5);
		data[1] =  floatToMyType(-1.5);
	} 	

	if(partyNum == PARTY_B) {
		data[0] =  floatToMyType(-0.5);
		data[1] =  floatToMyType(-1.5);
	} 	


	if(PRIMARY) {
		funcTruncate2PC(data, FLOAT_PRECISION, size, PARTY_A, PARTY_B);
		funcReconstruct2PC(data, data.size(), "data is");
	}
	
/****************************** CLEAN-UP ******************************/ 
	delete aes_common;
	delete aes_indep;
	delete aes_a_1;
	delete aes_a_2;
	delete aes_b_1;
	delete aes_b_2;
	delete aes_c_1;
	delete aes_parallel;
	if (partyNum != PARTY_S)
		deleteObjects();

	return 0;
}

I only got:

[Screenshot: 2020-02-15 20-35-41]

I have tried what you said in #4 (-1 = 2^64 - 1): I assign 2^64-0.5 and 2^64-1.0 to P0 and P1 instead of -0.5 and -1.0, and the "myWork" section of the modified main() function is as follows:

	size_t size = 2;
	vector<myType> data(size);
	float a = (1<<64)-0.5;
	float b = (1<<64)-1.0;
	if(partyNum == PARTY_A) {
		data[0] =  floatToMyType(a);
		data[1] =  floatToMyType(b);
	} 	
	if(partyNum == PARTY_B) {
		data[0] =  floatToMyType(a);
		data[1] =  floatToMyType(b);
	} 	
	if(PRIMARY) {
		funcTruncate2PC(data, FLOAT_PRECISION, size, PARTY_A, PARTY_B);
		funcReconstruct2PC(data, data.size(), "data is");
	}

The result is unchanged:
[Screenshot: 2020-02-15 21-47-17]

To figure this out, I studied the numbers at the bit level. I wanted to see what happens when a number is shifted FLOAT_PRECISION bits to the left and forced to myType. Here is my code:

	float a = 13;
	printf("a=%f\n", a);
	bitset<BIT_SIZE>  aBits = bitset<BIT_SIZE> (a);
	for(int i = 63; i >= 0; i--) cout << aBits[i];
	cout << endl;

	//#define floatToMyType(a) ((myType)(a * (1 << FLOAT_PRECISION)))
	a = a * (1 << FLOAT_PRECISION);
	aBits = bitset<BIT_SIZE> (a);
	for(int i = 63; i >= 0; i--) cout << aBits[i];
	cout << endl;

	a = (myType)a;
	aBits = bitset<BIT_SIZE> (a);
	for(int i = 63; i >= 0; i--) cout << aBits[i];
	cout << endl;

When a is positive, everything works:

[Screenshot: 2020-02-15 21-24-29]

However, when a is negative, all bits are zero?!
[Screenshot: 2020-02-15 21-46-15]
So I am completely confused. I wonder how you handle negative numbers, and how I can operate on negative numbers to achieve my goal: recovering the shared negative numbers.
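For context: the zeros in the bit-level experiment come from converting a negative floating-point value directly to an unsigned type (std::bitset's constructor takes an unsigned long long, and float-to-unsigned conversion of a negative value is undefined behaviour), not from the fixed-point encoding itself. The following sketch round-trips negatives by going through a signed integer first; the decode helper myTypeToFloat here is illustrative, not the repo's API.

```cpp
#include <cassert>
#include <cstdint>

using myType = uint64_t;
const int FLOAT_PRECISION = 13;

// Encode via int64_t: double -> int64_t -> uint64_t yields the expected
// two's-complement bit pattern, whereas double -> uint64_t of a negative
// value is undefined behaviour (hence the all-zero bitset output above).
myType floatToMyType(double a) {
    return (myType)(int64_t)(a * (1 << FLOAT_PRECISION));
}

// Decode by reinterpreting the ring element as signed, so the top half
// of Z_{2^64} reads as negative values.
double myTypeToFloat(myType a) {
    return (double)(int64_t)a / (1 << FLOAT_PRECISION);
}
```

With this decoding, -1.0 encodes to 2^64 - 8192 (at precision 13) and decodes back to -1.0, matching the "-1 = 2^64 - 1" convention mentioned in #4.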

About secret sharing

I couldn't find any information about the secret sharing scheme that is described in the paper.
Could you please point me in the direction of any relevant documentation?
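For reference, the paper's arithmetic shares are 2-out-of-2 additive shares over Z_{2^64}. A minimal plaintext sketch (not the repo's code) of sharing and reconstruction:

```cpp
#include <cassert>
#include <cstdint>
#include <random>

using myType = uint64_t;

struct Shares { myType s0, s1; };

// Split x into two shares that sum to x mod 2^64; each share alone is
// uniformly random and reveals nothing about x.
Shares share(myType x, std::mt19937_64& rng) {
    myType r = rng();
    return { r, x - r };
}

// Reconstruction is just addition in the ring (uint64_t wraps mod 2^64).
myType reconstruct(const Shares& sh) { return sh.s0 + sh.s1; }
```

The scheme is additively homomorphic: adding two parties' shares pointwise yields shares of the sum, which is why linear layers are cheap in the protocol.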

How to do floating-point and negative number arithmetic?

Hello!
If I want to calculate a polynomial such as 0.9258x^2 - 0.4642x + 0.686 for x ∈ [-2.0, 2.0], how do I do it?

It seems that there was a reveal() in your system, but I cannot find it now. Does it still exist? I know the principle behind this kind of function: first, there is a precision such as 8192; the floating-point value is multiplied by 8192 and rounded, and at the end the result is divided by 8192. Do I need to implement this myself?

However, for negative numbers, I have no idea.
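One common approach (a plaintext sketch, not the repo's API): encode everything in two's-complement fixed point with scale 2^13, evaluate the polynomial with Horner's rule, and truncate by the scale after every multiplication; negative values then work automatically. In SecureNN the multiplications would be done with the multiplication protocol plus its truncation step, but the arithmetic is the same.

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

using myType = uint64_t;
const int FLOAT_PRECISION = 13;   // scale factor 2^13 = 8192

myType enc(double a) { return (myType)(int64_t)llround(a * (1 << FLOAT_PRECISION)); }
double dec(myType a) { return (double)(int64_t)a / (1 << FLOAT_PRECISION); }

// Fixed-point multiply: full product, then truncate by the scale so the
// result is back at scale 2^13. Negatives work via two's complement and
// the arithmetic right shift.
myType fxMul(myType a, myType b) {
    return (myType)(((int64_t)a * (int64_t)b) >> FLOAT_PRECISION);
}

// p(x) = 0.9258*x^2 - 0.4642*x + 0.686 by Horner's rule; additions need
// no truncation, only multiplications do.
myType evalPoly(myType x) {
    myType acc = fxMul(enc(0.9258), x);
    acc += enc(-0.4642);
    acc = fxMul(acc, x);
    acc += enc(0.686);
    return acc;
}
```

At x = -2.0 this gives approximately p(-2.0) ≈ 5.318, within the quantization error of the 2^13 scale.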

funcMaxMPC() or debugMax() doesn't function correctly on Ubuntu 18.04 with g++ 7.5.0

Hello, I modified src/main.cpp to invoke debugMax():

int main(int argc, char** argv)
{
	 parseInputs(argc, argv);
/****************************** AES SETUP and SYNC ******************************/ 
	aes_indep = new AESObject(argv[4]);
	aes_common = new AESObject(argv[5]);
	aes_a_1 = new AESObject("files/keyD");
	aes_a_2 = new AESObject("files/keyD");
	aes_b_1 = new AESObject("files/keyD");
	aes_b_2 = new AESObject("files/keyD");
	aes_c_1 = new AESObject("files/keyD");
	aes_parallel = new ParallelAESObject(argv[5]);

	if (!STANDALONE)
	{
		initializeCommunication(argv[3], partyNum);
		synchronize(2000000);	
	}

	if (PARALLEL)
		aes_parallel->precompute();
/****************************** myWork*****************************/ 
	debugMax();

/****************************** CLEAN-UP ******************************/ 
	delete aes_common;
	delete aes_indep;
	delete aes_a_1;
	delete aes_a_2;
	delete aes_b_1;
	delete aes_b_2;
	delete aes_c_1;
	delete aes_parallel;
	if (partyNum != PARTY_S)
		deleteObjects();

	return 0;
}

Theoretically, it should output "max:41 \n maxIndex:8"; however, I get:
[Screenshot: 2020-02-13 14-46-39]
I think there is nothing wrong with your code, so is the problem the way I invoke the interfaces, or my OS and C++ compiler?

My OS and compiler are as follows:
[Screenshot: 2020-02-13 14-50-02]

How to reconstruct the result of matrix multiplication?

Hello, I'm doing some arithmetic operations and I rewrote src/main.cpp. The simplified core code is as follows:

size_t size = 1;
vector<myType> data1(size), data2(size), ans(size);
generateData(data1);	//generate size random numbers between [0,10] and assign them to data1
generateData(data2);	//generate size random numbers between [0,10] and assign them to data2
funcMatMulMPC(data1, data2, ans, 1, 1, 1, 0, 0);

cout << "data is as above:" << endl;
cout << data1[0] << " " << data2[0] <<  endl;
funcReconstruct2PC(ans, size, "ans is");

Then I run "make abcTerminal"; however, I cannot reconstruct the result and only get:

data is as above:
2 4 0
ERROR reading from socket
Size = 8, Left = 8, n = -1
Receive myType vector error
ans is: 0 
Execution completed

Screenshot as follows:
[Screenshot: 2020-02-11 23-31-54]

I guess the reason is that the terminal only prints P0's result, so I need to print the results of P1 and P2 as well, don't I? If I'm right, I still don't know how to do it.

question about "the appropriate command"

Hello, your work is excellent!
However, I am a beginner in C++, so I have some tiny and simple (maybe stupid) questions.

  1. If I want to do this machine learning work on a local network, what is "the appropriate command"?
  2. If I have similar questions in the future, how can I find "the appropriate command"? Could you offer a more detailed user manual?
  3. I only have one PC; how can I build a local network? I plan to use VMware Workstation to build two or three virtual machines, set the network connection to NAT, and modify files/parties_LAN. Is there a better solution?

some tiny questions about funcPrivateCompareMPC() and debugPC()

Hello, I have used debugPC() and I get:
[Screenshot: 2020-02-15 22-09-37]
I have several tiny questions; hoping for your response:

  1. Is my terminal's output right?
  2. What values are stored in the vector share_m? I know that every 64 bits of it represent a number, so in my opinion its values are:
     1 2 4 8 16 32 64 128 256 512
  3. What is the meaning of "1"? Does it mean that the i-th number of share_m <= r? If so, I should understand my terminal's output one by one as follows:
     1: 1 <= 5
     0: 2 > 6
     1: 4 <= 7
     1: 8 <= 8
     1: 16 <= 9
     0: 32 > 10
     1: 64 <= 11
     0: 128 > 12
     0: 256 > 13
     1: 512 <= 14
     This is obviously wrong. Could you tell me where I am wrong? Thank you!

bug of debugMax() or funcMaxMPC()

Just like I did in #7, I invoked debugMax() in the main function.
As you know, it finds the value and index of the maximum in an array a({0,1,0,4,5,3,10,6,41,9}); the expected result is "max:41 maxIndex:8".
However, I only got:
[Screenshot: 2020-02-18 19-52-55]

Question about the processor being used by the code

Hello,

Nice work! I just have a question about performance. If I run your code on a machine with a GPU, will the code utilize the GPU for the computations (especially the linear ones), or is it not designed with GPUs in mind (i.e., tensorflow vs. tensorflow-gpu)? I'm asking because performance will depend on the device that runs the layers.

I assume the machines you ran on in your paper (Amazon EC2) did not have GPU devices, right?

Thanks,
Parsa
