Comments (5)
@llCurious I've added responses to your questions below, in order:
- Right, the `Pow` function needs to be vectorized; take a look at this git issue for more details. The division protocol also needs to be modified accordingly. The function correctly reports the run-time but effectively computes only the first component correctly.
- The `Pow` function does indeed reveal information about the exponent α, and this is by design (see Fig. 8 here). It considerably simplifies the computation, and the leakage is well quantified. However, the broader implications of revealing this value (such as whether an adversary can launch an attack using that information) are not studied in the paper.
- A `BIT_SIZE` of 32 is sufficient for inference, and the code to reproduce this is given in the `files/preload/` directory. End-to-end training in MPC was not performed (given the prohibitive time and parameter tuning), though I suspect you're right: it would require either a larger bit-width or an adaptive setting of the fixed-point precision.
from falcon-public.
Thank you for your responses.
Do you mean the division protocol can currently only handle the case where all the divisors share the same exponent? In other words, if the divisors in the vector have different exponents, does the current division protocol fail?
BTW, you seem to have missed my question about the BN protocol. You mention that a larger bit-width or an adaptive setting of the fixed-point precision can be helpful in end-to-end training; do you mean to employ BN to tackle this problem?
Yes, that is correct. Either all the exponents have to be the same or the protocol doesn't guarantee correctness.
About your BN question: as I said, end-to-end training in MPC was not studied (there are still many open challenges there), so it is hard to comment empirically on the use of BN for training. However, BN is well known from the (plaintext) ML literature, and the idea is that its benefits (improved convergence/stability) will translate into secure computation too. Does this answer your question? If you're asking whether BN will help train a network in the current code base, then I'll say no; though it is an issue, it is not the only issue preventing training.
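The bit-width trade-off mentioned earlier can be illustrated with a minimal fixed-point sketch (the helper names here are my own, not from the Falcon code base). With a 32-bit word and 13 fractional bits, representable magnitudes are bounded by roughly 2^(31-13) = 2^18; inference stays in range, but training-time products and gradients can exceed it, which is what motivates a larger bit-width or adaptive precision.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical fixed-point helpers: encode a real value with PRECISION
// fractional bits, decode it back, and multiply two encoded values.
constexpr int PRECISION = 13;

int64_t encode(double x) { return static_cast<int64_t>(x * (1 << PRECISION)); }
double decode(int64_t v) { return static_cast<double>(v) / (1 << PRECISION); }

// Fixed-point multiply: the raw product carries 2*PRECISION fractional
// bits, so it must be truncated back by PRECISION bits. In a 32-bit word
// this intermediate product is where overflow bites during training.
int64_t fpMul(int64_t a, int64_t b) { return (a * b) >> PRECISION; }
```

For example, `decode(fpMul(encode(1.5), encode(2.0)))` yields 3.0; but chaining many such multiplies on values near the representable bound overflows a 32-bit word, so training would need a wider word or a precision that adapts per layer.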
OK, I got it. Sorry for the late reply.
- I also notice that in Section 5.6 of the paper, you present data on training performance with (and without) BN. I am a little confused about how the accuracy is obtained; is this end-to-end secure training?
- In addition, I wonder how the comparison to prior works is conducted. Did you carry out the experiments of the prior works using 32 bits (identical to your setting), or using the settings in their papers (like 64-bit in ABY3)?
Thanks a lot for your patient answers!
- The numbers are for end-to-end training but unfortunately for plaintext.
- I think the numbers are identical (the fastest way to verify would be to run the Falcon code).