flush-reload-attacks's Issues
Remove John Aycock from the author list and add acknowledgement
(As requested in email)
Make "automation" more prominent in the abstract
The abstract downplays the automation to almost a parenthetical remark. It's really a core part of the paper so it should be more prominent in the abstract. (Also use "human-assisted automation" instead of "partially automated" or some other way of being more specific than "partially.")
Incorporate mathematical definition of an input-distinguishing attack?
I just wrote this to James:
I came up with an idea while reading your paper that would appeal to
USENIX. The idea is that the Flush+Reload side channel might be limited
enough that it's possible to explore the entire space of attacks.
Define a One-Shot Automated Input Distinguishing Attack as a tuple
(ProbeFind, Recover) where:
- ProbeFind is an algorithm which takes as input:
- The target program (binary).
- The target input (what would be the "true positive.")
- A set of non-target inputs ("false positives").
ProbeFind outputs the F+R wait time c in cycles, and a list of probes.
- Recover is an algorithm that gets the attack tool's output (with
ProbeFind's parameters) and outputs 0 to guess it saw a non-target input
and 1 to guess it saw the target input.
Define the "advantage" as how much more likely the attack is to be right
than if it just made a random guess. This would be really similar to
cryptography where attacks are polynomial-time adversaries which have
non-negligible advantage in some experiment.
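The (ProbeFind, Recover, advantage) definition can be made concrete with a toy sketch. Everything below (the trace format, the probe names, and the simulated distinguishers) is a hypothetical stand-in, not the paper's actual tooling:

```python
import random

def empirical_advantage(recover, target_traces, nontarget_traces):
    """Estimate how much better `recover` does than a coin flip.

    `recover` maps a trace to 1 (guess "target input") or 0 ("non-target").
    Advantage = Pr[correct guess] - 1/2, estimated over the given traces.
    """
    correct = sum(recover(t) == 1 for t in target_traces)
    correct += sum(recover(t) == 0 for t in nontarget_traces)
    return correct / (len(target_traces) + len(nontarget_traces)) - 0.5

# Hypothetical traces: here a trace is just the set of probes that fired.
target_traces = [{"pdf_open", "render_page"}] * 50
nontarget_traces = [{"pdf_open"}] * 50

# A perfect distinguisher on this toy data has the maximum advantage, 0.5.
perfect_adv = empirical_advantage(lambda t: int("render_page" in t),
                                  target_traces, nontarget_traces)

# Random guessing has advantage near 0 (up to sampling noise).
rng = random.Random(0)
coin_adv = empirical_advantage(lambda t: rng.randint(0, 1),
                               target_traces, nontarget_traces)

print(perfect_adv)  # 0.5
print(abs(coin_adv) < 0.25)
```

This mirrors the cryptographic convention: an attack "works" exactly when its advantage is non-negligible.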
With that definition we can talk about the space of all attacks, and
carve it up into subspaces. One is the subspace you explored in your
work, and another is the subspace I explored in my original paper.
Here's why I think this is valuable:
1. If our definition is good enough, it captures all possible attacks
that can be done with Flush+Reload (crypto key leaking attacks imply the
existence of One-Shot Automated Input Distinguishing Attacks).
2. It's a huge mathematical space, but it's not totally unmanageable. It
can be carved up into smaller pieces, which can be tackled one at a
time. If you hold the binary and inputs constant then it's nearly finite
and maybe even completely searchable on a supercomputer.
3. The parts of the space which haven't been explored yet are obvious
candidates for future work, that other researchers can pick up on.
4. To propose a defense, you'll have to say which part of the space it
defends against. You can either prove it's good for all possible
attacks, or you'll see some space which it doesn't defend against.
5. The more we know about the properties of the Flush+Reload
side-channel itself (e.g. restrictions on probe locations, maximum
number of probes, CPU prefetching weirdness) the smaller and more
manageable the mathematical space gets.
6. It sets a precedent of mathematically defining what an "attack" is.
The space of all RCE attacks is so huge that it never made sense to
define what one was, so we just kept our intuitive idea, which led to
lots of broken defenses. Maybe in the far, far, future we can actually
understand the entire space.
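Point 2's observation (hold the binary and inputs constant and the space becomes nearly searchable) can be sketched as brute force over small probe sets. The candidate probes, traces, and scoring function below are hypothetical toys, not the real search:

```python
from itertools import combinations

def separation(probe_set, target_trace, nontarget_traces):
    """Score a probe set: 1.0 if the probes that fire on the target
    input fire on no non-target input, lower otherwise."""
    signature = probe_set & target_trace
    if not signature:
        return 0.0
    misses = sum(signature <= t for t in nontarget_traces)
    return 1.0 - misses / len(nontarget_traces)

# Hypothetical function-entry probes, and which ones fire per input.
candidates = ("decrypt", "main", "parse", "render_page")
target_trace = {"main", "parse", "render_page"}
nontarget_traces = [{"main", "parse"}, {"main", "parse", "decrypt"}]

# Exhaustively search all probe subsets of size <= 2 (Flush+Reload limits
# how many lines can usefully be probed per slot, so small sets are realistic).
best = max(
    (frozenset(c) for k in (1, 2) for c in combinations(candidates, k)),
    key=lambda s: separation(s, target_trace, nontarget_traces),
)
print(sorted(best))  # ['render_page']
```

With real traces the scoring function would have to tolerate noise, but the shape of the search (enumerate a finite subspace, rank by empirical separation) is the same.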
Black Hat Slide Deck
Be consistent with success rate significant figures
We claim a success rate of "94%" and then "97.7%" later on. Be consistent.
Incorporate new research that's been published since I stopped working on this
Here's a list of titles I found on Google Scholar which may be relevant. Basically, I searched for Flush+Reload and clicked a bunch of stuff, then found the PaaS paper and found everything that cited it. Read all of these papers' abstracts and decide which ones deserve a closer look and which ones should be read in entirety.
(filtered into categories below)
Ask USENIX if they allow concurrent submission to Black Hat
They are clear about wanting things previously submitted to Black Hat, but unclear about things also being submitted to Black Hat at the same time. Email them and ask.
Clarify cache specifications
- Minor remark
Table 1: "unified" is usually used to describe a level that contains both data and instructions, not to describe a "shared" cache between cores. The CPU on System 2 "Xeon E3-1245 v2" has neither a shared nor a unified L1. A clearer way to write the specifications would be along these lines:
"L1: 4x32 KB data + 4x32 KB instructions"
"L2: 4x256 KB unified"
"L3: 8 MB shared unified"
Same for the Intel Core 2 Duo P8700:
"L1: 2x32 KB data + 2x32 KB instructions"
"L2: 3 MB shared unified"
You could also write the number of cores of each processor to make it clearer if need be.
Address the question "Does it work across VM?" in the paper
The paper is currently silent on this important question (as pointed out by reviewers).
Make sure experiments are reproducible on Taylor's new dedicated server
I switched to a cheaper dedicated server, but it's hosted in the same place by the same company and has better/equivalent hardware. Try to reproduce the experiments there just to make sure we have at least one system we can reproduce the attacks on.
Find out what venues publish our related work
This is kind of obvious in retrospect, but we should look at which venues our related work is published in and think about submitting to those venues!
Rephrase the last two paragraphs as actionable items
Don't just remark on the possibility of a result, use verbs like "test", "try", "find out" e.g.
"Flush+Reload might be used to speed up covert channels... ... could lead to reduced latencies ..."
Remove claims to being first non-crypto cache attack
See #2 for useful "why this is still important" arguments.
Submit to USENIX WOOT
In F+R explanation, mention why memory sharing is possible
Mention that the code memory can be shared because it's read-only. This will give the reader another thing to grasp on to, helping them understand and visualize what's going on.
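To illustrate the point, here is a minimal Python sketch that maps the same file read-only twice. For file-backed read-only mappings, the kernel is free to back both mappings with the same physical page-cache pages; this is the mechanism that lets an attacker process share a victim binary's code pages in the first place. (The file here is a stand-in for a shared binary; the sharing itself happens inside the kernel and isn't directly observable from this script.)

```python
import mmap
import os
import tempfile

# Stand-in for a shared binary: any file mapped read-only.
fd, path = tempfile.mkstemp()
os.write(fd, b"\x90" * 4096)  # pretend this page holds code
os.close(fd)

# Two independent read-only mappings of the same file. Because neither
# mapping can write, the OS may back both with one physical copy; the
# same mechanism lets two processes share libc's code pages.
with open(path, "rb") as f1, open(path, "rb") as f2:
    m1 = mmap.mmap(f1.fileno(), 0, access=mmap.ACCESS_READ)
    m2 = mmap.mmap(f2.fileno(), 0, access=mmap.ACCESS_READ)
    same = m1[:] == m2[:]
    m1.close()
    m2.close()

os.remove(path)
print(same)  # True: both mappings see identical bytes
```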
Incorporate Cache Template Attacks
Combine our work with Cache Template Attacks. See comments in #32.
Submit to IACR '17?
If all else fails, try submitting to IACR '17. See #4.
Make sure things mentioned in introduction as future work are also in the Future Work section
Getting it to work on AMD (#35), trying it across VMs.
Do we understand why it doesn't work on AMD yet?
In our future work section we claim this hasn't been answered adequately yet. Maybe one of the new papers answers it (or maybe someone's gotten a similar attack to work on AMD).
Submit to IEEE S&P '17?
If all else fails, try submitting to IEEE S&P '17. See #4.
Cite ARMageddon paper
https://arxiv.org/abs/1511.04897
Three things:
- They make the attack work on different (possibly exclusive cache) cache architecture.
- They present privacy-compromising attacks as examples.
- We might want to mention something about mobile attacks in future work.
Do I put "University of Calgary", "Zcash", or "University of Waterloo" under my name?
Most of the work happened at UofC, I'm currently working at Zcash using 20% time to work on this, and I'm going to Waterloo in the Fall.
It shouldn't be Waterloo, but the publication wouldn't have been possible without UofC nor Zcash so they should both get credit.
Maybe both?
"we describe a tool which takes a list of functions as input" is wrong
It's technically correct but the tool also takes two program inputs which it uses to test the goodness of the probe locations. We should mention this in the same sentence to make it easier to understand what's going on.
Boast about "reliable/reproducible experimentation framework"
I think the experiment framework we've built is a valuable contribution on its own, and should be highlighted in the paper. This isn't "academic code".
Cite The Spy in the Sandbox paper
- "The Spy in the Sandbox -- Practical Cache Attacks in Javascript", by Yossef Oren, Vasileios P. Kemerlis, Simha Sethumadhavan, Angelos D. Keromytis. In http://arxiv.org/abs/1502.07373
While using Javascript and not native code, Oren et al. use a cache side channel to perform a mouse/network activity logger, directly impacting the privacy of users.
GUIDE: How to reproduce the experiments
This should be a simple markdown file with step-by-step instructions for installing dependencies and running the experiments (including finding probe addresses in your binaries).
Link to it in the appendix of the paper.
Address reviewer comments
Dear authors,
The 9th USENIX Workshop on Offensive Technologies (WOOT '15) program
committee is sorry to inform you that your paper #17 was rejected, and will
not appear in the conference.
Title: Distinguishing Inputs with the FLUSH+RELOAD Cache Side
Channel
Authors: Taylor Hornby (University of Calgary)
John Aycock (University of Calgary)
Paper site: https://woot15.usenix.hotcrp.com/paper/17?cap=017a1_cQ9NL6QjM
20 papers were accepted out of 57 submissions.
Reviews and comments on your paper are appended to this email. The
submissions site also has the paper's reviews and comments, as well as more
information about review scores.
Contact [email protected] with any questions or concerns.
Aurélien and Thomas,
WOOT '15 PC co-chairs
===========================================================================
WOOT '15 Review #17A
---------------------------------------------------------------------------
Paper #17: Distinguishing Inputs with the FLUSH+RELOAD Cache Side Channel
---------------------------------------------------------------------------
Overall merit: 4. Accept
Reviewer expertise: 4. Expert
===== Paper summary =====
In this paper, the authors describe a method that uses Flush+Reload to distinguish between a set of possible inputs. They do so by profiling the order in which functions are called by applications, for different inputs. The method comprises three stages: a training stage (with a set of inputs), an attack stage, and a recovery stage. They thus show that cache attacks can be used to compromise the privacy of users, when shared memory is enabled.
===== Comments for author =====
* Pros
- Cache attacks have been gaining momentum in the last few years; it is interesting to see how broad these attacks can be, and in particular what kinds of attacks can be performed.
- The authors performed three different experiments (on Links, Poppler and TrueCrypt) to show that their approach is generic and applies to different applications.
* Following Remarks/Questions on Genericity
- For the Links (resp. Poppler) experiments, 100 (resp. 127) is not a big training set. I would have liked to know how the recovery behaves when the training set grows.
- Why Links and not a more popular browser? Is it linked to the size of the binary (and thus the number of functions to choose from)? You should clarify this in the paper.
- You also didn't explain your choice of selecting only cache lines that correspond to function entries. Selecting among all the addresses accessed by the binary might give you more fine-grained information. You could also attack any binary, regardless of having the symbols.
* Related Work
There are really interesting (and also really concurrent) works related to yours. Of interest and directly related to your paper, I suggest you cite:
- "The Spy in the Sandbox -- Practical Cache Attacks in Javascript", by Yossef Oren, Vasileios P. Kemerlis, Simha Sethumadhavan, Angelos D. Keromytis. In http://arxiv.org/abs/1502.07373
While using Javascript and not native code, Oren et al. use a cache side channel to perform a mouse/network activity logger, directly impacting the privacy of users.
- "Cache Template Attacks: Automating Attacks on Inclusive Last-Level Caches", by Daniel Gruss, Raphael Spreitzer, and Stefan Mangard. In Usenix Security 2015 (https://www.usenix.org/conference/usenixsecurity15/technical-sessions/presentation/gruss)
Gruss et al. designed a generic technique that profiles cache-based information leakage for any binary.
If the paper is not yet available at the time you get this review, you can contact Daniel Gruss who will happily give you a copy of the paper. Their code is here and is documented so that you can see the relevance to your work and what they did: https://github.com/IAIK/cache_template_attacks
* Minor remark
Table 1: "unified" is usually used to describe a level that contains both data and instructions, not to describe a "shared" cache between cores. The CPU on System 2 "Xeon E3-1245 v2" has neither a shared nor a unified L1. A clearer way to write the specifications would be along these lines:
"L1: 4x32 KB data + 4x32 KB instructions"
"L2: 4x256 KB unified"
"L3: 8 MB shared unified"
Same for the Intel Core 2 Duo P8700:
"L1: 2x32 KB data + 2x32 KB instructions"
"L2: 3 MB shared unified"
You could also write the number of cores of each processor to make it clearer if need be.
===========================================================================
WOOT '15 Review #17B
---------------------------------------------------------------------------
Paper #17: Distinguishing Inputs with the FLUSH+RELOAD Cache Side Channel
---------------------------------------------------------------------------
Overall merit: 2. Weak reject
Reviewer expertise: 2. Some familiarity
===== Paper summary =====
This demonstrates using the FLUSH+RELOAD technique for one process to spy on the code execution pattern of another by timing cache accesses. Most work in this area uses such side-channels to attack crypto. This paper demonstrates distinguishing non-crypto program behavior, such as which website was loaded in a browser.
===== Comments for author =====
This is a nice demonstration of the FLUSH+RELOAD attack, the methodology for preparing and training the attack is interesting, and it's interesting to see what information can be extracted from different apps.
However, it's not clear how exactly this improves on "Cross-Tenant Side-Channel Attacks in PaaS Clouds" from ACM CCS 2014, which is not cited.
That paper uses FLUSH+RELOAD to recover non-cryptographic data from a shopping app (the number of items in a shopping cart). So it's not true that "this is the first time that a generic cache-based side channel has been used to compromise privacy by attacking a non-cryptographic application".
Moreover, that paper demonstrates the attack in a shared-VM environment targeting server apps, which is probably more practical and difficult.
The authors should clarify this paper's relationship to prior work.
===========================================================================
WOOT '15 Review #17C
---------------------------------------------------------------------------
Paper #17: Distinguishing Inputs with the FLUSH+RELOAD Cache Side Channel
---------------------------------------------------------------------------
Overall merit: 3. Weak accept
Reviewer expertise: 2. Some familiarity
===== Paper summary =====
Here we are presented with a noncryptographic application of the FLUSH+RELOAD attack. The authors use cache timing to detect, across user accounts on the same machine, which of many similar PDF files has been opened, which Wikipedia link has been accessed using `links`, and, in a more cryptographic application, whether a TrueCrypt volume is a hidden volume. The authors also devise a semi-automated method to find good cache lines to FLUSH+RELOAD on.
===== Comments for author =====
I liked this paper. It was well-written and well-motivated. While this class of side-channel attack is of course not new, this is an interesting application of it that can end up being useful in an elaborate attack on, say, cross-VM scenarios.
Which leads to the question: why was the cross-VM scenario not explored here? Was it for convenience reasons, like it is pointed out for the same user/different user scenario, or does the presented method not work there? I would advise to clear this up. (The cross-VM scenario is treated at the very end, but it is in the context of shared _pages_, which is a leakier side-channel.)
One nitpick in Section 7: the authors claim that this is the first noncryptographic exploration of cache-timing attacks. But in the very next sentence they point out another prior cache-timing attack which is also noncryptographic. Is the claim specifically on _usermode_ cache timing? If so, I would advise the authors to make the claim clearer.
In the same section, the authors list a number of prior side-channel attacks against various software. Another interesting instance of such an attack is pakt's (not cache-) timing attack on JavaScript hash tables [*], which can be used to bypass ASLR by leaking an object's pointer.
[*] https://gdtr.wordpress.com/2012/08/07/leaking-information-with-timing-attacks-on-hashtables-part-1/
Give some characterization of which processors the attack works on
Say in the intro/abstract that we tested it on Intel processors of architecture X made in year Y, or something. Explicitly mention it doesn't work but cite #35 as evidence that attacks could be developed for AMD.
Answer: Why only probe functions? in the paper
- You also didn't explain your choice of selecting only cache lines that correspond to function entries. Selecting among all the addresses accessed by the binary might give you more fine-grained information. You could also attack any binary, regardless of having the symbols.
Prepare talk demo
Practice giving the talk demo, and record a screencap of it in case the WiFi goes out there (or someone DDoSes my server during the presentation).
Did we get accepted to Black Hat?
Check on the CFP submission periodically.
Remind readers what S and T are inline
Even I, the author of the paper, get confused about what S and T are. Wherever we give them values we should write it like...
"... T=10 training samples tested using S=10 trial runs per input file..."
"These attacks are not directly damaging on their own" is incorrect
I wrote this in an attempt to not overstate our results, but it goes too far. These attacks could be damaging in certain scenarios, and we wouldn't want readers to falsely conclude from this statement (if it was the only sentence in the paper that they read) that they are safe.
Choose the best latex document theme for Black Hat
Black Hat doesn't constrain us to the USENIX theme. First, find out if there is a standard theme for Black Hat. Second, make the paper look as awesome as possible within those constraints.
Code reorganization / cleanup
Fix the confusing `myversion` thing in `flush-reload`.
Add contact information to the README.
Anything else that will make it easier to understand what's in here and get it running.
Make it clear that we aren't citing the latest F+R attacks
F+R has broken a lot more crypto stuff lately. We say things like "... has since broken GnuPG and OpenSSL." Replace that with F+R's best current attack results. There's also a complete list of F+R crypto attacks in the conclusion, which needs to be updated to include all of them.
Cite the PaaS paper
Instead of claiming we're the first cache side channel attack on privacy, just list related work, which out of all the stuff I'm aware of is just the PaaS paper. There's one paragraph starting with "To the best of our knowledge..." that needs to be amended, and then more in the related work section.
"Section 5 describes our experiments" is wrong
In the list of sections the paper says "Section 5 describes our experiments." This is incorrect. Section 5 describes the environment our experiments run in, and their general pattern. It does not describe the instances of our experiments.
Mention CATalyst in the defense section?
Submit talk to Black Hat
Saving my proposal in this issue since I don't trust their web interface not to delete everything when I click submit!
Final review of the paper
One reviewer wrote "...why was the cross-VM scenario not explored here? Was it for convenience reasons,..." I could have easily foreseen that if I had thought about it. First brainstorm the types of questions reviewers will ask, then find all the instances of those questions.
Types of questions:
- Why didn't you explore ...insert apparently-easy-to-try/low-hanging-fruit result here...?
- I'm confused about ...?
- ... is technically inaccurate (nitpicks).
- more?
Was the TrueCrypt experiment only failing on dedi because of no sudo?
Fix "Using a training set created on System 1 ... recover ... % of probe sequences"
What does it mean to "recover a probe sequence?" Rewrite this sentence.
What other information will readers want?
Think about what information readers will want our paper to have if they're going to incorporate it into their work. We should at least answer "I don't know." to all of these questions in the paper.
One bit of information is "Does it work across VM?" Practitioners will want to know if their infrastructure is vulnerable. Researchers will want to know if they should test their cross-VM-attack-defense idea against it.
What else?
Rename "recovery" to "identification" to avoid mischaracterization/confusion
Recovery suggests information about the input was recovered (technically true but misleading), while identification is more suggestive of "out of N inputs, we identified this one." This should be a simple search and replace.
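The search and replace could be scripted. The pattern, file glob, and paths below are hypothetical, and a real pass should still be reviewed by hand, since "recovery" may appear inside quoted reviewer text or citations that must stay verbatim:

```python
import pathlib
import re
import tempfile

def rename_term(root, old, new, glob="*.tex"):
    """Replace whole-word `old` with `new` (preserving an initial capital)
    in every file matching `glob` under `root`. Returns #files changed."""
    pattern = re.compile(r"\b%s\b" % re.escape(old), re.IGNORECASE)

    def fix(match):
        word = match.group(0)
        return new.capitalize() if word[0].isupper() else new

    changed = 0
    for path in pathlib.Path(root).rglob(glob):
        text = path.read_text()
        new_text = pattern.sub(fix, text)
        if new_text != text:
            path.write_text(new_text)
            changed += 1
    return changed

# Toy demonstration in a temporary directory.
with tempfile.TemporaryDirectory() as d:
    p = pathlib.Path(d, "paper.tex")
    p.write_text("Recovery succeeded; the recovery rate was 94%.")
    n = rename_term(d, "recovery", "identification")
    result = p.read_text()

print(n)       # 1
print(result)  # Identification succeeded; the identification rate was 94%.
```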
Submit to WOOT '16 (Deadline: Tuesday, May 17, 2016, 8:59 p.m. PDT)
Answer: Why links and not a more popular browser? in the paper
- Why Links and not a more popular browser? Is it linked to the size of the binary (and thus the number of functions to choose from)? You should clarify this in the paper.
Add "(selected arbitrarily)" to "...using test inputs HAN040-E.pdf..."
Add Cache Template Attacks to Related Work
Cache Template Attacks: Automating Attacks on Inclusive Last-Level Caches (Daniel Gruss, Raphael Spreitzer, Stefan Mangard)
Look for new privacy-compromising non-crypto sidechannel attacks
We have a list of interesting examples in the related work section. Is there any new awesome work we should highlight here?