Comments (4)
@Arthur-PI Code looks good. Note that the `///` comment for an error variant is parsed by `#[derive(Display)]` from `displaydoc::Display`, which is why it contains formatting strings for all fields. Please take care of them as well, so that the context information in the error variant is actually printed.
As we're working asynchronously and I'll be the only one reviewing your PR(s), I think that a single PR with one fix per commit is the best here. Thanks for referencing the test file in each commit - that makes it very easy for me to follow what you've found :)
from ntfs.
Hey @Arthur-PI! Thank you for your fuzzing efforts and for opening this bug report. This is a very good catch!
When writing the ntfs crate, my style was to use the `[start..end]` range slice only when `start` and `end` have already undergone extensive validation and are safe to use. An example of this style is Line 249 in 9348d72.
This is safe, because `start` and `end` have already been checked in `validate_name_sizes` before: Lines 387 to 407 in 9348d72
If validation fails, this function also returns a precise error that pinpoints the exact error type and location in the filesystem.
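The validate-then-index style described above can be sketched on a toy buffer (all names here are hypothetical helpers, not the actual ntfs functions):

```rust
use std::ops::Range;

// Hypothetical validate-then-index sketch: the range is checked once,
// with a precise error, so later code may safely index with [start..end].
fn validate_range(range: &Range<usize>, len: usize) -> Result<(), String> {
    if range.start > range.end || range.end > len {
        return Err(format!("range {:?} is out of bounds for length {}", range, len));
    }
    Ok(())
}

fn read_field(data: &[u8], range: Range<usize>) -> Result<Vec<u8>, String> {
    validate_range(&range, data.len())?;
    // Safe to index: the range was validated above.
    Ok(data[range].to_vec())
}

fn main() {
    let data = [1u8, 2, 3, 4];
    println!("{:?}", read_field(&data, 1..3)); // in-bounds range succeeds
    println!("{:?}", read_field(&data, 2..9)); // out-of-bounds range yields a precise error
}
```

The trade-off the reviewer points out next is that the indexing site and the validation site are separated, so it is easy to add a new `[start..end]` use without the corresponding check.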
Offset and length are not necessarily checked and reported individually. Looking at the code again, I also found places where `start` and `end` are put into a `slice.get(start..end)` call, and a single error is returned if `get` doesn't yield a result: Lines 143 to 148 in 9348d72
So as you see, there are currently multiple ways to deal with these cases, with room for improvement/unification.
As you ask for my preferred way to deal with such cases:

- Check if there is a suitable error variant. Don't hesitate to add one. Exact error messages are way better than a single message reused for multiple cases. In this case, `NtfsError::MissingIndexAllocation` is a specific error for when an $INDEX_ALLOCATION attribute was expected but not found. It's unrelated to the reported problem.
- Changing `[start..end]` to `.get(start..end)` is also how I would fix this particular problem. One could also check `start` and `end` individually, like it's done in `validate_resident_value_sizes`, but I'd say it's too easy to miss a check there.
- Add an error variant `InvalidNonResidentValueDataRange`, with fields `position`, `range`, and `size`, and return that as the specific error for this case. Use `self.position()` for the `position` field, `start..end` for the `range` field, and `self.file.ntfs().file_record_size()` for the `size` field. This gives all important context to understand and pinpoint the error. Doing it like this is also 100% consistent with what's done for `slice.get(start..end)` in index_entry.rs.
- Use `.get(start..end).ok_or(.....)?` instead of an if-else block. This doesn't add an indentation level for the happy case, and therefore scales better if we later need to introduce another check with an early return in case of an error.
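The last point can be illustrated on a plain slice (a sketch with hypothetical function names, using a `String` error in place of `NtfsError`; `ok_or_else` is used so the error message is only built on failure):

```rust
// If-else style: the happy path is nested one indentation level deep.
fn with_if_else(data: &[u8], start: usize, end: usize) -> Result<&[u8], String> {
    if let Some(slice) = data.get(start..end) {
        Ok(slice)
    } else {
        Err(format!("invalid range {}..{} for length {}", start, end, data.len()))
    }
}

// .ok_or_else(...)? style: the happy path stays at the top indentation
// level, so another check with an early return can be added later
// without restructuring the function.
fn with_ok_or(data: &[u8], start: usize, end: usize) -> Result<&[u8], String> {
    let slice = data
        .get(start..end)
        .ok_or_else(|| format!("invalid range {}..{} for length {}", start, end, data.len()))?;
    Ok(slice)
}

fn main() {
    let data = [0u8; 4];
    assert_eq!(with_if_else(&data, 0, 2), with_ok_or(&data, 0, 2));
    assert_eq!(with_if_else(&data, 3, 9), with_ok_or(&data, 3, 9));
    println!("both styles behave identically");
}
```

Both functions return the same results; the second form just keeps the success path flat.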
Again, thanks a lot for your help with this!
Thank you for this very complete response, those explanations are really helpful 👍🏼.
I've taken into account what you've said and made the fix accordingly; here is the new version.
```rust
pub(crate) fn non_resident_value_data_and_position(&self) -> Result<(&'f [u8], NtfsPosition)> {
    debug_assert!(!self.is_resident());

    let start = self.offset + self.non_resident_value_data_runs_offset() as usize;
    let end = self.offset + self.attribute_length() as usize;
    let position = self.file.position() + start;

    let data = self.file.record_data().get(start..end).ok_or(
        NtfsError::InvalidNonResidentValueDataRange {
            position,
            range: start..end,
            size: self.file.record_data().len(),
        },
    )?;

    Ok((data, position))
}
```
With this new error:
```rust
/// The range of the non-resident value data is invalid when slicing
InvalidNonResidentValueDataRange {
    position: NtfsPosition,
    range: Range<usize>,
    size: usize,
},
```
Now I have fixes for 8 bugs similar to this one. Do you prefer one PR for each fix, or one big PR with one fix per commit? As you wish. I'll also provide all the test files that trigger those crashes (I'll assign a test case per commit), in case you want to dig a little into why this happens and whether the problem is deeper in the parsing process.
Thank you for your work and help.
Fixed in #23