
xfs_undelete's Introduction

xfs_undelete

An undelete tool for the XFS filesystem.

What does it do?

xfs_undelete tries to recover all files on an XFS filesystem marked as deleted. You may also specify a date or age since deletion, and file types to ignore or to recover exclusively.

xfs_undelete does some sanity checks on the files to be recovered, to avoid recovering bogus petabyte-sized sparse files. In addition, it does not by default recover anything unidentifiable (given you have the file utility installed). Specify -i "" on the command line if you want to recover those unidentifiable files.

The recovered files are stored on another filesystem in a subdirectory, by default xfs_undeleted relative to the current directory. The original filename cannot be recovered, so each recovered file is named after the time of deletion, the inode number, and a guessed file extension. You have to check the recovered files you are interested in by hand and rename them properly.
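The naming scheme can be illustrated with a short Tcl sketch (a hedged illustration; the variable names are made up, but the format mirrors the one cited in a bug report further below):

set ctime 1584572400 ;# time of deletion as a Unix timestamp
set inode 1474       ;# inode number of the deleted file
set ext   txt        ;# extension guessed by the file utility
set name [format "%s_%s.%s" [clock format $ctime -format "%Y-%m-%d-%H-%M"] $inode $ext]
## e.g. 2020-03-19-00-00_1474.txt, depending on your timezone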

How does it work?

xfs_undelete traverses the inode B+trees of each allocation group, and checks the filesystem blocks holding inodes for the magic string IN\0\0 that indicates a deleted inode. Then, it tries to make sense of the extents stored in the inode (which XFS does not delete) and collect the data blocks of the file.
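The core of that scan can be sketched in a few lines of Tcl (a simplified illustration, not the actual implementation; it assumes $fd is a channel opened on the device in binary mode, and that $blocksize and $inodesize were read from the superblock):

proc scanBlockForDeletedInodes {fd blockoffset blocksize inodesize} {
	seek $fd $blockoffset
	set block [read $fd $blocksize]
	set found {}
	for {set offset 0} {$offset < $blocksize} {incr offset $inodesize} {
		## A deleted inode starts with the "IN" magic followed by a zeroed mode field.
		if {[string range $block $offset [expr {$offset+3}]] eq "IN\x00\x00"} {
			lappend found $offset
		}
	}
	return $found
}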

Is it safe to use?

Given it only ever reads from the filesystem it operates on, yes. It also remounts the filesystem read-only on startup by default so you don’t accidentally overwrite source data. However, I don’t offer any warranty or liability. Use at your own risk.

Prerequisites

xfs_undelete is a tiny Tcl script, so it needs a Tcl interpreter. It makes use of some features of Tcl 8.5, so you need at least that version. The tcllib package is used for parsing the command line. It also needs a version of dd that supports the bs=, skip=, seek=, count=, conv=notrunc, and status=none options, a readlink that supports the -e option, and a stat that supports the -L and --format=%m options; the ones from GNU coreutils will do (see the sketch at the end of this section for the kind of dd invocation involved). If the file utility and magic number files with MIME type support are installed (likely), xfs_undelete will use them to guess a file extension from the content of each recovered file. In short:

  • tcl >= 8.5
  • tcllib
  • GNU coreutils

Recommended:

  • file (having magic number files with MIME type support)

In addition, you need enough space on another filesystem to store all the recovered files as they cannot be recovered in place.
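For illustration, the dd invocation the tool relies on looks roughly like this when issued from Tcl (a hedged sketch; the real script computes bs, skip, seek, and count from the superblock and each inode's extent list, and the device name and numbers here are made up):

exec -ignorestderr -- dd if=/dev/sdb1 of=./xfs_undeleted/recovered.bin \
	bs=4096 skip=123456 seek=0 count=16 conv=notrunc status=none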

Distribution Packages

  • openSUSE Linux
  • Arch Linux

Limitations

  • The way XFS deletes files makes it impossible to recover the filename or the path, so you cannot undelete only certain files by name. The tool, however, has a mechanism to recover only files deleted or modified since a certain date. See the -t and -T options.
  • The way XFS deletes files makes it impossible to recover heavily fragmented files. For typical 512-byte inodes, you can only recover files having at most 21 extents (of arbitrary size). Files with more extents cannot be recovered by this program at all.
  • The way XFS deletes files makes it impossible to retrieve the correct file size. Most files will be padded with zeroes so they fit the XFS block size; most programs do not mind anyway. Files of the text/ mimetypes get their trailing zeroes trimmed by default after recovery; see the sketch below. See the -z option to change this behaviour.
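The trimming mentioned in the last point boils down to stripping trailing NUL bytes, roughly like this minimal Tcl sketch (not the tool's actual code):

proc trimTrailingZeroes {path} {
	## Read the whole recovered file as binary data.
	set fd [open $path rb]
	set data [read $fd]
	close $fd
	## Strip trailing NUL bytes and write the result back.
	set fd [open $path wb]
	puts -nonewline $fd [string trimright $data "\x00"]
	close $fd
}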

License

xfs_undelete is free software, written and copyrighted by Jan Kandziora <[email protected]>. You may use, distribute and modify it under the terms of the attached GPLv3 license. See the file LICENSE for details.

How to use it

There's a manpage. Here is a copy of it:

NAME

xfs_undelete - an undelete tool for the XFS filesystem

SYNOPSIS

xfs_undelete [ -t timerange ] [ -T timerange ] [ -r filetypes ] [ -i filetypes ] [ -x inodes ] [ -S size ] [ -z filetypes ] [ -o output_directory ] [ -s start_inode ] [ -m magicfiles ] [ --no-remount-readonly ] device
xfs_undelete -l [ -m magicfiles ]

DESCRIPTION

xfs_undelete tries to recover all files on an XFS filesystem marked as deleted. The filesystem is specified using the device argument which should be the device name of the disk partition or volume containing the filesystem.

You may also specify a date or age since deletion, and file types to ignore or to recover exclusively.

The recovered files cannot be undeleted in place and thus are stored on another filesystem in a subdirectory, by default xfs_undeleted relative to the current directory. The filename cannot be recovered, so each file is named after the time of deletion, the inode number, and a guessed file extension. You have to check the recovered files you are interested in by hand and rename them properly. Also, the file length cannot be recovered, so recovered files are padded with \0 characters up to the next XFS block size boundary. Most programs simply ignore those \0 characters, but you may want to remove them by hand or automatically with the help of the -z option.

This tool does some sanity checks on the files to be recovered, to avoid "recovering" bogus petabyte-sized sparse files. In addition, it does not by default recover anything unidentifiable (given you have the file utility installed). Specify -i "" on the command line if you want to recover those non-bogus but still unidentifiable files.

OPTIONS

-t timerange
Only recover files that have been deleted within the given time range. The timerange value has to be given either as two timespecs separated by a double dot, e.g. 2020-03-19..-2hours; as a double dot followed by a timespec, e.g. ..-2hours, which means a range starting at epoch; as a timespec followed by a double dot, e.g. -2hours.., which means a range ending now; or as a single timespec, which means the same as a timespec followed by a double dot. Timespecs may be any values Tcl's [clock scan] function accepts. See clock(n). By default, files deleted from epoch to now are recovered.
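You can check what a timespec resolves to in an interactive tclsh before using it (each call returns a Unix timestamp):

clock scan 2020-03-19     ;# an absolute date
clock scan "2 hours ago"  ;# a relative spec, equivalent in spirit to -2hours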

-T timerange
Only recover files that have been modified within the given time range before they were deleted. This option is useful if you know the date of your latest backup. The timerange value has the same syntax as for the -t option: two timespecs separated by a double dot, a double dot followed by a timespec (a range starting at epoch), a timespec followed by a double dot (a range ending now), or a single timespec (same as a timespec followed by a double dot). Timespecs may be any values Tcl's [clock scan] function accepts. See clock(n). By default, files modified from epoch to now are recovered.

-r filetypes
Only recover files with a filetype matching a pattern from this comma-separated list of patterns. See section FILETYPES below. By default this pattern is *; all files are recovered, but also see the -i option.

-i filetypes
Ignore files with a filetype matching a pattern from this comma-separated list of patterns. See section FILETYPES below. By default this list is set to bin; all files of unknown type are ignored, but also see the -r option.

-x inodes
Ignore files with an inode number from this comma-separated list. By default this list is empty.

-S size
Ignore files with a size larger than specified. The number may be given in bytes, or with an appended k, M, or G for kilobytes, megabytes, or gigabytes respectively. By default there is no size limit.

-z filetypes
Remove trailing zeroes from files with a filetype matching a pattern from this comma-separated list of patterns. See section FILETYPES below. By default this list is set to text/*; all files of a text/* mimetype have their trailing zeroes removed.

-o output_directory
Specify the directory the recovered files are copied to. By default this is xfs_undeleted relative to the current directory.

-s start_inode
Specify the inode number the recovery should be started at. This must be an existing inode number in the source filesystem, as the inode trees are traversed until this particular number is found. This option may be used to pick up a previously interrupted recovery. By default, the recovery starts at the first existing inode.

-m magicfiles
Specify an alternate list of files and directories containing magic. This can be a single item or a colon-separated list. If a compiled magic file is found alongside a file or directory, it will be used instead. This option is passed verbatim to the file utility if specified.

--no-remount-readonly
This is a convenience option meant for the case where you need to recover files from your root filesystem, which you cannot unmount or remount read-only at the time you want to run xfs_undelete. The sane solution would be to move the hard disk with that particular filesystem to another computer where it isn't needed for operation.

If you refuse to be that sane, you have to make sure the filesystem has at least been unmounted or remounted read-only in the meantime by other means, for example by rebooting. Otherwise you won't be able to recover recently deleted files.

USE THIS OPTION AT YOUR OWN RISK. As the source filesystem isn't remounted read-only when you specify this option, you may accidentally overwrite your source filesystem with the recovered files. xfs_undelete checks whether you accidentally specified your output directory within the mount hierarchy of your source filesystem and refuses to do such nonsense. However, automatic checks may fail, so better check your choice of output directory by hand. Twice. It must reside on a different filesystem.
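That last check can be sketched with the stat options listed under Prerequisites (a hedged illustration, not the tool's actual code; $outputdir and $sourcemount are made-up variables):

## Mount point of the output directory (stat -L follows symlinks, %m prints the mount point).
set outmount [exec stat -L --format=%m $outputdir]
## Refuse to proceed if it matches the mount point of the source filesystem.
if {$outmount eq $sourcemount} {
	puts stderr "The output directory must reside on a different filesystem."
	exit 1
}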

-l
Shows a list of filetypes suitable for use with the -r, -i, and -z options, along with the common name as reported by the file utility.

FILETYPES

The filetypes used with the -r, -i, and -z options are a comma-separated list of patterns. Patterns of the form */* are matched against known mimetypes; all others are matched against known file extensions. The file extensions are guessed from the file contents with the help of the file utility, so they are not necessarily the same as the extension the file had before deletion.

Start xfs_undelete with the -l option to get a list of valid file types.

Note: you will want to quote the list of filetypes to keep the shell from doing wildcard expansion.
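The matching rule can be summarized in a small Tcl sketch (an illustration of the described behaviour, not the tool's actual code):

proc matchesFiletype {pattern mimetype extension} {
	## Patterns of the form */* are matched against the mimetype,
	## all other patterns against the guessed file extension.
	if {[string match */* $pattern]} {
		return [string match $pattern $mimetype]
	}
	return [string match $pattern $extension]
}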

EXAMPLES

# cd ~ ; xfs_undelete /dev/mapper/cr_data

This stores the recovered files from /dev/mapper/cr_data in the directory ~/xfs_undeleted.

# xfs_undelete -o /mnt/external_harddisk /dev/sda3

This stores the recovered files from /dev/sda3 in the directory /mnt/external_harddisk.

# xfs_undelete -t 2020-03-19 /dev/sda3

This ignores files deleted before March 19th, 2020.

# xfs_undelete -t -1hour /dev/sda3

This ignores files deleted more than one hour ago. The -t option accepts all dates understood by Tcl’s [clock scan] command.

# xfs_undelete -i "" -t -2hour /dev/sda3

This recovers all files deleted not more than two hours ago, including "bin" files.

# xfs_undelete -r 'image/*,gimp-*' /dev/sda3

This only recovers files matching any image/ mimetype, plus those assigned an extension starting with gimp-.

TROUBLESHOOTING

When operating on devices, this program must be run as root, as it remounts the source filesystem read-only to put it into a consistent state. This remount may fail if the filesystem is busy, e.g. because it's your /home or / filesystem and programs have files opened read-write on it. Stop those programs, e.g. by running fuser -m /home, or ultimately put your computer into single-user mode to have them stopped by init. If you need to recover files from your / filesystem, you may want to reboot and then use the --no-remount-readonly option, but the sane option is to boot from a different root filesystem instead, for example by connecting the hard disk with the valuable deleted files to another computer.

You also need some space on another filesystem to put the recovered files onto, as they cannot be recovered in place. If your computer only has one huge XFS filesystem, you need to connect external storage.

If the recovered files have no file extensions, or if the -r, -i, and -z options aren't functional, check with the -l option whether the file utility works as intended. If the returned list is very short, the file utility is most likely not installed, or the magic files for it (often shipped separately in a package named file-magic) are missing or don't feature mimetypes.

SEE ALSO

xfs(5), fuser(1), clock(n), file(1)

AUTHORS

Jan Kandziora <[email protected]>

xfs_undelete's People

Contributors

axxapy, cab404, ianka, marcone, phcoder


xfs_undelete's Issues

I have xfs raid-5 array mounted as /srv/media


root@myserver:~# df -h
/dev/md3 19T 17T 1.9T 90% /srv/media

root@myserver:~# cat /proc/mdstat
md3 : active raid5 sdg5[6] sde5[2] sdf5[5] sdd5[9] sdb5[8] sdc5[7]
19506902080 blocks super 1.2 level 5, 64k chunk, algorithm 2 [6/6] [UUUUUU]

root@myserver:~# lsblk -o PATH,FSTYPE,MOUNTPOINT /dev/md3
PATH FSTYPE MOUNTPOINT
/dev/md3 xfs /srv/media

Accidentally deleted a folder using rm (intended to delete a soft link).

Mounted an external drive:

root@myserver:~# lsblk -o PATH,FSTYPE,MOUNTPOINT /dev/sda1
PATH FSTYPE MOUNTPOINT
/dev/sda1 ext4 /root/recovery

Cloned your script using git.
Made sure all prerequisites are met:
tcl >= 8.6
tcllib
GNU coreutils
file (having magic number files with MIME type support)

Ran:
root@myserver:~# ./xfs_undelete -t 2023-05-01 -o /root/recovery/ /dev/md3
Got:
Starting recovery.
Done.

But the process went way too fast for 18 TB, and 0 files were recovered.

Originally posted by @lelik77 in #34 (comment)

-l list file types option broken

Tested at commit c01b5fa (v13.1-2-g4278cf7).

The "list filetypes" option -l documented in the man page does not work as advertised:

[user@host xfs_undelete]# xfs_undelete -l
bad option "-stride": must be -ascii, -command, -decreasing, -dictionary, -increasing, -index, -indices, -integer, -nocase, -real, or -unique
    while executing
"lsort -dictionary -stride 3 -index 0 $::filetypes"
    invoked from within
"if {[dict get $::parameters l]} {
	## Yes. Get file types understood by the file utility
	if {![catch {exec -ignorestderr -- file -l {*}$magicopts 2>/..."
    (file "/root/XFS_Data_Recovery/xfs_undelete_20240111/xfs_undelete/xfs_undelete" line 439)
[user@host xfs_undelete]#

Testing with git bisect reveals that this output is produced since commit a867fdf "added -l option for listing understood file extensions". Perhaps this option never worked properly, or there is something broken on my system.

xfs_undeleted only contains txt file?

I used xfs_undelete to recover from my stupid rm -rf.

My machine is CentOS 8 with LVM.

I typed ./xfs_undelete /dev/mapper/cl-home

Then it output:

Starting recovery.
Recovered file -> xfs_undeleted/2021-08-26-02-21_1474.txt
Recovered file -> xfs_undeleted/2021-08-26-02-21_150551.txt

It only output two txt files, which record some command history.

How do I get the removed file?

Error renaming [...] to [...].pythonapplication/octet-stream

The program tries to rename to a filename with a slash:

error renaming "xfs_undeleted/2021-11-30-12-56_21774193369" to "xfs_undeleted/2021-11-30-12-56_21774193369.pythonapplication/octet-stream": no such file or directory
while executing
"file rename -force $of $rof"
(procedure "investigateInodeBlock" line 93)
invoked from within
"investigateInodeBlock $ag $iblock"
(procedure "traverseInodeTree" line 40)
invoked from within
"traverseInodeTree $ag $agi_branch"
(procedure "traverseInodeTree" line 26)
invoked from within
"traverseInodeTree $ag $agi_root"
("for" body line 10)
invoked from within
"for {set ag 0} {$ag<$agcount} {incr ag} {
## Read inode B+tree information sector of this allocation group.
seek $fd [expr {$blocksize*$agblocks*$ag..."
(file "/root/xfs_undelete/xfs_undelete" line 598)

Please specify a block device or an XFS filesystem image.

I'm on unraid trying to recover recently deleted files.
I tried multiple commands and got the same error every time:

xfs_undelete -t 2021-12-13 -r '*.mp4' -o /mnt/user/Recovery /mnt/user/Pictures
xfs_undelete -t 2021-12-13 -r '*.mp4' -o /mnt/user/Recovery --no-remount-readonly /mnt/user/Pictures
xfs_undelete /mnt/user/Pictures
xfs_undelete /mnt/disk6

Any help pointing me in the right direction would be appreciated!
Thanks

Unclear program behavior

I installed tclsh and tcllib. Copy-pasted your script, all set up, I suppose.

Fire up:

~ $ ./xfs_undelete.sh -t -1hour -r 'image/* /dev/sda3
> 

That's all I have. Just the prompt line. What is this supposed to mean?

FYI:

~ $ blkid
/dev/sda3: UUID="..." TYPE="xfs" PARTLABEL="home" PARTUUID="..."

missing value to go with key( 0%)

When I try to run xfs_undelete on the v13.0 release, it fails with a stack trace:

Commit v13.0 CommitID 43fec30

[root@host xfs_undelete]# /root/XFS_Data_Recovery/xfs_undelete/xfs_undelete/xfs_undelete -i "" -t 2024-01-09 /dev/mapper/rhel-data
Starting recovery.
missing value to go with key( 0%)
while executing
"dict get $extent count"
("uplevel" body line 2)
invoked from within
"uplevel 1 $body"
(procedure "lmap" line 4)
invoked from within
"lmap {loffset extent} $extents {
expr {$::blocksize*($loffset+[dict get $extent count])}
}"
(procedure "investigateInodeBlock" line 87)
invoked from within
"investigateInodeBlock $ag $iblock"
(procedure "traverseInodeTree" line 40)
invoked from within
"traverseInodeTree $ag $agi_branch"
(procedure "traverseInodeTree" line 26)
invoked from within
"traverseInodeTree $ag $agi_root"
("for" body line 10)
invoked from within
"for {set ag 0} {$ag<$agcount} {incr ag} {
## Read inode B+tree information sector of this allocation group.
seek $fd [expr {$blocksize*$agblocks*$ag..."
(file "/root/XFS_Data_Recovery/xfs_undelete/xfs_undelete/xfs_undelete" line 661)
[root@host xfs_undelete]#

I note that CommitID 43fec30 v13.0 introduced the code at
"lmap {loffset extent} $extents {
expr {$::blocksize*($loffset+[dict get $extent count])}
}"
(procedure "investigateInodeBlock" line 87)

Testing the parent commit:

CommitID 6176de1

[root@host xfs_undelete]# /root/XFS_Data_Recovery/xfs_undelete/xfs_undelete/xfs_undelete -i "" -t 2024-01-09 /dev/mapper/rhel-data
Starting recovery.
Recovered file -> xfs_undeleted/2024-01-09-09-43_11252188.bin
Recovered file -> xfs_undeleted/2024-01-09-09-43_11252190.bin
Recovered file -> xfs_undeleted/2024-01-09-09-43_11252191.bin
Recovered file -> xfs_undeleted/2024-01-09-09-43_16800488.bin
Recovered file -> xfs_undeleted/2024-01-09-09-43_16800489.bin
Recovered file -> xfs_undeleted/2024-01-09-09-43_16800490.bin
Recovered file -> xfs_undeleted/2024-01-09-09-43_16800491.bin
Recovered file -> xfs_undeleted/2024-01-09-09-43_16800498.bin
Recovered file -> xfs_undeleted/2024-01-09-09-43_16800499.bin
Recovered file -> xfs_undeleted/2024-01-09-09-43_16800500.bin
Recovered file -> xfs_undeleted/2024-01-09-09-43_16800502.bin
Recovered file -> xfs_undeleted/2024-01-09-09-43_16800503.bin
Recovered file -> xfs_undeleted/2024-01-09-09-43_16800504.bin
Recovered file -> xfs_undeleted/2024-01-09-09-43_16800505.bin
Recovered file -> xfs_undeleted/2024-01-09-09-43_16800506.bin
Recovered file -> xfs_undeleted/2024-01-09-09-43_16800507.bin
...

This appears to be working as advertised.

So it seems that CommitID 43fec30 is broken.

Doesn't appear to undelete PCM data.

I have a big raw datafile which is about 290 GiB in size; however, I get only one recovered file which is just 356 KiB in size, and which definitely contains unrelated data. It appears, even with the -i "" option, that the tool is ignoring the file because of its size.
Is there any way to lift the maximum file size limitation?

The file contains 8 channels of unsigned 16-bit data in little-endian order, none of which will have the three least significant bits set, and at least two sequential channels of which were all zeros. Is there any option to give a pattern like this to match for recovery recognition purposes?

xfs_undelete -l does not show at least one file type supported by file -i

So I used xfs_undelete to unsuccessfully try to recover some old log files that got rotated out. I chose to focus exclusively on the gzip-compressed log files, as they would be easy for 'file' to recognize and less likely to be fragmented (due to their smaller size). I was only able to recover 3 very short (under 12 lines) log files. Most of the hits were gzip-compressed JavaScript from the last day or two of web browsing.

As part of the process, I temporarily mounted the xfs volume read-only and then ran 'file' on a representative sample log file. The output was:

gzip compressed data, last modified: Thu Jul 7 06:00:03 2022, from Unix, original size modulo 2^32 34984

Realizing my mistake, I looked at the man page for 'file' and ran "file -i /mnt/var/log/syslog.5.gz":

application/gzip;    charset=binary

Out of curiosity I tried running "xfs_undelete -l | grep gzip":

 #

Since I had gotten my target MIME type from 'file' directly, I tried it anyway:

xfs_undelete -t 2022-06-07 -r application/gzip /dev/md126

Successfully recovered dozens of files, as described above. Unfortunately not the ones I was looking for: but I don't think that is the fault of 'xfs_undelete'.

Edit: put command prompt indicator in there so empty line can be represented.

: character causes problems on Arch Linux

Hi,

Thanks for the brilliant tool.

I found an issue on my Arch Linux machine. dd complained about an invalid argument because the of= parameter contains a ":". I believe the code is on line 83:

83  set of [file join [dict get $::parameters o] [format "%s_%s" [clock format $ctime -format "%Y-%m-%d-%H:%M"] $inode]]

After I changed "%Y-%m-%d-%H:%M" to "%Y-%m-%d-%H-%M", it worked normally.

I don't know if it is a bug, but it's good to let you know about it.

recovery not working

I have some directories and image files that have been deleted by mistake. I tried your recovery script and it doesn't seem to work.
Starting recovery.
Done. 2 (100%))
I've searched all of them and found no recovered files.

install on Suse 13.2

Fairly old OS; the install fails to find xfs_undelete-master.

Any advice welcome.

thanks in advance, Brian

gyan:/tmp # zypper install xfs_undelete-master
Retrieving repository 'openSUSE:Factory' metadata ........................[done]
Building repository 'openSUSE:Factory' cache .............................[done]
Loading repository data...
Reading installed packages...
'xfs_undelete-master' not found in package names. Trying capabilities.
No provider of 'xfs_undelete-master' found.

child process exited abnormally06304

I was trying to use xfs_undelete to see if it could uncover directories and files that were either deleted or overwritten, or where the drive might have had its MBR overwritten.

I haven't been able to get past the following:

# ./xfs_undelete -o /scr2/recovered-data-l211 /dev/sdb1
--
child process exited abnormally06304  (  0.0%)
while executing
"exec -ignorestderr -- dd 2>/dev/null if=$fs of=$of bs=$blocksize skip=$skip seek=$loffset count=$count"
("for" body line 43)
invoked from within
"for {set block [dict get $::parameters s]} {$block<$dblocks} {incr block} {
## Log each visited block.
puts -nonewline stderr [format $m1format $blo..."
(file "./xfs_undelete" line 59)

From your code:

...
 41 foreach line [split $config \n] {
 42         lassign $line key dummy value
 43         if {$key in {blocksize inodesize agblocks agblklog dblocks}} {
 44                 set $key $value
 45         }
 46 }
...
 58 ## Run through whole filesystem.
 59 for {set block [dict get $::parameters s]} {$block<$dblocks} {incr block} {
...

Some background:
I'm not sure what happened with the data, but the user says there were multiple directories with multiple files and now they're all gone. The suspect commands in the bash history are rm -rf blah, followed by fdisk /dev/THE-DRIVE. There weren't any timestamps in the bash history, so those rm and fdisk commands could have been from months ago during provisioning.

System details:
OS: CentOS7
HDD: 8TB Western Digital Red
Partition Details:

# fdisk -l /dev/sdb
...
Disk /dev/sdb: 8001.6 GB, 8001563222016 bytes, 15628053168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: gpt
Disk identifier: E7F55CB0-5C0B-43E3-9978-438D59CAFEDB


#         Start          End    Size  Type            Name
 1         2048  15628052479    7.3T  Microsoft basic primary

When mounted:

# mount /dev/sdb1 /TEST-DATA-RECOVERY

# mount -l | grep sdb
/dev/sdb1 on /TEST-DATA-RECOVERY type xfs (rw,relatime,attr2,inode64,noquota) [/label1]

# ls -la /TEST-DATA-RECOVERY/
total 4
drwxrws---   2 root psgvb    6 Jul 25 14:11 .
dr-xr-xr-x. 26 root root  4096 Jul 26 16:14 ..

# df -lH | grep sdb
/dev/sdb1       8.0T   35M  8.0T   1% /TEST-DATA-RECOVERY

Any thoughts?

Thanks.

Clock Scan Problems and environments

I had to specify the timezone because clock scan was failing with: time value too large/small to represent

Adding this to the top of the script fixed the issue:

set env(TZ) Europe/Kiev

help recovering a large qcow2 file

os: OL7.9

tcl:
tcllib-1.14-1.el7.noarch
tcl-8.5.13-8.el7.x86_64

app: latest release from github

I did a mv from this filesystem to a different system, then accidentally lost the copy at the destination. On the source I see 6 MB of a log file as filesystem activity after I mv-ed the file; nothing else has used the xfs filesystem.

$ sudo xfs_info /dev/mapper/ol-home
meta-data=/dev/mapper/ol-home    isize=256    agcount=4, agsize=1264128 blks
         =                       sectsz=4096  attr=2, projid32bit=1
         =                       crc=0        finobt=0, sparse=0, rmapbt=0
         =                       reflink=0
data     =                       bsize=4096   blocks=5056512, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
$ file -l | grep -i qcow
Strength =  70 : QEMU QCOW Image []
$ sudo ./xfs_undelete -l
bin application/octet-stream Binary data
txt text/plain               Plain Text
[user@srv xfs_undelete-14.0]$ sudo ./xfs_undelete  -t -3hour -i ""  /dev/mapper/ol-home
Starting recovery.
Recovered file -> xfs_undeleted/2024-06-03-16-39_249.bin
Recovered file -> xfs_undeleted/2024-06-03-16-39_252.bin
Recovered file -> xfs_undeleted/2024-06-03-16-39_40519878.txt
Recovered file -> xfs_undeleted/2024-06-03-17-59_41781571.txt
Recovered file -> xfs_undeleted/2024-06-03-16-25_60678479.txt
Recovered file -> xfs_undeleted/2024-06-03-18-02_60680223.txt
Recovered file -> xfs_undeleted/2024-06-03-18-02_64116153.txt
Done.
[user@srv xfs_undelete-14.0]$ ls -lahtrs xfs_undeleted/
total 364K
4.0K drwxrwxr-x. 3 root root 4.0K Jun  3 18:15 ..
 36K -rw-r--r--. 1 root root  44K Jun  3 18:24 2024-06-03-16-39_249.bin
 84K -rw-r--r--. 1 root root 208K Jun  3 18:24 2024-06-03-16-39_252.bin
220K -rw-r--r--. 1 root root 284K Jun  3 18:24 2024-06-03-16-39_40519878.txt
4.0K -rw-r--r--. 1 root root 3.5K Jun  3 18:24 2024-06-03-17-59_41781571.txt
4.0K -rw-r--r--. 1 root root   13 Jun  3 18:24 2024-06-03-16-25_60678479.txt
4.0K -rw-r--r--. 1 root root   13 Jun  3 18:24 2024-06-03-18-02_60680223.txt
4.0K drwxr-xr-x. 2 root root 4.0K Jun  3 18:24 .
4.0K -rw-r--r--. 1 root root   13 Jun  3 18:24 2024-06-03-18-02_64116153.txt

Is there any chance to recover that 12 GB qcow2 file?

Please specify a block device or an XFS filesystem image

Hi there, I am hoping to recover a bunch of files but when running the command, I get:

Please specify a block device or an XFS filesystem image.

I run the command as follows, from the directory the tool is installed in:

xfs_undelete -t -10hour -r '*.jpg' -o /dev/disk/by/id/ata-xxx --no-remount-readonly /dev/disk/by/id/ata-xxx

Infrastructure:

  • unraid server with 8 disks XFS formatted, one of which is the parity disk.
  • data was deleted on 6 disks, the 7th is the one I wish to restore to
  • since deletion the server has not been written to
  • array is mounted.

Would appreciate your steer, thanks for sharing such an awesome little program!

Unexpected info occurs when trying to run this program

Hi,

I was trying to use xfs_undelete, and unexpected info occurred when trying to run this program. Please help to analyze it, thanks!

The info message follows:

[root@localhost xfs_undelete-1.2]# ./xfs_undelete
can't find package cmdline
while executing
"package require cmdline"
(file "./xfs_undelete" line 10)

OS info:
[root@localhost xfs_undelete-1.2]# uname -a
Linux localhost 3.10.0-862.el7.x86_64 #1 SMP Wed Mar 21 18:14:51 EDT 2018 x86_64 x86_64 x86_64 GNU/Linux
[root@localhost xfs_undelete-1.2]# cat /etc/os-release
NAME="Red Hat Enterprise Linux Server"
VERSION="7.5 (Maipo)"
ID="rhel"
ID_LIKE="fedora"
VARIANT="Server"
VARIANT_ID="server"
VERSION_ID="7.5"
PRETTY_NAME="Red Hat Enterprise Linux Server 7.5 (Maipo)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:7.5:GA:server"
HOME_URL="https://www.redhat.com/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"

REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 7"
REDHAT_BUGZILLA_PRODUCT_VERSION=7.5
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="7.5"

[root@localhost xfs_undelete-1.2]# strings /lib64/libc.so.6 |grep ^GLIBC
GLIBC_2.2.5
GLIBC_2.2.6
GLIBC_2.3
GLIBC_2.3.2
GLIBC_2.3.3
GLIBC_2.3.4
GLIBC_2.4
GLIBC_2.5
GLIBC_2.6
GLIBC_2.7
GLIBC_2.8
GLIBC_2.9
GLIBC_2.10
GLIBC_2.11
GLIBC_2.12
GLIBC_2.13
GLIBC_2.14
GLIBC_2.15
GLIBC_2.16
GLIBC_2.17
GLIBC_PRIVATE
GLIBC_2.8
GLIBC_2.5
GLIBC_2.9
GLIBC_2.7
GLIBC_2.6
GLIBC_2.11
GLIBC_2.16
GLIBC_2.10
GLIBC_2.17
GLIBC_2.13
GLIBC_2.2.6

TCL Version:
tcl8.6.1

XFSPROGS Version:
xfsprogs-4.5.0-15.el7.x86_64

recovering a vdisk.img file

Hi, I also accidentally wiped a VM including its image ... so I ended up here now ;)

I tried following the instructions:

Starting recovery.
Recovered file -> xfs_undeleted/2023-03-03-08-24_1092962809.bin
Recovered file -> xfs_undeleted/2023-03-03-08-24_1098416061.bin
Recovered file -> xfs_undeleted/2023-03-03-08-24_1152304231.gzip
Recovered file -> xfs_undeleted/2023-03-03-08-24_1152304236.bin
Recovered file -> xfs_undeleted/2023-03-03-08-24_1152304237.txt
Recovered file -> xfs_undeleted/2023-03-03-08-24_1152304240.txt
Recovered file -> xfs_undeleted/2023-03-03-08-24_1152304241.pgp
Recovered file -> xfs_undeleted/2023-03-03-08-24_1152304243.txt
Recovered file -> xfs_undeleted/2023-03-03-08-24_1152304250.pgp
Recovered file -> xfs_undeleted/2023-03-03-09-03_2165197241.bin
Recovered file -> xfs_undeleted/2023-03-03-08-24_3221231872.txt
Done.
root@AlsServer:/tmp#

It does fairly work, but there is no vdisk.img of 120 GB in size, so I assume I'm also too late and xfs has already reallocated the free space ... even though there were definitely no writes afterwards.

Maybe that's possible because .img is also not included in the MIME type list (the -l function)? The list is huge, but .img is not included ...

The disk is unmounted:

root@AlsServer:/tmp# mount | grep -i "nvme"
root@AlsServer:/tmp#

If you have any other advice ;) thanks ahead. I also tried the -r option, but then the results are even fewer.

Restoring files with big inode numbers fails

Hello, author!
My test scenario is to delete all files in the /testdir directory. During my test, I found that this tool cannot recover files with inode number jumps, which often occur in multi-level directories; files with larger inode numbers cannot be recovered at all. Even if the inode number is specified with the -s parameter, they cannot be recovered. For example, files with inode numbers 50-110 can be restored, while files with numbers 13581-13590 and 1069121-1069129 cannot. How can this situation be solved?

problem recovering a txt file

I created a txt file with three lines:
11
12
13

Then I deleted this file and used this tool to undelete it. It works fine, but the file content is a little different:
11

No inodes found ..

Hi,
just went through the other reports of "nothing found"; I deleted a single JPEG file by accident. First I ran time xfs_undelete -t -48hour /dev/disk/by-label/attik overnight, thinking this would surely take quite a while. Actually it only takes 90 seconds. It successfully remounted the partition read-only.
This is a
Linux base 5.8.0-1-amd64 #1 SMP Debian 5.8.7-1 (2020-09-05) x86_64 GNU/Linux machine, with tcl 8.6.9+1+b1 and tcllib 1.20+dfsg-1.

# file --version
file-5.38
magic file from /etc/magic:/usr/share/misc/magic
# file --brief --mime-type /etc/magic
text/plain

The mount is

/dev/sdd3 /mnt/attik xfs ro,noatime,attr2,inode64,logbufs=8,logbsize=32k,noquota 0 0

  Filesystem      Size  Used Avail Use% Mounted on
  /dev/sdd3       2.6T  2.6T  3.1G 100% /mnt/attik

I also tried without a time limit, with -i "", directly specifying the device (/dev/sdd3).
Wanted to turn the verbosity higher but have no idea how to do that with tclsh.
The file is probably very irrelevant, just wanted to use this as a test case. Glad to dig deeper though.. best regards ; )

Using XFS v5, no files showing up

Using XFS v5
dmesg | grep XFS
[5536321.175489] XFS (sde1): Mounting V5 Filesystem

I deleted about 1 TB of backups a few days ago, about 50 files, and of course now need some of them back.

/dev/sdd is the block device where the files were deleted from, and
/var/opt/mssql/restore is a new drive I have added to accept the recovered files.

./xfs_undelete -o /var/opt/mssql/restore /dev/sdd
/dev/sdd (/dev/sdd) is currently mounted read-write. Trying to remount read-only.
Remount successful.
Starting recovery.
Done.

It took about 1 second to run, which seemed too fast, and the result was no files.
I am using Ubuntu; I untarred the tar.gz, installed TCL and LIBTCL, and ran ./xfs_undelete.
I tried listing the file types and a few other commands. I did not install xfs_undelete as a package.

/usr/bin/env: ‘tclsh’: No such file or directory ...

Basically I would like to undelete a folder in an XFS partition, but I am getting the error message:

"/usr/bin/env: ‘tclsh’: No such file or directory"

which I don't know how to troubleshoot.

$ uname -a
Linux debian 5.10.0-18-amd64 #1 SMP Debian 5.10.140-1 (2022-09-02) x86_64 GNU/Linux
$

$ ls -l
total 256
drwxr-xr-x 2 user user 131072 Mar 18 14:40 xfs_undelete-master
-rwxr-xr-x 1 user user 28268 Jul 16 01:55 xfs_undelete-master.zip

$ ls -l xfs_undelete-master.zip
-rwxr-xr-x 1 user user 28268 Jul 16 01:55 xfs_undelete-master.zip

$ file --brief xfs_undelete-master.zip
Zip archive data, at least v1.0 to extract

$ sha256sum --binary xfs_undelete-master.zip
db66ef9ca37120407f6a692fe0d30492dba525e2446ee1a87e26d3d978b7e875 *xfs_undelete-master.zip

$ cd xfs_undelete-master

$ ls -l
total 640
-rwxr-xr-x 1 user user 35149 Mar 18 14:40 LICENSE
-rwxr-xr-x 1 user user 13411 Mar 18 14:40 README.md
-rwxr-xr-x 1 user user 150 Mar 18 14:40 shell.nix
-rwxr-xr-x 1 user user 21851 Mar 18 14:40 xfs_undelete
-rwxr-xr-x 1 user user 9698 Mar 18 14:40 xfs_undelete.man

$ ls -l xfs_undelete
-rwxr-xr-x 1 user user 21851 Mar 18 14:40 xfs_undelete

$ file --brief xfs_undelete
Tcl script, UTF-8 Unicode text executable

$ sha256sum --binary xfs_undelete
063dad87b4f4ae505521735067405b07eb668e4fc7791624132420b665adb64a *xfs_undelete
$

$ sudo ./xfs_undelete --help
/usr/bin/env: ‘tclsh’: No such file or directory

$ ./xfs_undelete --help
/usr/bin/env: ‘tclsh’: No such file or directory

$ ./xfs_undelete
/usr/bin/env: ‘tclsh’: No such file or directory

$ sudo ./xfs_undelete
/usr/bin/env: ‘tclsh’: No such file or directory

Duplicates

Hey, it's not an issue per se, but I was wondering if it would detect whether it had recovered the same file before.

I carelessly deleted a decent amount of files, about 600 GB, so when I went into recovering the files I was faced with an issue: "how would I store all of it if it manages to find everything?"
I had to go find some of the external hard drives I had laying around; thankfully I had about 700 GB left on one of them.

My issue was that I wanted to start restoring the files as quickly as possible, so I launched the script and noticed that it was quickly filling up my drive, so I switched to another, bigger drive and after a couple of hours realized that it might not be enough.

I tried moving the recovered files off to another computer while it was recovering, in fear of stopping it again, but it only ended up clipping the poor drive.

So basically I have just two questions and one request. Would it overwrite the files if I started recovering into the same folder, or would it detect the duplicate?
And what would happen if the recovery drive ran out of space?

request:
Could it be possible to start recovering from inode X?
I still don't know how it all works, but basically if I stop the recovery for some reason, could a function be implemented where I could "resume" the recovery?

Also THANK YOU!! I got 500 GB back!
Now all I have to do is rename the files; thankfully most of them were RAR files that kept the original name inside the archive, plus some movies with subtitles, or text files that had the original name.

Reading around the net you only hear horror stories about rm -rf :)

I'm glad I decided to look further, because the first recovery tools ended up giving me corrupted files back (probably because they were made for ext3/4).

Thanks and suggestions

Thank you for a great tool. Two suggestions:
1. I changed the filename from using ctime to using mtime; unlike ext4 there is no directory name here, so mtime is the only hint of what a file is. I'd hope for an option to use either ctime or mtime as the file name, or to restore the file's mtime (the current code does not).
2. I added a size filter; many files that are not .c but were named .c recovered to more than 10 GB, which is very slow. I'd hope for an option for that, just $loffset*$::blocksize maybe.
I changed the code, but I don't know much about tclsh; I just copied many ifs to do it, so I won't open a pull request, these are just suggestions.

No recovered files are generated in the "-o dir" or "./xfs_undeleted" directory

Hello ianka

I installed the xfs_undelete tool correctly and can run the command.
xfs_undelete is located in /data01/xfs_undelete-11.0. I copied a file to /data02/file_to_delete/test.txt with a few KB of content.
The mount point "/data02" is on device "/dev/sde1".

I run the following command:

cd /data01/xfs_undelete-11.0
rm -f /data02/file_to_delete/test.txt
./xfs_undelete /dev/sde1

It shows "/dev/sde1 is currently mounted read-write. Trying to remount read-only.
Remount successful.
Starting recovery.
Done. 1 (100%))
"

I could not find the deleted file in /data01/xfs_undelete-11.0/xfs_undeleted.

If I add the "-o /data01/recovery_test" parameter, I cannot find the deleted file in "/data01/recovery_test" either.

The OS is CentOS 7.6.1810 x86_64, tcl is 8.6.10 and tcllib is 1.20; xfs_undelete is 11.0.

How do I recover files?

This project looks promising, but I'm confused about how I should recover files. It generated tons of files with the inode(?) as the filename. How can I use these files to restore my data?

agcount value

Hello,

There is a variable called "agcount" in the xfs_undelete script. The default value is 4, and it is not updated by reading the superblock. The value for my filesystem is 32; with the default value of 4, I cannot get the lost file back and the script stops too early. By the way, I got the value 32 by executing the command "xfs_info". Hope this will help improve this nice work!

Thanks
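For reference, the allocation group count can indeed be read from the superblock with xfs_db from xfsprogs (a usage sketch; replace /dev/sdX with your device):

xfs_db -r -c "sb 0" -c "p agcount" /dev/sdX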

Root privileges aren't really required for block devices

If the script is passed a block device, it tries to remount it read-only, and if it gets the error mount: only root can use "--options" option from the mount command, it reports Root privileges are required to run this command on devices.

However, it is possible to have access to a block device, for example due to being a member of the "disk" group, while not having the right to mount/remount that device, so this is unnecessarily restrictive.

This is probably not the case for most users, but it would be nice if it were supported. Perhaps, to keep the script "friendly", if opening the filesystem fails due to access being denied, the script could expand on the error to suggest that the user might need to run it as root.

This is trivial to work around by commenting out the exit.

Restored file is 4.0 EB big

Hi,

first thank you for the nice recovery tool!

I am trying to recover some files removed some hours ago, and I get several 4.0 EB files that are difficult to deal with afterwards.
The block device where I start the recovery is a 59 TB hardware RAID.
I have tried the -z option, both with txt and text/*, but I still get the 4.0 EB file.
Am I doing something wrong, or how can I get a smaller file?

Many thanks,
Richard

fails if $LANG env var is undefined

./xfs_undelete -t 2020-08-30 -o /mnt/disks/flashdrive /dev/sdc1 
Starting recovery.
no such variable805599295 ( 98%)
    (read trace on "::env(LANG)")
    invoked from within
"set lang $::env(LANG)"
    (procedure "dd" line 5)
    invoked from within
"dd if=$::fs of=$of bs=$::blocksize skip=[dict get $extents 0 skip] seek=0 count=1 conv=notrunc status=none"
    (procedure "investigateInodeBlock" line 80)
    invoked from within
"investigateInodeBlock $ag $iblock"
    (procedure "traverseInodeTree" line 40)
    invoked from within
"traverseInodeTree $ag $agi_root"
    ("for" body line 10)
    invoked from within
"for {set ag 0} {$ag<$agcount} {incr ag} {
        ## Read inode B+tree information sector of this allocation group.
        seek $fd [expr {$blocksize*$agblocks*$ag..."
    (file "./xfs_undelete" line 577)

Running on unraid (a Slackware derivative)

tcl-8.6.10
tcllib-1.20

/dev/sdc is unmounted

recovery not working on ubuntu

Hi,
I tried to use the tool on an xfs filesystem on Ubuntu Bionic with this command:
./xfs_undelete -i "" /dev/mapper/lowspeed-das
Starting recovery.
Done.
but no files are recovered; the files have the extension hdf5, which is not listed if I use the -l option.
If I use the strings command I can find some occurrences, for example:
strings -td /dev/mapper/lowspeed-das | grep hdf5
11817744 $/home/ldanciu/oqdata/calc_40597.hdf5

`Please specify a block device or an XFS filesystem image.`

hi,
I entered single-user mode using the command: # telinit 1
and
I used the following command but got the error:

[root@localhost ~]# xfs_undelete /dev/mapper/centos-root
Please specify a block device or an XFS filesystem image.

Please give me some advice.
Thanks!

In SUSE Linux Enterprise Server 15 SP3 it gives an error executing xfs_undelete


Last Sunday night (~21:30) I deleted a Linux virtual machine in Xen by mistake.
The host is a SUSE Linux Enterprise Server 15 SP3 (5.3.18-59.24, 13-Sep-2021) with Xen, and the /vm partition is in XFS format.
The virtual machine was in /vm/grpwise/ and the file was grpwise.qcow2, 321 GB in size.

I am trying to use your tool xfs_undelete, but I can't get it running.
On SLES 15 SP3 I've installed:

  • tcl 8.6.7-7.6.1
  • tk 8.6.7-3.6.3
  • graphviz 2.40.1-6.12.1; I have also run zypper install coreutils and it says it is installed (GNU Core Utils 8.29-2.12)
  • also, tcl-devel

I've copied shell.nix and xfs_undelete (the contents from your site) and made them executable with chmod 700.
I run ./xfs_undelete and I receive:
can't find package cmdline
while executing "package require cmdline"
(file "./xfs_undelete" line 10)

I read something you said about this error, but I don't know how to compile tcl 8.6 to make the modifications you mentioned...
I am not an "expert" in Linux.
Can you give me support?
What would the cost be?
Thanks,

Urgent
Paulo Sousa / [email protected] / Deltabyte


Test file not recovered, what am I doing wrong?


It should have recovered text.txt, but when I check xfs_undeleted/ it's empty.

[root@ip-10-0-10-221 xfs_undelete]# df |grep new
/dev/xvdf1       8376300 1269792   7106508  16% /newdrive

[root@ip-10-0-10-221 xfs_undelete]# echo "test" >/newdrive/text.txt
[root@ip-10-0-10-221 xfs_undelete]# cat /newdrive/text.txt
test

[root@ip-10-0-10-221 xfs_undelete]# rm -f /newdrive/text.txt
[root@ip-10-0-10-221 xfs_undelete]# umount /newdrive
[root@ip-10-0-10-221 xfs_undelete]# df |grep newdrive

[root@ip-10-0-10-221 xfs_undelete]# ./xfs_undelete /dev/xvdf1
Starting recovery.
Done.

[root@ip-10-0-10-221 xfs_undelete]# ls xfs_undeleted/
[root@ip-10-0-10-221 xfs_undelete]#

Unable to parse time range. Please put it as a range of time specs Tcl's \[clock sc...

./xfs_undelete -t 2024-01-03 -r 'zip/*' /dev/mapper/centos-home

invalid command name "lmap"
    while executing
"lmap t $times {
    if {[catch {clock scan $t} t]} {
        puts stderr "Unable to parse time range. Please put it as a range of time specs Tcl's \[clock sc..."
    (procedure "parseTimerange" line 13)
    invoked from within
"parseTimerange [dict get $::parameters t]"
    invoked from within
"set ctimes [parseTimerange [dict get $::parameters t]]"
    (file "./xfs_undelete" line 460)

It always errors out when executed.

Help me

I accidentally lost some files from my server in the .ibd format

(MySQL database)

And I'm not able to use them.

Can I hire your services to guide me in restoring these files?

invalid command name "lmap"

./xfs_undelete

invalid command name "lmap"
while executing
"lmap t $times {
if {[catch {clock scan $t} t]} {
puts stderr "Unable to parse time range. Please put it as a range of time specs Tcl's [clock sc..."
(procedure "parseTimerange" line 13)
invoked from within
"parseTimerange [dict get $::parameters t]"
invoked from within
"set ctimes [parseTimerange [dict get $::parameters t]]"
(file "./xfs_undelete" line 453)

Centos 7

Package 1:tcl-8.5.13-8.el7.x86_64

[Q] if nothing found - nothing to recover?

This is more of a usage question.

Do I understand correctly something like

./xfs_undelete -i "" -o /mounted/flashdrive /dev/sdc1

would be a catch-all recovery; if that returns nothing (or not what's expected), then the data is lost?
I moved an empty dir over a data directory that needs to be recovered, but surprisingly only 2 binary files were found; I'm fairly certain no writes were done to the filesystem after the event.
