nxadm/tail provides a Go library that emulates the features of the BSD tail
program. The library comes with full support for truncation/move detection, as
it is designed to work with log rotation tools. It works on all operating
systems supported by Go: POSIX systems such as Linux, *BSD and macOS, as well
as MS Windows. Go 1.12 is the oldest supported compiler release.
A simple example:

// Create a tail
t, err := tail.TailFile(
	"/var/log/nginx.log", tail.Config{Follow: true, ReOpen: true})
if err != nil {
	panic(err)
}

// Print the text of each received line
for line := range t.Lines {
	fmt.Println(line.Text)
}
This project is an active, drop-in replacement for the abandoned Go tail
library at hpcloud. In addition to addressing the open issues and PRs of the
original project, nxadm/tail continues development by keeping up to date with
the Go toolchain (e.g. Go modules) and dependencies, completing the
documentation, adding features and fixing bugs.
Examples
Examples, e.g. used to debug an issue, are kept in the examples directory.
There seems to be a race condition that leads to partial line reads. That is, if a line write is not atomic, I can read a partial line; but since tail strips all newlines, I'll never know this happened. (My code will fail further on when it finds out said line is incomplete.)
Expected behaviour
There are two ways this can be addressed:
Returning newlines in line.Text; this would be a breaking change and not really a good solution.
Adding a boolean flag NewlineEnding to Line, which would tell the reader whether the returned line.Text actually ended with a newline. This should be fully backwards compatible.
Here's my locally tested patch (it's just a draft): kokes@519502e
The repro creates two lines, both containing foofoo, but we can't tell that from the reader's point of view:
$ go run repro.go tmp.log
PID: 57959
2021/03/29 11:17:07 Waiting for tmp.log to appear...
line: foo; newline ending: false
line: foo; newline ending: false
line: foofoo; newline ending: false
System information
tail version v1.4.8
OS: macOS
Arch: amd64
Additional context
This was reported back in 2014 in the original repo #35 and "fixed" without tests in #40, which deadlocked the process if the tailed file didn't end with a newline. It was then reverted in this repo by merging in #126.
My proposal doesn't affect the library's behaviour; it just adds a flag to inform the user of partial rows. I wasn't sure how to handle the splitting of large lines, so please review my code.
Is it currently possible to start tailing from the end of a file? Currently, it goes through all existing lines in the file and sends each one to the Lines channel, which doesn't work well for my use-case (watching for new log lines from a game server that may or may not be running). I didn't find anything obvious about this in the documentation.
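For reference, the Config struct does take a Location field (a tail.SeekInfo) that controls the starting position. A minimal sketch, assuming the current nxadm/tail API, that starts at the end of the file so only newly appended lines are delivered:

```go
package main

import (
	"fmt"
	"io"

	"github.com/nxadm/tail"
)

func main() {
	// Seek to the current end of the file before tailing, so
	// pre-existing lines are skipped and only new lines arrive.
	t, err := tail.TailFile("server.log", tail.Config{
		Follow:   true,
		ReOpen:   true,
		Location: &tail.SeekInfo{Offset: 0, Whence: io.SeekEnd},
	})
	if err != nil {
		panic(err)
	}
	for line := range t.Lines {
		fmt.Println(line.Text)
	}
}
```

Note the caveat from issue #48 below: the seek happens in the tailing goroutine, so lines appended immediately after TailFile returns may or may not be included.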
Hi, I'm trying to use nxadm/tail to parse the log file of kube-apiserver, which writes JSON lines to a log file via the lumberjack package.
However, I find that a line can be truncated while kube-apiserver is still writing it (as the attached image shows), which causes json.Unmarshal to fail when converting the JSON line to an object.
Could it be that the interval at which tail reads is shorter than the time kube-apiserver takes to write a line?
When used in conjunction with continuous truncation (>) as opposed to appending (>>), tail only reacts to the first event and ignores all following ones. Simple test case:
$ while true; do echo "$(date)" > watched.log; sleep 1; done
This truncates the file every second, right before writing a single line into it.
Expected behaviour
The code correctly detects the truncation events and re-opens the file accordingly. So the output would look something like this:
Mon Dec 18 04:23:44 PM UTC 2023
2023/12/18 16:23:45 Re-opening truncated file watched.log ...
2023/12/18 16:23:45 Successfully reopened truncated watched.log
Mon Dec 18 04:23:45 PM UTC 2023
2023/12/18 16:23:46 Re-opening truncated file watched.log ...
2023/12/18 16:23:46 Successfully reopened truncated watched.log
Mon Dec 18 04:23:46 PM UTC 2023
...
Actual behaviour
It detects the first event and reads a single line after that, ignoring everything else:
2023/12/18 16:31:45 Waiting for watched.log to appear...
Mon Dec 18 04:31:48 PM UTC 2023
To Reproduce
See steps above. I've tried with Poll:true to no avail.
Describe the bug
log-parser.go:7:2: package nxadm/tail is not in GOROOT (/usr/local/go/src/nxadm/tail)
Expected behaviour
log-parser.go runs like the wind
To Reproduce
go get github.com/nxadm/tail/...
(import library in logparser.go)
go run logparser.go
System information
tail version 8.32
WSL ubuntu
amd64
Additional context
Hey just trying to install this library, I do have other libraries installed with go get that import correctly. I'm wondering if I'm missing something I should know before I dig into my Go environment, thanks!
Because of the many changes between the sending of the original PRs and now, most cannot be applied as is. The changes are cherry-picked and their authors are credited in the commit message.
When errors occur, it is important that the error message gets logged; otherwise debugging can get very annoying. For example, I have hit the following error:
Hi,
I have an issue when I try to tail a file that was tailed before: the tail.Lines channel doesn't return anything.
Steps to reproduce:
open and tail file A with tail.TailFile()
receive some lines using the Lines channel
close the file with .Stop() and .Cleanup()
... everything works as intended.
open and tail file B with tail.TailFile()
receive some lines using the Lines channel
close the file with .Stop() and .Cleanup()
... everything works as intended.
repeat tailing for file A: open it with tail.TailFile()
attempting to receive lines using the Lines channel does not work. No errors, no panics; select just silently waits for new lines.
If file A contains some content before attempt 2, those lines are returned, but new lines appended later are never delivered on the channel.
Subsequent attempts to tail other new files work correctly.
The use case is tailing PostgreSQL logs, which can be configured with a "-%a" suffix (a weekday), meaning that after one week the program has to return to tailing a previously tailed (and truncated) file.
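The steps above can be sketched as follows (a.log and b.log are placeholder file names standing in for files A and B; this uses only the library's public TailFile/Stop/Cleanup API):

```go
package main

import (
	"fmt"

	"github.com/nxadm/tail"
)

// tailOnce tails the given file until n lines are received, then stops
// and cleans up, mirroring one open/receive/close cycle from the report.
func tailOnce(path string, n int) error {
	t, err := tail.TailFile(path, tail.Config{Follow: true})
	if err != nil {
		return err
	}
	for i := 0; i < n; i++ {
		line := <-t.Lines
		fmt.Println(line.Text)
	}
	if err := t.Stop(); err != nil {
		return err
	}
	t.Cleanup()
	return nil
}

func main() {
	tailOnce("a.log", 2) // works
	tailOnce("b.log", 2) // works
	tailOnce("a.log", 2) // reportedly hangs: no new lines are delivered
}
```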
I have been using this library in production for years and have recently run
into more and more scenarios in which apps crash because watchers cannot be created.
Describe the bug
When a watcher can not be created the application crashes
watcher, err := fsnotify.NewWatcher()
if err != nil {
	util.Fatal("failed to create Watcher")
}
Expected behaviour
I expect this operation to either return the error value or at least include the error in the fatal log message.
To Reproduce
You would have to create a system state in which fsnotify.NewWatcher() returns an error.
I can't point out the error that caused it in my case, since it is not logged by the library.
This error occurs rarely in some of my containers, the majority work as expected.
This repository rebooted the development of tail, and we mainly focus on issues and PRs filed against nxadm/tail. However, we try to follow and fix "historical" issues and PRs from the dormant repository. Because of the many changes between the frozen state of hpcloud/tail and the new commits in this repo, many issues have already been fixed and only need verification or a test case. This also means that upstream PRs can no longer be applied as is. Instead, the changes are cherry-picked, with their authors credited in the commit message where possible. Feel free to amend changes or open a new PR if you're an author.
hpcloud/tail#13: NA (request for enhancement: gz files are not tailed, but archived; it may be considered later if tail is used as a replacement for zcat).
hpcloud/tail#31: fixed by upgrade to a recent fsnotify (from v1.2.1 to latest, 1.4.9 at the moment), can no longer reproduce.
hpcloud/tail#34: fixed by upgrade to a recent fsnotify (from v1.2.1 to latest, 1.4.9 at the moment), tests now pass on MS Windows. Windows support warning removed from Readme.
hpcloud/tail#151: fixed (nxadm/tail uses go modules and more recent dependencies), tested under qemu (Linux 4da9abb41a67 5.8.0-40-generic #45-Ubuntu SMP Fri Jan 15 11:05:36 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux).
hpcloud/tail#161: fixed (addressed by merging PR hpcloud#162 and other changes; downstream projects like greenplum-db/gpbackup, owncloud/ocis and asim/go-micro build on Solaris, illumos and DragonFly).
If you StartTail(file_x) following from end, and then append "hello\n" to file_x, there's a race that makes it uncertain whether your appended line will come out of the tail operation or not; the StartTail operation seeks the end of the file in its own goroutine, and provides no signal that it has done so, so there's no way to reliably force the ordering of the operations.
#48 adds a config option "SyncOpen" that allows StartTail to perform any requested seek synchronously before it returns, thereby guaranteeing an order of operations.
While looking at the noTomb branch, I noticed that several of the old tomb.Kill calls in tail.tailFileSync were replaced with fmt.Errorf calls whose results are never received, which means they don't actually do anything.
Describe the bug
Tailing files that do not exist causes the Go channel to close almost immediately. I've also noticed that the channel (sometimes) closes when a file rotates; I think that happens when the file does not get re-created right away. I hope I'm just missing something and there's a workaround to this issue, or maybe some obvious tactic I'm not seeing.
Also, calling .Stop() on a closed tail produces an error (on Windows) that indicates it's trying to open the file:
Unable to open file C:\ProgramData\file.path: The process cannot access the file because it is being used by another process.
Maybe this is expected, and I'm going to avoid calling stop when the channel is closed, but my first iteration did not and I noticed this. Wanted to share.
Expected behavior
It's not clear why the Go channel closes. Once it's closed, it cannot be reused. I would expect the channel not to be closed until .Stop() is called.
To Reproduce
Try to tail a file that does not exist. The channel closes, and when you create the file there is no channel to read lines from.
package main

import (
	"log"
	"os"

	"github.com/nxadm/tail"
)

func main() {
	log := log.New(os.Stdout, "", 0)
	t, err := tail.TailFile("/tmp/does.not.exist", tail.Config{
		Follow: true,
		ReOpen: true,
		Logger: log,
	})
	if err != nil {
		log.Println(err)
	}
	defer func() {
		if err := t.Stop(); err != nil {
			log.Println(err)
		}
	}()
	for line := range t.Lines {
		log.Println(line.Text)
	}
	log.Println("channel closed")
}

// Output:
// Waiting for /tmp/does.not.exist to appear...
// channel closed
// Failed to detect creation of /tmp/does.not.exist: bad file descriptor
(the last line is likely OS-dependent; the above error is from macOS x86)
System information
Happens on macOS (ARM and x86) and Windows (x86) using v1.4.8.
If you sent an issue/PR to hpcloud, you can have a look at this meta issue tracking issues
and PRs of the dormant upstream. However, as the code bases diverge the
issue may have been solved already.
Describe the bug
The StopAtEOF function can sometimes stop tailing the file before we actually encounter an EOF. In particular, when using the polling function for checking for file changes, we might select this tail.Dying channel here first before the poller gets another chance to notify us of new changes, which I believe makes us then return here, even though we should probably be waiting for another poll cycle.
Expected behaviour
The library checks again for new changes in the file after StopAtEOF is called.
To Reproduce
This is a bit awkward to reproduce, since it's based on raciness. However, tailing a file that's being written to in short intervals and then calling StopAtEOF might not return all data before exiting.
Additional context
I'd be happy to contribute a PR for this, but probably won't have the time to do so before next week.
Tail depends on Tomb v1, a version of the library that has not been updated in the last 5 years. The next version of Tomb, v2, is not API-compatible with v1 and hasn't been updated for 2 years. A better solution is using the core goroutine workflow around context.Context. This branch implements this workflow and passes all the tests. However, because the API of Tomb v1 is very different from that of context.Context, the changes in this branch need to be well tested in order to make sure no threading issues occur.
This branch also removes internal packages that resulted in many exported functions and structs. Most of these, probably all, were only useful within tail, and their public nature was just a consequence of being in a package. These methods were removed from the public API. The OpenFile function was also removed from the API, as it doesn't do anything special and is not related to tailing. The documentation was updated, and public functions and structs were documented. Programs that use the "useful" API of previous releases will not be impacted:
the variables DefaultLogger and DiscardingLogger.
the structs Config, Line, SeekInfo and Tail.
the functions TailFile, Tell, Stop and StopAtEOF.
This branch, once ready, will be released as a major release, v2.
It runs three goroutines: one continuously writes to a file every 500 milliseconds, and the other two each tail the same file. The writing goroutine runs for 5 seconds, the first tailing goroutine runs for 2 seconds and the second tailing goroutine runs for 4 seconds.
The expectation is that the second goroutine would keep on tailing the file after the first one stops, but that is not the case.
System information
tail version 1.4.8
OS: macOS Monterey 12.3.1
Arch: amd64
Additional context
The issue is (presumably) that both goroutines use the same underlying watcher and after one stops the watcher is removed.
Hi - I made use of a fork of the original hpcloud/tail repo in termshark to tail a pcap (packet capture) file. The output from the tail process is fed into some tshark (CLI tool) processes which provide data for termshark's UI. Because pcaps are a binary format, I couldn't rely on a line-based tail to provide timely output for tshark's input, so I made some cheesy modifications to hpcloud/tail to tail in "chunks" of bytes. These chunks are not necessarily line-terminated, and possibly even 1-byte long if the pcap file being tailed grows very slowly. Would you be interested in these sorts of changes in nxadm/tail? If so, I could make a PR - and if you find it acceptable, I'd then switch termshark over to use nxadm/tail. Thanks!
go: finding github.com/nxadm/tail v1.4.1
go: downloading github.com/nxadm/tail v1.4.1
verifying github.com/nxadm/tail@v1.4.1: checksum mismatch
downloaded: h1:Vw207joT1RO2/h2d+lr3BdpTt1LVvBX4sjdiK5Zitu4=
go.sum: h1:IvsH0xPk6Z+ORn/PoTiBbD2Ig7241sucVgFy4S/5nas=
SECURITY ERROR
This download does NOT match an earlier download recorded in go.sum.
The bits may have been replaced on the origin server, or an attacker may
have intercepted the download attempt.
I used tail.TailFile and t.Lines happily, until one day I had a UTF-16LE encoded file.
Since Go's string type assumes UTF-8, it cannot carry a UTF-16 byte stream correctly.
It would be nice if a tail.Line could return a []byte. What do you think?
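If Line exposed raw bytes, callers could decode UTF-16LE themselves with the standard library. A sketch of such a decoder (BOM and surrogate-error handling omitted; decodeUTF16LE is a hypothetical helper, not part of the library):

```go
package main

import (
	"encoding/binary"
	"fmt"
	"unicode/utf16"
)

// decodeUTF16LE converts raw UTF-16 little-endian bytes into a Go
// (UTF-8) string. A trailing odd byte, if any, is ignored.
func decodeUTF16LE(b []byte) string {
	u := make([]uint16, 0, len(b)/2)
	for i := 0; i+1 < len(b); i += 2 {
		u = append(u, binary.LittleEndian.Uint16(b[i:i+2]))
	}
	return string(utf16.Decode(u))
}

func main() {
	// "hi" encoded as UTF-16LE: 0x0068 0x0069 in little-endian byte order.
	fmt.Println(decodeUTF16LE([]byte{0x68, 0x00, 0x69, 0x00})) // hi
}
```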
Describe the bug
We have a file that we're tailing where we're also interested in the file offset. However, when we call t.Tell(), we sometimes run into the error "err":"seek <filename>: file already closed"
Code:
t, err := tail.TailFile(
path,
tail.Config{
Follow: true,
ReOpen: true,
})
for line := range t.Lines {
// Fetch the current offset in the tailed file
offset, err := t.Tell()
// consume data
}
Has anyone else seen a similar issue with the file being closed? We've worked around it by adding a small wait and a retry around the t.Tell() call, which resolved the issue, but we'd like to understand whether this is a bug in the underlying tailing code that prematurely closes files.
If you start tailing the same file twice (by calling TailFile with an identical filename) and are using the inotify backend, only one of the Tail objects is notified when a line is appended to the file. Which of the two objects gets notified is chosen at random, and the one that does then catches up on the lines it slept through during previous appends.
I would expect both objects to be notified on each append.
Maybe I misunderstand the purpose of the StopAtEOF function, but my expectation is that the file would continue to be read until the end of the file is reached after the function is called.
The current behavior is that the tailer continues to go through each line, but the lines are never sent to the Lines channel because of this case switch:
Is your feature request related to a problem? Please describe.
Support a tail -n server.log feature: do not watch the file, just return the last N lines.