
tail's Introduction


tail functionality in Go

nxadm/tail provides a Go library that emulates the features of the BSD tail program. The library comes with full support for truncation/move detection, as it is designed to work with log rotation tools. It works on all operating systems supported by Go: POSIX systems like Linux, *BSD and macOS, as well as MS Windows. Go 1.12 is the oldest compiler release supported.

A simple example:

// Create a tail
t, err := tail.TailFile(
	"/var/log/nginx.log", tail.Config{Follow: true, ReOpen: true})
if err != nil {
    panic(err)
}

// Print the text of each received line
for line := range t.Lines {
    fmt.Println(line.Text)
}

See API documentation.

Installing

go get github.com/nxadm/tail/...

History

This project is an active, drop-in replacement for the abandoned Go tail library at hpcloud. Next to addressing open issues/PRs of the original project, nxadm/tail continues the development by keeping up to date with the Go toolchain (e.g. go modules) and dependencies, completing the documentation, adding features and fixing bugs.

Examples

Examples, e.g. those used to debug an issue, are kept in the examples directory.

tail's People

Contributors

42wim, alexaliu, ches, davidsansome, dependabot[bot], dolmen, featalion, florindragos, funkygao, fw42, joelreymont, kokes, kouk, lukedirtwalker, masahide, miraclesu, mook-as, mschneider82, nino-k, nxadm, ober, presbrey, qleelulu, rickard-von-essen, srid, tcheneau, terratech, titanous, tsuna, xuzixx


tail's Issues

Cannot determine if a given line is complete or not

Describe the bug

There seems to be race condition that leads to partial line reads. That is, if a line write is not atomic, I can read a partial line, but since tail strips all newlines, I'll never know this happened. (My code will fail further on as it finds out said line is incomplete.)

Expected behaviour

There are two ways this can be addressed:

  1. Newlines are returned in line.Text, this would be a breaking change, not really a good solution
  2. Adding a boolean flag NewlineEnding to Line, which would tell the reader if the line.Text returned actually contained a newline at the end or not. This should be fully backwards compatible.

Here's my locally tested patch (it's just a draft): kokes@519502e
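
For illustration, a consumer built on the proposed flag could buffer partial reads until a terminated line arrives. This is a sketch only; Line.NewlineEnding is the field proposed in this issue, not part of the released API:

package main

import (
	"fmt"
	"strings"

	"github.com/nxadm/tail"
)

func main() {
	t, err := tail.TailFile("tmp.log", tail.Config{Follow: true, ReOpen: true})
	if err != nil {
		panic(err)
	}

	var buf strings.Builder
	for line := range t.Lines {
		buf.WriteString(line.Text)
		if !line.NewlineEnding { // proposed flag: the read stopped mid-line
			continue // wait for the rest of the line
		}
		fmt.Println("complete line:", buf.String())
		buf.Reset()
	}
}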

To Reproduce

Write a simple tailer:

// A CLI wrapper for this library.
package main

import (
	"fmt"
	"log"
	"os"
	"strings"

	"github.com/nxadm/tail"
)

func main() {
	if err := run(); err != nil {
		log.Fatal(err)
	}
}

func run() error {
	fmt.Printf("PID: %v\n", os.Getpid())
	logfile := os.Args[1]
	t, err := tail.TailFile(logfile, tail.Config{Follow: true, ReOpen: true})
	if err != nil {
		return err
	}

	for line := range t.Lines {
		fmt.Printf("line: %v; newline ending: %v\n", line.Text, strings.HasSuffix(line.Text, "\n"))
	}
	return nil
}


And run this against a random text file, go run repro.go tmp.log and then write to it:

echo -n "foo" >> tmp.log
echo "foo" >> tmp.log
echo "foofoo" >> tmp.log

This creates two lines, both containing foofoo, but we can't tell from the reader's point of view:

$ go run repro.go tmp.log
PID: 57959
2021/03/29 11:17:07 Waiting for tmp.log to appear...
line: foo; newline ending: false
line: foo; newline ending: false
line: foofoo; newline ending: false

System information

  • tail version v1.4.8
  • OS: macOS
  • Arch: amd64

Additional context

This was reported back in 2014 in the original repo #35 and "fixed" without tests in #40, which deadlocked the process if the tailed file didn't end with a newline. It was then reverted in this repo by merging in #126.

My proposal doesn't affect the library's behaviour, it just adds a flag to inform the user of partial rows. I wasn't entirely sure how to handle the splitting of large lines, so please review my code.

Tail from end of file

Is it currently possible to start tailing from the end of a file? Currently, it goes through all existing lines in the file and sends each one to the Lines channel, which doesn't work well for my use-case (watching for new log lines from a game server that may or may not be running). I didn't find anything obvious about this in the documentation.
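
For what it's worth, Config.Location with a SeekInfo pointing at the end of the file appears to cover this (the next issue below uses the same configuration); a minimal sketch:

package main

import (
	"fmt"
	"io"

	"github.com/nxadm/tail"
)

func main() {
	// Start at the current end of the file instead of replaying existing lines.
	t, err := tail.TailFile("server.log", tail.Config{
		Follow:   true,
		ReOpen:   true,
		Location: &tail.SeekInfo{Offset: 0, Whence: io.SeekEnd},
	})
	if err != nil {
		panic(err)
	}
	for line := range t.Lines {
		fmt.Println(line.Text)
	}
}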

[Question] The line will be truncated during kube-apiserver (another app) is writing in.

Hi, I'm trying to use nxadm/tail to parse the log file of kube-apiserver, which writes JSON lines to a log file via the lumberjack package.

But I find that a line can be truncated while kube-apiserver is still writing it, as the screenshot below shows, and json.Unmarshal then fails to convert the JSON line string to an object.

[screenshot: a truncated JSON log line]

Maybe it's because the interval at which tail reads is shorter than the time kube-apiserver needs to finish writing the line?

execute code:

func (lt *LogTail) Start() error {
	tailConfig := tail.Config{
		Follow: true,
		ReOpen: true,
		// set start location at the end of file
		Location: &tail.SeekInfo{
			Offset: 0,
			Whence: io.SeekEnd,
		},
		Logger: logrus.StandardLogger(),
	}
	t, err := tail.TailFile(lt.options.Path, tailConfig)
	if err != nil {
		return errors.Wrapf(err, "fail to tail file %q", lt.options.Path)
	}

	ctx, cancel := context.WithCancel(context.Background())
	quit := make(chan struct{})
	go func(ctx context.Context) {
		for {
			select {
			case <-ctx.Done():
				quit <- struct{}{}
				return
			case l := <-t.Lines:
				err := lt.sendLine(l.Text)
				if err != nil {
					logrus.Warnf("%+v", err)
				}
			}
		}
	}(ctx)

	stopCh := make(chan os.Signal, 1)
	signal.Notify(stopCh, syscall.SIGINT, syscall.SIGTERM)
	sigInfo := <-stopCh
	fmt.Println(sigInfo)

	cancel()
	t.Cleanup()

	<-quit
	fmt.Println("Goodbye")

	return nil
}


func (lt *LogTail) sendLine(logLine string) error {
	// batch send
	if len(lt.buffer) < 200 {
		lt.buffer = append(lt.buffer, logLine)
		return nil
	}
	var items []auditv1.Event
	for _, line := range lt.buffer {
		event := auditv1.Event{}
		if err := json.Unmarshal([]byte(line), &event); err != nil {
			logrus.Errorf("%+v", errors.Wrapf(err, "fail to decode as json format for %q", line))
			continue
		}
		items = append(items, event)
	}
	eventList := &auditv1.EventList{
		Items: items,
	}
	if err := PostAuditToReportServer(eventList); err != nil {
		return err
	}
	lt.cleanBuffer()
	return nil
}

Continuous truncate is not handled correctly

Describe the bug
Simple code:

func main() {
	if len(os.Args) != 2 {
		panic("not enough arguments")
	}

	filename := os.Args[1]

	t, err := tail.TailFile(filename, tail.Config{Follow: true, ReOpen: true})
	if err != nil {
		panic(err)
	}

	for line := range t.Lines {
		fmt.Println(line.Text)
	}
}

When used in conjunction with continuous truncation (>) as opposed to appending (>>), the tailer only reacts to the first event and ignores all following ones. Simple test case:

$ while true; do echo "$(date)" > watched.log; sleep 1; done

This truncates the file every second before writing a single line into it.

Expected behaviour
The code correctly detects the truncation events and re-opens the file accordingly. So the output would look something like this:

Mon Dec 18 04:23:44 PM UTC 2023
2023/12/18 16:23:45 Re-opening truncated file watched.log ...
2023/12/18 16:23:45 Successfully reopened truncated watched.log
Mon Dec 18 04:23:45 PM UTC 2023
2023/12/18 16:23:46 Re-opening truncated file watched.log ...
2023/12/18 16:23:46 Successfully reopened truncated watched.log
Mon Dec 18 04:23:46 PM UTC 2023
...

Actual behaviour
It detects the first event and reads a single line after that, ignoring everything else:

2023/12/18 16:31:45 Waiting for watched.log to appear...
Mon Dec 18 04:31:48 PM UTC 2023

To Reproduce
See steps above. I've tried with Poll:true to no avail.

System information

  • 1.4.11
  • Fedora 37
  • amd64


package not in GOROOT

Describe the bug
log-parser.go:7:2: package nxadm/tail is not in GOROOT (/usr/local/go/src/nxadm/tail)

Expected behaviour
log-parser.go runs like the wind

To Reproduce

go get github.com/nxadm/tail/...

(import library in logparser.go)

go run logparser.go

System information

  • tail version 8.32
  • WSL ubuntu
  • amd64

Additional context
Hey, I'm just trying to install this library. I do have other libraries installed with go get that import correctly. I'm wondering if I'm missing something I should know before I dig into my Go environment. Thanks!
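
The "not in GOROOT" error usually means the build is not running in module mode, so the import cannot be resolved. A minimal sketch of a modules-based setup (the directory and module path below are just examples):

cd path/to/project                  # the directory containing logparser.go
go mod init example.com/logparser   # illustrative module path
go get github.com/nxadm/tail
go run logparser.go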

Overview of reviewed/applied open Pull Requests of the dormant upstream project by nxadm/tail

Because of the many changes between the sending of the original PRs and now, most cannot be applied as is. The changes are cherry-picked and their authors are credited in the commit message.

Reviewed PRs:

[x] hpcloud/tail#92: rejected (use case not completely clear; can be merged later).
[x] hpcloud/tail#108: rejected (implements a similar functionality as hpcloud#149 + external dependency).
[x] hpcloud/tail#114: merged.
[x] hpcloud/tail#120: merged.
[x] hpcloud/tail#125: rejected (PR for vendored library, not for tail).
[x] hpcloud/tail#126: merged.
[x] hpcloud/tail#128: merged.
[x] hpcloud/tail#130: merged.
[x] hpcloud/tail#131: merged.
[x] hpcloud/tail#133: merged.
[x] hpcloud/tail#149: merged.
[x] hpcloud/tail#153: merged.
[x] hpcloud/tail#159: NA (already implemented).
[x] hpcloud/tail#162: merged.

Add error message to logs

Describe the bug

When errors occur, it is important to get the error message logged out. Otherwise it can get very annoying to debug. I have for example hit the following error:

util.Fatal("failed to create Watcher")

I don't know why fsnotify.NewWatcher() would fail, but since the error message isn't included in the error log, it is almost impossible to debug.

Expected behaviour

I would expect the error message to be part of the logs, using something like the following:

util.Fatal(fmt.Sprintf("failed to create Watcher: %v", err))

Additional context

I actually came onto this error by using https://github.com/hpcloud/tail which is used in https://github.com/kubeflow/katib.

Don't tail for repeatedly opened file.

Hi,
I have an issue when trying to tail a file which was tailed before: the tail.Lines channel doesn't return anything.

Steps to reproduce.

  1. open and tail file A with tail.TailFile()
  2. receive some lines using the Lines channel.
  3. close the file with .Stop() and .Cleanup().
    ... everything works as intended.
  4. open and tail file B with tail.TailFile()
  5. receive some lines using the Lines channel.
  6. close the file with .Stop() and .Cleanup().
    ... everything works as intended.
  7. repeat tailing for file A, opening it with tail.TailFile()
  8. the attempt to receive lines using the Lines channel does not work. No errors, no panics; select just silently waits for new lines.

If file A already contains some content before the second attempt, those lines are returned, but new lines appended later are not returned from the channel.
Subsequent attempts to tail other, new files work correctly.

The use case is tailing PostgreSQL logs, which can be configured with a "-%a" (weekday) suffix in the file name. After one week the program has to return to tailing a previously tailed (and truncated) file.

inotify_tracker logs fatal message without including error

I have been using this library in production for years now and have recently run into more and more scenarios
in which apps are crashing because watchers cannot be created.

Describe the bug
When a watcher cannot be created, the application crashes:

watcher, err := fsnotify.NewWatcher()
if err != nil {
  util.Fatal("failed to create Watcher")
}


Expected behaviour
I expect this operation to either return the error value or at least include the error in the fatal log message.

To Reproduce
You would have to create a system state in which fsnotify.NewWatcher() returns an error.
I can't point out the error that caused it in my case, since it is not logged by the library.
This error occurs rarely and only in some of my containers; the majority work as expected.

System information

  • tail version: latest
  • OS: alpine in Docker
  • Arch: amd64

Overview of reviewed/fixed/applied issues/PRs of the abandoned upstream (fixed on nxadm/tail)

This repository rebooted the development of tail, and we mainly focus on issues and PRs filed against nxadm/tail. However, we try to follow and fix "historical" issues and PRs from the dormant repository. Because of the many changes between the frozen state of hpcloud/tail and the new commits on this repo, many issues have already been fixed and only need verification or a test case. This also means that upstream PRs can no longer be applied as is. The changes are cherry-picked and their authors are credited in the commit message where possible. Feel free to amend changes or open a new PR if you're an author.

StartTail followed by append to file is race-prone

If you StartTail(file_x), following from the end, and then append "hello\n" to file_x, there's a race that makes it uncertain whether your appended line will come out of the tail operation or not; the StartTail operation seeks to the end of the file in its own goroutine and provides no signal that it has done so, so there's no way to reliably force the ordering of the operations.

#48 adds a config option "SyncOpen" that allows StartTail to perform any requested seek synchronously before it returns, thereby guaranteeing an order of operations.

read part message

When log QPS is high, the message read is sometimes not a complete line but only part of one.

noTomb: useless fmt.Errorf calls

While looking at the noTomb branch, I noticed that several of the old tomb.Kill calls in tail.tailFileSync were replaced with fmt.Errorf calls whose results are discarded, which doesn't actually do anything.

For example:

In tail/tail.go, line 244 (commit 146fe55):

fmt.Errorf("Seek error on %s: %s", tail.Filename, err)

Tail non-existent file: Lines channel closes when MustExist=false.

Describe the bug
Tailing files that do not exist causes the Go channel to close almost immediately. I've also noticed that the channel closes (sometimes) when a file rotates; I think that happens when the file does not get re-created right away. I hope I'm just missing something and there's a workaround to this issue, or maybe some obvious tactic I'm not seeing.

Also, calling .Stop() on a closed tail produces an error (on windows) that indicates it's trying to open the file:

Unable to open file C:\ProgramData\file.path: The process cannot access the file because it is being used by another process.

Maybe this is expected, and I'm going to avoid calling stop when the channel is closed, but my first iteration did not and I noticed this. Wanted to share.

Expected behavior
It's not clear why the go channel closes. Once it's closed, it cannot be reused. I would expect the channel not to be closed until .Stop() is called.

To Reproduce
Try to tail a file that does not exist. The channel closes, and when you create the file there is no channel to read lines from.

package main

import (
    "log"
    "os"

    "github.com/nxadm/tail"
)

func main() {
    log := log.New(os.Stdout, "", 0)

    t, err := tail.TailFile("/tmp/does.not.exist", tail.Config{
        Follow: true,
        ReOpen: true,
        Logger: log,
    })
    if err != nil {
        log.Println(err)
    }
    defer func() {
        if err := t.Stop(); err != nil {
            log.Println(err)
        }
    }()

    for line := range t.Lines {
        log.Println(line.Text)
    }

    log.Println("channel closed")
}
// Output:
// Waiting for /tmp/does.not.exist to appear...
// channel closed
// Failed to detect creation of /tmp/does.not.exist: bad file descriptor

(the last line is likely OS-dependent; the above error is from macOS x86)

System information
Happens on macOS (arm & x86) and Windows (x86) using v1.4.8.

Additional context
🤞

The message in the chan is incomplete.

Describe the bug
For example, if a log line is "abcd",
sometimes it can only collect "abc".
This is my code:

tails, err := tail.TailFile(fileName, tail.Config{
	ReOpen:    true,
	Follow:    true,
	Location:  &tail.SeekInfo{Offset: 0, Whence: flag.Whence},
	MustExist: false,
	Poll:      false,
})

Expected behaviour

To Reproduce
Sorry, I have used this to collect over a million log entries and only a few hundred of them were affected, so it is difficult to reproduce.

System information

  • tail version [v1.4.8]
  • OS: centos
  • Arch: [e.g. arm64]

StopAtEOF is racy


Describe the bug
The StopAtEOF function can sometimes stop tailing the file before we actually encounter an EOF. In particular, when using the polling function for checking for file changes, we might select the tail.Dying channel first, before the poller gets another chance to notify us of new changes, which I believe then makes us return early, even though we should probably be waiting for another poll cycle.

Expected behaviour
The library checks again for new changes in the file after StopAtEOF is reached.

To Reproduce
This is a bit awkward to reproduce since it's based on raciness. However, tailing a file that's being written to at short intervals and then calling StopAtEOF might not return all data before exiting.

Additional context
I'd be happy to contribute a PR for this, but probably won't have the time to do so before next week.

[new major release] Test and release the noTomb branch

Branch: https://github.com/nxadm/tail/tree/noTomb

Tail depends on Tomb v1, a version of that library not updated in the last 5 years. The next version of Tomb, v2, is not API compatible with v1 and hasn't been updated for 2 years. A better solution is to use the standard goroutine cancellation workflow built around context.Context. This branch implements that workflow and passes all the tests. However, because the API of Tomb v1 is very different from that of context.Context, the changes in this branch need to be well tested in order to make sure no threading issues occur.

This branch also removes internal packages that resulted in many exported functions and structs. Most of these, probably all, were only useful within tail, and their public nature was just a consequence of being in a package. These methods were removed from the public API. Also, the OpenFile function was removed from the API as it doesn't do anything special and is not related to tailing. The documentation was updated and public functions and structs were documented. Programs that use the "useful" API of previous releases will not be impacted:

  • the variables DefaultLogger and DiscardingLogger.
  • the structs Config, Line, SeekInfo and Tail.
  • the functions TailFile, Tell, Stop and StopAtEOF.

This branch, once ready, will be released as a major release, v2.
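
As a rough illustration of the direction described above (not the branch's actual code), the tomb-style lifecycle maps onto a context.Context plus a channel that is closed when the worker exits:

package main

import (
	"context"
	"fmt"
	"time"
)

// worker loops until its context is cancelled, which is roughly the role
// tomb's Dying/Done machinery played.
func worker(ctx context.Context, lines chan<- string) {
	defer close(lines)
	ticker := time.NewTicker(100 * time.Millisecond)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return // cancellation replaces tomb.Kill/tomb.Dying
		case t := <-ticker.C:
			lines <- t.Format(time.RFC3339Nano)
		}
	}
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	lines := make(chan string)
	go worker(ctx, lines)

	// Stop the worker after half a second; the range loop below ends when
	// the worker closes the channel on its way out.
	go func() {
		time.Sleep(500 * time.Millisecond)
		cancel()
	}()

	for l := range lines {
		fmt.Println(l)
	}
}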

Concurrent usage of tail results in unexpected behavior

Describe the bug
If two (or more) goroutines are tailing the same file and one of them stops, it impacts the other goroutine(s).

Expected behaviour
Other goroutines should keep on tailing the file.

To Reproduce
See this test.

It runs three goroutines, one will continuously write to a file every 500 milliseconds, the other two will each tail the same file. The writing goroutine runs for 5 seconds, the first tailing goroutine runs for 2 seconds and the second tailing goroutine runs for 4 seconds.

The expectation is that the second goroutine would keep on tailing the file after the first one stops, but that is not the case.

System information

  • tail version 1.4.8
  • OS: macOS Monterey 12.3.1
  • Arch: amd64

Additional context
The issue is (presumably) that both goroutines use the same underlying watcher, and after one stops the watcher is removed.

Interest in byte-level tailing?

Hi - I made use of a fork of the original hpcloud/tail repo in termshark to tail a pcap (packet capture) file. The output from the tail process is fed into some tshark (CLI tool) processes which provide data for termshark's UI. Because pcaps are a binary format, I couldn't rely on a line-based tail to provide timely output for tshark's input, so I made some cheesy modifications to hpcloud/tail to tail in "chunks" of bytes. These chunks are not necessarily line-terminated, and possibly even 1-byte long if the pcap file being tailed grows very slowly. Would you be interested in these sorts of changes in nxadm/tail? If so, I could make a PR - and if you find it acceptable, I'd then switch termshark over to use nxadm/tail. Thanks!

Line provide Seekinfo

Hi, thanks for the fork.

I looked into the godoc; shouldn't a Line{} include the SeekInfo so the line receiver can know where to seek on an application restart?

Polling Tell() is not elegant. What do you think?
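
Until something like that exists, a workaround (sketch only; how the offset is persisted between runs is left out and up to the application) is to call Tell() while consuming lines and feed the saved offset back via Config.Location on restart:

package main

import (
	"fmt"
	"io"

	"github.com/nxadm/tail"
)

// lastOffset stands in for whatever durable storage the application uses to
// remember its position between restarts.
var lastOffset int64

func main() {
	t, err := tail.TailFile("/var/log/app.log", tail.Config{
		Follow: true,
		ReOpen: true,
		// Resume from the previously saved position.
		Location: &tail.SeekInfo{Offset: lastOffset, Whence: io.SeekStart},
	})
	if err != nil {
		panic(err)
	}

	for line := range t.Lines {
		fmt.Println(line.Text)
		if off, err := t.Tell(); err == nil {
			lastOffset = off // persist this somewhere durable in a real program
		}
	}
}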

Checksum on sum.golang.org does not match

It seems the hash present on https://sum.golang.org/ does not match the module:

go: finding github.com/nxadm/tail v1.4.1
go: downloading github.com/nxadm/tail v1.4.1
verifying github.com/nxadm/tail@v1.4.1: checksum mismatch
	downloaded: h1:Vw207joT1RO2/h2d+lr3BdpTt1LVvBX4sjdiK5Zitu4=
	go.sum:     h1:IvsH0xPk6Z+ORn/PoTiBbD2Ig7241sucVgFy4S/5nas=

SECURITY ERROR
This download does NOT match an earlier download recorded in go.sum.
The bits may have been replaced on the origin server, or an attacker may
have intercepted the download attempt.

Was there any forced tag change?

Dealing with utf16 string

I used tail.TailFile and t.Lines happily, until one day I had a UTF-16LE encoded file.
Since the Go string type assumes UTF-8, it cannot represent a UTF-16 line correctly.

It would be nice if a tail.Line could return a []byte. What do you think?
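
If Line (or a byte-oriented variant) exposed the raw bytes, the decode could be done with golang.org/x/text (an external dependency; the sketch below is only an illustration, and line splitting itself would also have to become UTF-16 aware):

package main

import (
	"fmt"

	"golang.org/x/text/encoding/unicode"
)

func main() {
	// "hi\n" encoded as UTF-16LE, as it might appear in the tailed file.
	raw := []byte{0x68, 0x00, 0x69, 0x00, 0x0a, 0x00}

	dec := unicode.UTF16(unicode.LittleEndian, unicode.UseBOM).NewDecoder()
	utf8Bytes, err := dec.Bytes(raw)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%q\n", string(utf8Bytes)) // "hi\n"
}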

File unexpectedly closes during tailing process

Describe the bug
We have a file that we're tailing where we're also interested in the file offset location. However, when we do t.Tell(), we sometimes run into the error "err":"seek <filename>: file already closed".

Code:

	t, err := tail.TailFile(
		path,
		tail.Config{
			Follow: true,
			ReOpen: true,
		})

	for line := range t.Lines {
		// Fetch the current offset in the tailed file
		offset, err := t.Tell()
		// consume data
	}

Has anyone else seen a similar issue with the file being closed? We've worked around it by adding a small wait and a retry around the t.Tell() call, and that resolved the issue, but we'd like to understand whether this is a bug in the underlying tailing code that prematurely closes files.
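
For reference, the retry described above could look roughly like this (retry count, delay and the helper name are arbitrary choices of ours, not part of the library):

package logtail

import (
	"time"

	"github.com/nxadm/tail"
)

// tellWithRetry retries Tell() a few times with a short pause, papering over
// the window in which the underlying file is momentarily closed.
func tellWithRetry(t *tail.Tail) (int64, error) {
	var offset int64
	var err error
	for i := 0; i < 3; i++ {
		offset, err = t.Tell()
		if err == nil {
			return offset, nil
		}
		time.Sleep(50 * time.Millisecond)
	}
	return offset, err
}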

System information

  • tail version: v1.4.8
  • OS: Ubuntu: 18.04

Inotify backend does not correctly handle one file being watched twice

If you start tailing the same file twice (by calling TailFile and supplying an identical filename), and are using the inotify backend, then only one of the Tail objects is notified when a line is appended to the file. Which one of the two objects gets notified is chosen at random, and the one that does then catches up on the lines it slept through during previous appends.

I would expect both of the objects to be notified at each append.

StopAtEOF does not function as expected.

Hello,

Maybe I misunderstand the purpose of the StopAtEOF function, but my expectation is that the file would continue to be read until the end of the file is reached after the function is called.

The current behavior is: the tailer will continue to go through each line, but the lines are never sent to the Lines channel because of this select case:

In tail/tail.go, line 432 (commit 6abd9f9):

case <-tail.Dying():

Support `tail -n`


Is your feature request related to a problem? Please describe.

Support a `tail -n server.log`-style feature: do not watch the file, just return the last N lines.
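
The library does not offer this today; as a rough sketch of what the request amounts to (plain Go, not part of this package; fine for modest files, a real tail -n would read backwards from the end instead of loading the whole file):

package main

import (
	"fmt"
	"os"
	"strings"
)

// lastN returns the last n lines of the named file by reading it whole.
func lastN(path string, n int) ([]string, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	if len(lines) > n {
		lines = lines[len(lines)-n:]
	}
	return lines, nil
}

func main() {
	lines, err := lastN("server.log", 10)
	if err != nil {
		panic(err)
	}
	for _, l := range lines {
		fmt.Println(l)
	}
}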
