
go-deadlock's People

Contributors

dunglas, h3n4l, jrajahalme, kanocz, millfort, ncw, orisano, ryanking, santosh653, sasha-s, smileusd, tamird, zheaoli


go-deadlock's Issues

detect misuse of sync.Pool

I can see that sync.Pool is simply wrapped in this package. Is there any plan to detect invalid references to objects after they have been put into sync.Pool?

something like:

    type A struct {
        a int
    }

    var pool deadlock.Pool
    p := &A{}
    pool.Put(p)
    fmt.Printf("%v\n", p) // should report a race / invalid reference

Please tag a new release?

Hi,

It looks like there have been many commits since the last release; could you tag a new one, please?

Thanks,
Jim

tests fail on linux/x86_64 with go 1.9

When running "go test" with go 1.9 (linux/x86_64, other architectures seem to work), the following failure occurs:

+ go test -compiler gc -ldflags '' github.com/sasha-s/go-deadlock
POTENTIAL DEADLOCK: Inconsistent locking. saw this ordering in one goroutine:
happened before
deadlock_test.go:44 go-deadlock.TestNoDeadlocks.func2.1 { a.RLock() } <<<<<
deadlock_test.go:55 go-deadlock.TestNoDeadlocks.func2 { }() }
happened after
deadlock_test.go:47 go-deadlock.TestNoDeadlocks.func2.1.1 { b.Lock() } <<<<<
deadlock_test.go:54 go-deadlock.TestNoDeadlocks.func2.1 { }() }
deadlock_test.go:55 go-deadlock.TestNoDeadlocks.func2 { }() }
in another goroutine: happened before
deadlock_test.go:47 go-deadlock.TestNoDeadlocks.func2.1.1 { b.Lock() } <<<<<
deadlock_test.go:54 go-deadlock.TestNoDeadlocks.func2.1 { }() }
deadlock_test.go:55 go-deadlock.TestNoDeadlocks.func2 { }() }
happend after
deadlock_test.go:44 go-deadlock.TestNoDeadlocks.func2.1 { a.RLock() } <<<<<
deadlock_test.go:55 go-deadlock.TestNoDeadlocks.func2 { }() }
Other goroutines holding locks:
goroutine 0 lock 0xc420014118
deadlock_test.go:47 go-deadlock.TestNoDeadlocks.func2.1.1 { b.Lock() } <<<<<
deadlock_test.go:54 go-deadlock.TestNoDeadlocks.func2.1 { }() }
deadlock_test.go:55 go-deadlock.TestNoDeadlocks.func2 { }() }
goroutine 0 lock 0xc4200160e0
deadlock_test.go:50 go-deadlock.TestNoDeadlocks.func2.1.1.1 { c.Lock() } <<<<<
deadlock_test.go:53 go-deadlock.TestNoDeadlocks.func2.1.1 { }() }
deadlock_test.go:54 go-deadlock.TestNoDeadlocks.func2.1 { }() }
deadlock_test.go:55 go-deadlock.TestNoDeadlocks.func2 { }() }
FAIL	github.com/sasha-s/go-deadlock	0.003s

Wrong deadlock detection in time.AfterFunc definitions

Hello

The package detects a potential deadlock in this code:

func (w *Chwriter) resetInterfacesTimer() {
	w.InterfacesLock.Lock()
	defer w.InterfacesLock.Unlock()

	// ... do some stuff

	w.InterfaceTimer = time.AfterFunc(time.Duration(w.WriteMaxInterval) * time.Second, func(){
		w.InterfacesLock.Lock()
		defer w.InterfacesLock.Unlock()
		// ... do some stuff
	})
}

This looks like a false positive: the time.AfterFunc callback runs later, on a separate goroutine, after the outer lock has been released.

Support Go 1.23

Go 1.23 breaks older versions of github.com/petermattis/goid, including the one you're currently using.

I suggest adding go1.23 to your testing matrix, and possibly a runtime init test that checks whether the goid implementation in use works correctly.

Feel free to copy relevant bits from my fork: https://github.com/linkdata/deadlock

Bug in 1595213edefa28ca5047b00340c63557f9c051d0

The above change introduced a bug:

Instead of getting the callers for the goroutine that was acquiring the lock, it now uses the callers of the monitoring goroutine. That is, this is wrong:

fmt.Fprintf(Opts.LogBuf, "goroutine %v lock %p\n", goid.Get(), ptr)
printStack(Opts.LogBuf, callers(2))

document Opts in the readme

Sometimes I need to set a shorter timeout; the documentation should hint that you can do this.

Great tool, it has saved my day many times!

RWLock shouldn't check RLock order

goroutineA -> RlockA -> RlockB -> release
goroutineB -> RlockB -> RlockA -> release
That is a legitimate use of RWMutex read locks, but it triggers an 'Inconsistent locking' error:

get RlockB and release Rlock
get RlockA and release Rlock
deadlock
get RlockB and release Rlock
succeed
get RlockA and release Rlock
POTENTIAL DEADLOCK: Inconsistent locking. saw this ordering in one goroutine:
happened before
examples/rwmutextest.go:46 main.(*MutexTestA).testB { m.mutexB.RLock() } <<<<<

happened after
examples/rwmutextest.go:49 main.(*MutexTestA).testB { m.mutexA.RLock() } <<<<<

in another goroutine: happened before
examples/rwmutextest.go:36 main.(*MutexTestA).testA { m.mutexA.RLock() } <<<<<
examples/rwmutextest.go:26 main.main { mutexTest.testA() }
/usr/local/go/src/runtime/proc.go:183 runtime.main { main_main() }

happend after
examples/rwmutextest.go:39 main.(*MutexTestA).testA { m.mutexB.RLock() } <<<<<
examples/rwmutextest.go:26 main.main { mutexTest.testA() }
/usr/local/go/src/runtime/proc.go:183 runtime.main { main_main() }


Other goroutines holding locks:
goroutine 1 lock 0xc4200125a0
examples/rwmutextest.go:36 main.(*MutexTestA).testA { m.mutexA.RLock() } <<<<<
examples/rwmutextest.go:26 main.main { mutexTest.testA() }
/usr/local/go/src/runtime/proc.go:183 runtime.main { main_main() }

panic in OnPotentialDeadlock callback

Hi,

Raising a request to check whether there are plans to support panicking in the OnPotentialDeadlock callback. The default behaviour is os.Exit. If I override the callback, panicking inside it is not supported: internally, the unlock of the lockOrder mutex is not deferred, which breaks further execution of the program.

Also checking whether not supporting panic in the callback was a conscious decision.

My use case is the following, so that only the goroutine where the deadlock could have come up is impacted, and not all goroutines:

deadlock.Opts.OnPotentialDeadlock = onPotentialDeadlock

func onPotentialDeadlock() {
	panic("potential deadlock detected")
}

Please let me know if there are any follow up questions.

Thanks

Heads-up and suggestions for improvements

This is a great package, but it was missing some features I needed, and there was some low-hanging optimization fruit to pluck.
Rather than submitting a massive PR, I felt it was easier and faster to just clone and rewrite where needed. I give full credit to you for the original work, of course.

You might consider incorporating some of the changes made in the clone (https://github.com/linkdata/deadlock). Specifically:
the use of runtime.CallersFrames to get correct line numbers, build tags instead of Opts.Disable to avoid all overhead, leveraging escape analysis in callers() to lower memory usage, and possibly TryLock() for Go 1.18+.

Question: potential deadlock with goroutine stuck on internal lock?

Hey, first of all thanks for the hard work on this great lib!

I'm having trouble interpreting the output below. It suggests goroutine 77264 holds lock 0xc4202a60e0 for a long time, preventing others (like goroutine 77325 and many more) from acquiring it.

However, the output suggests that goroutine 77264 actually got stuck during unlock: raft.go:688 is a deferred mu.Unlock(), and deadlock.go:330 is actually a lock-acquire statement in this lib.

Does this mean that the (potential) deadlock is coming from this lib in this case? What would make goroutine 77264 get stuck on that internal lock? (I reproduced the same output with 1m30s lock timeout.)

POTENTIAL DEADLOCK:
Previous place where the lock was grabbed
goroutine 77264 lock 0xc4202a60e0
../raft/raft.go:618 raft.(*Raft).AppendEntriesRPCHandler { rf.mu.Lock() } <<<<<
/usr/local/Cellar/go/1.9/libexec/src/runtime/asm_amd64.s:509 runtime.call32 { CALLFN(·call32, 32) }
/usr/local/Cellar/go/1.9/libexec/src/reflect/value.go:434 reflect.Value.call { call(frametype, fn, args, uint32(frametype.size), uint32(retOffset)) }
/usr/local/Cellar/go/1.9/libexec/src/reflect/value.go:302 reflect.Value.Call { return v.call("Call", in) }
../labrpc/labrpc.go:478 labrpc.(*Service).dispatch { function.Call([]reflect.Value{svc.rcvr, args.Elem(), replyv}) }
../labrpc/labrpc.go:402 labrpc.(*Server).dispatch { return service.dispatch(methodName, req) }
../labrpc/labrpc.go:229 labrpc.(*Network).ProcessReq.func1 { r := server.dispatch(req) }

Have been trying to lock it again for more than 30s
goroutine 77325 lock 0xc4202a60e0
../raft/raft.go:618 raft.(*Raft).AppendEntriesRPCHandler { rf.mu.Lock() } <<<<<
/usr/local/Cellar/go/1.9/libexec/src/runtime/asm_amd64.s:509 runtime.call32 { CALLFN(·call32, 32) }
/usr/local/Cellar/go/1.9/libexec/src/reflect/value.go:434 reflect.Value.call { call(frametype, fn, args, uint32(frametype.size), uint32(retOffset)) }
/usr/local/Cellar/go/1.9/libexec/src/reflect/value.go:302 reflect.Value.Call { return v.call("Call", in) }
../labrpc/labrpc.go:478 labrpc.(*Service).dispatch { function.Call([]reflect.Value{svc.rcvr, args.Elem(), replyv}) }
../labrpc/labrpc.go:402 labrpc.(*Server).dispatch { return service.dispatch(methodName, req) }
../labrpc/labrpc.go:229 labrpc.(*Network).ProcessReq.func1 { r := server.dispatch(req) }

Here is what goroutine 77264 doing now
goroutine 77264 [semacquire]:
sync.runtime_SemacquireMutex(0xc420084744, 0xa900000000)
    /usr/local/Cellar/go/1.9/libexec/src/runtime/sema.go:71 +0x3d
sync.(*Mutex).Lock(0xc420084740)
    /usr/local/Cellar/go/1.9/libexec/src/sync/mutex.go:134 +0x14c
github.com/sasha-s/go-deadlock.(*lockOrder).postUnlock(0xc420084740, 0x1288e00, 0xc4202a60e0)
    .../src/github.com/sasha-s/go-deadlock/deadlock.go:330 +0x3f
github.com/sasha-s/go-deadlock.postUnlock(0x1288e00, 0xc4202a60e0)
    .../src/github.com/sasha-s/go-deadlock/deadlock.go:167 +0x5f
github.com/sasha-s/go-deadlock.(*Mutex).Unlock(0xc4202a60e0)
    .../src/github.com/sasha-s/go-deadlock/deadlock.go:97 +0x7f
raft.(*Raft).AppendEntriesRPCHandler(0xc4202a60e0, 0xc420beb840, 0xc421834620)
    .../src/raft/raft.go:688 +0xcc8

...

the order of unlock in deadlock.go seems strange

In deadlock.go, func lock(lockFn func(), ptr interface{}):

line 161: lo.mu.Lock()   // lo.mu is locked first
line 167: Opts.mu.Lock() // Opts.mu is locked second

but

line 192: lo.mu.Unlock()   // lo.mu is unlocked first
line 193: Opts.mu.Unlock() // Opts.mu is unlocked second

Is that OK?

Single thread lock in different order reported as deadlock

There is only one goroutine, so the lock order should not matter.

package main

import (
	sync "github.com/sasha-s/go-deadlock"
)

func main() {
	m := make(map[int]*sync.Mutex)
	for i := 0; i < 8; i++ {
		m[i] = &sync.Mutex{}
	}
	useLock(m)
	useLock(m)
}

func useLock(locks map[int]*sync.Mutex) {
	for _, lock := range locks {
		lock.Lock()
		defer lock.Unlock()
	}
	// do something that require all locks
}

Massive Memory Leak

Do not leave this enabled on a production application by accident. Learn from my mistakes.
