libleak's Introduction

libleak

libleak detects memory leaks by hooking memory functions (e.g. malloc) via LD_PRELOAD.

There is no need to modify or re-compile the target program, and you can enable/disable the detection while the target is running.

In fact libleak cannot truly identify memory leaks; it simply treats a memory block as a suspected leak if it lives longer than a threshold. The threshold is 60 seconds by default, but you should set it according to your scenario.

It has less impact on performance than valgrind or memleax.

It prints the full call stack at each suspicious memory-leak point, and is easier to use than other similar tools (e.g. mtrace).

LICENCE

GPLv2

OS-MACHINE

  • GNU/Linux only for now, though FreeBSD should work with some code changes.

  • Only x86_64 has been tested so far, but other architectures should work.

BUILD FROM SOURCE

$ git clone --recursive https://github.com/WuBingzheng/libleak.git
$ cd libleak
$ make

USAGE

basic

  1. Download or build the shared-object libleak.so.

  2. Run the target program:

     $ LD_PRELOAD=/path/of/libleak.so ./a.out
    
  3. Then read the output in /tmp/libleak.$pid as the program runs (see the example below).
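
For example, a hypothetical session that runs the target in the background and follows its leak log (assuming the default log file /tmp/libleak.$pid):

$ LD_PRELOAD=/path/of/libleak.so ./a.out &
$ tail -f /tmp/libleak.$!     # $! expands to the PID of the backgrounded target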

set expire threshold

As said above, you should set the expire threshold according to your scenario.

For example, if you are debugging an HTTP server with keepalive and some connections last for more than 5 minutes, you should set the threshold to 300 seconds to cover them. If your program is expected to free every allocation within 1 second, set the threshold to 2 to get reports promptly.

The threshold is set by the environment variable LEAK_EXPIRE (in seconds, default is 60):

$ LD_PRELOAD=/path/of/libleak.so LEAK_EXPIRE=300 ./a.out

Besides, if LEAK_AUTO_EXPIRE is enabled (disabled by default), the threshold is increased automatically whenever a memory block is freed after it has already expired:

$ LD_PRELOAD=/path/of/libleak.so LEAK_AUTO_EXPIRE=1 ./a.out

enable/disable detection during running

By default libleak begins detection as soon as the target process starts. However, you can enable/disable detection while the target is running by setting LEAK_PID_CHECK and LEAK_PID_FILE:

$ LD_PRELOAD=/path/of/libleak.so LEAK_PID_CHECK=10 ./a.out

LEAK_PID_CHECK sets the interval (in seconds, default is 0) at which LEAK_PID_FILE is checked.

LEAK_PID_FILE (default is /tmp/libleak.enabled) contains the target PIDs: one PID per line, no empty lines, no comment lines. You can add PIDs to or delete PIDs from this file while the target is running.

To enable detecting process pid=1234:

$ echo 1234 >> /tmp/libleak.enabled

To disable detecting process pid=1234:

$ sed -i '/1234/d' /tmp/libleak.enabled

disable detection for specific shared libraries

If your program uses a shared library that allocates so much memory that it floods the log file, AND you are sure that there is no leak in the calls to it, LEAK_LIB_BLACKLIST can be used to exclude it. The library name can be obtained from ldd $your-program. If there is more than one library, separate them with commas:

$ LD_PRELOAD=/path/of/libleak.so LEAK_LIB_BLACKLIST=libmysqlclient.so.20.3.8,librdkafka.so.1 ./a.out

skip initial phase

Programs often allocate some memory during the initial phase and never free it. LEAK_AFTER can be used to skip this phase. If it is set, libleak starts detection only after this time (in seconds):

$ LD_PRELOAD=/path/of/libleak.so LEAK_AFTER=1 ./a.out

for multi-threaded programs

libleak is multi-thread safe.

for multi-process programs

A log file is created for each process.

Besides, you can choose which processes to detect with LEAK_PID_FILE and LEAK_PID_CHECK as described above (see the example below).
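
For example, a hypothetical session that attaches detection to only one worker of a pre-forking server (the program name my_server and the PIDs are made up for illustration):

$ LD_PRELOAD=/path/of/libleak.so LEAK_PID_CHECK=10 ./my_server &
$ pgrep -f my_server          # suppose this prints worker PIDs 4001 and 4002
$ echo 4001 >> /tmp/libleak.enabled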

set the log

The log file is set by LEAK_LOG_FILE (default is /tmp/libleak.$pid).
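
For example, a hypothetical invocation that writes the log to a custom path:

$ LD_PRELOAD=/path/of/libleak.so LEAK_LOG_FILE=/tmp/myapp.leak ./a.out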

A statistics report is also printed when detection is disabled or when the target terminates normally, either via exit(3) or by returning from main().

READ LOG

Once the program is running, you can check the output log (e.g. with tail -f /tmp/libleak.$pid).

The memory blocks that live longer than the threshold will be printed as:

callstack[1] expires. count=1 size=1024/1024 alloc=1 free=0
    0x00007fd322bd8220  libleak.so  /path/libleak/libleak.c:674  malloc()
    0x000000000040084e  test  /path/test/test.c:30  foo()
    0x0000000000400875  test  /path/test/test.c:60  bar()
    0x0000000000400acb  test  /path/test/test.c:67  main()

callstack[1] is the ID of the call stack where the suspected memory leak happens.

The backtrace is shown only the first time; if the same call stack expires again, only the ID and counters are printed:

callstack[1] expires. count=2 size=1024/2048 alloc=2 free=0

If the expired memory block is freed later, it prints:

callstack[1] frees after expired. live=6 expired=1 free_expired=1

Stop the output when you think there is enough log. You can stop it by terminating the target process, or temporarily via LEAK_PID_FILE and LEAK_PID_CHECK.

After stopping, statistics are printed for the call stacks with suspected memory leaks:

# callstack statistics: (in ascending order)

callstack[1]: may-leak=1 (1024 bytes)
    expired=2 (2048 bytes), free_expired=1 (1024 bytes)
    alloc=12 (12288 bytes), free=10 (10240 bytes)
    freed memory live time: min=1 max=5 average=4
    un-freed memory live time: max=13
callstack[4]: may-leak=4 (32 bytes)
    expired=4 (32 bytes), free_expired=0 (0 bytes)
    alloc=4 (32 bytes), free=0 (0 bytes)
    freed memory live time: min=0 max=0 average=0
    un-freed memory live time: max=7

The statistics are straightforward (a worked example follows the list):

  • may-leak, equal to expired - free_expired,
  • expired, the count of memory blocks that lived longer than the threshold,
  • free_expired, the count of memory blocks that were freed after expiration,
  • alloc, the total count of allocations,
  • free, the total count of frees.
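
For example, for callstack[1] above: may-leak = expired - free_expired = 2 - 1 = 1, i.e. one 1024-byte block that expired and was never freed.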

may-leak is probably the most important one. All call stacks are sorted by it in ascending order, so you should read the call stacks from the bottom of the report upward.

If a free is missing entirely in your program, you only need to check the call stacks with free=0. Otherwise, if the memory leak happens only in some cases, you need to check all call stacks.

When you find a suspicious call stack, go back to its full backtrace by the ID, and check your code.

libleak only tries to give some help; some insight is still needed to finally track down the memory leak.

If a memory pool is used in your program (as in Nginx), you will have to try harder to locate the memory leak.

Good luck!

libleak's People

Contributors

weiyuanyin, wubingzheng

libleak's Issues

Not sure whether it is an issue or not.

I have downloaded the source code:
-rw-rw-r-- 1 zhaog man 3.2K Jun 9 23:37 README.md
-rw-rw-r-- 1 zhaog man 198 Jun 9 23:37 Makefile
-rw-rw-r-- 1 zhaog man 15K Jun 9 23:37 libleak.c
drwxrwxr-x 2 zhaog man 152 Jun 12 10:51 libwuya/

I tried to build it by running make on the old Linux box below:
Linux ceres 2.4.21-32.ELsmp #1 SMP Fri Apr 15 21:17:59 EDT 2005 i686 i686 i386 GNU/Linux

But it looks like it failed with a bunch of errors.
The build log is attached. Could you please take a look?
Thanks
build.log

Any way to load debug symbols for system libs?

Hello and thanks for this tool! It looks like it might help me, but I have not yet obtained useful information from it.
What might help me are function names in the stack info from standard libraries (libc, libstdc++). I have Ubuntu and have installed the debug symbols for these libraries. As I understand, they are located in separate files somewhere in /usr/lib/debug. Gdb uses those, but libleak doesn't, so stacks involving libc-2.23.so have no function names, only addresses. Is there a way to make use of those symbols?

Any way to support Android?

I replaced the backtrace implementation with libunwind to make it build with the latest NDK. However, when tested on a Huawei P9 phone (Android 8 / API 28 / arm64-v8a), it always crashes with a bus error on startup.

fails to compile on fedora 36

First I hit issue #19,
then I hit issue #11 and re-downloaded.

then I get this:

cc -g -O2 -Wall -fPIC -Ilibwuya   -c -o libleak.o libleak.c
CFLAGS='-fPIC' make -C libwuya
make[1]: Entering directory '/home/asus/libleak/libwuya'
cc -fPIC -g -Wall -O2   -c -o wuy_dict.o wuy_dict.c
cc -fPIC -g -Wall -O2   -c -o wuy_pool.o wuy_pool.c
cc -fPIC -g -Wall -O2   -c -o wuy_heap.o wuy_heap.c
cc -fPIC -g -Wall -O2   -c -o wuy_event.o wuy_event.c
cc -fPIC -g -Wall -O2   -c -o wuy_sockaddr.o wuy_sockaddr.c
cc -fPIC -g -Wall -O2   -c -o wuy_skiplist.o wuy_skiplist.c
ar rcs libwuya.a wuy_dict.o wuy_pool.o wuy_heap.o wuy_event.o wuy_sockaddr.o wuy_skiplist.o
make[1]: Leaving directory '/home/asus/libleak/libwuya'
cc -shared -o libleak.so libleak.o -Llibwuya -lwuya -lpthread -ldl -lbacktrace
/usr/bin/ld: /usr/local/lib/libbacktrace.a(fileline.o): relocation R_X86_64_32 against `.rodata.str1.8' can not be used when making a shared object; recompile with -fPIC
/usr/bin/ld: /usr/local/lib/libbacktrace.a(posix.o): relocation R_X86_64_32 against `.rodata.str1.1' can not be used when making a shared object; recompile with -fPIC
/usr/bin/ld: /usr/local/lib/libbacktrace.a(simple.o): relocation R_X86_64_32 against `.text' can not be used when making a shared object; recompile with -fPIC
/usr/bin/ld: /usr/local/lib/libbacktrace.a(elf.o): relocation R_X86_64_32 against `.rodata.str1.8' can not be used when making a shared object; recompile with -fPIC
/usr/bin/ld: /usr/local/lib/libbacktrace.a(mmapio.o): relocation R_X86_64_32 against `.rodata.str1.1' can not be used when making a shared object; recompile with -fPIC
/usr/bin/ld: /usr/local/lib/libbacktrace.a(mmap.o): relocation R_X86_64_32 against `.rodata.str1.1' can not be used when making a shared object; recompile with -fPIC
/usr/bin/ld: /usr/local/lib/libbacktrace.a(dwarf.o): relocation R_X86_64_32 against `.rodata.str1.1' can not be used when making a shared object; recompile with -fPIC
collect2: error: ld returned 1 exit status
make: *** [Makefile:6: libleak.so] Error 1

Python stack trace with function names

Looks like an amazing tool!
Do you know how I can see the function names in a Python process's stack trace?
Or at least use the memory addresses to find which functions they are with other tools?

Can't build libleak on a CentOS 7-like system

I tried to build libleak on a CentOS-like system, but I can't figure out which package is supposed to provide the header
backtrace.h
I would really appreciate any help to get libleak working on CentOS.

Unable to get the stack

Below is the sample code with which I want to try this shared library usage.
File: example.c

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main()
{
	int i;
	int *ptr = NULL;

	ptr = malloc(sizeof(int) * 1000);   /* allocated but never freed */
	for (i = 0; i < 10; i++) {
		printf("********\n");
		sleep(1);
	}
	return 0;
}

Compiled as below:
gcc -c -g -Wall -o example.o example.c
gcc -export-dynamic -o prog example.o

Below is the execution:
LD_PRELOAD=./libleak.so LEAK_PID_CHECK=10 ./prog

To see the call stack, a file named with the PID is created: /tmp/libleak.$(PID)

But the content of the file is as shown below.

start detect. expire=1000s

callstack statistics: (in ascending order)

The callstack is empty. Kindly let me know what mistake I have made here. Please bear with me, as I am not completely familiar with the concepts.

Regards,
Raghu

does not resolve line-numbers

Hi,

When it finds a problem, it emits:
./constatus(Z15send_index_htmlP13http_thread_tRKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEES8_P10instance_tP6sourceS8+0x5f4) [0x555555916ca6]

That is without line-numbers. Can you maybe add that to libleak? Would be a great help :-)

FreeBSD 12 libwuya make problem

cc  -O2 -pipe -g -Wall -O2 -c wuy_event.c -o wuy_event.o
wuy_event.c:4:10: fatal error: 'sys/epoll.h' file not found
#include <sys/epoll.h>
         ^~~~~~~~~~~~~
1 error generated.
*** Error code 1

Stop.

I linked the libbacktrace.h file for the libleak make; now I am having issues with sys/epoll.h in the submodule Makefile.

Assembler errors ("bad register name") while porting the library to 32-bit PowerPC Linux

I was experimenting with the libleak library on a 32-bit PowerPC Linux box. The compilation failed with "bad register name" assembler errors. Is this library unsupported on this platform? I ask because I read "GNU/Linux only by now" in the README file.

CC=powerpc-linux-gnu-gcc
/tmp/ccUfyBYT.s: Assembler messages:
/tmp/ccUfyBYT.s:22: Error: bad register name `%rdi)'
/tmp/ccUfyBYT.s:30: Error: bad register name `%rsi)'
/tmp/ccUfyBYT.s:33: Error: bad register name `%rsi)'
/tmp/ccUfyBYT.s:36: Error: bad register name `%rax'
/tmp/ccUfyBYT.s:38: Error: bad register name `%rax'
/tmp/ccUfyBYT.s:42: Error: bad register name `%rdx'
/tmp/ccUfyBYT.s:47: Error: bad register name `%rdi)'
/tmp/ccUfyBYT.s:64: Error: bad register name `%rdi)'
/tmp/ccUfyBYT.s:67: Error: bad register name `%rdi)'
/tmp/ccUfyBYT.s:70: Error: bad register name `%rax'

<truncated>

make[1]: *** [wuy_dict.o] Error 1
make: *** [libleak.so] Error 2

How to use?

I want to find a memory leak in a bigger program: http://tvdr.de/

But all that libleak produces is an over-100MB log full of cryptic information. How am I supposed to find the relevant parts in 1991508 lines of log?

What this (probably very useful) tool really needs is some kind of tutorial on how to use it!

Core dumped on Jetson NANO

I'm trying to use libleak on a Jetson Nano, but I get this error:

izuser@nano:~/jsoft-1.0/dgpu_core$ LD_PRELOAD=/home/izuser/libleak/libleak.so ./build/dgpu_core
Aborted (core dumped)

Without LD_PRELOAD it works correctly. How can I fix or diagnose this problem?

Can't see the function name and line numbers.

callstack[1] expires. count=1 size=32/32 alloc=1 free=0
/media/kownse/OS/code/git/libleak/libleak.so(malloc+0x25) [0x7fe382456195]
/usr/lib/x86_64-linux-gnu/libstdc++.so.6(_Znwm+0x18) [0x7fe38215e258]
./philo(+0x2a4e) [0x55e3a0623a4e]
./philo(+0x1e98) [0x55e3a0622e98]
./philo(+0x1883) [0x55e3a0622883]
./philo(+0x131a) [0x55e3a062231a]

Please print a detailed log just like memleax does.

Use makefile variables for portability

If you use $(CC) instead of 'gcc' and $(AR) instead of 'ar' in your makefiles, the Makefile will be more portable. For example, it will be easier to support cross-compilation.

Just a suggestion.

Crash when loading

Program terminated with signal 11, Segmentation fault.

#0 0x0000000010e44cc0 in ?? ()
#1 0x00002ba62f18b477 in malloc (size=18) at libleak.c:677
#2 0x00002ba6340ecaaa in strdup () from /lib64/libc.so.6
#3 0x00002ba62f18a274 in lib_maps_build () at libleak.c:168
#4 0x00002ba62f18a565 in init () at libleak.c:342
#5 0x00002ba62ef73973 in _dl_init_internal () from /lib64/ld-linux-x86-64.so.2
#6 0x00002ba62ef6515a in _dl_start_user () from /lib64/ld-linux-x86-64.so.2

Output has no backtrace

Hello and thank you for this tool.
I tried using this tool on ARM to check a possible memory leak in our system, but when I check the output the backtrace is not shown.

start detect. expire=300s

callstack[1] expires. count=1 size=216/216 alloc=211774 free=211482
callstack[1] expires. count=2 size=16/232 alloc=211774 free=211482
callstack[1] expires. count=3 size=216/448 alloc=211774 free=211482
callstack[1] expires. count=4 size=16/464 alloc=211774 free=211482
callstack[1] expires. count=5 size=2040/2504 alloc=211898 free=211606
callstack[1] expires. count=6 size=216/2720 alloc=214351 free=214055
callstack[1] expires. count=7 size=16/2736 alloc=214351 free=214055
callstack[1] expires. count=8 size=216/2952 alloc=214351 free=214055
callstack[1] expires. count=9 size=16/2968 alloc=214351 free=214055
callstack[1] expires. count=10 size=216/3184 alloc=218767 free=218467
callstack[1] expires. count=11 size=16/3200 alloc=218767 free=218467
callstack[1] expires. count=12 size=216/3416 alloc=218827 free=218527
callstack[1] expires. count=13 size=16/3432 alloc=218827 free=218527

Any advice on what could cause this?

backtrace library dependency

Hi,

Can you let me know which backtrace library is used for linking? Is it a standard Linux backtrace library or a third-party library?
