nvdv / vprof
Visual profiler for Python
License: BSD 2-Clause "Simplified" License
Running setup.py triggers "ImportError: No module named pip.req"
due to "from pip.req import parse_requirements".
Is this caused by a dependency on a specific version of pip?
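A common workaround, sketched here under the assumption that requirements.txt holds one requirement per line, is to read the file directly instead of importing pip internals (pip.req moved and was later removed in newer pip releases):

```python
# Sketch: parse requirements.txt without pip.req, whose location changed
# across pip releases (it became pip._internal in pip >= 10).
def read_requirements(path='requirements.txt'):
    with open(path) as f:
        return [line.strip() for line in f
                if line.strip() and not line.strip().startswith('#')]
```

The resulting list can be passed to setup()'s install_requires the same way parse_requirements results were.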
In the code heatmap tab, not displaying a file's entire source when only a few of its lines were run would improve readability.
Currently vprof -h does not show the available modes. Adding them would improve vprof's usability.
This is happening with neutron-server:
λ vprof -c cmh -H 192.168.8.1 -p 8000 -s ".venv/bin/neutron-server --config-file /home/jhammond/etc/neutron/neutron.conf"
...
neutron-server: error: unrecognized arguments: --config-file /home/jhammond/etc/neutron/neutron.conf
...
Whereas neutron-server works without vprof:
λ .venv/bin/neutron-server --config-file /home/jhammond/etc/neutron/neutron.conf
Guru mediation now registers SIGUSR1 and SIGUSR2 by default for backward compatibility. SIGUSR1 will no longer be registered in a future release, so please use SIGUSR2 to generate reports.
Option "notification_driver" from group "DEFAULT" is deprecated. Use option "driver" from group "oslo_messaging_notifications".
2016-05-20 13:43:12.287 29198 INFO neutron.common.config [-] Logging enabled!
2016-05-20 13:43:12.288 29198 INFO neutron.common.config [-] .venv/bin/neutron-server version 8.0.1.dev363
I just installed vprof with pip. Then I tried one of the examples with my script:
$ vprof -c cmh -s "my_script.py"
usage: vprof [-h] [--port PORT] [--debug] [-n] options src
vprof: error: unrecognized arguments: -c -s
The examples should be updated, for example:
$ vprof cmh my_script.py
vprof cannot open the default web browser on Windows, since "open" is not a command there.
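A cross-platform alternative is the stdlib webbrowser module, which picks the right launcher per OS; a minimal sketch (function names are illustrative, not vprof's actual API):

```python
# Sketch: webbrowser chooses the platform's browser launcher, so no
# OS-specific command like macOS's 'open' is needed.
import webbrowser

def ui_url(host='localhost', port=8000):
    return 'http://{}:{}/'.format(host, port)

def open_ui(host='localhost', port=8000):
    # Returns False when no browser could be launched, instead of crashing.
    return webbrowser.open(ui_url(host, port))
```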
vprof crashes with a stack trace when profiling scripts on Windows with 32-bit Python 3.6.1.
Steps to reproduce:
pip install vprof
Save print("Hi") as test.py
vprof -c p test.py
C:\>vprof -c p test.py
Running Profiler...
Traceback (most recent call last):
File "c:\program files (x86)\python36-32\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "c:\program files (x86)\python36-32\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Program Files (x86)\Python36-32\Scripts\vprof.exe\__main__.py", line 9, in <module>
File "c:\program files (x86)\python36-32\lib\site-packages\vprof\__main__.py", line 88, in main
source, config, verbose=True)
File "c:\program files (x86)\python36-32\lib\site-packages\vprof\runner.py", line 78, in run_profilers
run_stats[option] = curr_profiler.run()
File "c:\program files (x86)\python36-32\lib\site-packages\vprof\base_profiler.py", line 162, in run
return dispatcher()
File "c:\program files (x86)\python36-32\lib\site-packages\vprof\base_profiler.py", line 71, in multiprocessing_wrapper
process.start()
File "c:\program files (x86)\python36-32\lib\multiprocessing\process.py", line 105, in start
self._popen = self._Popen(self)
File "c:\program files (x86)\python36-32\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "c:\program files (x86)\python36-32\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "c:\program files (x86)\python36-32\lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
reduction.dump(process_obj, to_child)
File "c:\program files (x86)\python36-32\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'run_in_another_process.<locals>.multiprocessing_wrapper.<locals>.remote_wrapper'
C:\>Traceback (most recent call last):
File "<string>", line 1, in <module>
File "c:\program files (x86)\python36-32\lib\multiprocessing\spawn.py", line 99, in spawn_main
new_handle = reduction.steal_handle(parent_pid, pipe_handle)
File "c:\program files (x86)\python36-32\lib\multiprocessing\reduction.py", line 82, in steal_handle
_winapi.PROCESS_DUP_HANDLE, False, source_pid)
OSError: [WinError 87] The parameter is incorrect
Environment (not part of the stack trace):
Python 3.6.1 (v3.6.1:69c0db5, Mar 21 2017, 17:54:52) [MSC v.1900 32 bit (Intel)] on win32
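The failure reproduces without vprof: under the 'spawn' start method (the Windows default), multiprocessing pickles the process target, and locally defined closures cannot be pickled, which is exactly what the "Can't pickle local object" message says. A minimal sketch of the distinction:

```python
# Sketch: 'spawn' pickles the Process target. Module-level functions pickle
# by reference; functions defined inside other functions do not.
import pickle

def make_local_target():
    def local_target(value):  # defined inside a function: not picklable
        return value
    return local_target

def is_picklable(obj):
    try:
        pickle.dumps(obj)
        return True
    except (pickle.PicklingError, AttributeError):
        # Python 3 raises AttributeError: Can't pickle local object ...
        return False
```

Moving the wrapper that vprof passes to Process to module level (or making it a class with __call__) is the usual fix for this class of error.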
The vprof UI is interactive, but currently there's no way to learn about the available controls. Adding a help tooltip would improve the UX.
Cool project. I really like the web interface and the different profile types: flame, memory, line.
In one of my test scripts, I use multiprocessing and vprof
didn't pick up any stats for flame and line profiles when that happened. Is multiprocessing supported?
Thanks for pushing the UI of Python profilers ahead.
Since the UI can display too many things at once, adding search would improve usability.
Python 3.5.1 (v3.5.1:37a07cee5969, Dec 6 2015, 01:54:25) [MSC v.1900 64 bit (AMD64)] on win32
PS C:\Users\Admin\Downloads\trufont> vprof -c c Lib/trufont
Running FlameGraphProfiler...
Traceback (most recent call last):
File "C:\Python35\Scripts\vprof-script.py", line 9, in <module>
load_entry_point('vprof==0.34', 'console_scripts', 'vprof')()
File "c:\python35\lib\site-packages\vprof\__main__.py", line 88, in main
source, config, verbose=True)
File "c:\python35\lib\site-packages\vprof\runner.py", line 78, in run_profilers
run_stats[option] = curr_profiler.run()
File "c:\python35\lib\site-packages\vprof\flame_graph.py", line 169, in run
prof = run_dispatcher()
File "c:\python35\lib\site-packages\vprof\flame_graph.py", line 142, in run_as_package
with _StatProfiler() as prof:
File "c:\python35\lib\site-packages\vprof\flame_graph.py", line 29, in __enter__
signal.signal(signal.SIGPROF, self.sample)
AttributeError: module 'signal' has no attribute 'SIGPROF'
There is no way to install the profiler or its dependencies via make when npm is missing from the system.
make deps_install
make install
Expected: all required dependencies install and the profiler is installed successfully.
Actual:
npm install
make: npm: Command not found
make: *** [deps_install] Error 127
Verify or install npm as the very first step; otherwise, document it in the Prerequisites section.
Consider including the following in the Makefile, though it needs to stay cross-platform somehow:
packages:
	dpkg -l | grep -q libx11-dev || sudo apt-get install libx11-dev

.PHONY: packages
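One way to fail early on a missing npm, sketched for GNU Make (target names are illustrative, not vprof's actual Makefile):

```make
deps_install: check_npm
	npm install

check_npm:
	@command -v npm >/dev/null 2>&1 || \
		{ echo "npm is required (see Prerequisites)"; exit 1; }

.PHONY: deps_install check_npm
```

`command -v` is POSIX, so the check works in any standard shell, though not in Windows cmd.exe.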
Example from #54
The project looks great, but because of the Makefile it won't work on Windows. Are there any other Linux-specific parts?
It would be great to move this over to cross-platform code using Python (setup.py or similar), given that the main project is itself cross-platform (I think).
I'm trying to profile http://scikit-image.org/docs/dev/auto_examples/filters/plot_inpaint.html .
vprof {ch/hc/mh/hm} plot_inpaint.py
finishes in seconds, while vprof cmh ...
doesn't finish even within 10 minutes.
At present, pip install vprof installs v0.22 instead of the latest v0.3.
Current workaround: pip install vprof==0.3
IPython notebook is a popular interactive development environment for Python. Displaying vprof stats inline for some functions would be pretty cool.
Thanks for making this useful project available!
I pip installed it (Mac OS 10.11.3, Anaconda Python 3.5.1), and then tried running it. I get the messages saying that it's running the various profiling functionalities (depending on the flags), and then a blank browser tab opens pointing to localhost:8000. The message in the title then appears in the terminal.
Any ideas? Thanks again
In Python 3.5.1, with vprof freshly installed via 'pip3.5 install vprof'.
The -c and -s switches aren't recognized, so the README is incorrect.
In order to get it working, I have to do:
vprof cmh "myscript.py --my-option 1"
vprof can't handle scripts that take arguments after the script name (sys.argv). It would be super helpful if it did.
With this code:
import unittest

class MyTest(unittest.TestCase):
    def test_fail(self):
        self.assertTrue(False)

unittest.main()
I get the following result
xcombelle@ender ~/d/sgftool> vprof -c cmh -s test.py --port 8080
Running RuntimeProfile...
----------------------------------------------------------------------
Ran 0 tests in 0.000s
OK
Running MemoryProfile...
----------------------------------------------------------------------
Ran 0 tests in 0.000s
OK
Running CodeHeatmapProfile...
----------------------------------------------------------------------
Ran 0 tests in 0.000s
OK
Starting HTTP server...
^CStopping...
xcombelle@ender ~/d/sgftool> python3 test.py --port 8080
usage: test.py [-h] [-v] [-q] [-f] [-c] [-b] [tests [tests ...]]
test.py: error: unrecognized arguments: --port
xcombelle@ender ~/d/sgftool> python3 test.py
F
======================================================================
FAIL: test_fail (__main__.MyTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "test.py", line 5, in test_fail
self.assertTrue(False)
AssertionError: False is not true
----------------------------------------------------------------------
Ran 1 test in 0.001s
FAILED (failures=1)
Both executions should produce the same result.
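The collision is in sys.argv: unittest.main() parses the full command line by default, so vprof's own flags (like --port) reach the test runner, which then collects zero tests. A sketch of the usual workaround, passing argv explicitly:

```python
# Sketch: restrict what unittest.main() parses so foreign flags are ignored.
import sys
import unittest

class MyTest(unittest.TestCase):
    def test_ok(self):
        self.assertTrue(True)

def run_tests():
    # argv=[sys.argv[0]] makes unittest ignore everything after the program
    # name, so flags meant for vprof (e.g. --port) are not treated as
    # test names or runner options.
    return unittest.main(argv=[sys.argv[0]], exit=False)
```

A profiler could apply the same idea from the other side, by stripping its own flags from sys.argv before executing the profiled script.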
Maybe there is a way to do this. If I save off the whole Webpage, including supporting files, I get the initial graphics and text but the mouseovers don't work, I assume because there is no node.js server to provide that info?
I should have mentioned that the Flame chart is great!
I ran vprof -c cmh -s "evaluate.py -o 7496a0471ba8259926b36028b79b5a0e62ecf03c -d 510 -f 260 -s 25"
and no results were presented in any way.
If I understand correctly, new tabs should appear in my browser, but that didn't happen.
Also:
I had to pass the hash without single quotation marks, as vprof would put another pair around them. Is this intended?
Platform:
python3.5:
Python 3.5.2 (default, Jun 28 2016, 08:46:01)
[GCC 6.1.1 20160602] on linux
Chromium Version 53.0.2785.116 (64-bit)
I am running Antergos linux
Consider profiling a relatively long-running script: there is a well-defined use case for reusing cached profile results instead of re-running the profilers.
At present, all code heatmaps are displayed in one column and there's no way to view the heatmap for a single file. Adding links to separate files would make browsing heatmaps for large packages easier.
vprof -h
returns
usage: __main__.py [-h] [--port PORT] [--debug] [-n] opts src
__main__.py should not show up in this message.
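The stray __main__.py comes from argparse's default prog, which is derived from the basename of sys.argv[0]; setting prog explicitly fixes the usage string. A minimal sketch (argument names are simplified):

```python
# Sketch: prog='vprof' overrides the default program name, which becomes
# '__main__.py' when the tool is launched via runpy / python -m.
import argparse

parser = argparse.ArgumentParser(prog='vprof')
parser.add_argument('options', help='profiling modes, e.g. cmh')
parser.add_argument('src', help='script or package to profile')
```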
I didn't find any detailed documentation.
It'd be nice if it were easily possible to run this on a package, similar to python -m foo.
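The stdlib already provides the building block for this: runpy executes a module or package the way python -m does. A sketch (the wrapper name is illustrative):

```python
# Sketch: runpy.run_module executes <name> (a module, or a package with a
# __main__.py) just like 'python -m <name>' and returns its globals, so a
# profiler could wrap this call with its tracing machinery.
import runpy

def run_package(name):
    return runpy.run_module(name, run_name='__main__')
```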
Looks great! Quick observation and minor note about terminology: the visualization looks like a "flame graph", where the x-axis spans the population (which can be sorted alphabetically, to maximize frame merging), whereas a "flame chart" (as Google has named them) has the passage of time on the x-axis. Both are useful for different reasons (and if the profiler retains individual timestamped samples, you have all the data you need, and could display both in separate tabs).
So I'd rename this to "flame graph", to avoid confusion. And maybe add a "flame chart" tab later on, where it does show the passage of time. Both are useful for different kinds of problems (flame graphs for aggregate big picture view, and flame charts for time series issues).
More info on flame graphs: http://queue.acm.org/detail.cfm?id=2927301
Maybe you should add the vprof version to the help output?
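argparse has a built-in action for exactly this; a minimal sketch (the version string is a placeholder, ideally read from package metadata rather than hard-coded):

```python
# Sketch: the 'version' action prints the version and exits.
import argparse

parser = argparse.ArgumentParser(prog='vprof')
# 'vprof 0.3' is a placeholder version string for illustration.
parser.add_argument('--version', action='version', version='vprof 0.3')
```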
Currently vprof does not ship a favicon, and the server throws an IOError:
Exception happened during processing of request from ('127.0.0.1', 52771)
Traceback (most recent call last):
File "/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/lib/python2.7/SocketServer.py", line 599, in process_request_thread
self.finish_request(request, client_address)
File "/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/lib/python2.7/SocketServer.py", line 334, in finish_request
self.RequestHandlerClass(request, client_address, self)
File "vprof/stats_server.py", line 37, in __init__
self, *args, **kwargs)
File "/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/lib/python2.7/SocketServer.py", line 655, in __init__
self.handle()
File "/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/lib/python2.7/BaseHTTPServer.py", line 340, in handle
self.handle_one_request()
File "/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/lib/python2.7/BaseHTTPServer.py", line 328, in handle_one_request
method()
File "vprof/stats_server.py", line 52, in do_GET
with open(res_filename) as res_file:
IOError: [Errno 2] No such file or directory: 'vprof/frontend/favicon.ico'
Add favicon.ico to vprof's frontend to fix this.
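Besides shipping a favicon, the handler could degrade gracefully when any static file is missing; a sketch of the read path (the helper name is hypothetical, not vprof's API):

```python
# Sketch: return None for a missing static file so the HTTP handler can
# send a 404 response instead of crashing with an unhandled IOError.
def safe_read(res_filename):
    try:
        with open(res_filename, 'rb') as res_file:
            return res_file.read()
    except IOError:  # FileNotFoundError is a subclass on Python 3
        return None
```

In do_GET, a None result would translate to self.send_error(404).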
Sometimes the flame chart contains many elements that take little time but make the whole picture noisier. Excluding them would improve the UX.
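One possible approach, sketched on an illustrative node shape (not vprof's actual stats schema): drop subtrees whose share of total samples falls below a threshold before rendering.

```python
# Sketch: prune insignificant flame-chart subtrees.
# Assumed node shape: {'name': str, 'samples': int, 'children': list}.
def prune(node, total_samples, min_share=0.01):
    """Recursively remove children whose sample share is below min_share."""
    node['children'] = [
        prune(child, total_samples, min_share)
        for child in node.get('children', [])
        if child['samples'] / float(total_samples) >= min_share
    ]
    return node
```

The threshold could be exposed as a slider in the UI so users tune the noise level interactively.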
It seems that eslint provides better customization and catches more errors. Replace jshint with eslint.
Hey, I ran this library's profiler on macOS on a fairly simple apriori algorithm I wrote. It ate up all of the space on my disk and crashed the browser.
But the cool visuals in the repo pictures are clearly of the same order of magnitude in computation, so what's up with that?
I've posted the code to an example repo for reproduction. The Python version is 3.5. The file of interest is vprof.ipynb.
Thanks for your help!
Currently vprof supports Python 2 only. Python 3 support is important, since it is the current version of Python.
pylint supports pylint: disable=relative-import instead of pylint: disable=W0403. Change it project-wide to improve code readability.
Cool project! I really like the web viewer and have been using it to view memory stats generated elsewhere. Since it fetches all data in one piece, it only works well up to maybe 10 MB. I'll leave this issue here in case I find some time to submit a PR for it.
Error running example.
$ vprof -c cmh permutations.py
Running MemoryProfiler...
[('A', 'B'), ('A', 'C'), ('A', 'D'), ('A', 'E'), ('A', 'F'), ('A', 'G'), ('A', 'E'), ('A', 'D'), ('B', 'A'), ('B', 'C'), ('B', 'D'), ('B', 'E'), ('B', 'F'), ('B', 'G'), ('B', 'E'), ('B', 'D'), ('C', 'A'), ('C', 'B'), ('C', 'D'), ('C', 'E'), ('C', 'F'), ('C', 'G'), ('C', 'E'), ('C', 'D'), ('D', 'A'), ('D', 'B'), ('D', 'C'), ('D', 'E'), ('D', 'F'), ('D', 'G'), ('D', 'E'), ('D', 'D'), ('E', 'A'), ('E', 'B'), ('E', 'C'), ('E', 'D'), ('E', 'F'), ('E', 'G'), ('E', 'E'), ('E', 'D'), ('F', 'A'), ('F', 'B'), ('F', 'C'), ('F', 'D'), ('F', 'E'), ('F', 'G'), ('F', 'E'), ('F', 'D'), ('G', 'A'), ('G', 'B'), ('G', 'C'), ('G', 'D'), ('G', 'E'), ('G', 'F'), ('G', 'E'), ('G', 'D'), ('E', 'A'), ('E', 'B'), ('E', 'C'), ('E', 'D'), ('E', 'E'), ('E', 'F'), ('E', 'G'), ('E', 'D'), ('D', 'A'), ('D', 'B'), ('D', 'C'), ('D', 'D'), ('D', 'E'), ('D', 'F'), ('D', 'G'), ('D', 'E')]
Running FlameGraphProfiler...
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "c:\python27\lib\multiprocessing\forking.py", line 380, in main
prepare(preparation_data)
File "c:\python27\lib\multiprocessing\forking.py", line 488, in prepare
assert main_name not in sys.modules, main_name
AssertionError: __main__
Traceback (most recent call last):
File "c:\python27\lib\runpy.py", line 162, in _run_module_as_main
"__main__", fname, loader, pkg_name)
File "c:\python27\lib\runpy.py", line 72, in _run_code
exec code in run_globals
File "C:\Python27\Scripts\vprof.exe\__main__.py", line 9, in <module>
File "c:\python27\lib\site-packages\vprof\__main__.py", line 87, in main
program_stats = runner.run_profilers(source, config, verbose=True)
File "c:\python27\lib\site-packages\vprof\runner.py", line 78, in run_profilers
run_stats[option] = curr_profiler.run()
File "c:\python27\lib\site-packages\vprof\base_profiler.py", line 170, in run
return dispatcher()
File "c:\python27\lib\site-packages\vprof\flame_graph.py", line 171, in profile_module
return base_profiler.run_in_separate_process(self._profile_module)
File "c:\python27\lib\site-packages\vprof\base_profiler.py", line 76, in run_in_separate_process
manager = multiprocessing.Manager()
File "c:\python27\lib\multiprocessing\__init__.py", line 99, in Manager
m.start()
File "c:\python27\lib\multiprocessing\managers.py", line 528, in start
self._address = reader.recv()
EOFError
No vprof visualisations are produced.
Environment: Windows 10, Python 2.7.9, Cygwin.
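The "AssertionError: __main__" above is the classic symptom of running multiprocessing-based code on Windows against a script without a main guard: the child process re-imports the main module and re-executes its body. A minimal sketch of the guard that the profiled script needs:

```python
# Sketch: on Windows multiprocessing re-imports __main__ in each child
# ('spawn'-style start), so process creation must be guarded.
import multiprocessing

def work(x):
    return x * x

if __name__ == '__main__':
    # Without this guard, the child re-executes the module body on Windows,
    # which triggers the AssertionError seen above.
    multiprocessing.freeze_support()
    with multiprocessing.Pool(1) as pool:
        print(pool.apply(work, (6,)))
```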
For long-running applications (e.g. a web server), is it possible to view intermediary results while the application is still running?
Currently vprof can profile Python source files only. Profiling parts of code would be awesome.
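Until then, a stdlib-only interim sketch for profiling a single function with cProfile (the decorator name is illustrative, and this is a workaround, not vprof functionality):

```python
# Sketch: print a short cProfile summary for one function per call.
import cProfile
import functools
import io
import pstats

def profiled(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        prof = cProfile.Profile()
        result = prof.runcall(func, *args, **kwargs)
        out = io.StringIO()
        # Show the 5 most expensive entries by cumulative time.
        pstats.Stats(prof, stream=out).sort_stats('cumulative').print_stats(5)
        print(out.getvalue())
        return result
    return wrapper

@profiled
def busy():
    return sum(i * i for i in range(100000))
```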
README says:
"To install current dev version, clone this repository and execute
make install"
However it should be:
"sudo make install"
LOG:
Requirement already satisfied (use --upgrade to upgrade): psutil in /usr/lib/python2.7/dist-packages (from vprof==0.1.2)
Installing collected packages: vprof
Running setup.py install for vprof
Complete output from command /usr/bin/python2.7 -c "import setuptools, tokenize;__file__='/tmp/pip-Wd26mM-build/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'),
__file__, 'exec'))" install --record /tmp/pip-67vVFH-record/install-record.txt --single-version-externally-managed --compile:
running install
running build
running build_py
creating build
creating build/lib.linux-x86_64-2.7
creating build/lib.linux-x86_64-2.7/vprof
copying vprof/__init__.py -> build/lib.linux-x86_64-2.7/vprof
copying vprof/profile_wrappers.py -> build/lib.linux-x86_64-2.7/vprof
copying vprof/__main__.py -> build/lib.linux-x86_64-2.7/vprof
copying vprof/stats_server.py -> build/lib.linux-x86_64-2.7/vprof
running egg_info
creating vprof.egg-info
writing requirements to vprof.egg-info/requires.txt
writing vprof.egg-info/PKG-INFO
writing top-level names to vprof.egg-info/top_level.txt
writing dependency_links to vprof.egg-info/dependency_links.txt
writing entry points to vprof.egg-info/entry_points.txt
writing manifest file 'vprof.egg-info/SOURCES.txt'
warning: manifest_maker: standard file '-c' not found
reading manifest file 'vprof.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
writing manifest file 'vprof.egg-info/SOURCES.txt'
creating build/lib.linux-x86_64-2.7/vprof/frontend
copying vprof/frontend/profile.html -> build/lib.linux-x86_64-2.7/vprof/frontend
copying vprof/frontend/vprof.css -> build/lib.linux-x86_64-2.7/vprof/frontend
copying vprof/frontend/vprof_min.js -> build/lib.linux-x86_64-2.7/vprof/frontend
running install_lib
creating /usr/local/lib/python2.7/dist-packages/vprof
error: could not create '/usr/local/lib/python2.7/dist-packages/vprof': Permission denied
----------------------------------------
Command "/usr/bin/python2.7 -c "import setuptools, tokenize;__file__='/tmp/pip-Wd26mM-build/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-67vVFH-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-Wd26mM-build
make: *** [install] Error 1
I'm not sure if it's deps_install that's failing, but whenever I open vprof in remote mode, http://localhost:/ shows a spinning circle that keeps on spinning forever. build_ui and install throw no noticeable errors. Here's the output from deps_install:
(mvenv)$ python setup.py deps_install
running deps_install
Requirement already satisfied (use --upgrade to upgrade): psutil>=3.4.2 in /Users/ali/.pyenv/versions/mvenv/lib/python2.7/site-packages (from -r requirements.txt (line 1))
Requirement already satisfied (use --upgrade to upgrade): six>=1.10.0 in /Users/ali/.pyenv/versions/mvenv/lib/python2.7/site-packages (from -r requirements.txt (line 2))
Requirement already satisfied (use --upgrade to upgrade): mock>=1.0.0 in /Users/ali/.pyenv/versions/mvenv/lib/python2.7/site-packages (from -r dev_requirements.txt (line 1))
Requirement already satisfied (use --upgrade to upgrade): pylint>=1.5.4 in /Users/ali/.pyenv/versions/mvenv/lib/python2.7/site-packages (from -r dev_requirements.txt (line 2))
Requirement already satisfied (use --upgrade to upgrade): six in /Users/ali/.pyenv/versions/mvenv/lib/python2.7/site-packages (from pylint>=1.5.4->-r dev_requirements.txt (line 2))
Requirement already satisfied (use --upgrade to upgrade): astroid<1.5.0,>=1.4.5 in /Users/ali/.pyenv/versions/mvenv/lib/python2.7/site-packages (from pylint>=1.5.4->-r dev_requirements.txt (line 2))
Requirement already satisfied (use --upgrade to upgrade): colorama in /Users/ali/.pyenv/versions/mvenv/lib/python2.7/site-packages (from pylint>=1.5.4->-r dev_requirements.txt (line 2))
Requirement already satisfied (use --upgrade to upgrade): wrapt in /Users/ali/.pyenv/versions/mvenv/lib/python2.7/site-packages (from astroid<1.5.0,>=1.4.5->pylint>=1.5.4->-r dev_requirements.txt (line 2))
Requirement already satisfied (use --upgrade to upgrade): lazy-object-proxy in /Users/ali/.pyenv/versions/mvenv/lib/python2.7/site-packages (from astroid<1.5.0,>=1.4.5->pylint>=1.5.4->-r dev_requirements.txt (line 2))
npm WARN package.json [email protected] No README data
npm WARN unmet dependency /Users/ali/repo/vprof/node_modules/karma/node_modules/socket.io/node_modules/engine.io/node_modules/engine.io-parser requires has-binary@'0.1.6' but will load
npm WARN unmet dependency /Users/ali/repo/vprof/node_modules/karma/node_modules/socket.io/node_modules/has-binary,
npm WARN unmet dependency which is version 0.1.7
npm WARN unmet dependency /Users/ali/repo/vprof/node_modules/karma/node_modules/socket.io/node_modules/socket.io-client/node_modules/engine.io-client requires component-emitter@'1.1.2' but will load
npm WARN unmet dependency /Users/ali/repo/vprof/node_modules/karma/node_modules/socket.io/node_modules/socket.io-client/node_modules/component-emitter,
npm WARN unmet dependency which is version 1.2.0
npm WARN unmet dependency /Users/ali/repo/vprof/node_modules/karma/node_modules/socket.io/node_modules/socket.io-adapter/node_modules/socket.io-parser requires debug@'0.7.4' but will load
npm WARN unmet dependency /Users/ali/repo/vprof/node_modules/karma/node_modules/socket.io/node_modules/debug,
npm WARN unmet dependency which is version 2.2.0
npm WARN unmet dependency /Users/ali/repo/vprof/node_modules/phantomjs-prebuilt/node_modules/request/node_modules/http-signature/node_modules/sshpk requires assert-plus@'^1.0.0' but will load
npm WARN unmet dependency /Users/ali/repo/vprof/node_modules/phantomjs-prebuilt/node_modules/request/node_modules/http-signature/node_modules/assert-plus,
npm WARN unmet dependency which is version 0.2.0
My self-test outputs are as follows:
(mvenv)$ python setup.py test
running test
testGetRunDispatcher (base_profile_test.BaseProfileUnittest) ... ok
testInit_RunObjFunction (base_profile_test.BaseProfileUnittest) ... ok
testInit_RunObjImportedPackage (base_profile_test.BaseProfileUnittest) ... ok
testInit_RunObjModule (base_profile_test.BaseProfileUnittest) ... ok
testInit_RunObjPackagePath (base_profile_test.BaseProfileUnittest) ... ok
testRun (base_profile_test.BaseProfileUnittest) ... ok
testRunAsFunction (base_profile_test.BaseProfileUnittest) ... ok
testRunAsModule (base_profile_test.BaseProfileUnittest) ... ok
testRunAsPackageInNamespace (base_profile_test.BaseProfileUnittest) ... ok
testRunAsPackagePath (base_profile_test.BaseProfileUnittest) ... ok
testGetPackageCode (base_profile_test.GetPackageCodeUnittest) ... ok
testAddCode (code_heatmap_test.CodeHeatmapCalculator) ... ok
testCalcHeatmap (code_heatmap_test.CodeHeatmapCalculator) ... ok
testInit (code_heatmap_test.CodeHeatmapCalculator) ... ok
testAddCode (memory_profile_test.CodeEventsTrackerUnittest) ... ok
testTraceMemoryUsage_EmptyEventsList (memory_profile_test.CodeEventsTrackerUnittest) ... ok
testTraceMemoryUsage_NormalUsage (memory_profile_test.CodeEventsTrackerUnittest) ... ok
testTraceMemoryUsage_OtherCode (memory_profile_test.CodeEventsTrackerUnittest) ... ok
testTraceMemoryUsage_SameLine (memory_profile_test.CodeEventsTrackerUnittest) ... ok
testTransformStats (runtime_profile_test.RuntimeProfileUnittest) ... ok
----------------------------------------------------------------------
Ran 20 tests in 0.022s
OK
> vprof-frontend@ test /Users/ali/repo/vprof
> karma start
19 05 2016 16:37:59.894:INFO [framework.browserify]: bundle built
19 05 2016 16:37:59.907:INFO [karma]: Karma v0.13.22 server started at http://localhost:9876/
19 05 2016 16:37:59.914:INFO [launcher]: Starting browser PhantomJS
19 05 2016 16:38:01.173:INFO [PhantomJS 2.1.1 (Mac OS X 0.0.0)]: Connected on socket /#-QYG9HhjEzuR9d5NAAAA with id 80466447
PhantomJS 2.1.1 (Mac OS X 0.0.0): Executed 3 of 3 SUCCESS (0.003 secs / 0.002 secs)
(mvenv)$ python setup.py e2e_test
running e2e_test
testRequest (code_heatmap_e2e.CodeHeatmapFunctionEndToEndTest) ... ok
testRequest (code_heatmap_e2e.CodeHeatmapImportedPackageEndToEndTest) ... ok
testRequest (code_heatmap_e2e.CodeHeatmapModuleEndToEndTest) ... ok
testRequest (code_heatmap_e2e.CodeHeatmapPackageAsPathEndToEndTest) ... ok
testRequest (memory_profile_e2e.MemoryProfileFunctionEndToEndTest) ... ok
testRequest (memory_profile_e2e.MemoryProfileImportedPackageEndToEndTest) ... ok
testRequest (memory_profile_e2e.MemoryProfileModuleEndToEndTest) ... ok
testRequest (memory_profile_e2e.MemoryProfilePackageAsPathEndToEndTest) ... ok
testRequest (runtime_profile_e2e.RuntimeProfileFunctionEndToEndTest) ... ok
testRequest (runtime_profile_e2e.RuntimeProfileImportedPackageEndToEndTest) ... ok
testRequest (runtime_profile_e2e.RuntimeProfileModuleEndToEndTest) ... ok
testRequest (runtime_profile_e2e.RuntimeProfilePackageAsPathEndToEndTest) ... ok
----------------------------------------------------------------------
Ran 12 tests in 6.114s
OK
I'm sure it's something really straightforward I've missed, because I didn't see any issues like this in other open (or closed) issues.
async/await is an integral part of Python 3.6. Adding support for it would be nice.
Thanks for a great tool! This visual profiler is what I was looking for in Python.
I was surprised though that the code heatmap only shows the execution count for the colour highlighting, and for the mouseovers. For me it is far more interesting to know how long each line took, not just how many times it is run. Of course this is also somewhat in the flame graph, but not in the same format.
It would be great to be able to select what the code heatmap shows, effectively letting the user choose between the columns of line_profiler. This could be a command-line option, but ideally a drop-down menu on the code heatmap screen. I believe MATLAB's profiler offers something similar.
Thanks
d3 is moving towards v4, which will be modular. Since vprof requires just a tiny subset of d3's functionality, porting to v4 will decrease the size of the UI code and thus improve performance.
I installed it through pip on Python 2.7.5 and ran into an error when trying to launch:
File ".../lib/python2.7/site-packages/vprof/profile_wrappers.py", line 27, in get_memory_usage
memory_info = psutil.Process(os.getpid()).memory_info()
AttributeError: 'Process' object has no attribute 'memory_info'
It turned out I had a really old version of psutil installed (version 1.2.1). Running pip install -U psutil fixed the problem. I would suggest specifying a minimum version in your requirements.txt file; something like psutil>=3 would probably do it.
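Pinning in requirements.txt is the right fix; as an additional runtime guard, a generic dotted-version comparison can be sketched without extra dependencies (no pre-release or suffix handling):

```python
# Sketch: numeric comparison of dotted version strings, e.g. '1.2.1' < '3'.
def version_at_least(version_str, minimum):
    def parse(v):
        return tuple(int(part) for part in v.split('.'))
    return parse(version_str) >= parse(minimum)
```

In practice one would compare psutil.__version__ against the minimum at import time and raise a clear error before memory_info() is ever called.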
The installer couldn't install the phantomjs dependency:
[email protected] install /home/vladimir/Sources/Python/vprof/node_modules/phantomjs
node install.js
sh: 1: node: not found
npm http 304 https://registry.npmjs.org/align-text
npm http 304 https://registry.npmjs.org/align-text
npm http 304 https://registry.npmjs.org/lazy-cache
npm WARN This failure might be due to the use of legacy binary "node"
npm WARN For further explanations, please read
/usr/share/doc/nodejs/README.Debian
...
Add nodejs-legacy as a requirement