virtualgl's Issues

new virtualgl seems to break steam with bumblebee

On Arch Linux, after upgrading lib32-virtualgl and virtualgl to version 2.5, Steam no longer works with Bumblebee.

% optirun steam

Running Steam on arch rolling 64-bit
STEAM_RUNTIME is enabled automatically
Installing breakpad exception handler for appid(steam)/version(1454620878)
[VGL] ERROR: Could not load GLX/OpenGL functions
[2016-02-22 13:50:23] Startup - updater built Feb 4 2016 12:22:08

Note that Steam does work if I start it without optirun (though it then uses the Intel GPU, which is slow).

Downgrading virtualgl to 2.4.1 as a workaround fixes the problem.

BLAS : Program is Terminated. Because you tried to allocate too many memory regions.

Using commit fccfd89 (Sat Jul 16 14:26:06 2016 -0500)
on Linux 3.19.0-32-generic #37~14.04.1-Ubuntu
When using a 3D app via VNC as follows: ./bin/vglrun cura, everything is initially fine, but when I aggressively move 3D objects around on the screen, I get the following crash:

BLAS : Program is Terminated. Because you tried to allocate too many memory regions.
/usr/bin/cura: line 3: 27700 Segmentation fault LD_LIBRARY_PATH=/opt/cura/lib PYTHONPATH=/opt/cura/lib/python3/dist-packages QT_PLUGIN_PATH=/opt/cura/lib/plugins QML2_IMPORT_PATH=/opt/cura/lib/qml QT_QPA_FONTDIR=/opt/cura/lib/fonts python3.4 /opt/cura/bin/cura "$@"

I get the same crash if I use the packaged version, virtualgl_2.5_amd64.deb.
Cura does not normally crash under these circumstances.

couldn't open display on debian 8

I am trying to get OpenGL working for one of my projects.
I ran
/opt/VirtualGL/bin/vglserver_config
and chose all of the recommended settings:

Choose:
1

Restrict 3D X server access to vglusers group (recommended)?
[Y/n]
y

Restrict framebuffer device access to vglusers group (recommended)?
[Y/n]
y

Disable XTEST extension (recommended)?
[Y/n]
y
... Creating vglusers group ...
groupadd: group 'vglusers' already exists
Could not add vglusers group (probably because it already exists.)
... Granting read permission to /etc/opt/VirtualGL/ for vglusers group ...
... Creating /etc/modprobe.d/virtualgl.conf to set requested permissions for
    /dev/nvidia* ...
... Adding vglgenkey to /etc/kde4/kdm/Xsetup script ...
... Adding display-setup-script=vglgenkey to /etc/lightdm/lightdm.conf ...
... Disabling XTEST extension in /etc/kde4/kdm/kdmrc ...

Done. You must restart the display manager for the changes to take effect.
Then I logged in using vglconnect -x user:pass as described. In that same SSH session, I ran vglrun glxgears, and it just won't run. This is the output of vglrun +v glxgears:
[VGL] Shared memory segment ID for vglconfig: 360448
[VGL] VirtualGL v2.3.2 64-bit (Build 20121002)
[VGL] Opening local display :0
Error: couldn't open display 129.21.147.40:0

vglconnect with -e = "The server does not appear to have VirtualGL 2.1 or later installed"

Hello !
I've been proudly using VGL for some years now, and it works great.
I never really used vglconnect, because vglrun -d was enough for my needs, but today I need to encrypt X11 and VGL.
After re-reading the documentation, it seems the only way to encrypt X11 is to use the vglconnect -s argument.
So I tried it, and it works. There's also the -e argument, which is VERY interesting, BUT I can't use it.
Each time I try, for example, "vglconnect -s user@server -e glxgears", I get the "The server does not appear to have VirtualGL 2.1 or later installed" error, and I don't understand why.

I'm using VGL 2.5.2 on both client & server.

Thanks for your help :)
Regards

Firefox crashes

When I start Firefox with vglrun firefox, it crashes, and I can see:

[VGL] ERROR: in readPixels--
[VGL]    351: VirtualDrawable instance has not been fully initialized

I'm using freshly compiled VirtualGL from the master branch on Arch Linux.

Authentication with X cookie instead of xhost +LOCAL:

I have managed to use VirtualGL without setting xhost +LOCAL:.
The key point is to extract the cookie from ~/.Xauthority for display :0 and to change it a bit. It is described here: http://stackoverflow.com/a/25280523/5369403

# Create an empty cookie file, then copy the cookie for display :0 into it,
# rewriting the address-family field to ffff (the wildcard family) so the
# cookie is not tied to localhost.
Xcookie=/tmp/Xcookie
touch $Xcookie
xauth nlist :0 | sed -e 's/^..../ffff/' | xauth -f $Xcookie nmerge -

Inserting ffff in the cookie removes the restriction to localhost clients. Clients from other hosts using this cookie can access display :0.
I'm using this with VirtualGL in my project x11docker: https://github.com/mviereck/x11docker

Maybe it would be possible to let VirtualGL use this cookie to access display :0 without providing the cookie to client applications? I think this could be a security improvement.

problem developing 3d app with virtualgl and turbovnc

Hello,

I'm developing a simple 3D application, and my development machine is
using TurboVNC + VirtualGL.

The behaviour differs depending on whether I develop locally or remotely.

The difference is that if I close the app window created by Processing when
running locally, the process continues as expected. When running through
vglrun, the process dies when the window is closed.

The exceptions are included below.

The app uses the Quil frontend to the Java variant of the Processing API.
Processing appears to use the JogAmp API for 3D.

The code I'm using is here: [email protected]:jave/forestdream-sketch.git

The problem is mostly an inconvenience, because I need to restart the
development environment when I close the app window, which is bothersome
but not a complete show-stopper.

Any hint would be appreciated.

/Joakim

user=> (load-file "/home/joakim/forestdream-sketch/src/my_sketch/core.clj")
Caught handled GLException: EGLGLXDrawableFactory - Could not initialize shared resources for EGLGraphicsDevice[type .egl, v0.0.0, connection :2.0, unitID 0, handle 0x0, owner true, ResourceToolkitLock[obj 0x628bc963, isOwner true, <27895b56, 521f168e>[count 1, qsz 0, owner ]]] on thread nREPL-worker-1-SharedResourceRunner
[0]: jogamp.opengl.egl.EGLDrawableFactory$SharedResourceImplementation.createSharedResource(EGLDrawableFactory.java:518)
[1]: jogamp.opengl.SharedResourceRunner.run(SharedResourceRunner.java:353)
[2]: java.lang.Thread.run(Thread.java:745)
Caused[0] by GLException: Failed to created/initialize EGL display incl. fallback default: native 0x0, error 0x3001/0x3001 on thread nREPL-worker-1-SharedResourceRunner
[0]: jogamp.opengl.egl.EGLDisplayUtil.eglGetDisplayAndInitialize(EGLDisplayUtil.java:297)
[1]: jogamp.opengl.egl.EGLDisplayUtil.access$300(EGLDisplayUtil.java:58)
[2]: jogamp.opengl.egl.EGLDisplayUtil$1.eglGetAndInitDisplay(EGLDisplayUtil.java:320)
[3]: com.jogamp.nativewindow.egl.EGLGraphicsDevice.open(EGLGraphicsDevice.java:125)
[4]: jogamp.opengl.egl.EGLDrawableFactory$SharedResourceImplementation.createEGLSharedResourceImpl(EGLDrawableFactory.java:532)
[5]: jogamp.opengl.egl.EGLDrawableFactory$SharedResourceImplementation.createSharedResource(EGLDrawableFactory.java:516)
[6]: jogamp.opengl.SharedResourceRunner.run(SharedResourceRunner.java:353)
[7]: java.lang.Thread.run(Thread.java:745)
Caught handled GLException: EGLGLXDrawableFactory - Could not initialize shared resources for X11GraphicsDevice[type .x11, connection :2.0, unitID 0, handle 0x0, owner false, ResourceToolkitLock[obj 0x5b211efa, isOwner true, <311c6348, 43e89f1e>[count 1, qsz 0, owner ]]] on thread nREPL-worker-1-SharedResourceRunner
[0]: jogamp.opengl.egl.EGLDrawableFactory$SharedResourceImplementation.createSharedResource(EGLDrawableFactory.java:518)
[1]: jogamp.opengl.SharedResourceRunner.run(SharedResourceRunner.java:353)
[2]: java.lang.Thread.run(Thread.java:745)
Caused[0] by GLException: Failed to created/initialize EGL display incl. fallback default: native 0x0, error 0x3001/0x3001 on thread nREPL-worker-1-SharedResourceRunner
[0]: jogamp.opengl.egl.EGLDisplayUtil.eglGetDisplayAndInitialize(EGLDisplayUtil.java:297)
[1]: jogamp.opengl.egl.EGLDisplayUtil.access$300(EGLDisplayUtil.java:58)
[2]: jogamp.opengl.egl.EGLDisplayUtil$1.eglGetAndInitDisplay(EGLDisplayUtil.java:320)
[3]: com.jogamp.nativewindow.egl.EGLGraphicsDevice.open(EGLGraphicsDevice.java:125)
[4]: jogamp.opengl.egl.EGLDrawableFactory$SharedResourceImplementation.createEGLSharedResourceImpl(EGLDrawableFactory.java:532)
[5]: jogamp.opengl.egl.EGLDrawableFactory$SharedResourceImplementation.createSharedResource(EGLDrawableFactory.java:516)
[6]: jogamp.opengl.SharedResourceRunner.run(SharedResourceRunner.java:353)
[7]: java.lang.Thread.run(Thread.java:745)
#'my-sketch.core/my-sketch
user=> Ignoring inkscape:path-effect tag.
[VGL] ERROR: in getGLXDrawable--
[VGL] 186: Window has been deleted by window manager
Exception in thread "Thread-3" clojure.lang.ExceptionInfo: Subprocess failed {:exit-code 1}
at clojure.core$ex_info.invokeStatic(core.clj:4617)
at clojure.core$ex_info.invoke(core.clj:4617)
at leiningen.core.eval$fn__5732.invokeStatic(eval.clj:264)
at leiningen.core.eval$fn__5732.invoke(eval.clj:260)
at clojure.lang.MultiFn.invoke(MultiFn.java:233)
at leiningen.core.eval$eval_in_project.invokeStatic(eval.clj:366)
at leiningen.core.eval$eval_in_project.invoke(eval.clj:356)
at leiningen.repl$server$fn__11838.invoke(repl.clj:243)
at clojure.lang.AFn.applyToHelper(AFn.java:152)
at clojure.lang.AFn.applyTo(AFn.java:144)
at clojure.core$apply.invokeStatic(core.clj:646)
at clojure.core$with_bindings_STAR_.invokeStatic(core.clj:1881)
at clojure.core$with_bindings_STAR_.doInvoke(core.clj:1881)
at clojure.lang.RestFn.invoke(RestFn.java:425)
at clojure.lang.AFn.applyToHelper(AFn.java:156)
at clojure.lang.RestFn.applyTo(RestFn.java:132)
at clojure.core$apply.invokeStatic(core.clj:650)
at clojure.core$bound_fn_STAR_$fn__4671.doInvoke(core.clj:1911)
at clojure.lang.RestFn.invoke(RestFn.java:397)
at clojure.lang.AFn.run(AFn.java:22)
at java.lang.Thread.run(Thread.java:745)

SocketException The transport's socket appears to have lost its connection to the nREPL server
clojure.tools.nrepl.transport/bencode/fn--10199/fn--10200 (transport.clj:95)
clojure.tools.nrepl.transport/bencode/fn--10199 (transport.clj:95)
clojure.tools.nrepl.transport/fn-transport/fn--10171 (transport.clj:42)
clojure.core/binding-conveyor-fn/fn--4676 (core.clj:1938)
java.util.concurrent.FutureTask.run (FutureTask.java:266)
java.util.concurrent.ThreadPoolExecutor.runWorker (ThreadPoolExecutor.java:1142)
java.util.concurrent.ThreadPoolExecutor$Worker.run (ThreadPoolExecutor.java:617)
java.lang.Thread.run (Thread.java:745)
Bye for now!

Wayland EGL interposer

Referring to TurboVNC/turbovnc#18 in the near term at least, it is going to be necessary to implement a Wayland EGL interposer in order to get OpenGL hardware acceleration in a Weston remote display environment with the nVidia proprietary drivers. Although it is likely that this interposer will be rendered obsolete at some point (because the Weston developers have a long-term plan to support a hardware-accelerated headless remote display backend, and eventually the nVidia stuff will be sorted out), at the moment an interposer would be the easiest way to enable remote display with OpenGL hardware acceleration under Weston, which would likely spur development of remote display technologies based on that compositor.

Depends on #10. Once VirtualGL is using EGL "behind the scenes", i.e. once it's converting GLX calls to EGL calls, then implementing an EGL-to-EGL interposer for Wayland would be trivial. It would simply redirect EGL calls targeted at a Wayland display to a DRM device instead and maintain a mapping of Wayland windows to FBOs.

PCSX2 not working

Hello,
PCSX2 just crashes when using VirtualGL (Arch Linux on both sides).
Can you confirm?
DocMAX

ChangeLog.txt renamed, but doc/CMakeLists.txt still refers to it

Trying to build this on Gentoo, I get:

...
CMake Error at doc/cmake_install.cmake:71 (file):
  file INSTALL cannot find
  "/var/tmp/portage/x11-misc/virtualgl-9999-r1/work/virtualgl-9999/ChangeLog.txt".
Call Stack (most recent call first):
  cmake_install.cmake:43 (include)


Makefile:94: recipe for target 'install' failed

This appears to be because, in commit 5c4ece4, ChangeLog.txt got renamed to ChangeLog.md.

Steam + VGL crashes when attempting to launch a game

Refer to this thread. Specifically, this has been tested under Fedora 22 with the July 8, 2016 build of Steam (installed from the Fedora DNF repository) but probably affects all Linux platforms. It has specifically been tested with Dota 2 but probably affects most or all Steam games.

I've reproduced the problem, but I'm clueless as to what's causing it. Symptomatically, what's happening is that, when the VGL faker attempts to load a symbol from libGL using dlsym(), dlsym() returns the interposed symbol from the VGL faker instead. No idea why, but it seems that Steam is somehow interfering with VGL's function dispatching mechanism. When I've seen such problems in the past with other applications, I was able to work around them by setting VGL_GLLIB=/usr/lib64/libGL.so.1 or VGL_GLLIB=/usr/lib/libGL.so.1, which forces VirtualGL to load the "real" OpenGL functions directly from the underlying OpenGL library instead of relying on the dynamic loader to pick those symbols from the next library in the search order. That doesn't work with Steam, however.

I've spent 15 hours of unpaid labor and am unfortunately no closer to solving this. I give up.

VLC crashes when playing video

When VLC is started with vglrun vlc and a video is played, it crashes.

Xlib:  extension "NV-GLX" missing on display ":1".
Xlib:  extension "NV-GLX" missing on display ":1".

backtrace

#0  0x00007f85f338b320 in ?? ()
#1  0x00007f86340bec52 in XCloseDisplay () from /usr/lib/libX11.so.6
#2  0x00007f863528362d in _XCloseDisplay (dpy=0x7f85ec0031e0)
    at /mnt/GitHub/virtualgl/server/faker-sym.h:543
#3  XCloseDisplay (dpy=0x7f85ec0031e0) at /mnt/GitHub/virtualgl/server/faker-x11.cpp:82
#4  0x00007f85f37e9b50 in vdp_get_x11 () from /usr/lib/vlc/libvlc_vdpau.so.0
#5  0x00007f85f39f01bd in ?? () from /usr/lib/vlc/plugins/vdpau/libvdpau_avcodec_plugin.so
#6  0x00007f863390cb65 in ?? () from /usr/lib/libvlccore.so.8
#7  0x00007f863390d10e in vlc_module_load () from /usr/lib/libvlccore.so.8
#8  0x00007f85f88ddff6 in ?? () from /usr/lib/vlc/plugins/codec/libavcodec_plugin.so
#9  0x00007f85f88d9b84 in ?? () from /usr/lib/vlc/plugins/codec/libavcodec_plugin.so
#10 0x00007f85f7d60c01 in ff_get_format (avctx=0x7f8604c34b80, fmt=<optimized out>)
    at /mnt/GitHub/libav/libavcodec/utils.c:905
#11 0x00007f85f7ae36d5 in get_pixel_format (h=0x7f8604c8e680, h=0x7f8604c8e680, h=0x7f8604c8e680)
    at /mnt/GitHub/libav/libavcodec/h264_slice.c:1037
#12 0x00007f85f7ae63d1 in ff_h264_decode_slice_header (h=h@entry=0x7f8604c8e680, h0=h0@entry=0x7f8604c8e680)
    at /mnt/GitHub/libav/libavcodec/h264_slice.c:1355
#13 0x00007f85f7a9df57 in decode_nal_units (parse_extradata=0, buf_size=1564, buf=0x7f85fc0010f0 "", h=0x7f8604c8e680)
    at /mnt/GitHub/libav/libavcodec/h264.c:1527
#14 h264_decode_frame (avctx=0x7f8604c34b80, data=0x7f8604c34fa0, got_frame=0x7f8604c34628, avpkt=<optimized out>)
    at /mnt/GitHub/libav/libavcodec/h264.c:1782
#15 0x00007f85f7cc1f86 in frame_worker_thread (arg=0x7f8604c344c0)
    at /mnt/GitHub/libav/libavcodec/pthread_frame.c:145
#16 0x00007f8634e164a4 in start_thread () from /usr/lib/libpthread.so.0
#17 0x00007f863494813d in clone () from /usr/lib/libc.so.6

I'm using proprietary Nvidia drivers.

Direct buffer read from application

This is mostly a feature-request, though this functionality might already exist and just need to be twisted around to get working.

I'd like the ability to read directly from the VirtualGL pixelbuffer/framebuffer from an arbitrary application running on the same physical server. I'd use the VGL_READBACK=none option to reduce duplication of work. I could read directly from X, but that solution is wasteful because of the repeated copying. Best I can tell, ParaView doesn't need this functionality because it has access to the OpenGL context, and can thus read from the appropriate buffer itself. If there's a means of accomplishing this without altering VirtualGL, I'm all ears.

Vsync / Tearing

I see lots of tearing whatever I do (rgb, jpeg, yuv, -sp, -fps).
Is it possible to solve this problem?

Migrate documentation to AsciiDoc

Deplate is not actively maintained, and while it works for the limited scope in which we use it (generating static HTML), it would be nice to be able to generate PDFs and other formats for our User's Guide. I looked at various solutions, and AsciiDoc seems to have the necessary feature set, but I haven't really dug into it yet.

gcc6 compatibility fix.

gcc 6 throws an error while compiling VirtualDrawable.cpp:

...../virtualgl-git/src/virtualgl/server/VirtualDrawable.cpp:30:41: error: unable to find string literal operator ‘operator""m’ with ‘const char [11]’, ‘long unsigned int’ arguments
#define CHECKGL(m) if(glError()) _throw("Could not "m);

This is due to the new user-defined literals. Also, in fl_draw.cxx, #define min should be placed after #include <math.h>, since that header undefines the macro.
Fixing these issues results in a flawless gcc 6 compilation.
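
For reference, a minimal sketch of the kind of change that resolves the string-literal error (assuming the macro shown above): under the C++11 rules that gcc 6 enforces, "Could not "m is parsed as a user-defined literal, so a space before the macro argument restores ordinary string concatenation.

/* Rejected by gcc 6: "Could not "m is parsed as a user-defined literal. */
#define CHECKGL(m) if(glError()) _throw("Could not "m);

/* Accepted: the space separates the string literal from the macro argument,
   so the two adjacent string literals are concatenated as intended. */
#define CHECKGL(m) if(glError()) _throw("Could not " m);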

Readback not always RenderMode safe

Consider the following X11 sequence

10959: [73120:0] (5) --> Request 143.107 RenderMode (GLX)
context tag       1
mode              GL_SELECT
10963: [73123:0] (5) <-- Reply to Request 143.107 RenderMode (GLX)
sequence #        0xb9b (10959)
return value      0
n                 0
new mode          GL_SELECT
10967: [73124:0] (5) --> Request 143.11 glXSwapBuffers (GLX)
context           1
drawable          0x80008f

https://www.opengl.org/sdk/docs/man2/xhtml/glXMakeCurrent.xml says

     GLXBadContextState is also generated if the rendering context current
     to the calling thread has GL renderer state GLX_FEEDBACK or
     GLX_SELECT.

doGLReadback() has an if(renderMode != GL_RENDER) return; check,
but glXSwapBuffers() jumps right into readback, uses TempContext, and gets:

X Error of failed request: GLXBadContextState
Opcode of failed request:          155.26
Resource id in failed request:     0x1600009
Sequence number of failed request: 0x0
Current serial number:             0x16e
[VGL] ERROR: in TempContext--
[VGL]    52: Could not bind OpenGL context to window (window may have disappeared)

X Error of failed request: GLXBadContextState
Opcode of failed request:          155.5
Resource id in failed request:     0x1600009
Sequence number of failed request: 0x0
Current serial number:             0x176
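
A minimal sketch of the kind of guard the reporter is pointing at, i.e. performing the same render-mode check before the swap-triggered readback that doGLReadback() already performs (the names here are illustrative, not VirtualGL's actual internals):

#include <GL/gl.h>

/* Returns nonzero only when the context is in GL_RENDER mode, i.e. when it is
   safe to bind a temporary context and read back the drawable.  In GL_SELECT
   or GL_FEEDBACK mode, glXMakeCurrent() raises GLXBadContextState. */
static int readbackIsSafe(void)
{
    GLint renderMode = GL_RENDER;
    glGetIntegerv(GL_RENDER_MODE, &renderMode);
    return renderMode == GL_RENDER;
}

/* Hypothetical use in a glXSwapBuffers() interposer:
   if (readbackIsSafe()) doGLReadback(...);  */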

ERROR: vglclient failed to execute.

Hi!
I installed VirtualGL under Cygwin on Windows, but I get this error:

$ vglconnect -display 1 user@server-i ~/.ssh/mysshstuff
VirtualGL Client 64-bit v2.5.2 (Build 20170302)
[VGL] ERROR: vglclient failed to execute.

It works well, however, when I start VirtualGL under a native Ubuntu client.

What's the problem?

How can I compile VirtualGL with lib32 binaries?

Sorry for my bad English.
Hello, how can I compile with lib32 binaries?
In topic #16 it was recommended to manually recompile the virtualgl package, but Bumblebee needs the lib32-virtualgl library.
I tried these commands:

export CFLAGS=-m32
export CXXFLAGS=-m32
export LDFLAGS=-m32
sudo cmake -DCMAKE_INSTALL_PREFIX=/usr/share -DTJPEG_INCLUDE_DIR=/usr/include -DTJPEG_LIBRARY=/usr/lib/libturbojpeg.so

But the lib32 libraries were not installed.

virtualgl-2.4-2 crashes wine

My setup: Arch Linux 64-bit with Bumblebee/Optirun on an NVidia card. The latest upgrade, to version 2.4-2, makes the setup unusable; the Wine log says: "err:wgl:has_opengl glAccum not found in libGL, disabling OpenGL."

I was able to temporarily work around the issue by downgrading to version 2.4.1-1.

Many thanks for solving this issue, which probably impacts many users.

Patches to build with --as-needed

Hello, when building the package for openSUSE, I had to create two patches to build with --as-needed and -z,now:

This one makes sure that libGL is correctly linked even when built with --as-needed:
https://build.opensuse.org/package/view_file/X11:Bumblebee/VirtualGL/virtualgl-nodl.patch?expand=1

This one makes sure the package libraries are properly linked and there are no undefined symbols during linking:
https://build.opensuse.org/package/view_file/X11:Bumblebee/VirtualGL/VirtualGL-link-libs.patch?expand=1

"vglrun xterm" errors (2.5 beta1)

This may not be considered a bug (because you generally wouldn't need to run "vglrun xterm"), so feel free to close this if this is expected behaviour:

when I run "vglrun xterm", I get these errors:
ERROR: ld.so: object 'libdlfaker.so' from LD_PRELOAD cannot be preloaded: ignored.
ERROR: ld.so: object 'libvglfaker.so' from LD_PRELOAD cannot be preloaded: ignored.

I have both the 32-bit and 64-bit RPMs installed. I've tested this on RHEL 6 and RHEL 7, and it happens on both.

xterm is the only program in which I see this. I can run "vglrun ls", "vglrun gedit", etc. with no error messages; I only see it with "vglrun xterm".

If I set LD_PRELOAD to /usr/lib64/libdlfaker.so:/usr/lib64/libvglfaker.so, these errors don't appear. But the problem does occur even with just the 64-bit RPM installed.

Segfault with ParaView 5.3.0RC2

This is with VGL 2.5.2 and ParaView 5.3.0RC2, both compiled from source. OS is Debian Wheezy 64-bit, nvidia driver 367.57, X.org server 1.12.4, running in a TurboVNC 1.0.1 session.

paulm@s37n8:~/software/virtualgl-2.5.2/bin$ DISPLAY=:0.0 paraview
<doesn't crash, so assuming ParaView is ok>

paulm@s37n8:~/software/virtualgl-2.5.2/bin$ ./vglrun paraview
Segmentation fault (core dumped)

# Nothing useful in the core dump:
paulm@s37n8:~/software/virtualgl-2.5.2/bin$ ./vglrun gdb `which paraview` core
GNU gdb (GDB) 7.4.1-debian
Copyright (C) 2012 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>...
Reading symbols from /sara/sw/paraview/5.3.0-rc2/bin/paraview...done.
[New LWP 14658]
[New LWP 14668]
Core was generated by `/sara/sw/paraview/5.3.0-rc2/lib/paraview-5.3/paraview'.
Program terminated with signal 11, Segmentation fault.
#0  0x00007f23f8ee484f in ?? ()
(gdb) bt
#0  0x00007f23f8ee484f in ?? ()
#1  0x00007f240d462c90 in ?? ()
#2  0x0000000100000001 in ?? ()
#3  0x0000000000000000 in ?? ()

Seems to be a NULL drawable?

(full trace attached as lst.gz):

paulm@s37n8:~/software/virtualgl-2.5.2/bin$ ./vglrun +v +tr paraview
...
[VGL 0x6d6a07a0] glXGetProcAddressARB ((char *)procName=glXGetCurrentReadDrawableSGI [INTERPOSED]) 0.005007 ms
[VGL 0x6d6a07a0] glXGetProcAddressARB ((char *)procName=glXMakeCurrentReadSGI [INTERPOSED]) 0.005007 ms
[VGL 0x6d6a07a0] glXGetProcAddressARB ((char *)procName=glXSwapIntervalSGI [INTERPOSED]) 0.006199 ms
[VGL 0x6d6a07a0] glXGetProcAddressARB ((char *)procName=glXGetTransparentIndexSUN [INTERPOSED]) 0.005007 ms
[VGL] glFlush()
[VGL 0x6d6a07a0] doGLReadback (vw->getGLXDrawable()=0x00200003 sync=0 spoilLast=1 
[VGL 0x6d6a07a0]   XOpenDisplay (name=:1 dpy=0x042c3420(:1) ) 1.242161 ms
[VGL 0x6d6a07a0] 
[VGL 0x6d6a07a0]   XQueryExtension (dpy=0x042c3420(:1) name=MIT-SHM *major_opcode=129 *first_event=65 *first_error=128 ) 0.154972 ms
[VGL 0x6d6a07a0] 
[VGL 0x6d6a07a0]   XQueryExtension (dpy=0x042c3420(:1) name=Generic Event Extension *major_opcode=0 *first_event=69 *first_error=0 ) 0.102043 ms
[VGL 0x6d6a07a0] [VGL] Using pixel buffer objects for readback (BGRA --> BGRA)
) 7.942915 ms
[VGL] glFlush()
[VGL] glFinish()
[VGL 0x6d6a07a0] doGLReadback (vw->getGLXDrawable()=0x00200003 sync=0 spoilLast=0 
[VGL 0x6d6a07a0]   XOpenDisplay (name=:1 dpy=0x0455d9a0(:1) ) 4.344940 ms
[VGL 0x6d6a07a0] 
[VGL 0x6d6a07a0]   XQueryExtension (dpy=0x0455d9a0(:1) name=MIT-SHM *major_opcode=129 *first_event=65 *first_error=128 ) 0.125170 ms
[VGL 0x6d6a07a0] 
[VGL 0x6d6a07a0]   XQueryExtension (dpy=0x0455d9a0(:1) name=Generic Event Extension *major_opcode=0 *first_event=69 *first_error=0 ) 0.140905 ms
[VGL 0x6d6a07a0] ) 7.359028 ms
[VGL 0x6d6a07a0] glXDestroyContext (dpy=0x03f54230(:1) ctx=0x0403ddd0 ) 0.021935 ms
[VGL 0x6d6a07a0] glXMakeCurrent (dpy=0x03f54230(:1) drawable=0x00000000 ctx=0x00000000 Segmentation fault (core dumped)

Virtual Machine

Can I add OpenGL support to a Debian virtual machine using VirtualGL?

Develop a GLVND vendor library for VirtualGL

Opening this to track a discussion I had offline with nVidia regarding the possibility of creating a GLVND vendor library (refer to https://github.com/NVIDIA/libglvnd, and
https://devtalk.nvidia.com/default/topic/915640/unix-graphics-announcements-and-news/multiple-glx-client-libraries-in-the-nvidia-linux-driver-installer-package/) for VirtualGL.

GLVND allows multiple GPUs with different drivers and OpenGL stacks to co-exist peacefully on the same system. In a GLVND system, libGL is actually a dispatcher, and it will forward OpenGL and GLX calls to a different underlying vendor library, depending on the X screen that is being accessed. So, for instance, you could have an nVidia GPU on display :0.0 and an AMD GPU on display :0.1 if you wanted. Without any modifications, VirtualGL will continue to work fine in this environment. You can simply set VGL_DISPLAY to point to the screen whose GPU you wish to use for rendering. That is how VirtualGL already operates (except that, without GLVND, all of the GPUs in a VirtualGL server have to be from the same vendor.)

My interest in GLVND was mainly to determine whether it would be possible to use that technology to avoid vglrun. The conclusion we came to was: probably not, because VirtualGL also interposes a handful of X11 and xcb functions, for the following reasons:

  1. It's the easiest way to monitor the application's creation and destruction of X windows. We have to track all windows created by the application so that we'll know what drawable type corresponds to the drawable handle passed to glXMake[Context]Current() (and other GLX functions.) We might be able to assume that any drawable passed to glXMake[Context]Current() that wasn't created with a glX*() function is a regular X window, and that would eliminate the need to interpose the XCreate*Window() functions. However, we still need to be able to track window destruction so that we can free the corresponding Pbuffer resources.
  2. We interpose XQueryExtension() and XListExtensions() so that we can report that GLX is supported by the X server, even if it isn't (in our case, we intercept any GLX functions and hand them off to an X server that does support that extension, but applications will often query for the existence of GLX first.) Most X proxies support GLX, so this feature is mainly there because of the few X proxies (including TurboVNC) that don't. It would be straightforward to add an option to TurboVNC that makes it lie about its support for GLX, but of course that would be very hackish and not portable, and it would cause OpenGL applications to fail in unexpected ways if a user attempted to run them without VirtualGL.
  3. We interpose the X11 event queue and window resize functions, because it's the most performant way of detecting when a Window has been resized (bearing in mind that VirtualGL was originally designed as a bolt-on technology for remote X, it still tries to avoid round trips to the X server as much as possible.) We could call XGetGeometry() within the body of glViewport() to ensure that we catch any X window resizes, but some applications call glViewport() with every frame, so this would potentially cause performance issues for the few users who are still using VirtualGL with remote X. It would also be possible to set up a second X connection to monitor for window resizes (VirtualGL already does this for "foreign" windows-- that is, windows created in other processes.) However, this still requires a round-trip to the X server within the body of glViewport(), so it's not a great solution for the general case.
  4. We interpose a handful of XCB functions-- including xcb_get_extension_data() and
    xcb_glx_query_version()-- for similar reasons to the above (reporting that GLX is supported by the X server, even when it isn't, and monitoring the event queue for window resizes.)
  5. We interpose XCopyArea() and XGetImage() in order to handle Pixmap rendering. VirtualGL has to split each GLX Pixmap into a 3D Pixmap, resident on the 3D X server, and a 2D Pixmap, resident on the 2D X server, and it has to synchronize the pixels between the two whenever the application switches back & forth from 2D to 3D rendering. Unfortunately there is no way around this, because some applications create a GLX Pixmap, do some OpenGL rendering to the Pixmap, then call XCopyArea() with the 2D Pixmap handle-- without calling glXWaitGL() or glFinish() or any other OpenGL sync function first. Thus, there is not always an opportunity prior to the XCopyArea() call for VirtualGL to synchronize the contents of the 3D Pixmap to the 2D Pixmap.

VirtualGL 2.5 really goes to huge lengths to be as non-intrusive as possible. It loads functions on-demand from the underlying OpenGL library, so in most cases, that library won't even be touched until the application invokes an OpenGL/GLX function (that is, we won't force libGL to be loaded earlier
than it otherwise would be.) VGL 2.5 uses dlopen()/dlsym() only to obtain the address of the underlying OpenGL library's glXGetProcAddressARB function, which is used to obtain all subsequent OpenGL/GLX symbols, so essentially we're already doing GLVND-esque dispatching. It is unclear whether implementing an actual GLVND library would make any difference in VirtualGL's overall compatibility, except in perhaps the most esoteric of cases. Furthermore, it would necessitate maintaining two different libraries, since some systems would still need to use the old interposer.
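
As a rough illustration of that dispatching approach (a sketch under the assumptions above, not VirtualGL's actual code), the interposer only needs dlopen()/dlsym() to fetch glXGetProcAddressARB and can resolve every other GL/GLX entry point through it:

#include <dlfcn.h>
#include <GL/glx.h>

typedef void (*GLXextFuncPtr)(void);
typedef GLXextFuncPtr (*GetProcAddressARBPtr)(const GLubyte *);

static GetProcAddressARBPtr realGetProcAddressARB;

/* Resolve a "real" GL/GLX symbol on demand.  The library name could instead
   come from an environment variable such as VGL_GLLIB. */
static GLXextFuncPtr loadRealSymbol(const char *name)
{
    if (!realGetProcAddressARB) {
        void *gllib = dlopen("libGL.so.1", RTLD_LAZY | RTLD_LOCAL);
        if (!gllib) return NULL;
        realGetProcAddressARB =
            (GetProcAddressARBPtr)dlsym(gllib, "glXGetProcAddressARB");
        if (!realGetProcAddressARB) return NULL;
    }
    return realGetProcAddressARB((const GLubyte *)name);
}

/* Example: obtain the real glXSwapBuffers the first time the faker needs it.
   void (*real_glXSwapBuffers)(Display *, GLXDrawable) =
       (void (*)(Display *, GLXDrawable))loadRealSymbol("glXSwapBuffers");  */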

It was proposed that the interposed X11/XCB functions might be integrated into TurboVNC, but this is impossible for two reasons:

  1. VirtualGL has to support other X proxies.
  2. Much of VirtualGL's architecture assumes that everything is in the same process. It would be necessary for the VirtualGL GLVND library to exchange data with the TurboVNC X server using some IPC mechanism, and it would be necessary to completely re-architect a lot of things in VGL. For instance, XCopyArea() (which would now be in TurboVNC, not in VirtualGL) has to be able to call glReadPixels() and glCopyPixels() to transfer data between the 3D Pixmap and the 2D Pixmap, or between a 3D Window and a 3D Pixmap, etc. That becomes exceedingly difficult or even impossible when it's outside of the 3D application process. I can't see a very easy way to make it happen without completely re-architecting VirtualGL, which would necessitate a separate solution just for the GLVND case.

Given the movement in the direction of Wayland, which (if I understand Wayland correctly) would eliminate the need for VirtualGL or any split rendering solution in the long term, I am reluctant to invest too much effort into re-inventing VirtualGL, unless there is a clear end-user benefit.

"Could not set PBO size" error when running Metapost

A user reported an intermittent issue with Metapost. Sometimes it would error out with "Error in ReadPixels, 421: Could not set PBO size", and other times it would work fine.

It is believed (but not yet confirmed) that e3d5e86 fixes the bug. I opened this issue in order to track it, in case the aforementioned commit didn't actually fix it.

VirtualGL on Arm issue

I am successfully running VirtualGL on Linux under Ubuntu 16.04, using as the client an ARM-based Geekbox running Ubuntu 14.04. I compiled libjpeg and TurboVNC and created debs. It was neither an easy nor too hard a task; the main problem was that libjpeg needed ./configure --with-java with the configure path pointed to the includes of the Java SDK installation, and TurboVNC needed FindJNI.cmake modified so CMake would find the Java SDK. I compiled it and it worked :). But when I used it, I got horrible frame rates (4-5 fps). While hunting down the Java SDK, I noticed a paper comparing speed between Oracle Java and OpenJDK. When I deleted everything and recompiled with Oracle Java 8 SE for armhf, I got 25+ fps. I'm posting here in case anyone has the same issue.

Wine crashes on 32-bit installation

I was testing VirtualGL with Xephyr. On 64-bit machines it worked great, but on 32-bit installations, while native games worked, every single Wine game crashed with errors about address 0x00000000 (something like EIP=0x00000000). An example of such a message:

A violation in 00000000 occured at 00000000 with Read.

Tested on Ubuntu 16.04 (64-bit), Ubuntu 14.04 (32-bit, wine 1.7, VirtualGL 2.5) and Debian 9 (32-bit, wine 1.8, VirtualGL 2.5).

TurboVNC + VirtualGL + steamos-session

Hi there!
Launching vncserver with the -xstartup $DE option, where $DE=/usr/bin/custom_script, does not work for steamos-session.
Here's the content of my /usr/bin/custom_script:

#!/usr/bin/env bash
exec vglrun steamos-session

What I get is just a black screen with a totally white cursor (it's actually the Steam cursor, so something is trying to start).
Note that the same script works fine if, instead of steamos-session, I launch any other session (MATE, Openbox, etc.), as well as any other application, including Steam itself (but not steam -bigpicture).
Wondering whether the problem was related to geometry, I tried to force 1366x768, with no luck.

Any help is welcome.

Access the GPU without going through an X server

There are spurious rumors that this either already is possible or will be possible soon with the nVidia drivers, by using EGL, but it is unclear exactly how (the Khronos EGL headers still seem to indicate that Xlib is required when using EGL on Un*x.) As soon as it is possible to do this, it would be a great enhancement for VirtualGL, since it would eliminate the need for a running X server on the server machine. I already know basically how to make such a system work in VirtualGL, because Sun used to have a proprietary API (GLP) that allowed us to accomplish the same thing on SPARC. Even as early as 2007, we identified EGL as a possible replacement for GLP, but Linux driver support was only available for it recently, and even where it is available, EGL still seems to be tied to X11 on Un*x systems. It is assumed that, eventually, that will have to change in order to support Wayland.
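
For what it's worth, a sketch of what a non-X code path might look like, assuming a driver that exposes the EGL_EXT_device_enumeration and EGL_EXT_platform_device extensions (which were not yet widely available when this was written):

#include <EGL/egl.h>
#include <EGL/eglext.h>

/* Enumerate EGL devices and initialize a display with no X server involved. */
static EGLDisplay openHeadlessDisplay(void)
{
    PFNEGLQUERYDEVICESEXTPROC queryDevices =
        (PFNEGLQUERYDEVICESEXTPROC)eglGetProcAddress("eglQueryDevicesEXT");
    PFNEGLGETPLATFORMDISPLAYEXTPROC getPlatformDisplay =
        (PFNEGLGETPLATFORMDISPLAYEXTPROC)eglGetProcAddress("eglGetPlatformDisplayEXT");
    if (!queryDevices || !getPlatformDisplay) return EGL_NO_DISPLAY;

    EGLDeviceEXT devices[8];
    EGLint numDevices = 0;
    if (!queryDevices(8, devices, &numDevices) || numDevices < 1)
        return EGL_NO_DISPLAY;

    /* Use the first device; a real implementation would let the user choose. */
    EGLDisplay dpy = getPlatformDisplay(EGL_PLATFORM_DEVICE_EXT, devices[0], NULL);
    EGLint major = 0, minor = 0;
    if (dpy == EGL_NO_DISPLAY || !eglInitialize(dpy, &major, &minor))
        return EGL_NO_DISPLAY;
    return dpy;  /* Pbuffer surfaces and contexts can now be created as usual. */
}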

Regression on VGL_SAMPLES

VGL_SAMPLES has stopped working after commit 9fe3458.
When used, vglrun prints

Error: couldn't get an RGB, Double-buffered visual

and then exits.

I've tested on different graphics cards, but all are nVidia. The application I was using for testing was glxgears.

It seems that GLX_PIXMAP_BIT can't be used in combination with GLX_SAMPLES, at least where I tested, but I can't find anything that confirms that's truly the case.

Nvidia IFR

Hi,

I'm trying to use VirtualGL in conjunction with NVIDIA's IFR for recording to video/streaming. I noticed there is a glreadtest in the utils that seems to implement it. Is there any documentation on how to use it?
I have also tried to use the "shim" example from the NVIDIA Capture SDK (the method that preloads a GLX lib to interpose its own version of glXCreateWindow, etc.). It seems to conflict a little with vglrun, but I can't help thinking these things aren't too far apart. Do you have any advice as to what would be the simplest way to use IFR with VirtualGL?

Thanks,
Nick

vglrun wine dx-game.exe crashes over TurboVNC with amdgpu

Software:
Ubuntu 16.04.1 upgraded from 14.04
VirtualGL 2.5-20160215 (also tried with 2.4)
TurboVNC 2.0.91 (also tried with 1.2.3)
wine 1.8 (also tried with 1.6)
amdgpu 1.1 driver (afaict I'm stuck with this, Radeon R9 380)

The setup is a beefy workstation in the basement and a thin client in the living room. This worked fine before Ubuntu was upgraded on both machines.

wine game.exe works locally on the remote machine; the games tested are Path of Exile, Age of Wonders 3, and the Windows binary of Europa Universalis 4.
vglrun glxinfo and vglrun glxgears both give correct results, both locally and via TurboVNC.
vglrun game-bin works via VNC for games like Warzone 2100, Factorio, and Heroes of Newerth (all ELF binaries).
vglrun wine winecfg via VNC, as well as random non-3D mini-app exes, works fine.

vglrun wine game.exe in VNC crashes with all three games (which work without vglrun locally on the workstation) with the same error:

Backtrace:
=>0 0x00000000 (0x0033ef38)
  1 0x7d692d9e in winex11 (+0x32d9d) (0x0033ef58)
  2 0xf6cde5d8 in wined3d (+0x2e5d7) (0x0033efd8)
[...]

Is it useful to upload the 20+MB wine relay log or the 1xxMB full log?
I'm not sure which (combination) of the software components listed at the beginning is at fault, as I've tried nearly all combinations of the alternative versions in parentheses, as well as changing the wine prefix to and from 32-bit and installing half of what's available through winetricks (DirectX9 etc.).
Wine works flawlessly without vglrun (specifically DX games), though.

What else should I provide and what's the best way to proceed with debugging?

display error in Siemens NX

When I use NX through VirtualGL and perform a section operation, the SHOW CAP and SHOW INTERFERENCE options do not display correctly; they display a huge plane, not a section plane. Please help me solve this issue. Thanks.

Segfault in 2.5.1 due to a bad DisplayString "call"

Attempting to run the Steam launcher results in a segfault. Partial backtrace:

#0  0xf7316cee in __strcasecmp_l_sse4_2 () from /lib32/libc.so.6
#1  0xf766d448 in vglserver::WindowHash::compare (entry=0x57e54110, key2=37748741, key1=0x57f36ff8 "localhost:10.0", this=0x57e54098)
    at /tmp/portage/x11-misc/virtualgl-9999/work/virtualgl-9999/server/WindowHash.h:163
#2  vglserver::Hash::findEntry (key2=, key1=0x57f36ff8 "localhost:10.0", this=0x57e54098)
    at /tmp/portage/x11-misc/virtualgl-9999/work/virtualgl-9999/server/Hash.h:121
#3  vglserver::Hash::find (key2=, key1=0x57f36ff8 "localhost:10.0", this=)
    at /tmp/portage/x11-misc/virtualgl-9999/work/virtualgl-9999/server/Hash.h:90
#4  vglserver::WindowHash::find (this=0x57e54098, dpy=dpy@entry=0x57f08860, glxd=37748741, vwin=@0xffcdf0e4: 0xef80db94)
    at /tmp/portage/x11-misc/virtualgl-9999/work/virtualgl-9999/server/WindowHash.h:63
#5  0xf7683afe in vglserver::WindowHash::find (vwin=@0xffcdf0e4: 0xef80db94, glxd=, dpy=0x57f08860, this=)
    at /tmp/portage/x11-misc/virtualgl-9999/work/virtualgl-9999/server/WindowHash.h:62
#6  glXMakeCurrent (dpy=0x57f08860, drawable=37748741, ctx=0x57ffe91c) at /tmp/portage/x11-misc/virtualgl-9999/work/virtualgl-9999/server/faker-glx.cpp:1697

A bisect led me to a143bf3, and partially reverting it - specifically, reintroducing the deadYet checks in the X... functions of server/faker-x11.cpp - seems to have corrected the problem. However, I'm not familiar with OpenGL, nor can I retest with ANSYS HFSS.

Interestingly, the segfault disappeared with no source changes when compiling with -O0 instead of -O2.

OpenGL versions differ between the TurboVNC server and client on the same machine.

Hi,

This is a bit strange for me,

TurboVNC Server (localhost):

$ glxinfo | grep OpenGL
OpenGL vendor string: X.Org
OpenGL renderer string: Gallium 0.4 on AMD CAICOS
OpenGL core profile version string: 3.3 (Core Profile) Mesa 10.1.3
OpenGL core profile shading language version string: 3.30
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile
OpenGL core profile extensions:
OpenGL version string: 3.0 Mesa 10.1.3
OpenGL shading language version string: 1.30
OpenGL context flags: (none)
OpenGL extensions:

TurboVNC Client on same machine:

$ glxinfo | grep OpenGL
libGL error: failed to open drm device: Permission denied
libGL error: failed to load driver: r600
OpenGL vendor string: VMware, Inc.
OpenGL renderer string: Gallium 0.4 on llvmpipe (LLVM 3.4, 128 bits)
OpenGL version string: 2.1 Mesa 10.1.3
OpenGL shading language version string: 1.30
OpenGL extensions:

And when I tried within the TurboVNC Viewer window itself, I got:

$ glxinfo | grep OpenGL
Error: couldn't find RGB GLX visual or fbconfig
Error: couldn't find RGB GLX visual or fbconfig

Tried to run glxspheres64 without VirtualGL, and got:

$ /opt/VirtualGL/bin/glxspheres64 
Polygons in scene: 62464 (61 spheres * 1024 polys/spheres)
ERROR in line 619:
Could not obtain RGB visual with requested properties

Then I tried with VirtualGL:

$ vglrun /opt/VirtualGL/bin/glxspheres64 
Polygons in scene: 62464 (61 spheres * 1024 polys/spheres)
libGL error: failed to open drm device: Permission denied
libGL error: failed to load driver: r600
Visual ID of window: 0x21
Context is Direct
OpenGL Renderer: Gallium 0.4 on llvmpipe (LLVM 3.4, 128 bits)
8.111017 frames/sec - 22.082551 Mpixels/sec
8.139361 frames/sec - 22.159719 Mpixels/sec
8.139943 frames/sec - 22.161305 Mpixels/sec
8.134129 frames/sec - 22.145476 Mpixels/sec
8.117028 frames/sec - 22.098916 Mpixels/sec
8.127523 frames/sec - 22.127491 Mpixels/sec
8.094146 frames/sec - 22.036620 Mpixels/sec
8.082994 frames/sec - 22.006258 Mpixels/sec
8.120539 frames/sec - 22.108475 Mpixels/sec
8.124676 frames/sec - 22.119740 Mpixels/sec
8.072273 frames/sec - 21.977069 Mpixels/sec
8.103361 frames/sec - 22.061709 Mpixels/sec
8.078328 frames/sec - 21.993555 Mpixels/sec

So I compiled Mesa (http://www.turbovnc.org/Documentation/Mesa) and ran it without VirtualGL:

$ LD_LIBRARY_PATH=/home/vnc/Downloads/mesa-10.5.4/lib /opt/VirtualGL/bin/glxspheres64
Polygons in scene: 62464 (61 spheres * 1024 polys/spheres)
Visual ID of window: 0x68
Context is Direct
OpenGL Renderer: Mesa X11
Segmentation fault (core dumped)

With Mesa and VirtualGL:

$ LD_LIBRARY_PATH=/home/vnc/Downloads/mesa-10.5.4/lib vglrun /opt/VirtualGL/bin/glxspheres64
Polygons in scene: 62464 (61 spheres * 1024 polys/spheres)
Segmentation fault (core dumped)

How can one upgrade to the same version as the server? Thanks!

ClearVolume EGL problem 0x3008 Bad Display

Hi,

We are attempting to run the ClearVolume visualization software on HPC nodes that have NVIDIA Tesla P100 cards and a VirtualGL/TurboVNC interactive session setup.

https://clearvolume.github.io/

ClearVolume is failing due to a BAD_DISPLAY 0x3008 error from EGL, when run via vglrun:

10:50 AM $ vglrun java -jar ClearVolume.exe.jar 
Picked up _JAVA_OPTIONS: -Dhttp.proxyHost=proxy.swmed.edu -Dhttp.proxyPort=3128 -Dhttps.proxyHost=proxy.swmed.edu -Dhttps.proxyPort=3128
Picked up _JAVA_OPTIONS: -Dhttp.proxyHost=proxy.swmed.edu -Dhttp.proxyPort=3128 -Dhttps.proxyHost=proxy.swmed.edu -Dhttps.proxyPort=3128
Caught handled GLException: EGLGLXDrawableFactory - Could not initialize shared resources for EGLGraphicsDevice[type .egl, v0.0.0, connection :9.0, unitID 0, handle 0x0, owner true, ResourceToolkitLock[obj 0x7d7a8d8, isOwner true, <9105652, d2fc3a8>[count 1, qsz 0, owner <main-SharedResourceRunner>]]] on thread main-SharedResourceRunner
    [0]: jogamp.opengl.egl.EGLDrawableFactory$SharedResourceImplementation.createSharedResource(EGLDrawableFactory.java:518)
    [1]: jogamp.opengl.SharedResourceRunner.run(SharedResourceRunner.java:353)
    [2]: java.lang.Thread.run(Thread.java:745)
Caused[0] by GLException: Failed to created/initialize EGL display incl. fallback default: native 0x0, error 0x3008/0x3000 on thread main-SharedResourceRunner
    [0]: jogamp.opengl.egl.EGLDisplayUtil.eglGetDisplayAndInitialize(EGLDisplayUtil.java:297)
    [1]: jogamp.opengl.egl.EGLDisplayUtil.access$300(EGLDisplayUtil.java:58)
    [2]: jogamp.opengl.egl.EGLDisplayUtil$1.eglGetAndInitDisplay(EGLDisplayUtil.java:320)
    [3]: com.jogamp.nativewindow.egl.EGLGraphicsDevice.open(EGLGraphicsDevice.java:125)
    [4]: jogamp.opengl.egl.EGLDrawableFactory$SharedResourceImplementation.createEGLSharedResourceImpl(EGLDrawableFactory.java:532)
    [5]: jogamp.opengl.egl.EGLDrawableFactory$SharedResourceImplementation.createSharedResource(EGLDrawableFactory.java:516)
    [6]: jogamp.opengl.SharedResourceRunner.run(SharedResourceRunner.java:353)
    [7]: java.lang.Thread.run(Thread.java:745)
Caught handled GLException: EGLGLXDrawableFactory - Could not initialize shared resources for X11GraphicsDevice[type .x11, connection :9.0, unitID 0, handle 0x0, owner false, ResourceToolkitLock[obj 0x2802a326, isOwner true, <6fa51aa6, 68bd9ecc>[count 1, qsz 0, owner <main-SharedResourceRunner>]]] on thread main-SharedResourceRunner
    [0]: jogamp.opengl.egl.EGLDrawableFactory$SharedResourceImplementation.createSharedResource(EGLDrawableFactory.java:518)
    [1]: jogamp.opengl.SharedResourceRunner.run(SharedResourceRunner.java:353)
    [2]: java.lang.Thread.run(Thread.java:745)
Caused[0] by GLException: Failed to created/initialize EGL display incl. fallback default: native 0x0, error 0x3008/0x3000 on thread main-SharedResourceRunner
    [0]: jogamp.opengl.egl.EGLDisplayUtil.eglGetDisplayAndInitialize(EGLDisplayUtil.java:297)
    [1]: jogamp.opengl.egl.EGLDisplayUtil.access$300(EGLDisplayUtil.java:58)
    [2]: jogamp.opengl.egl.EGLDisplayUtil$1.eglGetAndInitDisplay(EGLDisplayUtil.java:320)
    [3]: com.jogamp.nativewindow.egl.EGLGraphicsDevice.open(EGLGraphicsDevice.java:125)
    [4]: jogamp.opengl.egl.EGLDrawableFactory$SharedResourceImplementation.createEGLSharedResourceImpl(EGLDrawableFactory.java:532)
    [5]: jogamp.opengl.egl.EGLDrawableFactory$SharedResourceImplementation.createSharedResource(EGLDrawableFactory.java:516)
    [6]: jogamp.opengl.SharedResourceRunner.run(SharedResourceRunner.java:353)
    [7]: java.lang.Thread.run(Thread.java:745)

This is with NVIDIA driver 384.81, and VirtualGL-2.5.3-20171031

If I export DISPLAY=:0 and run directly on the GPU card, it starts without error.

Other OpenGL apps run fine via VirtualGL.

I read #46 and was hopeful that the fix there would also fix this, but note that the error is a bit different: we have 0x3008 (BAD_DISPLAY), whereas that issue was giving 0x3001 errors from EGL.

Has anyone come across a similar issue with any app, or have any suggestions for anything we can try here?

Thanks!

nvenc

Will you add NVENC (NVIDIA hardware encoding) as an option for encoding and streaming?

Framerate does not match monitor refresh rate (debian 9)

Hello!

I've just installed Debian 9, and I see different behaviour than on Debian 8.
With vglrun +v -c 0 glxgears I had a frame rate of 60 fps on Debian 8, matching the monitor refresh rate; now I'm at about 500 fps on Debian 9 (along with relatively high CPU usage, about 15% for glxgears, compared to about 1% without vglrun).
I'm not sure whether this is a bug or only a matter of the kernel and/or graphics driver. It is not really a problem; I'm just a bit confused and thought I should tell you.

$ vglrun +v -c 0 glxgears
[VGL] Shared memory segment ID for vglconfig: 59998237
[VGL] VirtualGL v2.5.1 64-bit (Build 20161001)
[VGL] Opening connection to 3D X server :0
[VGL] NOTICE: Replacing dlopen("libGL.so.1") with dlopen("libvglfaker.so")
[VGL] NOTICE: Replacing dlopen("libGL.so.1") with dlopen("libvglfaker.so")
[VGL] Using Pbuffers for rendering
[VGL] WARNING: Could not load function "glXSwapIntervalEXT"
[VGL] WARNING: Could not load function "glXBindSwapBarrierNV"
[VGL] WARNING: Could not load function "glXJoinSwapGroupNV"
[VGL] WARNING: Could not load function "glXQueryFrameCountNV"
[VGL] WARNING: Could not load function "glXQueryMaxSwapGroupsNV"
[VGL] WARNING: Could not load function "glXQuerySwapGroupNV"
[VGL] WARNING: Could not load function "glXResetFrameCountNV"
[VGL] WARNING: Could not load function "glXSwapIntervalEXT"
[VGL] WARNING: Could not load function "glXBindSwapBarrierNV"
[VGL] WARNING: Could not load function "glXJoinSwapGroupNV"
[VGL] WARNING: Could not load function "glXQueryFrameCountNV"
[VGL] WARNING: Could not load function "glXQueryMaxSwapGroupsNV"
[VGL] WARNING: Could not load function "glXQuerySwapGroupNV"
[VGL] WARNING: Could not load function "glXResetFrameCountNV"
[VGL] WARNING: Could not load function "glXSwapIntervalEXT"
[VGL] WARNING: Could not load function "glXBindSwapBarrierNV"
[VGL] WARNING: Could not load function "glXJoinSwapGroupNV"
[VGL] WARNING: Could not load function "glXQueryFrameCountNV"
[VGL] WARNING: Could not load function "glXQueryMaxSwapGroupsNV"
[VGL] WARNING: Could not load function "glXQuerySwapGroupNV"
[VGL] WARNING: Could not load function "glXResetFrameCountNV"
Running synchronized to the vertical refresh.  The framerate should be
approximately the same as the monitor refresh rate.
[VGL] Using pixel buffer objects for readback (BGR --> BGRA)
2523 frames in 5.0 seconds = 504.472 FPS
2563 frames in 5.0 seconds = 512.499 FPS
2555 frames in 5.0 seconds = 510.850 FPS
$ comm -1 -3 <(env | sort) <(vglrun -c 0 env | grep -v '^\[' | sort)
GEOPROBE_USEGLX=1
LD_PRELOAD=libdlfaker.so:libvglfaker.so
PROMAGIC_USEGLX=1
_=/usr/bin/vglrun
VBOX_CROGL_FORCE_SUPPORTED=1
VGL_COMPRESS=0
VGL_ISACTIVE=1

$ glxinfo -B
name of display: :0.0
display: :0  screen: 0
direct rendering: Yes
Extended renderer info (GLX_MESA_query_renderer):
    Vendor: X.Org (0x1002)
    Device: AMD MULLINS (DRM 2.48.0 / 4.9.0-1-amd64, LLVM 3.9.1) (0x9851)
    Version: 13.0.4
    Accelerated: yes
    Video memory: 1024MB
    Unified memory: no
    Preferred profile: core (0x1)
    Max core profile version: 4.3
    Max compat profile version: 3.0
    Max GLES1 profile version: 1.1
    Max GLES[23] profile version: 3.1
OpenGL vendor string: X.Org
OpenGL renderer string: Gallium 0.4 on AMD MULLINS (DRM 2.48.0 / 4.9.0-1-amd64, LLVM 3.9.1)
OpenGL core profile version string: 4.3 (Core Profile) Mesa 13.0.4
OpenGL core profile shading language version string: 4.30
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile

OpenGL version string: 3.0 Mesa 13.0.4
OpenGL shading language version string: 1.30
OpenGL context flags: (none)

OpenGL ES profile version string: OpenGL ES 3.1 Mesa 13.0.4
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.10

$ lspci -nnk | grep "VGA\|'Kern'\|3D\|Display" -A2 
00:01.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Mullins [Radeon R4/R5 Graphics] [1002:9851] (rev 40)
	Subsystem: Hewlett-Packard Company Mullins [Radeon R4/R5 Graphics] [103c:81f5]
	Kernel driver in use: radeon
$ uname -a
Linux debian9 4.9.0-1-amd64 #1 SMP Debian 4.9.6-3 (2017-01-28) x86_64 GNU/Linux

Problem with GLFW application: catching WM_DELETE_WINDOW

Thanks for all of your great work on this tool!

We receive this error message from VirtualGL when we close a window created by GLFW:

> [VGL] ERROR: in getGLXDrawable--
> [VGL]    186: Window has been deleted by window manager

This seems to come up pretty often as I search around the web, and I'm aware of the recommendation here to intercept the WM_DELETE_WINDOW signal from X11. My understanding is that VirtualGL watches that signal, and that the error is thrown whenever this line gets executed. So we want to "intercept" the signal in our code, which means that we will get the signal and prevent VirtualGL from getting it.

I've looked around for examples of how to intercept the signal, and there are several (here is one). However since our application uses GLFW and GLFW appears to do the interception here and here I assumed that we would be safe in cleaning up the window using GLFW's ability to set a window close callback function which gets executed when the WM_DELETE_WINDOW signal is received here. However what's happening in our application is that both VirtualGL and GLFW are getting the signal at the same time, so by the time GLFW gets the signal it's already too late to stop the error. It seems that GLFW is sharing rather than intercepting the signal. I confess that I have no X11 programming experience so I may be misunderstanding how interception works. I would like to know if this is the intended behavior of VirtualGL and GLFW (which I can also ask the GLFW developers if necessary), and how to best implement this in our application. Thanks for any insights!
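
For readers unfamiliar with the X11 side, this is roughly what handling WM_DELETE_WINDOW looks like at the Xlib level (a standalone sketch, not GLFW's or VirtualGL's code; as the report notes, both GLFW and VirtualGL see the same ClientMessage, so this only illustrates the mechanism):

#include <X11/Xlib.h>
#include <X11/Xutil.h>

/* Ask the window manager to deliver WM_DELETE_WINDOW as a ClientMessage
   instead of destroying the window, then handle it in the event loop. */
void runEventLoop(Display *dpy, Window win)
{
    Atom wmDelete = XInternAtom(dpy, "WM_DELETE_WINDOW", False);
    XSetWMProtocols(dpy, win, &wmDelete, 1);

    XEvent ev;
    for (;;) {
        XNextEvent(dpy, &ev);
        if (ev.type == ClientMessage &&
            (Atom)ev.xclient.data.l[0] == wmDelete) {
            /* Release GL resources first, then destroy the window explicitly. */
            break;
        }
    }
    XDestroyWindow(dpy, win);
}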

Investigate the need for a "VirtualVulkan" interposer

Vulkan is still nascent, but nVidia now fully supports it. Depending on how the API interacts with the windowing system, it may be necessary to provide a similar split rendering interface for Vulkan to the one that is provided for OpenGL/GLX.

VirtualGL from macOS gets a segmentation fault

Server machine : Ubuntu 14.04 + Nvidia Titan X + VirtualGL Setup
Client machine : macOS Sierra (mac book air) + VirtualGL Client version 2.5.2 + XQuartz

I connect from the client machine to the server machine with:

$ vglconnect -s user@server
(The connection was fine).
$ echo $DISPLAY
localhost:10.0

However, when I run
$ vglrun glxinfo
name of display: localhost:10.0
[1] 4960 segmentation fault (core dumped) vglrun glxinfo
$ vglrun glxgears
[1] 5206 segmentation fault (core dumped) vglrun glxgears

With more debugging info
$ vglrun gdb glxgears

(gdb) run
Starting program: /usr/bin/glxgears
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".

Program received signal SIGSEGV, Segmentation fault.
0x00007ffff52347ad in ?? () from /usr/lib/x86_64-linux-gnu/libGLX_nvidia.so.0

(gdb) bt
#0 0x00007ffff52347ad in ?? () from /usr/lib/x86_64-linux-gnu/libGLX_nvidia.so.0
#1 0x00007ffff51fd0b8 in ?? () from /usr/lib/x86_64-linux-gnu/libGLX_nvidia.so.0
#2 0x00007ffff51f02b9 in glXGetConfig () from /usr/lib/x86_64-linux-gnu/libGLX_nvidia.so.0
#3 0x00007ffff796753e in ?? () from /usr/lib/x86_64-linux-gnu/librrfaker.so
#4 0x00007ffff7968358 in ?? () from /usr/lib/x86_64-linux-gnu/librrfaker.so
#5 0x00007ffff794dee7 in glXChooseVisual () from /usr/lib/x86_64-linux-gnu/librrfaker.so
#6 0x000000000040370f in ?? ()
#7 0x0000000000401a94 in ?? ()
#8 0x00007ffff6cb6f45 in __libc_start_main (main=0x401940, argc=1, argv=0x7fffffffe068, init=, fini=, rtld_fini=,
stack_end=0x7fffffffe058) at libc-start.c:287
#9 0x000000000040238e in ?? ()
(gdb)

Note:

  • VirtualGL on the server machine works fine through tigervnc.
  • vglconnect -s user@server from another Ubuntu 14.04 also works perfectly fine.

uname, hostname, and other applications crash

With the latest VirtualGL, freshly compiled from git master:

$ vglrun uname
Linux
Aborted (core dumped)
$ vglrun hostname
arch
terminate called after throwing an instance of 'vglutil::Error'
Aborted (core dumped)

All backtraces look the same:

#0  0x00007fcfd6b065f8 in raise () from /usr/lib/libc.so.6
#1  0x00007fcfd6b07a7a in abort () from /usr/lib/libc.so.6
#2  0x00007fcfd6eda02d in __gnu_cxx::__verbose_terminate_handler() () from /usr/lib/libvglfaker.so
#3  0x00007fcfd6ed9426 in __cxxabiv1::__terminate(void (*)()) () from /usr/lib/libvglfaker.so
#4  0x00007fcfd6ed9471 in std::terminate() () from /usr/lib/libvglfaker.so
#5  0x00007fcfd6ed9578 in __cxa_throw () from /usr/lib/libvglfaker.so
#6  0x00007fcfd6ed79d7 in vglutil::CriticalSection::lock(bool) () from /usr/lib/libvglfaker.so
#7  0x00007fcfd6e87364 in vglfaker::GlobalCleanup::~GlobalCleanup() () from /usr/lib/libvglfaker.so
#8  0x00007fcfd6b092ef in __cxa_finalize () from /usr/lib/libc.so.6
#9  0x00007fcfd6e85763 in __do_global_dtors_aux () from /usr/lib/libvglfaker.so
#10 0x00007ffe54400050 in ?? ()
#11 0x00007fcfd7342867 in _dl_fini () from /lib64/ld-linux-x86-64.so.2
Backtrace stopped: frame did not save the PC


#0  0x00007f4d4a5465f8 in raise () from /usr/lib/libc.so.6
#1  0x00007f4d4a547a7a in abort () from /usr/lib/libc.so.6
#2  0x00007f4d4a91a02d in __gnu_cxx::__verbose_terminate_handler() () from /usr/lib/libvglfaker.so
#3  0x00007f4d4a919426 in __cxxabiv1::__terminate(void (*)()) () from /usr/lib/libvglfaker.so
#4  0x00007f4d4a919471 in std::terminate() () from /usr/lib/libvglfaker.so
#5  0x00007f4d4a919578 in __cxa_throw () from /usr/lib/libvglfaker.so
#6  0x00007f4d4a9179d7 in vglutil::CriticalSection::lock(bool) () from /usr/lib/libvglfaker.so
#7  0x00007f4d4a8c7364 in vglfaker::GlobalCleanup::~GlobalCleanup() () from /usr/lib/libvglfaker.so
#8  0x00007f4d4a5492ef in __cxa_finalize () from /usr/lib/libc.so.6
#9  0x00007f4d4a8c5763 in __do_global_dtors_aux () from /usr/lib/libvglfaker.so
#10 0x00007fff45d39b90 in ?? ()
#11 0x00007f4d4ad82867 in _dl_fini () from /lib64/ld-linux-x86-64.so.2
Backtrace stopped: frame did not save the PC

Using Arch Linux with glibc 2.22 and gcc 5.2.

"Deferred readback" mode

This would basically move VirtualGL's buffer pool to the GPU (using PBOs) so that it would not be necessary to read back every frame that is rendered. Only the frames that actually made it to the image transport (i.e. the frames that are not spoiled) would be read back. This would also eliminate the memory copy that currently has to occur when transferring the pixels from the PBO to one of the buffers in the buffer pool, and it would reduce the overhead on the GPU caused by reading back frames that are never displayed. Furthermore, it would give transport plugins the option of performing additional processing on the pixels using the GPU prior to transferring them. The disadvantage would be increased GPU memory usage (it would probably be necessary to maintain 3 PBOs for each OpenGL context.)
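
A minimal sketch of the GPU-side buffering described above, assuming a small ring of pixel-buffer objects per OpenGL context (the names and the ring size of 3 are illustrative):

#define GL_GLEXT_PROTOTYPES
#include <GL/gl.h>
#include <GL/glext.h>

#define NUM_PBOS 3
static GLuint pbos[NUM_PBOS];
static int current = 0;

/* Queue an asynchronous readback of the current frame into a PBO.  The pixels
   stay in GPU memory until the image transport decides to send that frame. */
void queueReadback(int width, int height)
{
    if (!pbos[0]) glGenBuffers(NUM_PBOS, pbos);

    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbos[current]);
    glBufferData(GL_PIXEL_PACK_BUFFER, (GLsizeiptr)width * height * 4, NULL,
                 GL_STREAM_READ);
    /* With a pack PBO bound, the last argument is an offset into the buffer. */
    glReadPixels(0, 0, width, height, GL_BGRA, GL_UNSIGNED_BYTE, (void *)0);
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);

    current = (current + 1) % NUM_PBOS;
}

/* For an unspoiled frame, glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY) (or a
   GPU-side transform) would then expose the pixels without an extra CPU copy. */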

dlfaker does not intercept the RTLD_DEEPBIND flag in dlopen()

If an application dlopen()'s a library that makes use of OpenGL with the RTLD_DEEPBIND flag set, it will eventually crash due to a BadRequest.

In my case the application in question is proprietary and thus cannot be modified to not pass RTLD_DEEPBIND. I could write myself a small library that also interposes dlopen() and disables that flag, but it seems like this should be something that dlfaker does, because if it does not do this, there exists a way for applications to "escape" their GL-virtualized sandbox (and promptly die).
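
A minimal sketch of the small workaround library the reporter mentions (a user-side sketch under glibc, not dlfaker's actual behavior): interpose dlopen() and mask off RTLD_DEEPBIND before calling the real function.

#define _GNU_SOURCE
#include <dlfcn.h>
#include <stddef.h>

/* Interposed dlopen(): strip RTLD_DEEPBIND so that the GL/GLX symbols
   interposed by the faker remain visible to the dlopen()'d library.
   Build as a shared library and add it to LD_PRELOAD next to the VGL fakers. */
void *dlopen(const char *filename, int flags)
{
    static void *(*real_dlopen)(const char *, int) = NULL;
    if (!real_dlopen)
        real_dlopen = (void *(*)(const char *, int))dlsym(RTLD_NEXT, "dlopen");

    return real_dlopen(filename, flags & ~RTLD_DEEPBIND);
}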
