lasr's People

Contributors

deqings, gengshan-y, jason718

lasr's Issues

No module named 'detectron2.config'

I built my environment with Docker, so I used the following command to generate the segmentations:

docker run -v $(pwd):/lasr --gpus all lasr bash -c 'cd lasr/detectron2; source activate lasr; python mask.py pika . /detectron2; cd -'

Then I get the following error.

Traceback (most recent call last):
File "mask.py", line 23, in
from detectron2.config import get_cfg
ModuleNotFoundError: No module named 'detectron2.config'

detectron2 is installed in a folder created in the parent directory of preprocess.
Is it because I am using docker that I am getting this error?
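
A minimal check that could be run inside the container to see whether detectron2 is importable at all, and from where (a generic diagnostic, not part of the LASR scripts):

import importlib.util, sys
spec = importlib.util.find_spec("detectron2")
print("detectron2 found at:", None if spec is None else spec.origin)
print("sys.path:", sys.path)  # the detectron2 clone must be on this path or pip-installed

If the clone lives next to the repo rather than being pip-installed, it would also need to be on sys.path (or PYTHONPATH) inside the container.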

RuntimeError: CUDA error: invalid device ordinal

I used Docker to build the environment.
I prepared the DAVIS data and tried to run the optimization on the camel sequence.

Then, I got a CUDA error and the execution did not proceed.

Can you tell me the cause?

docker run -v $(pwd):/lasr --gpus all lasr bash -c 'cd lasr; source activate lasr; bash scripts/template.sh camel'
Jitting Chamfer 3D
Jitting Chamfer 3D
Loaded JIT 3D CUDA chamfer distance
Loaded JIT 3D CUDA chamfer distance
Traceback (most recent call last):
File "optimize.py", line 59, in
app.run(main)
File "/anaconda3/envs/lasr/lib/python3.8/site-packages/absl/app.py", line 303, in run
_run_main(main, args)
File "/anaconda3/envs/lasr/lib/python3.8/site-packages/absl/app.py", line 251, in _run_main
sys.exit(main(argv))
File "optimize.py", line 40, in main
torch.cuda.set_device(opts.local_rank)
File "/anaconda3/envs/lasr/lib/python3.8/site-packages/torch/cuda/init.py", line 263, in set_device
torch._C._cuda_setDevice(device)
RuntimeError: CUDA error: invalid device ordinal
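
"Invalid device ordinal" usually means the GPU index passed to torch.cuda.set_device does not exist on the machine (here it comes from the local_rank option shown in the traceback). A minimal sanity check, with a hypothetical requested index:

import torch
requested = 1  # hypothetical value of opts.local_rank
print("visible GPUs:", torch.cuda.device_count())
if requested >= torch.cuda.device_count():
    print("GPU index", requested, "is not visible; check --gpus and CUDA_VISIBLE_DEVICES for the container")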

ModuleNotFoundError: No module named 'point_rend'

I ran
python mask.py pika path-to-detectron2-root; cd -

The following error occurred. How can I solve this problem?

Traceback (most recent call last):
File "mask.py", line 45, in
import point_rend
ModuleNotFoundError: No module named 'point_rend'
/home/shiori/lasr-main
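
point_rend is shipped under detectron2's projects/PointRend directory rather than inside the detectron2 package itself, so it typically has to be added to sys.path before the import. A minimal sketch (the clone location below is an assumption; adjust it to wherever detectron2 was checked out):

import sys
sys.path.insert(0, "../detectron2/projects/PointRend")  # assumed detectron2 clone location
import point_rend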

OpenGL.error.GLError

I set up an environment in Docker and tried to run the rendering code.

As a result, I got an OpenGL error. Is this due to a different version being installed, or something else?

"""
docker run -v $(pwd):/lasr --gpus all lasr bash -c 'cd lasr; source activate lasr; python render_vis.py --testdir log/spot3-1/ --seqname spot3 --freeze --outpath tmp/1.gif'

log/spot3-1/
syn-spot3f/0
syn-spot3f/1
0
Traceback (most recent call last):
File "render_vis.py", line 292, in
main()
File "render_vis.py", line 226, in main
r = OffscreenRenderer(img_size, img_size)
File "/anaconda3/envs/lasr/lib/python3.8/site-packages/pyrender/offscreen.py", line 31, in init
self._create()
File "/anaconda3/envs/lasr/lib/python3.8/site-packages/pyrender/offscreen.py", line 149, in _create
self._platform.init_context()
File "/anaconda3/envs/lasr/lib/python3.8/site-packages/pyrender/platforms/egl.py", line 186, in init_context
self._egl_context = eglCreateContext(
File "/anaconda3/envs/lasr/lib/python3.8/site-packages/OpenGL/platform/baseplatform.py", line 402, in call
return self( *args, **named )
File "/anaconda3/envs/lasr/lib/python3.8/site-packages/OpenGL/error.py", line 228, in glCheckError
raise GLError(
OpenGL.error.GLError: GLError(
err = 12297,
baseOperation = eglCreateContext,
cArguments = (
<OpenGL._opaque.EGLDisplay_pointer object at 0x7f1e16d3a1c0>,
<OpenGL._opaque.EGLConfig_pointer object at 0x7f1e16d3a240>,
<OpenGL._opaque.EGLContext_pointer object at 0x7f1e16d85040>,
<OpenGL.arrays.lists.c_int_Array_7 object at 0x7f1e2fd5a940>,
),
result = <OpenGL._opaque.EGLContext_pointer object at 0x7f1e16d3a8c0>
)
"""

Question about the flatten loss

Dear Authors,

Thank you so much for the great work.
While reading your source code, I found there is a flatten loss here. This loss is not discussed in the paper and it is also not well explained in the code. Can you explain what this loss is about? Thank you very much!

Best,
Xianghui
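
For context, one common form of a mesh "flatten" regularizer (for example the FlattenLoss that ships with SoftRas, which LASR builds on) penalizes the dihedral angle between faces sharing an edge so the surface stays smooth. A minimal sketch of that idea, not necessarily the exact loss used here:

import torch

def flatten_loss(normals_a, normals_b):
    # normals_a, normals_b: (E, 3) unit normals of the two faces adjacent to each mesh edge.
    # Pushing the normals to be parallel (cosine -> 1) encourages neighboring faces to be coplanar.
    cos = (normals_a * normals_b).sum(dim=-1)
    return ((1.0 - cos) ** 2).mean()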

Clarification for Coarse-to-fine train step

Hi Gengshan,

I was looking at the paper, and it mentions that step S0 does not have any bones and that we start with a sphere.

[image]

When I looked at the template.sh file

[image]

here it looks like the number of bones is initialized to 21 (B=20). I am slightly confused whether we can call this step S0, because at the start it should be set to 1 (B=0). (A general trend I observed was that n_faces did not align with n_bones.)

I am not sure if I am missing something here. I would really appreciate if you could help with this.
Thank you!

Bone Length

Nice work! I see Jb is the position of the center of the b-th bone (or Gaussian component). But do you need to define the bone length as well?

Question on Flow preprocessing

Hi,

Thank you for open-sourcing your awesome work.

Could you explain what is going on in the flow pre-processing below?

lasr/dataloader/vidbase.py

Lines 145 to 151 in 492fa41

flow[:,:,0] += (center[0]-length[0]) - (centern[0]-lengthn[0]) + betax*(alp-alpn)
flow[:,:,1] += (center[1]-length[1]) - (centern[1]-lengthn[1]) + betay*(alp-alpn)
flow /= alpn
flow[:,:,0] = 2 * (flow[:,:,0]/maxw)
flow[:,:,1] = 2 * (flow[:,:,1]/maxh)
flow[:,:,2] = np.logical_and(flow[:,:,2]!=0, occ<10) # as the valid pixels

Why is this preferred over a simple MSE penalty over raw flow fields?

Thanks!!
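
As far as the arithmetic goes (a reading of the snippet above, not an authoritative explanation): the first two lines appear to compensate the flow for per-frame crop offsets and scale changes, and the last three lines rescale pixel displacements into a resolution-independent normalized range and build a validity mask. A tiny sketch of the normalization step alone, with hypothetical values:

import numpy as np
maxw, maxh = 512, 512                 # hypothetical padded image size
flow_px = np.array([256.0, -128.0])   # a displacement in pixels
flow_norm = 2.0 * flow_px / np.array([maxw, maxh])
print(flow_norm)                      # [ 1.  -0.5]: the image spans 2 units along each axis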

Pdb mode

Excuse me for asking again and again.

"bash scripts/spot3.sh".
After running the above code, terminal goes into Pdb mode.
What should I enter here?
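
For reference (generic pdb behavior, not specific to this repo): a (Pdb) prompt usually means the script hit an explicit pdb.set_trace() breakpoint or dropped into a post-mortem debugger after an exception. Typing c continues, bt prints the stack, and q quits. For example:

import pdb

def step(x):
    pdb.set_trace()   # execution pauses here with a (Pdb) prompt
    return x + 1

step(1)               # at the prompt: c to continue, bt for the stack, q to quit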

How to plot such a figure?

hi,

Can I ask how you plotted Figure 2, especially the colorful 3D mesh at the top right of the figure? Which package did you use, or which code snippets in the repo? Thank you!

bestanoy
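
One generic way to get that kind of colored 3D mesh rendering is trimesh plus pyrender (render_vis.py in this repo already uses pyrender). A minimal sketch with a hypothetical mesh path and a simple height-based coloring, not necessarily how the paper figure was produced; it assumes the file contains a single mesh:

import numpy as np
import trimesh
import pyrender

tm = trimesh.load("log/spot3-1/pred.obj")            # hypothetical mesh path
z = tm.vertices[:, 2]
t = (z - z.min()) / (z.max() - z.min() + 1e-8)       # normalize height to [0, 1]
colors = np.stack([255 * t, 60 * np.ones_like(t), 255 * (1 - t), 255 * np.ones_like(t)], axis=1)
tm.visual.vertex_colors = colors.astype(np.uint8)    # blue-to-red gradient per vertex

scene = pyrender.Scene()
scene.add(pyrender.Mesh.from_trimesh(tm, smooth=False))
pose = np.eye(4)
pose[2, 3] = 2.5                                     # pull the camera back along +z
scene.add(pyrender.PerspectiveCamera(yfov=np.pi / 3), pose=pose)
scene.add(pyrender.DirectionalLight(intensity=3.0), pose=pose)
color, depth = pyrender.OffscreenRenderer(640, 480).render(scene)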

LASR fails for sequence of a person

Hello, I am looking to run LASR on a couple of different scenes showing a single person. For one (RGB sequence: https://user-images.githubusercontent.com/6766142/126760093-b96c19ae-8e15-4cb6-8942-8ad0a420a2e5.mp4; LASR results: https://user-images.githubusercontent.com/6766142/126760220-8ceff0c3-03bd-432e-8d7a-0b1789112dc7.mp4), LASR works very well using the default parameters with symmetry disabled. However, for the other, the method runs to completion but produces invalid results. The RGB sequence is:
https://user-images.githubusercontent.com/6766142/126758853-57390ec1-966d-4488-979e-a1f92632bfb5.mp4

The results using default values (symmetry enabled) show a phantom copy and the mesh doesn't deform to match the mask:

vi_symm-vi-5-10.mp4

I disabled the symmetry and now the resulting mesh is an amorphous blob that doesn't even overlap the mask:

vi-vi-5-10.mp4

Monitoring the trends in TensorBoard seems to show that everything proceeded well until the end of the first epoch, so I ran the method using only a single epoch, which gives the best results so far (although somewhat reminiscent of a tadpole):

vi-vi_e1-5-10.mp4

I also tried larger batch sizes as suggested in the readme (6 and 10), but this didn't seem to make any difference in the results. I verified that the masks and flow fields didn't look vastly incorrect. I'm wondering whether this is a known issue or whether you might have an idea of what has gone wrong for this scene. Thanks!

ninja: no work to do.

I started building the conda environment again.
I get output suggesting that ninja is not working properly; does this mean that the environment build was not successful?

cd third_party/softras; python setup.py install; cd -;

running install
running bdist_egg
running egg_info
writing soft_renderer.egg-info/PKG-INFO
writing dependency_links to soft_renderer.egg-info/dependency_links.txt
writing requirements to soft_renderer.egg-info/requires.txt
writing top-level names to soft_renderer.egg-info/top_level.txt
reading manifest file 'soft_renderer.egg-info/SOURCES.txt'
writing manifest file 'soft_renderer.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_py
running build_ext
building 'soft_renderer.cuda.load_textures' extension
Emitting ninja build file /home/kana/lasr3/third_party/softras/build/temp.linux-x86_64-3.8/build.ninja...
Compiling objects...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
g++ -pthread -shared -B /home/kana/anaconda3/envs/lasr2/compiler_compat -L/home/kana/anaconda3/envs/lasr2/lib -Wl,-rpath=/home/kana/anaconda3/envs/lasr2/lib -Wl,--no-as-needed -Wl,--sysroot=/ /home/kana/lasr3/third_party/softras/build/temp.linux-x86_64-3.8/soft_renderer/cuda/load_textures_cuda.o /home/kana/lasr3/third_party/softras/build/temp.linux-x86_64-3.8/soft_renderer/cuda/load_textures_cuda_kernel.o -L/home/kana/anaconda3/envs/lasr2/lib/python3.8/site-packages/torch/lib -L/home/kana/anaconda3/envs/lasr2/lib64 -lc10 -ltorch -ltorch_cpu -ltorch_python -lcudart -lc10_cuda -ltorch_cuda -o build/lib.linux-x86_64-3.8/soft_renderer/cuda/load_textures.cpython-38-x86_64-linux-gnu.so
building 'soft_renderer.cuda.create_texture_image' extension
Emitting ninja build file /home/kana/lasr3/third_party/softras/build/temp.linux-x86_64-3.8/build.ninja...
Compiling objects...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
g++ -pthread -shared -B /home/kana/anaconda3/envs/lasr2/compiler_compat -L/home/kana/anaconda3/envs/lasr2/lib -Wl,-rpath=/home/kana/anaconda3/envs/lasr2/lib -Wl,--no-as-needed -Wl,--sysroot=/ /home/kana/lasr3/third_party/softras/build/temp.linux-x86_64-3.8/soft_renderer/cuda/create_texture_image_cuda.o /home/kana/lasr3/third_party/softras/build/temp.linux-x86_64-3.8/soft_renderer/cuda/create_texture_image_cuda_kernel.o -L/home/kana/anaconda3/envs/lasr2/lib/python3.8/site-packages/torch/lib -L/home/kana/anaconda3/envs/lasr2/lib64 -lc10 -ltorch -ltorch_cpu -ltorch_python -lcudart -lc10_cuda -ltorch_cuda -o build/lib.linux-x86_64-3.8/soft_renderer/cuda/create_texture_image.cpython-38-x86_64-linux-gnu.so
building 'soft_renderer.cuda.soft_rasterize' extension
Emitting ninja build file /home/kana/lasr3/third_party/softras/build/temp.linux-x86_64-3.8/build.ninja...
Compiling objects...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
g++ -pthread -shared -B /home/kana/anaconda3/envs/lasr2/compiler_compat -L/home/kana/anaconda3/envs/lasr2/lib -Wl,-rpath=/home/kana/anaconda3/envs/lasr2/lib -Wl,--no-as-needed -Wl,--sysroot=/ /home/kana/lasr3/third_party/softras/build/temp.linux-x86_64-3.8/soft_renderer/cuda/soft_rasterize_cuda.o /home/kana/lasr3/third_party/softras/build/temp.linux-x86_64-3.8/soft_renderer/cuda/soft_rasterize_cuda_kernel.o -L/home/kana/anaconda3/envs/lasr2/lib/python3.8/site-packages/torch/lib -L/home/kana/anaconda3/envs/lasr2/lib64 -lc10 -ltorch -ltorch_cpu -ltorch_python -lcudart -lc10_cuda -ltorch_cuda -o build/lib.linux-x86_64-3.8/soft_renderer/cuda/soft_rasterize.cpython-38-x86_64-linux-gnu.so
building 'soft_renderer.cuda.voxelization' extension
Emitting ninja build file /home/kana/lasr3/third_party/softras/build/temp.linux-x86_64-3.8/build.ninja...
Compiling objects...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
g++ -pthread -shared -B /home/kana/anaconda3/envs/lasr2/compiler_compat -L/home/kana/anaconda3/envs/lasr2/lib -Wl,-rpath=/home/kana/anaconda3/envs/lasr2/lib -Wl,--no-as-needed -Wl,--sysroot=/ /home/kana/lasr3/third_party/softras/build/temp.linux-x86_64-3.8/soft_renderer/cuda/voxelization_cuda.o /home/kana/lasr3/third_party/softras/build/temp.linux-x86_64-3.8/soft_renderer/cuda/voxelization_cuda_kernel.o -L/home/kana/anaconda3/envs/lasr2/lib/python3.8/site-packages/torch/lib -L/home/kana/anaconda3/envs/lasr2/lib64 -lc10 -ltorch -ltorch_cpu -ltorch_python -lcudart -lc10_cuda -ltorch_cuda -o build/lib.linux-x86_64-3.8/soft_renderer/cuda/voxelization.cpython-38-x86_64-linux-gnu.so
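
"ninja: no work to do." is not an error; it just means the object files were already up to date, and the g++ link lines above show the extensions being produced. A quick way to confirm the install actually succeeded is to import the package named in the egg-info:

import soft_renderer
print(soft_renderer.__file__)  # prints the installed location if the build and install worked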

error [python scripts/render_syn.py]

I ran "python scripts/render_syn.py".

The following warnings appeared. How can I solve this problem?

/home/kana/anaconda3/envs/lasr/lib/python3.8/site-packages/kornia/geometry/conversions.py:369: UserWarning: XYZW quaternion coefficient order is deprecated and will be removed after > 0.6. Please use QuaternionCoeffOrder.WXYZ instead.
warnings.warn("XYZW quaternion coefficient order is deprecated and"
/home/kana/anaconda3/envs/lasr/lib/python3.8/site-packages/kornia/geometry/conversions.py:506: UserWarning: XYZW quaternion coefficient order is deprecated and will be removed after > 0.6. Please use QuaternionCoeffOrder.WXYZ instead.
warnings.warn("XYZW quaternion coefficient order is deprecated and"
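
The lines shown are UserWarnings from kornia about a deprecated quaternion coefficient order, not errors, so the script should still run. If the output is too noisy, they could be silenced with a standard warnings filter (a generic Python approach, not an endorsed fix):

import warnings
warnings.filterwarnings("ignore", message="XYZW quaternion coefficient order is deprecated")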

LASR with known camera intrinsics/extrinsics

Hello, I would like to run LASR with known camera intrinsics & extrinsics. I believe this is already implemented, but I'm having some trouble understanding how to accomplish this myself. The mechanisms seem to be two-fold: with the use_gtpose option and providing per-frame camera files (parsing code here). Could you clarify the functionality of these mechanisms? I was unable to find an example that made use of either, but if I missed one or you have one, that would also be helpful.

Another thing that confuses me is the scaling of the scale (lol) when use_gtpose is set, even though the focal length is assigned the same way whether or not the camera files are provided. That makes me think these two mechanisms might have different purposes and that I am incorrectly conflating them.

Any clarification you can provide would be much appreciated! Thanks!

Does the env yaml file not work on Windows?

I cloned your repo and tried to create the env with
conda env create -f lasr.yml
then I got the following messages:
Solving environment: failed

ResolvePackageNotFound:
  - lz4-c==1.9.3=h2531618_0
  - cudatoolkit==11.0.221=h6bb024c_0
  - ca-certificates==2021.5.30=ha878542_0
  - openssl==1.1.1k=h27cfd23_0
  - libwebp-base==1.2.0=h27cfd23_0
  - tk==8.6.10=hbc83047_0
  - numpy==1.20.2=py38h2d18471_0
  - sqlite==3.35.4=hdfb4753_0
  - numpy-base==1.20.2=py38hfae3a4d_0
  - jpeg==9b=h024ee3a_2
  - freetype==2.10.4=h5ab3b9f_0
  - intel-openmp==2021.2.0=h06a4308_610
  - mkl_random==1.2.1=py38ha9443f7_2
  - pytorch3d==0.4.0=py38_cu110_pyt171
  - pyyaml==5.3.1=py38h8df0ef7_1
  - zstd==1.4.9=haebb681_0
  - ld_impl_linux-64==2.33.1=h53a641e_7
  - pytorch==1.7.1=py3.8_cuda11.0.221_cudnn8.0.5_0
  - readline==8.1=h27cfd23_0
  - xz==5.2.5=h7b6447c_0
  - libtiff==4.2.0=h85742a9_0
  - lcms2==2.12=h3be6417_0
  - cudatoolkit-dev=11.0.3
  - ncurses==6.2=he6710b0_1
  - zlib==1.2.11=h7b6447c_3
  - mkl_fft==1.3.0=py38h42c9631_2
  - mkl-service==2.3.0=py38h27cfd23_1
  - libgcc-ng=9.1.0
  - libffi==3.3=he6710b0_2
  - libstdcxx-ng==9.1.0=hdf63c60_0

When I delete the build strings, the error disappears, but I'm not sure this is the right way.

Is this repo not compatible with Windows?
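
The failing entries all carry Linux-specific build strings (e.g. h27cfd23_0, py38_cu110_pyt171), which suggests the yml was exported on linux-64 and those exact builds do not exist for Windows. A hedged sketch of stripping the build strings so conda can re-solve the versions; note that some packages (e.g. pytorch3d, cudatoolkit-dev) may still not have Windows builds at all:

# strip_builds.py: drop the trailing "=build" field from conda dependency lines
with open("lasr.yml") as f, open("lasr_nobuild.yml", "w") as out:
    for line in f:
        entry = line.rstrip("\n")
        if entry.lstrip().startswith("- ") and entry.count("=") >= 3:
            entry = entry.rsplit("=", 1)[0]  # keep "name==version", drop the build string
        out.write(entry + "\n")

Then conda env create -f lasr_nobuild.yml could be retried.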

OOM ERROR

Dear authors,
I hit a CUDA out-of-memory error. I used an RTX 2080 Ti GPU, which has 11 GB of memory.
Which kind of GPU did you use?
Thanks
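
A quick way to compare what the card has against what the optimization is using is the standard PyTorch memory queries (generic, not LASR-specific); reducing the batch size is the usual workaround if the headroom is too small:

import torch
props = torch.cuda.get_device_properties(0)
print("total:     %.1f GB" % (props.total_memory / 1e9))
print("allocated: %.1f GB" % (torch.cuda.memory_allocated(0) / 1e9))
print("reserved:  %.1f GB" % (torch.cuda.memory_reserved(0) / 1e9))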

Flipped flow maps in the flow loss

Hi Gengshan,

Thanks for the great work.

I noticed that you flip the flows before saving them in auto_gen.py:

lasr/preprocess/auto_gen.py

Lines 173 to 178 in 29d8759

write_pfm('%s/FlowFW/flo-%05d.pfm'% (seqname,ix ),flowfw[::-1].astype(np.float32))
write_pfm('%s/FlowFW/occ-%05d.pfm'% (seqname,ix ),occfw[::-1].astype(np.float32))
write_pfm('%s/FlowBW/flo-%05d.pfm'% (seqname,ix+1),flowbw[::-1].astype(np.float32))
write_pfm('%s/FlowBW/occ-%05d.pfm'% (seqname,ix+1),occbw[::-1].astype(np.float32))
cv2.imwrite('%s/JPEGImages/%05d.jpg'% (seqname,ix), imgL_o[:,:,::-1])
cv2.imwrite('%s/JPEGImages/%05d.jpg'% (seqname,ix+1), imgR_o[:,:,::-1])

Is there a good reason to do this?
An unintended consequence is that it leads to flipped flows being loaded at training time and the flow loss ends up being wrong.

Here's an example of the flow error being logged while running LASR on the camel example. As you can see, the ground-truth flow here is flipped:
[image]
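
For context on the [::-1]: the PFM format conventionally stores scanlines bottom-to-top, so writers often flip the rows before saving and readers are expected to flip them back after loading. A minimal sketch of the symmetric read side (read_pfm here stands for a hypothetical loader returning the raw H x W x C array from the file):

import numpy as np

def load_flow_pfm(path, read_pfm):
    flow = read_pfm(path)                       # raw array in the file's bottom-to-top order
    return np.ascontiguousarray(flow[::-1])     # restore top-to-bottom row order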
