
deca's People

Contributors

havenfeng, mhoangvslev, michaeljblack, timobolkart, yfeng95


deca's Issues

Parameter difference from the original FLAME model

I want to use the output of DECA with the Blender add-on from the FLAME website.
DECA gives:

  • 100 shape parameters
  • 50 expression parameters

But the original FLAME model (used in the Blender add-on) has:

  • 300 shape parameters
  • 100 expression parameters

Is there a way to get the original FLAME parameters from the DECA parameters?
Thanks!
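A hedged sketch of one possible answer (not from the repo): FLAME's shape and expression parameters are PCA coefficients sorted by explained variance, and DECA simply uses the first 100/50 of them, so zero-padding should yield valid full-dimensional FLAME parameters:

import numpy as np

# Hypothetical helper: pad DECA's truncated coefficients with zeros so the
# Blender add-on's 300-dim shape / 100-dim expression inputs accept them.
def pad_flame_params(shape_100, exp_50):
    shape_300 = np.zeros(300, dtype=np.float32)
    shape_300[:100] = shape_100
    exp_100 = np.zeros(100, dtype=np.float32)
    exp_100[:50] = exp_50
    return shape_300, exp_100

The reverse direction (recovering the discarded higher-order coefficients) is not possible, since DECA never estimates them.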

What are the boundary artifacts?

As shown in the ablation studies section,
without symmetry loss, boundary artifacts become visible.

Can you give more explanation of what a boundary artifact is, and why it happens without the symmetry loss?

I thought the symmetry loss was about regularizing self-occluded parts when given a partially visible face as input (e.g., a profile face),
so this effect was rather unexpected.

Thanks in advance. I appreciate you sharing such nice work.

training part

Hi,
Thank you for sharing your amazing results.

Could you please share your training code?

Thanks

ModuleNotFoundError: No module named 'iopath'

My system is Ubuntu 20.04 LTS, and I only skipped step 1.c of the usage instructions. I get an error when I try to run "python demos/demo_reconstruct.py -i TestSamples/examples --saveDepth True --saveObj True".

The message is:

Traceback (most recent call last):
  File "demos/demo_reconstruct.py", line 25, in <module>
    from decalib.deca import DECA
  File "/home/michael/Downloads/DECA-master/decalib/deca.py", line 27, in <module>
    from .utils.renderer import SRenderY
  File "/home/michael/Downloads/DECA-master/decalib/utils/renderer.py", line 23, in <module>
    from pytorch3d.io import load_obj
  File "/home/michael/anaconda3/envs/DECA/lib/python3.7/site-packages/pytorch3d/io/__init__.py", line 4, in <module>
    from .obj_io import load_obj, load_objs_as_meshes, save_obj
  File "/home/michael/anaconda3/envs/DECA/lib/python3.7/site-packages/pytorch3d/io/obj_io.py", line 12, in <module>
    from pytorch3d.io.mtl_io import load_mtl, make_mesh_texture_atlas
  File "/home/michael/anaconda3/envs/DECA/lib/python3.7/site-packages/pytorch3d/io/mtl_io.py", line 11, in <module>
    from pytorch3d.io.utils import _open_file, _read_image
  File "/home/michael/anaconda3/envs/DECA/lib/python3.7/site-packages/pytorch3d/io/utils.py", line 10, in <module>
    from fvcore.common.file_io import PathManager
  File "/home/michael/anaconda3/envs/DECA/lib/python3.7/site-packages/fvcore/common/file_io.py", line 10, in <module>
    from iopath.common.file_io import (
ModuleNotFoundError: No module named 'iopath'
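For reference, iopath is a regular PyPI package (a dependency of newer fvcore releases), so installing it should resolve the import error:

pip install iopath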

Does it work on RTX 30xx and Windows?

I cannot get it to work; it seems to want CUDA 10, and I hear PyTorch only works with older versions of CUDA. If I have an RTX 3090 on Windows with Miniconda, will it work?
I'm getting this error:

python demos/demo_transfer.py
  from pytorch3d.structures import Meshes
ModuleNotFoundError: No module named 'pytorch3d'

but conda says pytorch3d is already installed ("# All requested packages already installed.").

code for shape-consistency loss

I understand the idea behind the shape-consistency loss and think it is reasonable, but I am confused about how to write the code. According to the paper, the shape-consistency loss exchanges the betas of two examples of the same identity. However, the batch size is usually greater than 2, so how do you handle a whole batch of examples? Any suggestions and comments are welcome. Thanks.
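A hedged sketch of one common way to batch this (an assumption about the setup, not the authors' code): build each batch from K identities with two images each, so swapping shape codes becomes a fixed permutation of the batch dimension.

import torch

# Hypothetical batching: images arranged as [id0_a, id0_b, id1_a, id1_b, ...],
# i.e. 2K images of K identities, two per identity.
def swap_shape_codes(betas):
    # betas: (2K, n_shape); swap neighbours (0<->1, 2<->3, ...)
    idx = torch.arange(betas.shape[0])
    idx = idx + 1 - 2 * (idx % 2)
    return betas[idx]

The swapped betas are then decoded and rendered as usual, and the same photometric/landmark losses are applied; penalizing the swapped reconstruction is what pushes the shape code to be consistent across images of the same person.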

How to get the face region?

@TimoBolkart, thanks for your great work!
I notice that in Figure 2 of your paper the resulting image contains only the rendered face region, but in your reconstruction demo the rendered texture shows the full head. Could you please give some instructions on how to render only the face and eye regions, without the other parts?
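A hedged sketch of one way to do this (assuming you have the FLAME_masks.pkl distributed on the FLAME website, which stores per-region vertex indices under keys such as 'face', 'left_eyeball', 'right_eyeball'): keep only the triangles whose vertices all lie in the desired regions before rendering.

import pickle
import numpy as np

with open('data/FLAME_masks.pkl', 'rb') as f:
    masks = pickle.load(f, encoding='latin1')

region = np.concatenate([masks['face'], masks['left_eyeball'], masks['right_eyeball']])
# faces: (F, 3) triangle vertex indices of the FLAME template (assumed given)
keep = np.isin(faces, region).all(axis=1)
face_only = faces[keep]  # pass these triangles to the renderer instead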

When is remeshing performed?

Hi!
Thanks for sharing your wonderful work.
The code has been a huge help in understanding the work.
However, I'm having some difficulties.

I noticed that the coarse mesh without details has 5023 vertices, which is from a regular FLAME model,
but the mesh with detail has 59315 vertices.

So I'm guessing there is some kind of subdivision process in between,
but I cannot find which line of code does this.

With a denser UV map, generating dense displacement vectors should be easy,
but how did you manage to map coarse mesh to dense mesh?

Also, if I do not make the mesh denser (coarse mesh + coarse displacement => 5023 vertices),
will the results fail to reconstruct the geometric details?

Thanks!
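For readers with the same question, here is a hedged sketch of how this kind of detail reconstruction typically works (an illustration of the general technique, not the repo's exact code): the dense mesh is a fixed, pre-subdivided template whose vertices carry UV coordinates; each dense vertex samples the predicted displacement map at its UV location and moves along the interpolated coarse normal.

import numpy as np

def apply_displacement(dense_verts, dense_normals, dense_uvs, disp_map):
    # dense_verts, dense_normals: (V, 3); dense_uvs: (V, 2) in [0, 1]
    # disp_map: (H, W) scalar displacement in UV space
    h, w = disp_map.shape
    px = np.clip((dense_uvs[:, 0] * (w - 1)).astype(int), 0, w - 1)
    py = np.clip(((1.0 - dense_uvs[:, 1]) * (h - 1)).astype(int), 0, h - 1)
    d = disp_map[py, px]                      # nearest-neighbour sampling
    return dense_verts + dense_normals * d[:, None]

Under this reading, displacing only the 5023 coarse vertices would indeed lose most of the geometric detail, since the displacement map has far more texels than the coarse mesh has vertices.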

Issue while executing on a Mac

While executing on a Mac I got this error; can you help?

Siddharths-MBP:DECA siddharthjha$ python demos/demo_reconstruct.py --help
Traceback (most recent call last):
  File "demos/demo_reconstruct.py", line 25, in <module>
    from decalib.deca import DECA
  File "/Users/siddharthjha/Desktop/codebase/Facial reconstruction/DECA/decalib/deca.py", line 27, in <module>
    from .utils.renderer import SRenderY
  File "/Users/siddharthjha/Desktop/codebase/Facial reconstruction/DECA/decalib/utils/renderer.py", line 23, in <module>
    from pytorch3d.io import load_obj
  File "/Users/siddharthjha/.pyenv/versions/3.7.9/lib/python3.7/site-packages/pytorch3d/io/__init__.py", line 4, in <module>
    from .obj_io import load_obj, load_objs_as_meshes, save_obj
  File "/Users/siddharthjha/.pyenv/versions/3.7.9/lib/python3.7/site-packages/pytorch3d/io/obj_io.py", line 16, in <module>
    from pytorch3d.renderer import TexturesAtlas, TexturesUV
  File "/Users/siddharthjha/.pyenv/versions/3.7.9/lib/python3.7/site-packages/pytorch3d/renderer/__init__.py", line 3, in <module>
    from .blending import (
  File "/Users/siddharthjha/.pyenv/versions/3.7.9/lib/python3.7/site-packages/pytorch3d/renderer/blending.py", line 9, in <module>
    from pytorch3d import _C
ImportError: dlopen(/Users/siddharthjha/.pyenv/versions/3.7.9/lib/python3.7/site-packages/pytorch3d/_C.cpython-37m-darwin.so, 2): Symbol not found: __ZN3c104impl23ExcludeDispatchKeyGuardC1ENS_14DispatchKeySetE
  Referenced from: /Users/siddharthjha/.pyenv/versions/3.7.9/lib/python3.7/site-packages/pytorch3d/_C.cpython-37m-darwin.so
  Expected in: /Users/siddharthjha/.pyenv/versions/3.7.9/lib/python3.7/site-packages/torch/lib/libc10.dylib in /Users/siddharthjha/.pyenv/versions/3.7.9/lib/python3.7/site-packages/pytorch3d/_C.cpython-37m-darwin.so

Can you give more intuition on why id-mrf loss is used?

Hi!
Thanks for sharing your wonderful work.
Hope to test out the training code someday.

I see in Figure 7 that the effect of L_mrf is quite crucial in obtaining mesh with details.
Can you share more intuition on why you chose to use id-mrf loss?

The id-mrf loss seems to come from the inpainting literature,
and I don't see the need for finding corresponding nearest-neighbour feature patches when, in the case of this paper,
the rendered and input images are supposed to be aligned at the image level.
Is this to handle some misalignment?
Was it better than using perceptual losses?
Thanks.

Output meshes with fine details

Hi, thanks for making this great work open source.

I followed the instructions in the repository and got output meshes for the example images. However, I find that they are all coarse shapes with no fine details. Is it possible to include the code for generating the detailed face meshes as well?

Besides, I also have a question about the paper:

The model in this paper has a much lower reconstruction error on the NoW benchmark compared to previous methods, but the training losses for the coarse shape seem to be the ones commonly used in previous work (except for the new shape-consistency loss). I wonder which loss contributes most to the reconstruction quality. Moreover, what do you think is the most important ingredient for obtaining accurate reconstructions in your paper?

I would really appreciate it if you could answer these questions :)

CUDA Version issue

I'm trying to run demo on Google Colab, but I get the following error:

AssertionError:
The NVIDIA driver on your system is too old (found version 10010).
Please update your GPU driver by downloading and installing a new
version from the URL: http://www.nvidia.com/Download/index.aspx
Alternatively, go to: https://pytorch.org to install
a PyTorch version that has been compiled with your version
of the CUDA driver.

However, when I check with nvidia-smi, it reports CUDA version 10.1, which should be fine.
Is there any solution?
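A hedged note (version numbers below are illustrative): nvidia-smi reports the driver's CUDA version, while this AssertionError comes from the installed PyTorch wheel being built against a newer CUDA than the driver supports. Installing a wheel that matches the driver usually resolves it, e.g.:

pip install torch==1.6.0+cu101 torchvision==0.7.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html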

Cannot get the DECA trained model

Hello, I am very interested in your wonderful work, but I cannot get the DECA trained model: when I open the link, it does not work. Can you provide the DECA trained model another way?
Thanks a lot!

Loss Functions of encoder and decoder

Can you please share the code for the loss functions of the encoder and decoder parts as stated in the paper? I am unable to find them in the code; if they are there, please share the filename so that I can explore that part.

decode returns wrong key name

The decode function in class DECA(object) in deca.py returns the dict opdict, in which the key name transformed_vertices has been changed to trans_verts, but demo_reconstruct.py still uses the key name transformed_vertices.

Is it possible to have a CPU-only version for this?

Hi @YadiraF, I got the following
AssertionError: Torch not compiled with CUDA enabled

Pasting the traceback for your reference:

Traceback (most recent call last):
  File "demos/demo_reconstruct.py", line 103, in <module>
    main(parser.parse_args())
  File "demos/demo_reconstruct.py", line 36, in main
    testdata = datasets.TestData(args.inputpath, iscrop=args.iscrop, face_detector=args.detector)
  File "...DECA/decalib/datasets/datasets.py", line 70, in __init__
    self.face_detector = detectors.FAN()
  File ".../DECA/decalib/datasets/detectors.py", line 22, in __init__
    self.model = face_alignment.FaceAlignment(face_alignment.LandmarksType._2D, flip_input=False)
  File ".../lib/python3.7/site-packages/face_alignment/api.py", line 75, in __init__
    self.face_detector = face_detector_module.FaceDetector(device=device, verbose=verbose)
  File ".../lib/python3.7/site-packages/face_alignment/detection/sfd/sfd_detector.py", line 30, in __init__
    self.face_detector.to(device)
  File ".../lib/python3.7/site-packages/torch/nn/modules/module.py", line 612, in to
    return self._apply(convert)
  File ".../lib/python3.7/site-packages/torch/nn/modules/module.py", line 359, in _apply
    module._apply(fn)
  File ".../lib/python3.7/site-packages/torch/nn/modules/module.py", line 381, in _apply
    param_applied = fn(param)
  File ".../lib/python3.7/site-packages/torch/nn/modules/module.py", line 610, in convert
    return t.to(device, dtype if t.is_floating_point() else None, non_blocking)
  File ".../lib/python3.7/site-packages/torch/cuda/__init__.py", line 166, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled

Would love to have your answer soon, thanks!
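A hedged sketch of a CPU-only patch (an assumption based on the traceback above, not an official option): both face_alignment and DECA accept a device argument, so forcing 'cpu' in the two places that currently assume CUDA should get past this error, although whether the rest of the pipeline runs on the CPU depends on how pytorch3d was built.

# in decalib/datasets/detectors.py (FAN.__init__):
self.model = face_alignment.FaceAlignment(
    face_alignment.LandmarksType._2D, device='cpu', flip_input=False)

# in the demo script:
deca = DECA(config=deca_cfg, device='cpu')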

About the test results of PRNet on NoW

Hello, thanks for your great work! I am a little confused about the test results on the NoW dataset.

Given that all the scans in NoW are neutral and the methods need to exclude expressions before testing, how do you test PRNet? Do you only test on the neutral images, or do you keep the results with expressions? It seems PRNet cannot exclude expressions.

Looking forward to your reply, thank you.

FLAME_albedo_from_BFM

Could you please share the FLAME_albedo_from_BFM.npz?
It is difficult for me to get that.
Thanks!

Drawing facial keypoints on original image

Hello,

I am trying to remap coordinates that are valid on the cropped tile to coordinates that are valid on the original image.

To be more precise: your demo script builds a torch batch out of the cropped tiles the detector produces and outputs 2D landmarks, 3D landmarks, etc. I want to pass an input image (e.g., 1280x720), use your preprocessing to create the 224x224 input tile, detect the landmarks, and then draw them on the original image. For this I need to apply the inverse of the preprocessing transformation to the keypoints.

Here is some sample code I am testing:

import cv2
from decalib.deca import DECA
from decalib.datasets import datasets
from decalib.utils.config import cfg as deca_cfg

# Test 1000x1000 input image
image = cv2.imread('/path/to/test/img.jpg')

testdata = datasets.TestData('/path/to/test/img.jpg', iscrop=True, face_detector='fan')
name = testdata[0]['imagename']
images = testdata[0]['image'].to('cuda')[None, ...]
# Modified to return bbox as well
bbox = testdata[0]['bbox']

deca = DECA(config=deca_cfg, device='cuda')
codedict = deca.encode(images)
opdict, visdict = deca.decode(codedict)  # tensor

start_point = (bbox[0], bbox[1])
end_point = (bbox[2], bbox[3])
cv2.rectangle(image, start_point, end_point, (255, 0, 0), 2)

color = (255, 111, 111)
for landmark in opdict['landmarks2d'][0]:
    # Coordinates valid on the cropped face tile
    xr = landmark[0].item()
    yr = landmark[1].item()

    # Try to produce coordinates valid on the original image
    xr = xr * 224 / 1000 + bbox[0]
    yr = yr * 224 / 1000 + bbox[1]
    xr = int(xr)
    yr = int(yr)

    cv2.circle(image, (xr, yr), 1, color, 1, cv2.LINE_AA)

So what should the transformation be? Since

original_valid_x = tile_valid_x * tile_width / original_width + tile_origin_x
original_valid_y = tile_valid_y * tile_height / original_height + tile_origin_y

is not producing the expected results.
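A hedged sketch of what may be going wrong (assumptions, not a confirmed answer): opdict['landmarks2d'] appears to be in normalized [-1, 1] coordinates of the crop (the repo's visualization code scales landmarks by size/2 + size/2), and the crop is produced by a skimage similarity transform estimated in datasets.py, not by a plain bbox resize. If TestData is modified to also return that transform (call it tform, mapping original-image points to tile points), the inverse mapping would be:

import numpy as np

lmk = opdict['landmarks2d'][0].detach().cpu().numpy()  # (68, 2), in [-1, 1]
crop_pts = (lmk + 1.0) * 0.5 * 224                     # tile pixel coordinates
orig_pts = tform.inverse(crop_pts)                     # back to the original image

for x, y in orig_pts:
    cv2.circle(image, (int(x), int(y)), 1, color, 1, cv2.LINE_AA)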

ModuleNotFoundError: No module named 'face_alignment'

Hi, I ran the demo code
python demos/demo_reconstruct.py -i TestSamples/examples --saveDepth True --saveObj True
and got this error message:

total 9 images
Traceback (most recent call last):
  File "F:\Softwares\AI\DECA-master\demos\demo_reconstruct.py", line 103, in <module>
    main(parser.parse_args())
  File "F:\Softwares\AI\DECA-master\demos\demo_reconstruct.py", line 36, in main
    testdata = datasets.TestData(args.inputpath, iscrop=args.iscrop, face_detector=args.detector)
  File "F:\Softwares\AI\DECA-master\decalib\datasets\datasets.py", line 70, in __init__
    self.face_detector = detectors.FAN()
  File "F:\Softwares\AI\DECA-master\decalib\datasets\detectors.py", line 21, in __init__
    import face_alignment
ModuleNotFoundError: No module named 'face_alignment'

Not sure what's going wrong.

Several things I noticed but not sure if they are related to this issue:

  1. After running python demos/demo_reconstruct.py --help,
    it mentions "detector for cropping face, check decalib/detectors.py for details", but detectors.py is actually located in the decalib\datasets directory.
  2. Inside the BFM_to_FLAME folder (after unzipping) there is no FLAME_albedo_from_BFM.npz, only BFM_to_FLAME_corr.npz. Do I need to rename it to FLAME_albedo_from_BFM.npz and place it in .\data?

Thanks
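For reference, the package name on PyPI uses a hyphen, so the missing module can be installed with:

pip install face-alignment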

FileNotFoundError

I used the command python demos/demo_reconstruct.py -i TestSamples/examples --saveDepth True --saveObj True but encountered a problem.

fc.weight not available in reconstructed resnet
fc.bias not available in reconstructed resnet
fc.weight not available in reconstructed resnet
fc.bias not available in reconstructed resnet
creating the FLAME Decoder
trained model found. load E:\毕业设计\DECA\data\deca_model.tar
D:\Anaconda3\envs\py37\lib\site-packages\pytorch3d-0.3.0-py3.7-win-amd64.egg\pytorch3d\io\obj_io.py:476: UserWarning: Mtl file does not exist: E:\毕业设计\DECA\data\template.mtl
warnings.warn(f"Mtl file does not exist: {f}")
0%| | 0/6 [00:00<?, ?it/s]D:\Anaconda3\envs\py37\lib\site-packages\torch\nn\functional.py:3121: UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
"See the documentation of nn.Upsample for details.".format(mode))
D:\Anaconda3\envs\py37\lib\site-packages\torch\nn\functional.py:3384: UserWarning: Default grid_sample and affine_grid behavior has changed to align_corners=False since 1.3.0. Please specify align_corners=True if the old behavior is desired. See the documentation of grid_sample for details.
warnings.warn("Default grid_sample and affine_grid behavior has changed "
0%| | 0/6 [00:07<?, ?it/s]
Traceback (most recent call last):
  File "demos/demo_reconstruct.py", line 103, in <module>
    main(parser.parse_args())
  File "demos/demo_reconstruct.py", line 58, in main
    deca.save_obj(os.path.join(savefolder, name, name + '.obj'), opdict)
  File "E:\毕业设计\DECA\decalib\deca.py", line 254, in save_obj
    normal_map=normal_map)
  File "E:\毕业设计\DECA\decalib\utils\util.py", line 97, in write_obj
    with open(obj_name, 'w') as f:
FileNotFoundError: [Errno 2] No such file or directory: 'TestSamples/examples/results\examples\IMG_0392_inputs\examples\IMG_0392_inputs.obj'

Can DECA handle the eye gaze?

Hi, great paper and a nice repo that is easy to use. I have a question about the eye-gaze direction. In the FLAME model, the pose parameters include the eyeballs; however, the DECA paper seems to handle only the head pose and jaw pose. I am wondering whether eye gaze is a hard problem. Also, is there any existing tool that can handle eye gaze in the reconstruction? Thank you.

RuntimeError:

It worked fine two days ago, but when I tried it today, I suddenly got the following error. Is there any countermeasure?

#--------------------------------------

Traceback (most recent call last):
  File "demos/demo_reconstruct.py", line 103, in <module>
    main(parser.parse_args())
  File "demos/demo_reconstruct.py", line 36, in main
    testdata = datasets.TestData(args.inputpath, iscrop=args.iscrop, face_detector=args.detector)
  File "/content/DECA/DECA/decalib/datasets/datasets.py", line 70, in __init__
    self.face_detector = detectors.FAN()
  File "/content/DECA/DECA/decalib/datasets/detectors.py", line 22, in __init__
    self.model = face_alignment.FaceAlignment(face_alignment.LandmarksType._2D, flip_input=False)
  File "/usr/local/lib/python3.6/dist-packages/face_alignment/api.py", line 67, in __init__
    self.face_alignment_net = torch.jit.load(load_file_from_url(models_urls[network_name]))
  File "/usr/local/lib/python3.6/dist-packages/torch/jit/__init__.py", line 275, in load
    cpp_module = torch._C.import_ir_module(cu, f, map_location, _extra_files)
RuntimeError:
aten::_convolution(Tensor input, Tensor weight, Tensor? bias, int[] stride, int[] padding, int[] dilation, bool transposed, int[] output_padding, int groups, bool benchmark, bool deterministic, bool cudnn_enabled) -> (Tensor):
Expected at most 12 arguments but found 13 positional arguments.
:
/home/SERILOCAL/adrian.bulat/miniconda3/lib/python3.7/site-packages/torch/nn/modules/conv.py(420): _conv_forward
/home/SERILOCAL/adrian.bulat/miniconda3/lib/python3.7/site-packages/torch/nn/modules/conv.py(423): forward
/home/SERILOCAL/adrian.bulat/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py(709): _slow_forward
/home/SERILOCAL/adrian.bulat/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py(725): _call_impl
/home/SERILOCAL/adrian.bulat/test/face-alignment/face_alignment/models/fan.py(134): forward
/home/SERILOCAL/adrian.bulat/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py(709): _slow_forward
/home/SERILOCAL/adrian.bulat/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py(725): _call_impl
/home/SERILOCAL/adrian.bulat/miniconda3/lib/python3.7/site-packages/torch/jit/_trace.py(940): trace_module
/home/SERILOCAL/adrian.bulat/miniconda3/lib/python3.7/site-packages/torch/jit/_trace.py(742): trace
test.py(15): <module>
Serialized File "code/__torch__/torch/nn/modules/conv.py", line 10
    input: Tensor) -> Tensor:
  _0 = self.bias
  input0 = torch._convolution(input, self.weight, _0, [2, 2], [3, 3], [1, 1], False, [0, 0], 1, True, False, True, True)
           ~~~~~~~~~~~~~~~~~~ <--- HERE
  return input0
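This looks like the same face-alignment regression reported in the issue "Error in Running Script (bug?)" below: a newer face-alignment release ships TorchScript models that the older Torch in this environment cannot load. Pinning face-alignment to 1.2.0, as that issue's title suggests, reportedly restores the previous behavior.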

Not running tests

Hello everyone, thanks for releasing your code! I have been trying to run it, and my setup is the same as described in the paper (PyTorch, torchvision, and the latest pytorch3d). However, I encounter these messages:

fc.weight not available in reconstructed resnet
fc.bias not available in reconstructed resnet
fc.weight not available in reconstructed resnet
fc.bias not available in reconstructed resnet

And at the end, this error:

File "/home/daniel/Desktop/DECA/decalib/utils/util.py", line 545, in plot_kpts
  image = cv2.circle(image,(st[0], st[1]), 1, c, 2)
cv2.error: OpenCV(4.5.2) :-1: error: (-5:Bad argument) in function 'circle'

Overload resolution failed:

  • Can't parse 'center'. Sequence item with index 0 has a wrong type
  • Can't parse 'center'. Sequence item with index 0 has a wrong type

What can I do?
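A hedged local fix (a common workaround for this OpenCV error, not necessarily the authors'): recent OpenCV builds require the circle center to be plain Python ints, so casting the coordinates in decalib/utils/util.py should make it run:

# decalib/utils/util.py, plot_kpts: cast coordinates before drawing
image = cv2.circle(image, (int(st[0]), int(st[1])), 1, c, 2)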

Problem with texture alignment

Hi folks, thank you for this mind-blowing model!!

I've been trying to use the reconstruction demo, and while the geometry reconstruction works like magic, the texture is always misaligned, especially around the eyes. At first I thought it was an issue with the custom image I was using, but I got the same result when I tried one of the provided sample images.

I then thought it might be related to the low-poly, pre-displacement OBJ I was using, but after fumbling around with the normal map with no luck, I opened the high-detail OBJ to check the vertex colors, and it turns out they were misaligned in the same way.

Have you seen this problem before? Maybe there's something in the MAT file I should be applying? Thanks in advance for any light you can shed on the issue!

[screenshot: reconstructed texture misaligned around the eyes]

Error in Running Script (bug?) - Downgrade face-alignment package in requirements to 1.2.0

It seems to be a new bug, because it was working until two days ago. Now when I run:
python demos/demo_reconstruct.py -i TestSamples/examples --saveDepth True --saveObj True
I get the following error:
/usr/local/lib/python3.6/dist-packages/pytorch3d/io/obj_io.py:457: UserWarning: Mtl file does not exist: /content/DECA/data/template.mtl
  warnings.warn(f"Mtl file does not exist: {f}")
0% 0/9 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "demos/demo_reconstruct.py", line 103, in <module>
    main(parser.parse_args())
  File "demos/demo_reconstruct.py", line 43, in main
    name = testdata[i]['imagename']
  File "/content/DECA/decalib/datasets/datasets.py", line 119, in __getitem__
    bbox, bbox_type = self.face_detector.run(image)
  File "/content/DECA/decalib/datasets/detectors.py", line 29, in run
    out = self.model.get_landmarks(image)
  File "/usr/local/lib/python3.6/dist-packages/face_alignment/api.py", line 101, in get_landmarks
    return self.get_landmarks_from_image(image_or_path, detected_faces)
  File "/usr/local/lib/python3.6/dist-packages/torch/autograd/grad_mode.py", line 15, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/face_alignment/api.py", line 120, in get_landmarks_from_image
    detected_faces = self.face_detector.detect_from_image(image.copy())
  File "/usr/local/lib/python3.6/dist-packages/face_alignment/detection/sfd/sfd_detector.py", line 44, in detect_from_image
    bboxlist = detect(self.face_detector, image, device=self.device)[0]
  File "/usr/local/lib/python3.6/dist-packages/face_alignment/detection/sfd/detect.py", line 19, in detect
    return batch_detect(net, img, device)
  File "/usr/local/lib/python3.6/dist-packages/face_alignment/detection/sfd/detect.py", line 45, in batch_detect
    bboxlists = get_predictions(List(olist), batch_size)
TypeError: __init__() takes 1 positional argument but 2 were given
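As the issue title says, downgrading the face-alignment package resolves this:

pip install face-alignment==1.2.0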

How to translate the rendered images?

Thanks for this exciting work.
I have a question about how to adjust the position of the rendered images, as illustrated in the figure below.
[figure: original rendering (left) vs. desired shifted rendering (right)]

The left image is the rendered image, which is perfect. I want to move this image down so that the neck part is cut off; the key point is that in the processed image the forehead should still be visible, just like in the right image.

But as you can see, the pose differs considerably between the right image and the left image.

P.S. The right image was generated by changing the positions of transformed_vertices in SRenderY.forward():

# map normalized [-1, 1] coordinates to pixel coordinates
transformed_vertices[:, :, :2] = transformed_vertices[:, :, :2] * self.image_size / 2 + self.image_size / 2
# apply the 2D translation t_form (defined below) in pixel space
transform = torch.tensor(t_form, dtype=torch.float32).to(images.device).unsqueeze(0)
transform[:, :2, 2] = -transform[:, :2, 2]
transformed_vertices = torch.einsum('bij,bkj->bki', transform, transformed_vertices)
# map back to normalized [-1, 1] coordinates
transformed_vertices[:, :, :2] = transformed_vertices[:, :, :2] / self.image_size * 2. - 1.

And t_form is:

[
   [1, 0, 0],
   [0, 1, 20],
   [0, 0, 1]
]

Can you give me any advice on how to implement this?
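A hedged alternative worth trying (an assumption based on reading util.batch_orth_proj, not a confirmed answer): DECA's weak-perspective camera code is [scale, tx, ty], so a vertical shift of the rendering can be expressed by offsetting ty in the code dictionary before decoding, which avoids patching SRenderY.forward at all:

# shift the render vertically; the sign depends on the y-flip applied
# after projection, so try both directions (0.2 is an arbitrary amount)
codedict['cam'][:, 2] = codedict['cam'][:, 2] + 0.2
opdict, visdict = deca.decode(codedict)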

Running on CPU only machine produces error: libcudart.so.10.1: No such file or directory

Hello everybody,
When I try to run the script on my own laptop on the CPU, I get the following error message:

Traceback (most recent call last):
  File "demos/demo_reconstruct.py", line 25, in <module>
    from decalib.deca import DECA
  File "/home/ubuntu/DECA/decalib/deca.py", line 27, in <module>
    from .utils.renderer import SRenderY
  File "/home/ubuntu/DECA/decalib/utils/renderer.py", line 23, in <module>
    from pytorch3d.io import load_obj
  File "/home/ubuntu/.local/lib/python3.8/site-packages/pytorch3d/io/__init__.py", line 4, in <module>
    from .obj_io import load_obj, load_objs_as_meshes, save_obj
  File "/home/ubuntu/.local/lib/python3.8/site-packages/pytorch3d/io/obj_io.py", line 14, in <module>
    from pytorch3d.renderer import TexturesAtlas, TexturesUV
  File "/home/ubuntu/.local/lib/python3.8/site-packages/pytorch3d/renderer/__init__.py", line 3, in <module>
    from .blending import (
  File "/home/ubuntu/.local/lib/python3.8/site-packages/pytorch3d/renderer/blending.py", line 9, in <module>
    from pytorch3d import _C
ImportError: libcudart.so.10.1: cannot open shared object file: No such file or directory

My machine does not have a GPU; running it on Colab with a CPU-only setting works without any issues, though.
Any ideas on how to fix it?

Different rendered results

Hi, thanks for the great work. I currently have an issue with the rendered images (the last column in the screenshot below): they do not resemble the input images (the first column). I am just using the default settings. Could you please help me figure out what setting/code I should change in order to get faithful reconstructions? Thanks!

[screenshot: input images (first column) and dissimilar rendered results (last column)]
