# Inspecting the test-time data_dict to see which tensors are pre-computed
print('data_dict.keys(): ', data_dict.keys())
print(data_dict['input_im'].shape)
print(data_dict['recon_albedo'].shape)
print(data_dict['recon_depth'].shape)
print(data_dict['recon_normal'].shape)
print(data_dict['recon_normal'][0,0,:10,:10])
Shouldn't the test image be run from just a single RGB image, without relying on any pre-computed inputs? That was my understanding from reading the paper. All of these tensors should be initialized to zero except input_im. Maybe a single minimal inference example would help here: loading the model and running inference on a sample face/object image, along the lines of the sketch below.
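Something like the following is what I have in mind. This is only a rough sketch of the request, not the repo's actual API: the model constructor, checkpoint loading, forward call, input resolution (64x64), the [-1, 1] normalization, and the recon_* tensor shapes are all placeholders/assumptions on my part.

import torch
from PIL import Image
from torchvision import transforms

device = 'cuda' if torch.cuda.is_available() else 'cpu'

# Load one RGB face/object image and resize to the network input size
# (64x64 and [-1, 1] normalization are guesses; the real values may differ).
im = Image.open('sample_face.png').convert('RGB').resize((64, 64))
input_im = transforms.ToTensor()(im).unsqueeze(0).to(device) * 2. - 1.

# Only input_im is populated; the recon_* entries are zero-initialized
# placeholders, which is what I would expect at test time (shapes are guesses).
data_dict = {
    'input_im': input_im,
    'recon_albedo': torch.zeros_like(input_im),
    'recon_depth': torch.zeros(1, 1, 64, 64, device=device),
    'recon_normal': torch.zeros(1, 3, 64, 64, device=device),
}

# Placeholders below -- I don't know the actual constructor / checkpoint API:
# model = Model(cfgs)
# model.load_model_state(torch.load('checkpoint.pth', map_location=device))
# with torch.no_grad():
#     results = model.forward(data_dict)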