drsleep / densetorch
An easy-to-use wrapper for working with dense per-pixel tasks in PyTorch (including multi-task learning).
License: MIT License
Hi, the provided link (https://cloudstor.aarnet.edu.au/plus/s/pQY2sgg4fffGUYy) for downloading the images, depth maps and labels is broken. Could you please update the download link? Thank you!
Just found that numpy needs to be imported in setup.py, with its include directory added explicitly, to avoid an install error. If possible, please update setup.py.
--
from setuptools import setup, Extension
import numpy  # <<== needed for the include directory below

with open('requirements.txt') as f:
    requirements = f.read().splitlines()

setup(
    name="densetorch",
    version="0.0.1",
    author="Vladimir Nekrasov",
    author_email="nekrasowladimir.at.gmail.com",
    description="Light-Weight PyTorch Wrapper for dense per-pixel tasks.",
    url="https://github.com/drsleep/densetorch",
    include_dirs=[numpy.get_include()],  # <<== so the Cython extension finds the numpy headers
    packages=["densetorch"],
    setup_requires=[
        'setuptools>=18.0',
        'cython'],
    install_requires=requirements,
    ext_modules=[
        Extension('densetorch.engine.miou', sources=['./densetorch/engine/miou.pyx'])],
    classifiers=["Programming Language :: Python :: 3"],  # a list, not a bare string
    zip_safe=False)
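A common refinement of this fix (a sketch, not the repo's actual code) is to defer the numpy import until build time via a `build_ext` subclass, so that installation does not fail on machines where numpy is not yet present; numpy would then also be listed in `setup_requires`:

```python
# Sketch: lazy numpy include, assuming the same setup() arguments as above.
from setuptools import setup, Extension
from setuptools.command.build_ext import build_ext

class BuildExtWithNumpy(build_ext):
    def finalize_options(self):
        super().finalize_options()
        import numpy  # imported here, after setup_requires has made it available
        self.include_dirs.append(numpy.get_include())

setup(
    # ... same arguments as above, minus the top-level numpy import ...
    cmdclass={"build_ext": BuildExtWithNumpy},
    setup_requires=["setuptools>=18.0", "cython", "numpy"],
)
```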
Hi,
I am trying to follow your instructions to match the results of the paper using the NYU dataset, but the mIoU and RMSE still do not match. They become stable after 300 iterations and plateau at 25% and 0.7. Is there anything I missed in the instructions? Thanks!
Best,
Kuo
Would you like to share the evaluation scripts of the multi-task light-weight RefineNet? Thank you.
I am looking at the documentation of your model parameters, such as the return and combine layers for MTL RefineNet + MobileNet-v2:
https://github.com/DrSleep/DenseTorch/blob/dev/docs/Models.md
You have mentioned the following:
But if I train a single head like "segm" alone or "depth" alone, what should the return and combine layers be for MobileNet + LW-RefineNet? Looking forward to your support.
I also have another issue: when I load my trained model checkpoint, it says some keys are missing. Should I also load the optimizer state_dict?
Hello, why do I have this problem?
ModuleNotFoundError: No module named 'densetorch.engine.miou'
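This error usually means the Cython extension was never compiled, for instance when the package is run directly from the source tree without building it; rebuilding in place with `python setup.py build_ext --inplace` typically resolves it. A small diagnostic sketch (not part of the repo) to check whether the compiled module is visible to the interpreter:

```python
# Diagnostic sketch: check whether the compiled miou extension is importable.
# If the spec is None while densetorch itself is installed, the Cython
# extension has not been built for this environment yet.
import importlib.util

has_pkg = importlib.util.find_spec("densetorch") is not None
spec = importlib.util.find_spec("densetorch.engine.miou") if has_pkg else None
print("miou extension found:", spec is not None)
```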
Hi,
When I want to download NYUDv2 with segmentation and depth masks using the link provided (https://cloudstor.aarnet.edu.au/plus/s/pQY2sgg4fffGUYy), I can't find the file and the link seems out of date. Could you please check and update the link to the dataset?
Great work. I have tried to train a NYUD joint network using the default settings provided in the repo. The model is saved as checkpoint.pth.tar.
However, when running inference using one of the notebooks from https://github.com/DrSleep/multi-task-refinenet/blob/master/src/notebooks/ExpNYUD_joint.ipynb, I get the following error:
(MTLRefineNet) [ashah29@compute-0-3 notebooks]$ python ExpNYUD_joint.py
Traceback (most recent call last):
  File "ExpNYUD_joint.py", line 44, in <module>
    model.load_state_dict(ckpt['state_dict'])
  File "/project/xfu/aamir/anaconda3/envs/MTLRefineNet/lib/python3.6/site-packages/torch/nn/modules/module.py", line 1483, in load_state_dict
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for Net:
Missing key(s) in state_dict: "layer1.0.weight", "layer1.1.weight", "layer1.1.bias", "layer1.1.running_mean", "layer1.1.running_var", "layer2.0.output.0.0.weight", "layer2.0.output.0.1.weight", "layer2.0.output.0.1.bias", "layer2.0.output.0.1.running_mean", "layer2.0.output.0.1.running_var", "layer2.0.output.1.0.weight", "layer2.0.output.1.1.weight", "layer2.0.output.1.1.bias", "layer2.0.output.1.1.running_mean", "layer2.0.output.1.1.running_var", "layer2.0.output.2.0.weight", "layer2.0.output.2.1.weight", "layer2.0.output.2.1.bias", "layer2.0.output.2.1.running_mean", "layer2.0.output.2.1.running_var", "layer3.0.output.0.0.weight", "layer3.0.output.0.1.weight", "layer3.0.output.0.1.bias", "layer3.0.output.0.1.running_mean", "layer3.0.output.0.1.running_var", "layer3.0.output.1.0.weight", "layer3.0.output.1.1.weight", "layer3.0.output.1.1.bias", "layer3.0.output.1.1.running_mean", "layer3.0.output.1.1.running_var", "layer3.0.output.2.0.weight", "layer3.0.output.2.1.weight", "layer3.0.output.2.1.bias", "layer3.0.output.2.1.running_mean", "layer3.0.output.2.1.running_var", "layer3.1.output.0.0.weight", "layer3.1.output.0.1.weight", "layer3.1.output.0.1.bias", "layer3.1.output.0.1.running_mean", "layer3.1.output.0.1.running_var", "layer3.1.output.1.0.weight", "layer3.1.output.1.1.weight", "layer3.1.output.1.1.bias", "layer3.1.output.1.1.running_mean", "layer3.1.output.1.1.running_var", "layer3.1.output.2.0.weight", "layer3.1.output.2.1.weight", "layer3.1.output.2.1.bias", "layer3.1.output.2.1.running_mean", "layer3.1.output.2.1.running_var", "layer4.0.output.0.0.weight", "layer4.0.output.0.1.weight", "layer4.0.output.0.1.bias", "layer4.0.output.0.1.running_mean", "layer4.0.output.0.1.running_var", "layer4.0.output.1.0.weight", "layer4.0.output.1.1.weight", "layer4.0.output.1.1.bias", "layer4.0.output.1.1.running_mean", "layer4.0.output.1.1.running_var", "layer4.0.output.2.0.weight", "layer4.0.output.2.1.weight", "layer4.0.output.2.1.bias", 
"layer4.0.output.2.1.running_mean", "layer4.0.output.2.1.running_var", "layer4.1.output.0.0.weight", "layer4.1.output.0.1.weight", "layer4.1.output.0.1.bias", "layer4.1.output.0.1.running_mean", "layer4.1.output.0.1.running_var", "layer4.1.output.1.0.weight", "layer4.1.output.1.1.weight", "layer4.1.output.1.1.bias", "layer4.1.output.1.1.running_mean", "layer4.1.output.1.1.running_var", "layer4.1.output.2.0.weight", "layer4.1.output.2.1.weight", "layer4.1.output.2.1.bias", "layer4.1.output.2.1.running_mean", "layer4.1.output.2.1.running_var", "layer4.2.output.0.0.weight", "layer4.2.output.0.1.weight", "layer4.2.output.0.1.bias", "layer4.2.output.0.1.running_mean", "layer4.2.output.0.1.running_var", "layer4.2.output.1.0.weight", "layer4.2.output.1.1.weight", "layer4.2.output.1.1.bias", "layer4.2.output.1.1.running_mean", "layer4.2.output.1.1.running_var", "layer4.2.output.2.0.weight", "layer4.2.output.2.1.weight", "layer4.2.output.2.1.bias", "layer4.2.output.2.1.running_mean", "layer4.2.output.2.1.running_var", "layer5.0.output.0.0.weight", "layer5.0.output.0.1.weight", "layer5.0.output.0.1.bias", "layer5.0.output.0.1.running_mean", "layer5.0.output.0.1.running_var", "layer5.0.output.1.0.weight", "layer5.0.output.1.1.weight", "layer5.0.output.1.1.bias", "layer5.0.output.1.1.running_mean", "layer5.0.output.1.1.running_var", "layer5.0.output.2.0.weight", "layer5.0.output.2.1.weight", "layer5.0.output.2.1.bias", "layer5.0.output.2.1.running_mean", "layer5.0.output.2.1.running_var", "layer5.1.output.0.0.weight", "layer5.1.output.0.1.weight", "layer5.1.output.0.1.bias", "layer5.1.output.0.1.running_mean", "layer5.1.output.0.1.running_var", "layer5.1.output.1.0.weight", "layer5.1.output.1.1.weight", "layer5.1.output.1.1.bias", "layer5.1.output.1.1.running_mean", "layer5.1.output.1.1.running_var", "layer5.1.output.2.0.weight", "layer5.1.output.2.1.weight", "layer5.1.output.2.1.bias", "layer5.1.output.2.1.running_mean", "layer5.1.output.2.1.running_var", 
"layer5.2.output.0.0.weight", "layer5.2.output.0.1.weight", "layer5.2.output.0.1.bias", "layer5.2.output.0.1.running_mean", "layer5.2.output.0.1.running_var", "layer5.2.output.1.0.weight", "layer5.2.output.1.1.weight", "layer5.2.output.1.1.bias", "layer5.2.output.1.1.running_mean", "layer5.2.output.1.1.running_var", "layer5.2.output.2.0.weight", "layer5.2.output.2.1.weight", "layer5.2.output.2.1.bias", "layer5.2.output.2.1.running_mean", "layer5.2.output.2.1.running_var", "layer5.3.output.0.0.weight", "layer5.3.output.0.1.weight", "layer5.3.output.0.1.bias", "layer5.3.output.0.1.running_mean", "layer5.3.output.0.1.running_var", "layer5.3.output.1.0.weight", "layer5.3.output.1.1.weight", "layer5.3.output.1.1.bias", "layer5.3.output.1.1.running_mean", "layer5.3.output.1.1.running_var", "layer5.3.output.2.0.weight", "layer5.3.output.2.1.weight", "layer5.3.output.2.1.bias", "layer5.3.output.2.1.running_mean", "layer5.3.output.2.1.running_var", "layer6.0.output.0.0.weight", "layer6.0.output.0.1.weight", "layer6.0.output.0.1.bias", "layer6.0.output.0.1.running_mean", "layer6.0.output.0.1.running_var", "layer6.0.output.1.0.weight", "layer6.0.output.1.1.weight", "layer6.0.output.1.1.bias", "layer6.0.output.1.1.running_mean", "layer6.0.output.1.1.running_var", "layer6.0.output.2.0.weight", "layer6.0.output.2.1.weight", "layer6.0.output.2.1.bias", "layer6.0.output.2.1.running_mean", "layer6.0.output.2.1.running_var", "layer6.1.output.0.0.weight", "layer6.1.output.0.1.weight", "layer6.1.output.0.1.bias", "layer6.1.output.0.1.running_mean", "layer6.1.output.0.1.running_var", "layer6.1.output.1.0.weight", "layer6.1.output.1.1.weight", "layer6.1.output.1.1.bias", "layer6.1.output.1.1.running_mean", "layer6.1.output.1.1.running_var", "layer6.1.output.2.0.weight", "layer6.1.output.2.1.weight", "layer6.1.output.2.1.bias", "layer6.1.output.2.1.running_mean", "layer6.1.output.2.1.running_var", "layer6.2.output.0.0.weight", "layer6.2.output.0.1.weight", "layer6.2.output.0.1.bias", 
"layer6.2.output.0.1.running_mean", "layer6.2.output.0.1.running_var", "layer6.2.output.1.0.weight", "layer6.2.output.1.1.weight", "layer6.2.output.1.1.bias", "layer6.2.output.1.1.running_mean", "layer6.2.output.1.1.running_var", "layer6.2.output.2.0.weight", "layer6.2.output.2.1.weight", "layer6.2.output.2.1.bias", "layer6.2.output.2.1.running_mean", "layer6.2.output.2.1.running_var", "layer7.0.output.0.0.weight", "layer7.0.output.0.1.weight", "layer7.0.output.0.1.bias", "layer7.0.output.0.1.running_mean", "layer7.0.output.0.1.running_var", "layer7.0.output.1.0.weight", "layer7.0.output.1.1.weight", "layer7.0.output.1.1.bias", "layer7.0.output.1.1.running_mean", "layer7.0.output.1.1.running_var", "layer7.0.output.2.0.weight", "layer7.0.output.2.1.weight", "layer7.0.output.2.1.bias", "layer7.0.output.2.1.running_mean", "layer7.0.output.2.1.running_var", "layer7.1.output.0.0.weight", "layer7.1.output.0.1.weight", "layer7.1.output.0.1.bias", "layer7.1.output.0.1.running_mean", "layer7.1.output.0.1.running_var", "layer7.1.output.1.0.weight", "layer7.1.output.1.1.weight", "layer7.1.output.1.1.bias", "layer7.1.output.1.1.running_mean", "layer7.1.output.1.1.running_var", "layer7.1.output.2.0.weight", "layer7.1.output.2.1.weight", "layer7.1.output.2.1.bias", "layer7.1.output.2.1.running_mean", "layer7.1.output.2.1.running_var", "layer7.2.output.0.0.weight", "layer7.2.output.0.1.weight", "layer7.2.output.0.1.bias", "layer7.2.output.0.1.running_mean", "layer7.2.output.0.1.running_var", "layer7.2.output.1.0.weight", "layer7.2.output.1.1.weight", "layer7.2.output.1.1.bias", "layer7.2.output.1.1.running_mean", "layer7.2.output.1.1.running_var", "layer7.2.output.2.0.weight", "layer7.2.output.2.1.weight", "layer7.2.output.2.1.bias", "layer7.2.output.2.1.running_mean", "layer7.2.output.2.1.running_var", "layer8.0.output.0.0.weight", "layer8.0.output.0.1.weight", "layer8.0.output.0.1.bias", "layer8.0.output.0.1.running_mean", "layer8.0.output.0.1.running_var", 
"layer8.0.output.1.0.weight", "layer8.0.output.1.1.weight", "layer8.0.output.1.1.bias", "layer8.0.output.1.1.running_mean", "layer8.0.output.1.1.running_var", "layer8.0.output.2.0.weight", "layer8.0.output.2.1.weight", "layer8.0.output.2.1.bias", "layer8.0.output.2.1.running_mean", "layer8.0.output.2.1.running_var", "conv8.weight", "conv7.weight", "conv6.weight", "conv5.weight", "conv4.weight", "conv3.weight", "crp4.0.1_outvar_dimred.weight", "crp4.0.2_outvar_dimred.weight", "crp4.0.3_outvar_dimred.weight", "crp4.0.4_outvar_dimred.weight", "crp3.0.1_outvar_dimred.weight", "crp3.0.2_outvar_dimred.weight", "crp3.0.3_outvar_dimred.weight", "crp3.0.4_outvar_dimred.weight", "crp2.0.1_outvar_dimred.weight", "crp2.0.2_outvar_dimred.weight", "crp2.0.3_outvar_dimred.weight", "crp2.0.4_outvar_dimred.weight", "crp1.0.1_outvar_dimred.weight", "crp1.0.2_outvar_dimred.weight", "crp1.0.3_outvar_dimred.weight", "crp1.0.4_outvar_dimred.weight", "conv_adapt4.weight", "conv_adapt3.weight", "conv_adapt2.weight", "pre_depth.weight", "depth.weight", "depth.bias", "pre_segm.weight", "segm.weight", "segm.bias".
Unexpected key(s) in state_dict: "module.0.layer1.0.weight", "module.0.layer1.1.weight", "module.0.layer1.1.bias", "module.0.layer1.1.running_mean", "module.0.layer1.1.running_var", "module.0.layer1.1.num_batches_tracked", "module.0.layer2.0.output.0.0.weight", "module.0.layer2.0.output.0.1.weight", "module.0.layer2.0.output.0.1.bias", "module.0.layer2.0.output.0.1.running_mean", "module.0.layer2.0.output.0.1.running_var", "module.0.layer2.0.output.0.1.num_batches_tracked", "module.0.layer2.0.output.1.0.weight", "module.0.layer2.0.output.1.1.weight", "module.0.layer2.0.output.1.1.bias", "module.0.layer2.0.output.1.1.running_mean", "module.0.layer2.0.output.1.1.running_var", "module.0.layer2.0.output.1.1.num_batches_tracked", "module.0.layer2.0.output.2.0.weight", "module.0.layer2.0.output.2.1.weight", "module.0.layer2.0.output.2.1.bias", "module.0.layer2.0.output.2.1.running_mean", "module.0.layer2.0.output.2.1.running_var", "module.0.layer2.0.output.2.1.num_batches_tracked", "module.0.layer3.0.output.0.0.weight", "module.0.layer3.0.output.0.1.weight", "module.0.layer3.0.output.0.1.bias", "module.0.layer3.0.output.0.1.running_mean", "module.0.layer3.0.output.0.1.running_var", "module.0.layer3.0.output.0.1.num_batches_tracked", "module.0.layer3.0.output.1.0.weight", "module.0.layer3.0.output.1.1.weight", "module.0.layer3.0.output.1.1.bias", "module.0.layer3.0.output.1.1.running_mean", "module.0.layer3.0.output.1.1.running_var", "module.0.layer3.0.output.1.1.num_batches_tracked", "module.0.layer3.0.output.2.0.weight", "module.0.layer3.0.output.2.1.weight", "module.0.layer3.0.output.2.1.bias", "module.0.layer3.0.output.2.1.running_mean", "module.0.layer3.0.output.2.1.running_var", "module.0.layer3.0.output.2.1.num_batches_tracked", "module.0.layer3.1.output.0.0.weight", "module.0.layer3.1.output.0.1.weight", "module.0.layer3.1.output.0.1.bias", "module.0.layer3.1.output.0.1.running_mean", "module.0.layer3.1.output.0.1.running_var", 
"module.0.layer3.1.output.0.1.num_batches_tracked", "module.0.layer3.1.output.1.0.weight", "module.0.layer3.1.output.1.1.weight", "module.0.layer3.1.output.1.1.bias", "module.0.layer3.1.output.1.1.running_mean", "module.0.layer3.1.output.1.1.running_var", "module.0.layer3.1.output.1.1.num_batches_tracked", "module.0.layer3.1.output.2.0.weight", "module.0.layer3.1.output.2.1.weight", "module.0.layer3.1.output.2.1.bias", "module.0.layer3.1.output.2.1.running_mean", "module.0.layer3.1.output.2.1.running_var", "module.0.layer3.1.output.2.1.num_batches_tracked", "module.0.layer4.0.output.0.0.weight", "module.0.layer4.0.output.0.1.weight", "module.0.layer4.0.output.0.1.bias", "module.0.layer4.0.output.0.1.running_mean", "module.0.layer4.0.output.0.1.running_var", "module.0.layer4.0.output.0.1.num_batches_tracked", "module.0.layer4.0.output.1.0.weight", "module.0.layer4.0.output.1.1.weight", "module.0.layer4.0.output.1.1.bias", "module.0.layer4.0.output.1.1.running_mean", "module.0.layer4.0.output.1.1.running_var", "module.0.layer4.0.output.1.1.num_batches_tracked", "module.0.layer4.0.output.2.0.weight", "module.0.layer4.0.output.2.1.weight", "module.0.layer4.0.output.2.1.bias", "module.0.layer4.0.output.2.1.running_mean", "module.0.layer4.0.output.2.1.running_var", "module.0.layer4.0.output.2.1.num_batches_tracked", "module.0.layer4.1.output.0.0.weight", "module.0.layer4.1.output.0.1.weight", "module.0.layer4.1.output.0.1.bias", "module.0.layer4.1.output.0.1.running_mean", "module.0.layer4.1.output.0.1.running_var", "module.0.layer4.1.output.0.1.num_batches_tracked", "module.0.layer4.1.output.1.0.weight", "module.0.layer4.1.output.1.1.weight", "module.0.layer4.1.output.1.1.bias", "module.0.layer4.1.output.1.1.running_mean", "module.0.layer4.1.output.1.1.running_var", "module.0.layer4.1.output.1.1.num_batches_tracked", "module.0.layer4.1.output.2.0.weight", "module.0.layer4.1.output.2.1.weight", "module.0.layer4.1.output.2.1.bias", 
"module.0.layer4.1.output.2.1.running_mean", "module.0.layer4.1.output.2.1.running_var", "module.0.layer4.1.output.2.1.num_batches_tracked", "module.0.layer4.2.output.0.0.weight", "module.0.layer4.2.output.0.1.weight", "module.0.layer4.2.output.0.1.bias", "module.0.layer4.2.output.0.1.running_mean", "module.0.layer4.2.output.0.1.running_var", "module.0.layer4.2.output.0.1.num_batches_tracked", "module.0.layer4.2.output.1.0.weight", "module.0.layer4.2.output.1.1.weight", "module.0.layer4.2.output.1.1.bias", "module.0.layer4.2.output.1.1.running_mean", "module.0.layer4.2.output.1.1.running_var", "module.0.layer4.2.output.1.1.num_batches_tracked", "module.0.layer4.2.output.2.0.weight", "module.0.layer4.2.output.2.1.weight", "module.0.layer4.2.output.2.1.bias", "module.0.layer4.2.output.2.1.running_mean", "module.0.layer4.2.output.2.1.running_var", "module.0.layer4.2.output.2.1.num_batches_tracked", "module.0.layer5.0.output.0.0.weight", "module.0.layer5.0.output.0.1.weight", "module.0.layer5.0.output.0.1.bias", "module.0.layer5.0.output.0.1.running_mean", "module.0.layer5.0.output.0.1.running_var", "module.0.layer5.0.output.0.1.num_batches_tracked", "module.0.layer5.0.output.1.0.weight", "module.0.layer5.0.output.1.1.weight", "module.0.layer5.0.output.1.1.bias", "module.0.layer5.0.output.1.1.running_mean", "module.0.layer5.0.output.1.1.running_var", "module.0.layer5.0.output.1.1.num_batches_tracked", "module.0.layer5.0.output.2.0.weight", "module.0.layer5.0.output.2.1.weight", "module.0.layer5.0.output.2.1.bias", "module.0.layer5.0.output.2.1.running_mean", "module.0.layer5.0.output.2.1.running_var", "module.0.layer5.0.output.2.1.num_batches_tracked", "module.0.layer5.1.output.0.0.weight", "module.0.layer5.1.output.0.1.weight", "module.0.layer5.1.output.0.1.bias", "module.0.layer5.1.output.0.1.running_mean", "module.0.layer5.1.output.0.1.running_var", "module.0.layer5.1.output.0.1.num_batches_tracked", "module.0.layer5.1.output.1.0.weight", 
"module.0.layer5.1.output.1.1.weight", "module.0.layer5.1.output.1.1.bias", "module.0.layer5.1.output.1.1.running_mean", "module.0.layer5.1.output.1.1.running_var", "module.0.layer5.1.output.1.1.num_batches_tracked", "module.0.layer5.1.output.2.0.weight", "module.0.layer5.1.output.2.1.weight", "module.0.layer5.1.output.2.1.bias", "module.0.layer5.1.output.2.1.running_mean", "module.0.layer5.1.output.2.1.running_var", "module.0.layer5.1.output.2.1.num_batches_tracked", "module.0.layer5.2.output.0.0.weight", "module.0.layer5.2.output.0.1.weight", "module.0.layer5.2.output.0.1.bias", "module.0.layer5.2.output.0.1.running_mean", "module.0.layer5.2.output.0.1.running_var", "module.0.layer5.2.output.0.1.num_batches_tracked", "module.0.layer5.2.output.1.0.weight", "module.0.layer5.2.output.1.1.weight", "module.0.layer5.2.output.1.1.bias", "module.0.layer5.2.output.1.1.running_mean", "module.0.layer5.2.output.1.1.running_var", "module.0.layer5.2.output.1.1.num_batches_tracked", "module.0.layer5.2.output.2.0.weight", "module.0.layer5.2.output.2.1.weight", "module.0.layer5.2.output.2.1.bias", "module.0.layer5.2.output.2.1.running_mean", "module.0.layer5.2.output.2.1.running_var", "module.0.layer5.2.output.2.1.num_batches_tracked", "module.0.layer5.3.output.0.0.weight", "module.0.layer5.3.output.0.1.weight", "module.0.layer5.3.output.0.1.bias", "module.0.layer5.3.output.0.1.running_mean", "module.0.layer5.3.output.0.1.running_var", "module.0.layer5.3.output.0.1.num_batches_tracked", "module.0.layer5.3.output.1.0.weight", "module.0.layer5.3.output.1.1.weight", "module.0.layer5.3.output.1.1.bias", "module.0.layer5.3.output.1.1.running_mean", "module.0.layer5.3.output.1.1.running_var", "module.0.layer5.3.output.1.1.num_batches_tracked", "module.0.layer5.3.output.2.0.weight", "module.0.layer5.3.output.2.1.weight", "module.0.layer5.3.output.2.1.bias", "module.0.layer5.3.output.2.1.running_mean", "module.0.layer5.3.output.2.1.running_var", 
"module.0.layer5.3.output.2.1.num_batches_tracked", "module.0.layer6.0.output.0.0.weight", "module.0.layer6.0.output.0.1.weight", "module.0.layer6.0.output.0.1.bias", "module.0.layer6.0.output.0.1.running_mean", "module.0.layer6.0.output.0.1.running_var", "module.0.layer6.0.output.0.1.num_batches_tracked", "module.0.layer6.0.output.1.0.weight", "module.0.layer6.0.output.1.1.weight", "module.0.layer6.0.output.1.1.bias", "module.0.layer6.0.output.1.1.running_mean", "module.0.layer6.0.output.1.1.running_var", "module.0.layer6.0.output.1.1.num_batches_tracked", "module.0.layer6.0.output.2.0.weight", "module.0.layer6.0.output.2.1.weight", "module.0.layer6.0.output.2.1.bias", "module.0.layer6.0.output.2.1.running_mean", "module.0.layer6.0.output.2.1.running_var", "module.0.layer6.0.output.2.1.num_batches_tracked", "module.0.layer6.1.output.0.0.weight", "module.0.layer6.1.output.0.1.weight", "module.0.layer6.1.output.0.1.bias", "module.0.layer6.1.output.0.1.running_mean", "module.0.layer6.1.output.0.1.running_var", "module.0.layer6.1.output.0.1.num_batches_tracked", "module.0.layer6.1.output.1.0.weight", "module.0.layer6.1.output.1.1.weight", "module.0.layer6.1.output.1.1.bias", "module.0.layer6.1.output.1.1.running_mean", "module.0.layer6.1.output.1.1.running_var", "module.0.layer6.1.output.1.1.num_batches_tracked", "module.0.layer6.1.output.2.0.weight", "module.0.layer6.1.output.2.1.weight", "module.0.layer6.1.output.2.1.bias", "module.0.layer6.1.output.2.1.running_mean", "module.0.layer6.1.output.2.1.running_var", "module.0.layer6.1.output.2.1.num_batches_tracked", "module.0.layer6.2.output.0.0.weight", "module.0.layer6.2.output.0.1.weight", "module.0.layer6.2.output.0.1.bias", "module.0.layer6.2.output.0.1.running_mean", "module.0.layer6.2.output.0.1.running_var", "module.0.layer6.2.output.0.1.num_batches_tracked", "module.0.layer6.2.output.1.0.weight", "module.0.layer6.2.output.1.1.weight", "module.0.layer6.2.output.1.1.bias", 
"module.0.layer6.2.output.1.1.running_mean", "module.0.layer6.2.output.1.1.running_var", "module.0.layer6.2.output.1.1.num_batches_tracked", "module.0.layer6.2.output.2.0.weight", "module.0.layer6.2.output.2.1.weight", "module.0.layer6.2.output.2.1.bias", "module.0.layer6.2.output.2.1.running_mean", "module.0.layer6.2.output.2.1.running_var", "module.0.layer6.2.output.2.1.num_batches_tracked", "module.0.layer7.0.output.0.0.weight", "module.0.layer7.0.output.0.1.weight", "module.0.layer7.0.output.0.1.bias", "module.0.layer7.0.output.0.1.running_mean", "module.0.layer7.0.output.0.1.running_var", "module.0.layer7.0.output.0.1.num_batches_tracked", "module.0.layer7.0.output.1.0.weight", "module.0.layer7.0.output.1.1.weight", "module.0.layer7.0.output.1.1.bias", "module.0.layer7.0.output.1.1.running_mean", "module.0.layer7.0.output.1.1.running_var", "module.0.layer7.0.output.1.1.num_batches_tracked", "module.0.layer7.0.output.2.0.weight", "module.0.layer7.0.output.2.1.weight", "module.0.layer7.0.output.2.1.bias", "module.0.layer7.0.output.2.1.running_mean", "module.0.layer7.0.output.2.1.running_var", "module.0.layer7.0.output.2.1.num_batches_tracked", "module.0.layer7.1.output.0.0.weight", "module.0.layer7.1.output.0.1.weight", "module.0.layer7.1.output.0.1.bias", "module.0.layer7.1.output.0.1.running_mean", "module.0.layer7.1.output.0.1.running_var", "module.0.layer7.1.output.0.1.num_batches_tracked", "module.0.layer7.1.output.1.0.weight", "module.0.layer7.1.output.1.1.weight", "module.0.layer7.1.output.1.1.bias", "module.0.layer7.1.output.1.1.running_mean", "module.0.layer7.1.output.1.1.running_var", "module.0.layer7.1.output.1.1.num_batches_tracked", "module.0.layer7.1.output.2.0.weight", "module.0.layer7.1.output.2.1.weight", "module.0.layer7.1.output.2.1.bias", "module.0.layer7.1.output.2.1.running_mean", "module.0.layer7.1.output.2.1.running_var", "module.0.layer7.1.output.2.1.num_batches_tracked", "module.0.layer7.2.output.0.0.weight", 
"module.0.layer7.2.output.0.1.weight", "module.0.layer7.2.output.0.1.bias", "module.0.layer7.2.output.0.1.running_mean", "module.0.layer7.2.output.0.1.running_var", "module.0.layer7.2.output.0.1.num_batches_tracked", "module.0.layer7.2.output.1.0.weight", "module.0.layer7.2.output.1.1.weight", "module.0.layer7.2.output.1.1.bias", "module.0.layer7.2.output.1.1.running_mean", "module.0.layer7.2.output.1.1.running_var", "module.0.layer7.2.output.1.1.num_batches_tracked", "module.0.layer7.2.output.2.0.weight", "module.0.layer7.2.output.2.1.weight", "module.0.layer7.2.output.2.1.bias", "module.0.layer7.2.output.2.1.running_mean", "module.0.layer7.2.output.2.1.running_var", "module.0.layer7.2.output.2.1.num_batches_tracked", "module.0.layer8.0.output.0.0.weight", "module.0.layer8.0.output.0.1.weight", "module.0.layer8.0.output.0.1.bias", "module.0.layer8.0.output.0.1.running_mean", "module.0.layer8.0.output.0.1.running_var", "module.0.layer8.0.output.0.1.num_batches_tracked", "module.0.layer8.0.output.1.0.weight", "module.0.layer8.0.output.1.1.weight", "module.0.layer8.0.output.1.1.bias", "module.0.layer8.0.output.1.1.running_mean", "module.0.layer8.0.output.1.1.running_var", "module.0.layer8.0.output.1.1.num_batches_tracked", "module.0.layer8.0.output.2.0.weight", "module.0.layer8.0.output.2.1.weight", "module.0.layer8.0.output.2.1.bias", "module.0.layer8.0.output.2.1.running_mean", "module.0.layer8.0.output.2.1.running_var", "module.0.layer8.0.output.2.1.num_batches_tracked", "module.1.stem_convs.0.weight", "module.1.stem_convs.1.weight", "module.1.stem_convs.2.weight", "module.1.stem_convs.3.weight", "module.1.stem_convs.4.weight", "module.1.stem_convs.5.weight", "module.1.crp_blocks.0.0.1_outvar_dimred.weight", "module.1.crp_blocks.0.0.2_outvar_dimred.weight", "module.1.crp_blocks.0.0.3_outvar_dimred.weight", "module.1.crp_blocks.0.0.4_outvar_dimred.weight", "module.1.crp_blocks.1.0.1_outvar_dimred.weight", "module.1.crp_blocks.1.0.2_outvar_dimred.weight", 
"module.1.crp_blocks.1.0.3_outvar_dimred.weight", "module.1.crp_blocks.1.0.4_outvar_dimred.weight", "module.1.crp_blocks.2.0.1_outvar_dimred.weight", "module.1.crp_blocks.2.0.2_outvar_dimred.weight", "module.1.crp_blocks.2.0.3_outvar_dimred.weight", "module.1.crp_blocks.2.0.4_outvar_dimred.weight", "module.1.crp_blocks.3.0.1_outvar_dimred.weight", "module.1.crp_blocks.3.0.2_outvar_dimred.weight", "module.1.crp_blocks.3.0.3_outvar_dimred.weight", "module.1.crp_blocks.3.0.4_outvar_dimred.weight", "module.1.adapt_convs.0.weight", "module.1.adapt_convs.1.weight", "module.1.adapt_convs.2.weight", "module.1.heads.0.0.weight", "module.1.heads.0.2.weight", "module.1.heads.0.2.bias", "module.1.heads.1.0.weight", "module.1.heads.1.2.weight", "module.1.heads.1.2.bias".
I have tried replacing the "module.0." prefix with "" as follows:
ckpt = torch.load('../../DenseTorch/ckpt/checkpoint.pth.tar')
pretrained_dict = ckpt['state_dict']
new_pretrained_dict = {key.replace("module.0.", ""): value for key, value in pretrained_dict.items()}
print("new state_dict keys: ", new_pretrained_dict.keys())
model.load_state_dict(new_pretrained_dict, strict=True)
but the error doesn't go away. Any suggestion on how to address this error?
Traceback (most recent call last):
  File "ExpNYUD_joint.py", line 48, in <module>
    model.load_state_dict(new_pretrained_dict, strict=True)
  File "/project/xfu/aamir/anaconda3/envs/MTLRefineNet/lib/python3.6/site-packages/torch/nn/modules/module.py", line 1483, in load_state_dict
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for Net:
Missing key(s) in state_dict: "conv8.weight", "conv7.weight", "conv6.weight", "conv5.weight", "conv4.weight", "conv3.weight", "crp4.0.1_outvar_dimred.weight", "crp4.0.2_outvar_dimred.weight", "crp4.0.3_outvar_dimred.weight", "crp4.0.4_outvar_dimred.weight", "crp3.0.1_outvar_dimred.weight", "crp3.0.2_outvar_dimred.weight", "crp3.0.3_outvar_dimred.weight", "crp3.0.4_outvar_dimred.weight", "crp2.0.1_outvar_dimred.weight", "crp2.0.2_outvar_dimred.weight", "crp2.0.3_outvar_dimred.weight", "crp2.0.4_outvar_dimred.weight", "crp1.0.1_outvar_dimred.weight", "crp1.0.2_outvar_dimred.weight", "crp1.0.3_outvar_dimred.weight", "crp1.0.4_outvar_dimred.weight", "conv_adapt4.weight", "conv_adapt3.weight", "conv_adapt2.weight", "pre_depth.weight", "depth.weight", "depth.bias", "pre_segm.weight", "segm.weight", "segm.bias".
Unexpected key(s) in state_dict: "stem_convs.0.weight", "stem_convs.1.weight", "stem_convs.2.weight", "stem_convs.3.weight", "stem_convs.4.weight", "stem_convs.5.weight", "crp_blocks.0.0.1_outvar_dimred.weight", "crp_blocks.0.0.2_outvar_dimred.weight", "crp_blocks.0.0.3_outvar_dimred.weight", "crp_blocks.0.0.4_outvar_dimred.weight", "crp_blocks.1.0.1_outvar_dimred.weight", "crp_blocks.1.0.2_outvar_dimred.weight", "crp_blocks.1.0.3_outvar_dimred.weight", "crp_blocks.1.0.4_outvar_dimred.weight", "crp_blocks.2.0.1_outvar_dimred.weight", "crp_blocks.2.0.2_outvar_dimred.weight", "crp_blocks.2.0.3_outvar_dimred.weight", "crp_blocks.2.0.4_outvar_dimred.weight", "crp_blocks.3.0.1_outvar_dimred.weight", "crp_blocks.3.0.2_outvar_dimred.weight", "crp_blocks.3.0.3_outvar_dimred.weight", "crp_blocks.3.0.4_outvar_dimred.weight", "adapt_convs.0.weight", "adapt_convs.1.weight", "adapt_convs.2.weight", "heads.0.0.weight", "heads.0.2.weight", "heads.0.2.bias", "heads.1.0.weight", "heads.1.2.weight", "heads.1.2.bias".
Using strict=False does make the code run without errors, but the predictions are incorrect and the output images do not show anything.
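The key mismatch here has two separate parts. The "module.N." prefix comes from the model being wrapped (e.g. in nn.DataParallel plus an nn.Sequential) at training time, and can be stripped mechanically. The remaining names ("stem_convs" vs "conv8", "heads" vs "segm"/"depth"), however, indicate that the checkpoint was produced by a different model definition than the notebook's Net, so no amount of renaming fixes it reliably; the checkpoint should be loaded into the same DenseTorch model class it was trained with. A sketch of the prefix strip, using a toy dict with strings in place of tensors:

```python
import re

def strip_wrapper_prefix(state_dict):
    # Remove a leading "module.<idx>." (from DataParallel + Sequential
    # wrapping) from every key; values are passed through untouched.
    return {re.sub(r"^module\.\d+\.", "", k): v for k, v in state_dict.items()}

# Toy example:
ckpt = {"module.0.layer1.0.weight": "w0", "module.1.heads.0.0.weight": "w1"}
print(strip_wrapper_prefix(ckpt))
# {'layer1.0.weight': 'w0', 'heads.0.0.weight': 'w1'}
```

Even after stripping, a key like "heads.0.0.weight" will not match "segm.weight"; that residual mismatch is the architecture difference, not a wrapping artifact.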
Hi,
Just wondering, does Python 3.5 work with DenseTorch?
Thanks!
Best,
Kuo
I have a dataset that contains mask and depth images, and I would like to train this model on it. As I understand it, my masks and depth images should first be single-channel, and then I need to modify the hyper-parameters within the config.py file according to the paper. My concern is the pre-trained model: in train.py, line 12, the checkpoint is ckpt_postfix = 'mtrflw-nyudv2', which is the multi-task RefineNet trained on the NYUDv2 dataset. Since my dataset consists of outdoor images, I need to feed in a model pre-trained on the KITTI dataset. If my understanding of the whole process is right, how can I do that, and what else should I be considering? @DrSleep
Thank you very much for your work. I see that your examples are supervised; I wonder if I can use a custom unsupervised loss function for training, or do you have any suggestions?
In addition, I found that the training process is encapsulated in dt.engine, so can TensorboardX be used to visualize intermediate results of the training process?
Hello, dear author.
How do I get the segmentation map and depth map from a saved model (.tar)?
Building wheels for collected packages: python2
Building wheel for python2 (setup.py) ... done
Stored in directory: /root/.cache/pip/wheels/d1/6a/52/2ea03062735c314798c8c5ac3da63271888638d2f4fed6d4bd
Successfully built python2
Installing collected packages: python2, densetorch
Running setup.py develop for densetorch
ERROR: Complete output from command /usr/bin/python3.6 -c 'import setuptools, tokenize;__file__='"'"'/root/data/densetorch/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' develop --no-deps:
ERROR: running develop
running egg_info
writing densetorch.egg-info/PKG-INFO
writing dependency_links to densetorch.egg-info/dependency_links.txt
writing requirements to densetorch.egg-info/requires.txt
writing top-level names to densetorch.egg-info/top_level.txt
reading manifest file 'densetorch.egg-info/SOURCES.txt'
writing manifest file 'densetorch.egg-info/SOURCES.txt'
running build_ext
skipping './densetorch/engine/miou.c' Cython extension (up-to-date)
building 'densetorch.engine.miou' extension
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/usr/local/lib/python3.6/dist-packages/numpy/core/include -I/usr/include/python3.6m -I/usr/include/python3.6m -c ./densetorch/engine/miou.c -o build/temp.linux-x86_64-3.6/./densetorch/engine/miou.o
./densetorch/engine/miou.c:14:20: fatal error: Python.h: No such file or directory
compilation terminated.
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
----------------------------------------
ERROR: Command "/usr/bin/python3.6 -c 'import setuptools, tokenize;__file__='"'"'/root/data/densetorch/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' develop --no-deps" failed with error code 1 in /root/data/densetorch/
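The fatal error "Python.h: No such file or directory" means the CPython development headers are not installed for the interpreter doing the build; on Debian/Ubuntu (an assumption about the distro here) they ship in the matching dev package, e.g. python3.6-dev. A small check sketch, not part of the repo:

```python
import os
import sysconfig

# Locate the include directory of the running interpreter and look for
# Python.h. If it is missing, installing the matching dev package
# (e.g. "sudo apt-get install python3.6-dev" on Debian/Ubuntu) should
# fix the extension build.
include_dir = sysconfig.get_paths()["include"]
print("Python.h present:", os.path.exists(os.path.join(include_dir, "Python.h")))
```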
Good evening,
I'm trying to use DenseTorch to train a multi-task model on a custom dataset (RGB, masks and depth, with the masks and depth being grayscale). Whenever I try to run the training, I get the following error, which I couldn't solve:
Traceback (most recent call last):
  File "train.py", line 82, in <module>
    dt.engine.train(model1, optims, [crit_segm, crit_depth], trainloader, loss_coeffs)
  File "/media/pfe_historiar/data/dataset/DenseTorch-master/densetorch/engine/trainval.py", line 80, in train
    target.squeeze(dim=1),
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/media/pfe_historiar/data/dataset/DenseTorch-master/densetorch/engine/losses.py", line 27, in forward
    c = 0.2 * torch.max(err)
RuntimeError: cuda runtime error (59) : device-side assert triggered at /pytorch/aten/src/THC/THCReduceAll.cuh:327
Any recommendations on how to solve this issue?
The full terminal log is in this file log.txt.
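A device-side assert (CUDA error 59) during a loss computation is very often triggered by segmentation labels outside the valid range [0, num_classes), for instance a grayscale mask that still holds raw intensities 0-255 instead of class indices; rerunning with CUDA_LAUNCH_BLOCKING=1 gives a more precise traceback. A hedged pure-Python sanity check, where num_classes and ignore_index are placeholders to replace with your own config values:

```python
# Sanity-check sketch: verify that every non-ignored mask value is a valid
# class index before training. num_classes and ignore_index are placeholder
# values; substitute the ones from your own config.
num_classes = 40
ignore_index = 255

mask = [0, 5, 39, 255, 12]          # toy flattened segmentation mask
labels = [v for v in mask if v != ignore_index]
print(all(0 <= v < num_classes for v in labels))  # True for this toy mask
```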
I have trained the model using a custom dataset, where the GT depth maps were generated using COLMAP. However, after training and running the inference notebook, the predicted depth maps look more like normal maps.
@DrSleep any idea or guidance on the matter?