@JiaweiShiCV So it just stopped producing any further output...?

I have not encountered this problem; I need more information:
- Is it fine with four GPUs?
- Which PyTorch version?

@xinntao Thanks for your reply. I cannot get four GPUs for now.
pytorch = '1.8.0+cu111'
Training ESRGAN on a single GPU in the same environment had no such problem.
I found that only the log seems to be broken: training proceeds as usual, models are saved, and wandb works fine as well.
If you never saw anything like this with two GPUs, it is probably caused by an environment mismatch.

To add: ESRGAN is trained on a single GPU in the first place, so it has no problem. But some other projects in basicsr use four GPUs, and switching them to two GPUs triggers the same issue.

This issue is indeed strange; I did not run into it with PyTorch 1.8 and CUDA 10.2.
Does your run save a .log file?

@xinntao I just checked again: there is indeed no .log file, and no terminal output either.
Everything else works normally.

@xinntao Hello, I have some ideas for improving the network and would like to discuss with you whether they are feasible. Could you give me a way to contact you? My email: [email protected]

With eight GPUs, the log works normally.

same issue...

@syfbme @JiaweiShiCV
I cannot reproduce this issue. Could you help me debug it?
It may be caused by the logging mechanism in BasicSR.
In the BasicSR folder, see basicsr/utils/logger.py, Line 106 - Line 140.
Could you please add the print lines below and post the outputs here? Thanks!
```python
def get_root_logger(logger_name='basicsr', log_level=logging.INFO, log_file=None):
    """Get the root logger.

    The logger will be initialized if it has not been initialized. By default a
    StreamHandler will be added. If `log_file` is specified, a FileHandler will
    also be added.

    Args:
        logger_name (str): root logger name. Default: 'basicsr'.
        log_file (str | None): The log filename. If specified, a FileHandler
            will be added to the root logger.
        log_level (int): The root logger level. Note that only the process of
            rank 0 is affected, while other processes will set the level to
            "Error" and be silent most of the time.

    Returns:
        logging.Logger: The root logger.
    """
    print('Enter get_root_logger')
    logger = logging.getLogger(logger_name)
    # if the logger has been initialized, just return it
    if logger.hasHandlers():
        return logger

    print('logger: add handlers')
    format_str = '%(asctime)s %(levelname)s: %(message)s'
    logging.basicConfig(format=format_str, level=log_level)
    rank, _ = get_dist_info()
    if rank != 0:
        logger.setLevel('ERROR')
    elif log_file is not None:
        file_handler = logging.FileHandler(log_file, 'w')
        file_handler.setFormatter(logging.Formatter(format_str))
        file_handler.setLevel(log_level)
        logger.addHandler(file_handler)

    print('logger: last return')
    return logger
```

Hi @xinntao
It only outputs "Enter get_root_logger".

@syfbme Thanks
It is strange...
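For context (my speculation here, not something confirmed in this thread yet): `Logger.hasHandlers()` also counts handlers attached to *ancestor* loggers. So if another library (wandb, for instance) configures the root logger before BasicSR does, `get_root_logger` would hit the early return and never attach its own handlers. A minimal standalone demonstration (illustrative only, not BasicSR code):

```python
import logging

# hasHandlers() walks up the logger hierarchy, so a handler attached to the
# root logger makes it return True even though the 'basicsr' logger itself
# has no handlers of its own.
logging.basicConfig()  # attaches a StreamHandler to the *root* logger
logger = logging.getLogger('basicsr')
print(logger.handlers)       # [] -- no handlers on this logger
print(logger.hasHandlers())  # True -- inherited from the root logger
```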
Could you please modify this function as follows, and post the outputs?
```python
def get_root_logger(logger_name='basicsr', log_level=logging.INFO, log_file=None):
    """Get the root logger.

    The logger will be initialized if it has not been initialized. By default a
    StreamHandler will be added. If `log_file` is specified, a FileHandler will
    also be added.

    Args:
        logger_name (str): root logger name. Default: 'basicsr'.
        log_file (str | None): The log filename. If specified, a FileHandler
            will be added to the root logger.
        log_level (int): The root logger level. Note that only the process of
            rank 0 is affected, while other processes will set the level to
            "Error" and be silent most of the time.

    Returns:
        logging.Logger: The root logger.
    """
    print('Enter get_root_logger')
    logger = logging.getLogger(logger_name)
    # if the logger has been initialized, just return it
    # if logger.hasHandlers():
    #     return logger
    if log_file is None:
        return logger

    print('logger: add handlers')
    format_str = '%(asctime)s %(levelname)s: %(message)s'
    logging.basicConfig(format=format_str, level=log_level)
    rank, _ = get_dist_info()
    if rank != 0:
        logger.setLevel('ERROR')
    elif log_file is not None:
        file_handler = logging.FileHandler(log_file, 'w')
        file_handler.setFormatter(logging.Formatter(format_str))
        file_handler.setLevel(log_level)
        logger.addHandler(file_handler)

    print('logger: last return')
    return logger
```

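The idea of this change: instead of returning early whenever `hasHandlers()` is true (which a handler that some other library attached to the root logger can trigger), initialization is only skipped when no `log_file` is given. A sketch of the intended call pattern (the log path below is hypothetical):

```python
# Calls that pass no log_file just get the bare named logger back; the call
# from the training script that passes a log_file attaches the handlers.
logger = get_root_logger()                                  # returns early
logger = get_root_logger(log_file='experiments/train.log')  # adds handlers on rank 0
```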
Hi @xinntao
I only used 1 GPU to make the display cleaner. In the output, only the first call prints "add handlers" and "last return".

@syfbme If it prints "add handlers" and "last return", then the issue has been solved.
So you can see the screen outputs and also have a log file in the experiments folder, right?

@JiaweiShiCV
> there is indeed no .log file, and no terminal output either

Are you still running into this issue?

@xinntao Eight GPUs and four GPUs both work fine for me now; with two GPUs there is presumably still no output.

@JiaweiShiCV
Could you help test the following fix with two GPUs (i.e., the case where no log is output)? (I cannot reproduce it here, so I cannot debug it.)
In the BasicSR folder, change basicsr/utils/logger.py, Line 106 - Line 140, to the same modified `get_root_logger` posted above (the `hasHandlers()` early return commented out, plus the `log_file is None` guard).
Thanks!

@xinntao OK.

> @syfbme If it prints "add handlers" and "last return", then the issue has been solved.
> So you can see the screen outputs and also have a log file in the experiments folder, right?

Yes. Thanks~

@xinntao Two-GPU terminal output:
```
(BasicSR) ➜ GFPGAN git:(master) ✗ python -m torch.distributed.launch --nproc_per_node 2 --master_port 8888 train.py -opt train_gfpgan_v1.yml --launcher pytorch
*****************************************
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
*****************************************
Path already exists. Rename it to /home/sjw/文档/SR/GFPGAN/experiments/train_GFPGANv1_512_2gpu_archived_20210712_132006
Path already exists. Rename it to /home/sjw/文档/SR/GFPGAN/tb_logger/train_GFPGANv1_512_2gpu_archived_20210712_132006
Enter get_root_logger
logger: add handlers
logger: last return
Enter get_root_logger
logger: add handlers
logger: last return
Enter get_root_logger
Enter get_root_logger
Enter get_root_logger
Enter get_root_logger
Enter get_root_logger
Enter get_root_logger
Enter get_root_logger
Enter get_root_logger
Enter get_root_logger
Enter get_root_logger
Enter get_root_logger
Enter get_root_logger
Enter get_root_loggerEnter get_root_logger
Enter get_root_logger
Enter get_root_logger
Enter get_root_loggerEnter get_root_logger
Enter get_root_loggerEnter get_root_logger
Enter get_root_logger
Enter get_root_logger
Enter get_root_logger
Enter get_root_logger
Enter get_root_logger
Enter get_root_logger
Enter get_root_logger
Enter get_root_logger
WARNING:basicsr:Current net - loaded net:
WARNING:basicsr: bn1.num_batches_tracked
WARNING:basicsr: bn4.num_batches_tracked
WARNING:basicsr: bn5.num_batches_tracked
WARNING:basicsr: layer1.0.bn0.num_batches_tracked
WARNING:basicsr: layer1.0.bn1.num_batches_tracked
WARNING:basicsr: layer1.0.bn2.num_batches_tracked
WARNING:basicsr: layer1.1.bn0.num_batches_tracked
WARNING:basicsr: layer1.1.bn1.num_batches_tracked
WARNING:basicsr: layer1.1.bn2.num_batches_tracked
WARNING:basicsr: layer2.0.bn0.num_batches_tracked
WARNING:basicsr: layer2.0.bn1.num_batches_tracked
WARNING:basicsr: layer2.0.bn2.num_batches_tracked
WARNING:basicsr: layer2.0.downsample.1.num_batches_tracked
WARNING:basicsr: layer2.1.bn0.num_batches_tracked
WARNING:basicsr: layer2.1.bn1.num_batches_tracked
WARNING:basicsr: layer2.1.bn2.num_batches_tracked
WARNING:basicsr: layer3.0.bn0.num_batches_tracked
WARNING:basicsr: layer3.0.bn1.num_batches_tracked
WARNING:basicsr: layer3.0.bn2.num_batches_tracked
WARNING:basicsr: layer3.0.downsample.1.num_batches_tracked
WARNING:basicsr: layer3.1.bn0.num_batches_tracked
WARNING:basicsr: layer3.1.bn1.num_batches_tracked
WARNING:basicsr: layer3.1.bn2.num_batches_tracked
WARNING:basicsr: layer4.0.bn0.num_batches_tracked
WARNING:basicsr: layer4.0.bn1.num_batches_tracked
WARNING:basicsr: layer4.0.bn2.num_batches_tracked
WARNING:basicsr: layer4.0.downsample.1.num_batches_tracked
WARNING:basicsr: layer4.1.bn0.num_batches_tracked
WARNING:basicsr: layer4.1.bn1.num_batches_tracked
WARNING:basicsr: layer4.1.bn2.num_batches_tracked
WARNING:basicsr:Loaded net - current net:
Enter get_root_logger
Enter get_root_logger
Enter get_root_logger
Enter get_root_logger
[W reducer.cpp:1050] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters, consider turning this flag off. Note that this warning may be a false positive your model has flow control causing later iterations to have unused parameters. (function operator())
/home/sjw/anaconda3/envs/BasicSR/lib/python3.8/site-packages/torch/nn/functional.py:3499: UserWarning: The default behavior for interpolate/upsample with float scale_factor changed in 1.6.0 to align with other frameworks/libraries, and now uses scale_factor directly, instead of relying on the computed output size. If you wish to restore the old behavior, please set recompute_scale_factor=True. See the documentation of nn.Upsample for details.
warnings.warn(
[W reducer.cpp:1050] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters, consider turning this flag off. Note that this warning may be a false positive your model has flow control causing later iterations to have unused parameters. (function operator())
/home/sjw/anaconda3/envs/BasicSR/lib/python3.8/site-packages/torch/nn/functional.py:3499: UserWarning: The default behavior for interpolate/upsample with float scale_factor changed in 1.6.0 to align with other frameworks/libraries, and now uses scale_factor directly, instead of relying on the computed output size. If you wish to restore the old behavior, please set recompute_scale_factor=True. See the documentation of nn.Upsample for details.
warnings.warn(
```
Contents of the .log file:
```
2021-07-12 13:20:14,046 WARNING: Current net - loaded net:
2021-07-12 13:20:14,046 WARNING: bn1.num_batches_tracked
2021-07-12 13:20:14,046 WARNING: bn4.num_batches_tracked
2021-07-12 13:20:14,046 WARNING: bn5.num_batches_tracked
2021-07-12 13:20:14,046 WARNING: layer1.0.bn0.num_batches_tracked
2021-07-12 13:20:14,046 WARNING: layer1.0.bn1.num_batches_tracked
2021-07-12 13:20:14,046 WARNING: layer1.0.bn2.num_batches_tracked
2021-07-12 13:20:14,047 WARNING: layer1.1.bn0.num_batches_tracked
2021-07-12 13:20:14,047 WARNING: layer1.1.bn1.num_batches_tracked
2021-07-12 13:20:14,047 WARNING: layer1.1.bn2.num_batches_tracked
2021-07-12 13:20:14,047 WARNING: layer2.0.bn0.num_batches_tracked
2021-07-12 13:20:14,047 WARNING: layer2.0.bn1.num_batches_tracked
2021-07-12 13:20:14,047 WARNING: layer2.0.bn2.num_batches_tracked
2021-07-12 13:20:14,047 WARNING: layer2.0.downsample.1.num_batches_tracked
2021-07-12 13:20:14,047 WARNING: layer2.1.bn0.num_batches_tracked
2021-07-12 13:20:14,047 WARNING: layer2.1.bn1.num_batches_tracked
2021-07-12 13:20:14,047 WARNING: layer2.1.bn2.num_batches_tracked
2021-07-12 13:20:14,047 WARNING: layer3.0.bn0.num_batches_tracked
2021-07-12 13:20:14,047 WARNING: layer3.0.bn1.num_batches_tracked
2021-07-12 13:20:14,047 WARNING: layer3.0.bn2.num_batches_tracked
2021-07-12 13:20:14,047 WARNING: layer3.0.downsample.1.num_batches_tracked
2021-07-12 13:20:14,047 WARNING: layer3.1.bn0.num_batches_tracked
2021-07-12 13:20:14,047 WARNING: layer3.1.bn1.num_batches_tracked
2021-07-12 13:20:14,047 WARNING: layer3.1.bn2.num_batches_tracked
2021-07-12 13:20:14,047 WARNING: layer4.0.bn0.num_batches_tracked
2021-07-12 13:20:14,047 WARNING: layer4.0.bn1.num_batches_tracked
2021-07-12 13:20:14,047 WARNING: layer4.0.bn2.num_batches_tracked
2021-07-12 13:20:14,047 WARNING: layer4.0.downsample.1.num_batches_tracked
2021-07-12 13:20:14,047 WARNING: layer4.1.bn0.num_batches_tracked
2021-07-12 13:20:14,047 WARNING: layer4.1.bn1.num_batches_tracked
2021-07-12 13:20:14,047 WARNING: layer4.1.bn2.num_batches_tracked
2021-07-12 13:20:14,047 WARNING: Loaded net - current net:
```

@syfbme Thanks for your feedback!

@JiaweiShiCV
It seems that this issue could be solved by the above modification!

@xinntao ... The output is still only this much, isn't it?

> @JiaweiShiCV So it just stopped producing any further output...?

No......

@JiaweiShiCV
This bug has been fixed in BasicSR: XPixelGroup/BasicSR@bf93f27
It should be OK now!
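For reference, a minimal sketch of the idea behind that fix (my paraphrase; see the linked commit for the authoritative change): track the loggers BasicSR initialized itself in a module-level dict instead of relying on `hasHandlers()`:

```python
import logging

initialized_logger = {}  # names of loggers this module has set up itself

def get_root_logger(logger_name='basicsr', log_level=logging.INFO, log_file=None):
    logger = logging.getLogger(logger_name)
    # Skip re-initialization only for loggers *we* registered earlier, so a
    # handler installed on the root logger by another library can no longer
    # trigger a premature early return.
    if logger_name in initialized_logger:
        return logger

    format_str = '%(asctime)s %(levelname)s: %(message)s'
    stream_handler = logging.StreamHandler()
    stream_handler.setFormatter(logging.Formatter(format_str))
    logger.addHandler(stream_handler)
    logger.propagate = False
    # (the rank check and the optional FileHandler go here, unchanged)
    initialized_logger[logger_name] = True
    return logger
```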
@xinntao So reinstalling basicsr=1.3.3.5 will fix it, right?

> @xinntao So reinstalling basicsr=1.3.3.5 will fix it, right?

The fix is currently only on the master branch; it is not in a release yet. I am publishing a new version, 1.3.3.6, now.

ok!