
metahuman-stream's Issues

How do I add a background image to the pushed stream?

Hi, after the inference video is produced, how do I composite a background image onto the green-screen video? (Should that happen before or after pushing the stream, something like ffmpeg's filter_complex?)
Thanks in advance.
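
A minimal sketch of compositing before the push with ffmpeg's chromakey and overlay filters; the file names, key color, and thresholds are placeholders, not values from this project:

ffmpeg -i green.mp4 -loop 1 -i bg.jpg -filter_complex \
  "[0:v]chromakey=green:0.15:0.1[fg];[1:v][fg]overlay=shortest=1[out]" \
  -map "[out]" -map 0:a? -c:v libx264 -c:a aac -f flv rtmp://localhost/live/livestream

The keying and overlay have to run before the FLV muxer (i.e. before pushing), because the player only receives the already-encoded frames.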

Where does the SRS image come from?

Hello, following the readme I have completed essentially all of the configuration, but for this step, docker run --rm -it -p 1935:1935 -p 1985:1985 -p 8080:8080 registry.cn-hangzhou.aliyuncs.com/ossrs/srs:5, where is the image downloaded from? Or can these steps only be run on an Alibaba Cloud server? Thanks.
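
For reference, docker run pulls the image automatically from the registry named in the image reference, so no Alibaba Cloud server is needed; any machine with Docker and network access to that registry should work. A manual pull looks like:

docker pull registry.cn-hangzhou.aliyuncs.com/ossrs/srs:5

If that registry is unreachable, the same SRS 5.x image is also published on Docker Hub as ossrs/srs:5 (availability of that mirror is an assumption worth verifying).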

Latency issue

The project runs end to end, but latency on a 4090 is still very high and grows the longer it runs: it starts at roughly 20+ seconds and reaches several minutes after a few minutes of running. What could be causing this?

Publishing fails, no video is produced

Environment: WSL2, Ubuntu 20
Steps: installed per the docs, started nginx, opened the echo page, entered text, and nothing happens after sending.
flask_sockets has been patched and the websocket works fine.
The rtmp server reports this error:
[2024-02-19 12:13:27.941][ERROR][1][hsbl0o71][4] serve error code=1011(SocketTimeout)(Socket io timeout) : service cycle : rtmp: stream service : rtmp: publish timeout 20000ms, nb_msgs=0
thread [1][hsbl0o71]: do_cycle() [./src/app/srs_app_rtmp_conn.cpp:262][errno=4]
thread [1][hsbl0o71]: service_cycle() [./src/app/srs_app_rtmp_conn.cpp:456][errno=4]
thread [1][hsbl0o71]: do_publishing() [./src/app/srs_app_rtmp_conn.cpp:1035][errno=62](Interrupted system call)
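
One way to narrow this down is to publish to SRS with ffmpeg directly, independent of app.py; if that also times out, the problem is on the SRS/network side, otherwise it is in the app's push path. A sketch, where test.mp4 is any local video file (placeholder name):

ffmpeg -re -i test.mp4 -c:v libx264 -c:a aac -f flv rtmp://localhost/live/livestream

nb_msgs=0 in the SRS log means no RTMP messages arrived at all within the 20-second publish timeout, i.e. the publisher never actually sent data.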

Errors at startup

[WARN] Failed to load optimizer.
[INFO] loaded scheduler.
[INFO] loaded scaler.
[INFO] load 7272 frames.
Loading data: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 7272/7272 [00:00<00:00, 87434.94it/s]
[INFO] eye_area: 0.25 - 0.25
[INFO] loading ASR model cpierse/wav2vec2-large-xlsr-53-esperanto...
/usr/local/lib/python3.10/dist-packages/transformers/configuration_utils.py:381: UserWarning: Passing gradient_checkpointing to a config initialization is deprecated and will be removed in v5 Transformers. Using model.gradient_checkpointing_enable() instead, or if you are using the Trainer API, pass gradient_checkpointing=True in your TrainingArguments.
warnings.warn(
Ignored unknown kwarg option normalize
Ignored unknown kwarg option normalize
Ignored unknown kwarg option normalize
Ignored unknown kwarg option normalize
[tcp @ 0x558b1aefc640] Connection to tcp://localhost:1935 failed: Connection refused
[rtmp @ 0x558b1aefc240] Cannot open connection tcp://localhost:1935
failed to open stream output context, stream will not work
[INFO] warm up ASR live model, expected latency = 1.400000s
start websocket server
[INFO] frame_to_text...
[INFO] warm-up done, actual latency = 1.121296s
[INFO] frame_to_text...
[INFO] frame_to_text...
[INFO] frame_to_text...
[INFO] frame_to_text...
------actual avg fps:39.8501
[INFO] frame_to_text...
[INFO] frame_to_text...
[INFO] frame_to_text...
[INFO] frame_to_text...
------actual avg fps:44.1976
[INFO] frame_to_text...
[INFO] frame_to_text...
[INFO] frame_to_text...
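
In this log, "Connection to tcp://localhost:1935 failed: Connection refused" and "failed to open stream output context, stream will not work" mean nothing is listening on the RTMP port, so rendering continues but nothing is pushed. A sketch of the expected startup order, assuming the SRS container from the readme:

docker run --rm -it -p 1935:1935 -p 1985:1985 -p 8080:8080 registry.cn-hangzhou.aliyuncs.com/ossrs/srs:5
ss -ltn | grep 1935    # confirm the RTMP port is listening
python app.py

Starting (or restarting) app.py only after port 1935 is open should make the push succeed.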

The web page won't open

nginx

root@sd-server-01:~/avatar# nginx
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Unknown error)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Unknown error)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Unknown error)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Unknown error)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Unknown error)
nginx: [emerg] still could not bind()

How can this be fixed?
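
Error 98 is EADDRINUSE: another process is already bound to port 80. A hedged way to find and stop it (the service name below is just an example):

sudo ss -ltnp | grep ':80 '     # show which process owns port 80
sudo systemctl stop apache2     # or stop/kill whatever the previous command reports
nginx

Alternatively, change the listen port in the nginx config and open that port in the browser instead.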

OSError: CUDA_HOME environment variable is not set

ModuleNotFoundError: No module named '_raymarching_face'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/home/zhihui/work/metahuman-stream/app.py", line 19, in
from nerf_triplane.network import NeRFNetwork
File "/home/zhihui/work/metahuman-stream/nerf_triplane/network.py", line 6, in
from .renderer import NeRFRenderer
File "/home/zhihui/work/metahuman-stream/nerf_triplane/renderer.py", line 10, in
import raymarching
File "/home/zhihui/work/metahuman-stream/raymarching/init.py", line 1, in
from .raymarching import *
File "/home/zhihui/work/metahuman-stream/raymarching/raymarching.py", line 12, in
from .backend import _backend
File "/home/zhihui/work/metahuman-stream/raymarching/backend.py", line 31, in
_backend = load(name='_raymarching_face',
File "/home/zhihui/anaconda3/envs/metahuman/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1202, in load
return _jit_compile(
File "/home/zhihui/anaconda3/envs/metahuman/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1425, in _jit_compile
_write_ninja_file_and_build_library(
File "/home/zhihui/anaconda3/envs/metahuman/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1514, in _write_ninja_file_and_build_library
extra_ldflags = _prepare_ldflags(
File "/home/zhihui/anaconda3/envs/metahuman/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1622, in _prepare_ldflags
extra_ldflags.append(f'-L{_join_cuda_home("lib64")}')
File "/home/zhihui/anaconda3/envs/metahuman/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 2125, in _join_cuda_home
raise EnvironmentError('CUDA_HOME environment variable is not set. '
OSError: CUDA_HOME environment variable is not set. Please set it to your CUDA install root.
This happens when running app.py.
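
The usual fix is to point CUDA_HOME at the installed CUDA toolkit before launching, since the raymarching extension is JIT-compiled with nvcc. The path below is the common default and is an assumption for this machine:

export CUDA_HOME=/usr/local/cuda
export PATH=$CUDA_HOME/bin:$PATH
export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH
python app.py

nvcc --version should succeed in the same shell first; if only the driver is installed, the CUDA toolkit itself still needs to be installed.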

Error when running my own trained model, please take a look, thanks!

When switching to my own trained model (I used HuBERT for the audio features), app.py errors out on startup. Please take a look, thanks!
trainer = Trainer('ngp', opt, model, device=device, workspace=opt.workspace, criterion=criterion, fp16=opt.fp16, metrics=metrics, use_checkpoint=opt.ckpt)
File "/root/nerf/nerf_triplane/utils.py", line 724, in init
self.load_checkpoint(self.use_checkpoint)
File "/root/nerf/nerf_triplane/utils.py", line 1824, in load_checkpoint
missing_keys, unexpected_keys = self.model.load_state_dict(checkpoint_dict['model'], strict=False)
File "/root/miniconda3/envs/er/lib/python3.9/site-packages/torch/nn/modules/module.py", line 2041, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for NeRFNetwork:
size mismatch for individual_codes: copying a param with shape torch.Size([12000, 4]) from checkpoint, the shape in current model is torch.Size([10000, 4]).
size mismatch for individual_codes_torso: copying a param with shape torch.Size([12000, 8]) from checkpoint, the shape in current model is torch.Size([10000, 8]).
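
The mismatch says the checkpoint was trained with 12000 individual codes while the model built at inference time allocates 10000, i.e. the training and inference options disagree. A minimal way to confirm what is inside the checkpoint (the path is a placeholder):

import torch

ckpt = torch.load("data/obama/ngp_kf.pth", map_location="cpu")  # placeholder path
for k, v in ckpt["model"].items():
    if "individual_codes" in k:
        print(k, tuple(v.shape))  # first dim must match the count used when launching app.py

Launching app.py with the same individual-code count that was used for training (the exact option name depends on the repo's argument parser) should make the shapes agree.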

Speech rate of the pushed stream

Using the provided Obama model and the provided voice model, the speech in the pushed stream is very fast. Is there a parameter to control this? Does rtmp_streaming expose one?

Can't get any video stream in VLC from rtmp://localhost/live/livestream; error: Requested output format 'flv' is not a suitable output format

python 3.10
pytorch 1.12.1 cuda 11.3
ubuntu 18.04
ffmpeg 4.4.2

After following the install guide and running docker and app.py, I can't get the video in VLC, and app.py reports the error "Requested output format 'flv' is not a suitable output format". Is there any advice to fix this issue? Any help appreciated, thanks.

[INFO] warm up ASR live model, expected latency = 1.400000s
start websocket server
[INFO] frame_to_text... 
[INFO] warm-up done, actual latency = 2.698054s
[NULL @ 0x7f91eacabea0] Requested output format 'flv' is not a suitable output format
[INFO] frame_to_text... 
[INFO] frame_to_text... 
[INFO] frame_to_text... 
[INFO] frame_to_text... 
------actual avg fps:28.9061
[INFO] frame_to_text... 
[INFO] frame_to_text... 
[INFO] frame_to_text... 
[INFO] frame_to_text... 
------actual avg fps:39.0283
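
The FLV muxer lives in libavformat, so this error usually means the ffmpeg libraries that the streaming component links against were built without FLV support, or the component was built against a different ffmpeg than the one on PATH; this diagnosis is an assumption, not confirmed by the project. A quick check on the system build:

ffmpeg -muxers | grep flv        # 'flv' should appear in the list
ffmpeg -protocols | grep rtmp    # the rtmp protocol should also be enabled

If either is missing, installing or building a full ffmpeg that includes the flv muxer and rtmp protocol, and rebuilding the streaming component against it, is the usual remedy.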

The digital human does not change after text is entered

1. Started with docker.
2. I can see the page and the animated Obama video.
3. The websocket connection status looks normal in the browser's network tab.

What should I check next to track down the problem?

Running `python app.py` throws a pile of errors!

(py39) root@vultr:~/AIGC/metahuman-stream# python app.py 
Traceback (most recent call last):
  File "/root/AIGC/metahuman-stream/raymarching/raymarching.py", line 10, in <module>
    import _raymarching_face as _backend
ModuleNotFoundError: No module named '_raymarching_face'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 2100, in _run_ninja_build
    subprocess.run(
  File "/root/miniconda3/envs/py39/lib/python3.9/subprocess.py", line 528, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/root/AIGC/metahuman-stream/app.py", line 19, in <module>
    from nerf_triplane.network import NeRFNetwork
  File "/root/AIGC/metahuman-stream/nerf_triplane/network.py", line 6, in <module>
    from .renderer import NeRFRenderer
  File "/root/AIGC/metahuman-stream/nerf_triplane/renderer.py", line 10, in <module>
    import raymarching
  File "/root/AIGC/metahuman-stream/raymarching/__init__.py", line 1, in <module>
    from .raymarching import *
  File "/root/AIGC/metahuman-stream/raymarching/raymarching.py", line 12, in <module>
    from .backend import _backend
  File "/root/AIGC/metahuman-stream/raymarching/backend.py", line 31, in <module>
    _backend = load(name='_raymarching_face',
  File "/root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 1308, in load
    return _jit_compile(
  File "/root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 1710, in _jit_compile
    _write_ninja_file_and_build_library(
  File "/root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 1823, in _write_ninja_file_and_build_library
    _run_ninja_build(
  File "/root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 2116, in _run_ninja_build
    raise RuntimeError(message) from e
RuntimeError: Error building extension '_raymarching_face': [1/3] /usr/local/cuda/bin/nvcc  -DTORCH_EXTENSION_NAME=_raymarching_face -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include -isystem /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -isystem /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/TH -isystem /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/THC -isystem /usr/local/cuda/include -isystem /root/miniconda3/envs/py39/include/python3.9 -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_80,code=compute_80 -gencode=arch=compute_80,code=sm_80 --compiler-options '-fPIC' -O3 -std=c++14 -U__CUDA_NO_HALF_OPERATORS__ -U__CUDA_NO_HALF_CONVERSIONS__ -U__CUDA_NO_HALF2_OPERATORS__ -c /root/AIGC/metahuman-stream/raymarching/src/raymarching.cu -o raymarching.cuda.o 
FAILED: raymarching.cuda.o 
/usr/local/cuda/bin/nvcc  -DTORCH_EXTENSION_NAME=_raymarching_face -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include -isystem /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -isystem /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/TH -isystem /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/THC -isystem /usr/local/cuda/include -isystem /root/miniconda3/envs/py39/include/python3.9 -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_80,code=compute_80 -gencode=arch=compute_80,code=sm_80 --compiler-options '-fPIC' -O3 -std=c++14 -U__CUDA_NO_HALF_OPERATORS__ -U__CUDA_NO_HALF_CONVERSIONS__ -U__CUDA_NO_HALF2_OPERATORS__ -c /root/AIGC/metahuman-stream/raymarching/src/raymarching.cu -o raymarching.cuda.o 
In file included from /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/c10/util/string_view.h:4,
                 from /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/c10/util/StringUtil.h:6,
                 from /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/c10/util/Exception.h:5,
                 from /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/ATen/core/Generator.h:11,
                 from /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/ATen/CPUGeneratorImpl.h:3,
                 from /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/ATen/Context.h:3,
                 from /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/ATen/cuda/CUDAContext.h:18,
                 from /root/AIGC/metahuman-stream/raymarching/src/raymarching.cu:5:
/root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/c10/util/C++17.h:27:2: error: #error You need C++17 to compile PyTorch
   27 | #error You need C++17 to compile PyTorch
      |  ^~~~~
In file included from /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/torch/csrc/api/include/torch/torch.h:3,
                 from /root/AIGC/metahuman-stream/raymarching/src/raymarching.cu:6:
/root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/torch/csrc/api/include/torch/all.h:4:2: error: #error C++17 or later compatible compiler is required to use PyTorch.
    4 | #error C++17 or later compatible compiler is required to use PyTorch.
      |  ^~~~~
In file included from /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3,
                 from /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4,
                 from /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3,
                 from /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:4,
                 from /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3,
                 from /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3,
                 from /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/torch/csrc/api/include/torch/all.h:9,
                 from /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/torch/csrc/api/include/torch/torch.h:3,
                 from /root/AIGC/metahuman-stream/raymarching/src/raymarching.cu:6:
/root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/ATen/ATen.h:4:2: error: #error C++17 or later compatible compiler is required to use ATen.
    4 | #error C++17 or later compatible compiler is required to use ATen.
      |  ^~~~~
[2/3] c++ -MMD -MF bindings.o.d -DTORCH_EXTENSION_NAME=_raymarching_face -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include -isystem /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -isystem /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/TH -isystem /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/THC -isystem /usr/local/cuda/include -isystem /root/miniconda3/envs/py39/include/python3.9 -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++17 -O3 -std=c++14 -c /root/AIGC/metahuman-stream/raymarching/src/bindings.cpp -o bindings.o 
FAILED: bindings.o 
c++ -MMD -MF bindings.o.d -DTORCH_EXTENSION_NAME=_raymarching_face -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include -isystem /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -isystem /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/TH -isystem /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/THC -isystem /usr/local/cuda/include -isystem /root/miniconda3/envs/py39/include/python3.9 -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++17 -O3 -std=c++14 -c /root/AIGC/metahuman-stream/raymarching/src/bindings.cpp -o bindings.o 
In file included from /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/torch/extension.h:5,
                 from /root/AIGC/metahuman-stream/raymarching/src/bindings.cpp:1:
/root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/torch/csrc/api/include/torch/all.h:4:2: error: #error C++17 or later compatible compiler is required to use PyTorch.
    4 | #error C++17 or later compatible compiler is required to use PyTorch.
      |  ^~~~~
In file included from /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/c10/util/string_view.h:4,
                 from /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/c10/util/StringUtil.h:6,
                 from /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/c10/util/Exception.h:5,
                 from /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/c10/core/Device.h:5,
                 from /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/ATen/core/TensorBody.h:11,
                 from /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/ATen/core/Tensor.h:3,
                 from /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/ATen/Tensor.h:3,
                 from /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/torch/csrc/autograd/function_hook.h:3,
                 from /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/torch/csrc/autograd/cpp_hook.h:2,
                 from /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/torch/csrc/autograd/variable.h:6,
                 from /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/torch/csrc/autograd/autograd.h:3,
                 from /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/torch/csrc/api/include/torch/autograd.h:3,
                 from /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/torch/csrc/api/include/torch/all.h:7,
                 from /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/torch/extension.h:5,
                 from /root/AIGC/metahuman-stream/raymarching/src/bindings.cpp:1:
/root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/c10/util/C++17.h:27:2: error: #error You need C++17 to compile PyTorch
   27 | #error You need C++17 to compile PyTorch
      |  ^~~~~
In file included from /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3,
                 from /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4,
                 from /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3,
                 from /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:4,
                 from /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3,
                 from /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3,
                 from /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/torch/csrc/api/include/torch/all.h:9,
                 from /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/torch/extension.h:5,
                 from /root/AIGC/metahuman-stream/raymarching/src/bindings.cpp:1:
/root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/ATen/ATen.h:4:2: error: #error C++17 or later compatible compiler is required to use ATen.
    4 | #error C++17 or later compatible compiler is required to use ATen.
      |  ^~~~~
In file included from /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/ATen/core/ivalue.h:1499,
                 from /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/ATen/core/List_inl.h:4,
                 from /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/ATen/core/List.h:490,
                 from /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/ATen/core/IListRef_inl.h:3,
                 from /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/ATen/core/IListRef.h:632,
                 from /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/ATen/WrapDimUtils.h:3,
                 from /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/ATen/TensorNames.h:3,
                 from /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/ATen/NamedTensorUtils.h:3,
                 from /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/torch/csrc/autograd/variable.h:11,
                 from /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/torch/csrc/autograd/autograd.h:3,
                 from /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/torch/csrc/api/include/torch/autograd.h:3,
                 from /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/torch/csrc/api/include/torch/all.h:7,
                 from /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/torch/extension.h:5,
                 from /root/AIGC/metahuman-stream/raymarching/src/bindings.cpp:1:
/root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/ATen/core/ivalue_inl.h: In lambda function:
/root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/ATen/core/ivalue_inl.h:1061:30: error: ‘is_convertible_v’ is not a member of ‘std’; did you mean ‘is_convertible’?
 1061 |         if constexpr (::std::is_convertible_v<typename c10::invoke_result_t<T &&, Future&>, IValueWithStorages>) {
      |                              ^~~~~~~~~~~~~~~~
      |                              is_convertible
/root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/ATen/core/ivalue_inl.h:1061:91: error: expected ‘(’ before ‘,’ token
 1061 |         if constexpr (::std::is_convertible_v<typename c10::invoke_result_t<T &&, Future&>, IValueWithStorages>) {
      |                                                                                           ^
/root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/ATen/core/ivalue_inl.h:1061:111: error: expected primary-expression before ‘>’ token
 1061 |         if constexpr (::std::is_convertible_v<typename c10::invoke_result_t<T &&, Future&>, IValueWithStorages>) {
      |                                                                                                               ^
/root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/ATen/core/ivalue_inl.h:1061:112: error: expected primary-expression before ‘)’ token
 1061 |         if constexpr (::std::is_convertible_v<typename c10::invoke_result_t<T &&, Future&>, IValueWithStorages>) {
      |                                                                                                                ^
In file included from /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/ATen/core/boxing/KernelFunction_impl.h:1,
                 from /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/ATen/core/boxing/KernelFunction.h:251,
                 from /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/ATen/core/op_registration/op_registration.h:11,
                 from /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/torch/library.h:68,
                 from /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/torch/csrc/autograd/autograd_not_implemented_fallback.h:3,
                 from /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/torch/csrc/api/include/torch/autograd.h:4,
                 from /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/torch/csrc/api/include/torch/all.h:7,
                 from /root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/torch/extension.h:5,
                 from /root/AIGC/metahuman-stream/raymarching/src/bindings.cpp:1:
/root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/ATen/core/boxing/impl/boxing.h: In static member function ‘static Result c10::impl::BoxedKernelWrapper<Result(Args ...), typename std::enable_if<((c10::guts::conjunction<c10::guts::disjunction<std::is_constructible<c10::IValue, typename std::decay<Args>::type>, std::is_same<c10::TensorOptions, typename std::decay<Args>::type> >...>::value && c10::guts::conjunction<c10::guts::disjunction<c10::impl::has_ivalue_to<T, void>, std::is_same<void, ReturnType> >, c10::guts::negation<std::is_lvalue_reference<_Tp> > >::value) && (! c10::impl::is_tuple_of_mutable_tensor_refs<Result>::value)), void>::type>::call(const c10::BoxedKernel&, const c10::OperatorHandle&, c10::DispatchKeySet, Args ...)’:
/root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/ATen/core/boxing/impl/boxing.h:229:25: error: ‘is_same_v’ is not a member of ‘std’; did you mean ‘is_same’?
  229 |     if constexpr (!std::is_same_v<void, Result>) {
      |                         ^~~~~~~~~
      |                         is_same
/root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/ATen/core/boxing/impl/boxing.h:229:35: error: expected primary-expression before ‘void’
  229 |     if constexpr (!std::is_same_v<void, Result>) {
      |                                   ^~~~
/root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/ATen/core/boxing/impl/boxing.h:229:35: error: expected ‘)’ before ‘void’
/root/miniconda3/envs/py39/lib/python3.9/site-packages/torch/include/ATen/core/boxing/impl/boxing.h:229:18: note: to match this ‘(’
  229 |     if constexpr (!std::is_same_v<void, Result>) {
      |                  ^
ninja: build stopped: subcommand failed.

The main pip dependency versions are:

pytorch3d                    0.7.5
torch                        2.1.1+cu118
torchaudio                   2.1.1+cu118
torchvision                  0.16.1+cu118

CUDA version:

$ nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Wed_Sep_21_10:33:58_PDT_2022
Cuda compilation tools, release 11.8, V11.8.89
Build cuda_11.8.r11.8/compiler.31833905_0

Steps:

  1. Cloned this repository
  2. Ran
docker run --rm -it -p 1935:1935 -p 1985:1985 -p 8080:8080 registry.cn-hangzhou.aliyuncs.com/ossrs/srs:5

...
[2024-01-04 07:50:56.319][INFO][1][92y51g1q] Process: cpu=1.00%,23MB, threads=2
[2024-01-04 07:51:00.704][INFO][1][9r6618wj] Hybrid cpu=1.00%,23MB, cid=1,0, timer=62,0,0, clock=0,49,0,0,0,0,0,0,0
[2024-01-04 07:51:01.324][INFO][1][92y51g1q] Process: cpu=1.00%,23MB, threads=2
[2024-01-04 07:51:05.705][INFO][1][9r6618wj] Hybrid cpu=1.00%,23MB, cid=1,0, timer=63,0,0, clock=0,48,1,0,0,0,0,0,0
[2024-01-04 07:51:06.328][INFO][1][92y51g1q] Process: cpu=0.00%,23MB, threads=2
[2024-01-04 07:51:10.705][INFO][1][9r6618wj] Hybrid cpu=1.00%,23MB, cid=1,0, timer=63,0,0, clock=0,48,1,0,0,0,0,0,0
[2024-01-04 07:51:11.333][INFO][1][92y51g1q] Process: cpu=0.00%,23MB, threads=2
[2024-01-04 07:51:15.706][INFO][1][9r6618wj] Hybrid cpu=0.00%,23MB, cid=1,0, timer=63,0,0, clock=0,48,1,0,0,0,0,0,0
[2024-01-04 07:51:16.338][INFO][1][92y51g1q] Process: cpu=0.00%,23MB, threads=2
[2024-01-04 07:51:20.706][INFO][1][9r6618wj] Hybrid cpu=0.00%,23MB, cid=1,0, timer=62,0,0, clock=0,47,1,1,0,0,0,0,0
[2024-01-04 07:51:21.343][INFO][1][92y51g1q] Process: cpu=1.00%,23MB, threads=2
  3. Ran python app.py
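
The failing nvcc and c++ command lines above end with -std=c++14, which overrides the -std=c++17 that torch 2.1.x requires, hence the repeated "You need C++17 to compile PyTorch" errors. A hedged fix is to bump the standard in the extension's build flags; raymarching/backend.py is the file named in the traceback, though the exact flag variables in it may differ:

sed -i 's/c++14/c++17/g' raymarching/backend.py
rm -rf ~/.cache/torch_extensions    # drop the stale JIT build cache before retrying
python app.py

Any other JIT-built extensions in the repo that carry the same -std=c++14 flag would need the same change; the alternative is to install a torch 1.x build that still accepts C++14.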

Stream latency

Which ffmpeg version does your demo depend on?

We are using your demo to push the stream to the cloud and watch it locally; the latency from sending the text to seeing it is about 30 s. Our ffmpeg is 4.4.2.

SRS image

Does using the faster SRS require changing echo.html? With the first SRS image I get a picture; with the second, faster one no picture appears.

Error with my own trained model; trained with wav2vec and it still fails

For training I switched to wav2vec and still get the same error as above. My trained model is 38.4 MB, while the author's ngp_kf.pth is 38 MB. Where could the problem be? Error message:
RuntimeError: Error(s) in loading state_dict for NeRFNetwork:
size mismatch for individual_codes: copying a param with shape torch.Size([12000, 4]) from checkpoint, the shape in current model is torch.Size([10000, 4]).
size mismatch for individual_codes_torso: copying a param with shape torch.Size([12000, 8]) from checkpoint, the shape in current model is torch.Size([10000, 8]).

Dear PyGui Showcase

Hi,

I think that metahuman-stream might make a
great addition to the Dear PyGui showcase on GitHub.

https://github.com/hoffstadt/DearPyGui/wiki/Dear-PyGui-Showcase

Would you be willing to create a short GIF (less than 8 Mb) that we could include in the showcase?

Thanks for your consideration

[aac @ 0x953bb00] Input contains (near) NaN/+-Inf

We finished the setup, but this error appears; it looks like an audio encoding error.
The error is intermittent: some audio pushes fine, and the audio that fails to push plays back normally on the local machine.
Does the author know what could cause this?
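
The aac encoder rejects frames whose samples contain NaN or infinite values, so the intermittent failures suggest some synthesized audio chunks occasionally contain such samples. A defensive sketch to apply to each chunk before it reaches the encoder; the function name and insertion point are assumptions, not the project's actual code:

import numpy as np

def sanitize(chunk: np.ndarray) -> np.ndarray:
    # replace NaN/Inf samples and clamp to the valid float PCM range
    chunk = np.nan_to_num(chunk, nan=0.0, posinf=0.0, neginf=0.0)
    return np.clip(chunk, -1.0, 1.0).astype(np.float32)

Logging whenever np.isnan(chunk).any() is true would also show which TTS outputs trigger the problem.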

Error when running python app.py

RuntimeError: CUDA error: CUBLAS_STATUS_INVALID_VALUE when calling cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)
The full error output is:
[INFO] warm up ASR live model, expected latency = 1.400000s
start websocket server
[INFO] frame_to_text...
Exception in thread Thread-2 (render):
Traceback (most recent call last):
File "/home/ycw/anaconda3/envs/nerfstream/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
self.run()
File "/home/ycw/anaconda3/envs/nerfstream/lib/python3.10/threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "/home/ycw/project/company/metahuman-stream/app.py", line 155, in render
nerfreal.render()
File "/home/ycw/project/company/metahuman-stream/nerfreal.py", line 155, in render
self.asr.warm_up()
File "/home/ycw/project/company/metahuman-stream/asrreal.py", line 445, in warm_up
self.run_step()
File "/home/ycw/project/company/metahuman-stream/asrreal.py", line 220, in run_step
logits, labels, text = self.frame_to_text(inputs)
File "/home/ycw/project/company/metahuman-stream/asrreal.py", line 341, in frame_to_text
result = self.model(inputs.input_values.to(self.device))
File "/home/ycw/anaconda3/envs/nerfstream/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/home/ycw/anaconda3/envs/nerfstream/lib/python3.10/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 1969, in forward
outputs = self.wav2vec2(
File "/home/ycw/anaconda3/envs/nerfstream/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/home/ycw/anaconda3/envs/nerfstream/lib/python3.10/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 1563, in forward
hidden_states, extract_features = self.feature_projection(extract_features)
File "/home/ycw/anaconda3/envs/nerfstream/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/home/ycw/anaconda3/envs/nerfstream/lib/python3.10/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 487, in forward
hidden_states = self.projection(norm_hidden_states)
File "/home/ycw/anaconda3/envs/nerfstream/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/home/ycw/anaconda3/envs/nerfstream/lib/python3.10/site-packages/torch/nn/modules/linear.py", line 114, in forward
return F.linear(input, self.weight, self.bias)
RuntimeError: CUDA error: CUBLAS_STATUS_INVALID_VALUE when calling cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)
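
CUBLAS_STATUS_INVALID_VALUE on a plain F.linear call often points to a mismatch between the installed torch build and the GPU or driver (for example a CUDA wheel that does not support the card's compute capability) rather than to this project's code; that reading is an assumption. A minimal check, independent of the repo:

import torch

print(torch.__version__, torch.version.cuda, torch.cuda.get_device_name(0))
x = torch.randn(512, 512, device="cuda")
print((x @ x).norm())  # if this already fails, reinstall torch with a CUDA build matching the driver

Running once with the environment variable CUDA_LAUNCH_BLOCKING=1 also makes the reported call site more reliable.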

Question about the readme startup steps

Does the readme mean I need to install the environment locally and also use docker, or is it either local or docker? Could you provide numbered, complete startup steps? I'm currently failing to start it.

Installation steps

Which command should I execute to set up metahuman-stream using Docker? The readme file is a bit confusing.

Playback delay when pulling the stream with VLC

Hello, I'm trying to push the stream using a system pipe together with ffmpeg, following this approach: https://zhuanlan.zhihu.com/p/656815080
I see a similar approach in your code. With this method the audio and video are aligned; the only remaining problem is that although the code is already pushing, VLC only starts playing after 100+ frames (about 3-4 s) have been pushed. I suspect a buffering mechanism, but setting VLC's cache did not remove the delay. Did you run into this while experimenting?
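
Two things usually add up here: VLC's network cache (about a second by default) and, more importantly, the keyframe interval of the pushed stream, since the player cannot start decoding until it receives a keyframe; with a GOP of 100+ frames at roughly 25-30 fps that alone is 3-4 s. A hedged sketch, with option values as examples only:

ffmpeg ... -c:v libx264 -g 25 -tune zerolatency -f flv rtmp://localhost/live/livestream
vlc --network-caching=200 rtmp://localhost/live/livestream

Shortening the GOP (-g) on the push side and lowering --network-caching on the player side should move the start of playback much closer to the first pushed frame.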

Is warm_up() in nerfreal.py necessary?

As the title says, is there a way to skip warm_up()? I find that after warm_up() runs, the audio comes out later, which feels like playback delay and loses the real-time interactive effect.
