
vits_with_chatgpt-gpt3's Introduction

Deployment process

I: Set up the front-end application. Download it from the Live2DMascot repository, then edit its config.json:

"ChatAPI" : 
{
	"ChatSavePath" : "chat",  //聊天音频和文本保存路径
	"CustomChatServer" : 
	{
		"HostPort" : "http://yourhost:8080",  //服务器地址,端口默认8080
		"On" : true,  //开启自定义聊天接口
		"ReadTimeOut" : 114,  //等待响应时间(s)
		"Route" : "/chat"  //路径
	},
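For reference, the back end only has to answer on the HostPort and Route configured above (the real back end is this repository's main.py). Below is a minimal stand-in sketch using only the standard library; the exact request shape Live2DMascot sends (here, a `text` query parameter) is an assumption:

```python
# Minimal stand-in for the custom chat server configured above:
# listens on the configured port and answers on the /chat route.
# Hypothetical request shape: the user's message in a "text" query parameter.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class ChatHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        url = urlparse(self.path)
        if url.path != "/chat":
            self.send_error(404)
            return
        text = parse_qs(url.query).get("text", [""])[0]
        # Replace this echo with a real chatbot reply.
        body = f"You said: {text}".encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

def serve(port=8080):
    # Blocks; run this to expose the endpoint on the configured port.
    HTTPServer(("0.0.0.0", port), ChatHandler).serve_forever()
```

With this running, the front end's HostPort plus Route resolves to `http://yourhost:8080/chat`.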

Windows one-click deployment

# Prerequisite: Anaconda is already installed
conda create -n chatbot python=3.8
conda activate chatbot
git clone https://huggingface.co/spaces/Mahiruoshi/vits-chatbot
cd vits-chatbot
pip install -r requirements.txt
python main.py

Synthesizing Japanese requires pyopenjtalk or the precompiled Japanese cleaner (results are not guaranteed), so you can safely skip installing this module.

If you skip it, open the cleaner script (text/cleaners.py) and comment out every japanese import, for example:

# line 3
from text.japanese import japanese_to_romaji_with_accent, japanese_to_ipa, japanese_to_ipa2, japanese_to_ipa3

Then, in the config.json you are using, find the corresponding cleaner (e.g. zh_ja_mixture_cleaners) and comment out this block:

# starting at line 50
for japanese_text in japanese_texts:
        cleaned_text = japanese_to_romaji_with_accent(
            japanese_text[4:-4]).replace('ts', 'ʦ').replace('u', 'ɯ').replace('...', '')
        text = text.replace(japanese_text, cleaned_text+' ', 1)
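Instead of deleting lines, you can also make the import failure non-fatal so cleaners.py still loads without the Japanese stack. This is a hypothetical restructuring of cleaners.py, not the repository's actual code:

```python
# Guarded import: cleaners.py keeps working when pyopenjtalk or the
# Japanese cleaner is missing; only Japanese cleaning is disabled.
try:
    from text.japanese import japanese_to_romaji_with_accent  # needs pyopenjtalk
    HAS_JAPANESE = True
except ImportError:
    HAS_JAPANESE = False

def clean_japanese_segment(segment: str) -> str:
    # Hypothetical helper: only called for Japanese-marked segments.
    if not HAS_JAPANESE:
        raise RuntimeError("Japanese cleaner unavailable: install pyopenjtalk "
                           "or pick a cleaner without Japanese")
    # Same substitutions as the zh_ja_mixture_cleaners snippet above.
    return (japanese_to_romaji_with_accent(segment)
            .replace('ts', 'ʦ').replace('u', 'ɯ').replace('...', ''))
```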

II: Start the back-end API program on the server (Windows also works)

To use pyopenjtalk, install cmake first; search for installation instructions for your operating system.

Linux

# Install FFmpeg and verify the version
sudo apt update
sudo apt upgrade
sudo apt install ffmpeg
ffmpeg -version
# Create the environment
conda create -n chatbot python=3.8
conda init bash
bash
conda activate chatbot
git clone https://huggingface.co/Mahiruoshi/vits_with_chatbot
cd vits_with_chatbot
pip install -r requirements.txt

# Control panel and launcher in one file
python main.py
# * Running on http://127.0.0.1:8080
# * Running on http://172.16.5.4:8080
#Running on local URL:  http://127.0.0.1:7860
# Port 7860 serves the panel; 8080 serves the API
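To verify the API port from a script (the panel's built-in test does the same job), a small stdlib check along these lines can help. The /chat route and query format are assumptions taken from the front-end config:

```python
# Liveness check for the back-end API (route and query format assumed).
import urllib.request

def api_is_up(url: str, timeout: float = 5.0) -> bool:
    """Return True if the URL answers with HTTP 200, False otherwise."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        # Covers connection refused, timeouts, and HTTP error responses.
        return False

if __name__ == "__main__":
    print(api_is_up("http://127.0.0.1:8080/chat?text=hello"))
```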

(Fairly involved; consider deploying from the other branch instead.) To use ChatGLM, set up the environment and install its dependencies in advance. protobuf==3.20.0 and transformers>=4.26.1 are recommended; otherwise you may hit strange errors.

Windows: detailed instructions

I. Install FFmpeg and add it to the PATH environment variable

II. Install Torch with GPU support (skip if using CPU inference)

(Alternative) Use the packaged Japanese cleaner: replace the original text folder with the provided one, then download the cleaners archive from this repository's releases and extract it into the vits project directory.

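The packaged cleaner is loaded through ctypes (as `ctypes.cdll.LoadLibrary('cleaners/JapaneseCleaner.dll')` in text/japanese.py), so a FileNotFoundError for JapaneseCleaner.dll usually means the relative path is resolved from the wrong working directory. A hedged sketch of loading by absolute path instead:

```python
# Load JapaneseCleaner.dll by absolute path so the current working
# directory doesn't matter (sketch; the DLL comes from the release archive).
import ctypes
import os

def load_japanese_cleaner(project_dir: str):
    dll_path = os.path.join(project_dir, "cleaners", "JapaneseCleaner.dll")
    if not os.path.isfile(dll_path):
        # Archive not extracted here; fall back to pyopenjtalk or skip Japanese.
        return None
    return ctypes.cdll.LoadLibrary(dll_path)
```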

V. After creating a Python virtual environment:

git clone https://huggingface.co/Mahiruoshi/vits_with_chatbot
cd vits_with_chatbot
pip install -r requirements.txt
python main.py

Panel guide

Select the chatbot backend and load the vits model

Available backends: the gpt3.5/gpt3 API, or ChatGLM. Method: paste the model path or API key into the text box.

Test whether the API is running


Since OpenAI is stepping up enforcement against unofficial APIs, use an official API key with the official openai library if you are operating for profit.
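With the official API, the call is a plain POST to the chat completions endpoint. Below is a stdlib-only sketch of building that request (the model name and message shape follow OpenAI's public API docs; the key is a placeholder):

```python
# Build a request against OpenAI's official chat completions endpoint
# using only the standard library (no unofficial API).
import json
import urllib.request

def build_chat_request(api_key: str, user_text: str, model: str = "gpt-3.5-turbo"):
    payload = {"model": model,
               "messages": [{"role": "user", "content": user_text}]}
    return urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# To actually call it (needs a valid key and network access):
# with urllib.request.urlopen(build_chat_request("sk-...", "hello")) as resp:
#     reply = json.load(resp)["choices"][0]["message"]["content"]
```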

vits_with_chatgpt-gpt3's People

Contributors

paraworks


vits_with_chatgpt-gpt3's Issues

Need help

Traceback (most recent call last):
  File "api_launch.py", line 2, in <module>
    from text import text_to_sequence
  File "D:\vits_with_chatgpt-gpt3\text\__init__.py", line 2, in <module>
    from text import cleaners
  File "D:\vits_with_chatgpt-gpt3\text\cleaners.py", line 3, in <module>
    from text.japanese import clean_japanese, japanese_to_romaji_with_accent, japanese_to_ipa, japanese_to_ipa2, japanese_to_ipa3
  File "D:\vits_with_chatgpt-gpt3\text\japanese.py", line 6, in <module>
    dll = ctypes.cdll.LoadLibrary('cleaners/JapaneseCleaner.dll')
  File "D:\ProgramData\anaconda3\envs\cvl\lib\ctypes\__init__.py", line 451, in LoadLibrary
    return self._dlltype(name)
  File "D:\ProgramData\anaconda3\envs\cvl\lib\ctypes\__init__.py", line 373, in __init__
    self._handle = _dlopen(self._name, mode)
FileNotFoundError: Could not find module 'D:\vits_with_chatgpt-gpt3\cleaners\JapaneseCleaner.dll' (or one of its dependencies). Try using the full path with constructor syntax.

MB-iSTFT-VITS and one_step error when generating audio

Following the README, on both Linux (Ubuntu 20.04, Python 3.8) and Windows 11 (Python 3.7.3), after working through various dependency and build problems, one_step.py runs normally. But when the game client sends a request, only the text comes back; the audio-conversion code errors as follows:

[error screenshot]

I get exactly the same error on both Linux and Windows. What is going on?

My steps were:

  1. Follow every step in the README and install the extra dependency (pip install flask). No conda or venv; everything installed globally.
  2. Copy one_step.py into the MB-iSTFT-VITS directory.
  3. Edit the configuration as shown below.
  4. cd in and run it.

Here is one_step.py; I am only pasting the parts I changed. No other files were modified at all.

#using nene, you can find it in the /MB-iSTFT-VITS/tree/main/configs
#### changed this path directly ####
hps = utils.get_hparams_from_file("configs/ljs_mb_istft_vits.json")
net_g = SynthesizerTrn(
    len(symbols),
    hps.data.filter_length // 2 + 1,
    hps.train.segment_size // hps.data.hop_length,
    **hps.model)
_ = net_g.eval()
#### downloaded from this link and placed in the current folder ####
#https://huggingface.co/innnky/mb-vits-models/resolve/main/tempbest.pth
_ = utils.load_checkpoint("tempbest.pth", net_g, None)
import time
# Edit your settings here
def friend_chat(text):
  call_name = "her name"
  openai.api_key = "********************************************"

Help wanted

After setting up the environment in Codespaces per the tutorial, launching api_launch.py fails with an AssertionError.
Details:
Traceback (most recent call last):
  File "api_launch.py", line 44, in <module>
    symbols = get_symbols_from_json(args.cfg)
  File "api_launch.py", line 38, in get_symbols_from_json
    assert os.path.isfile(path)
AssertionError

Missing cleaner module

FileNotFoundError: Could not find module 'D:\live2d+chatgpt\4.19\vits_with_chatgpt-gpt3-window\cleaners\JapaneseCleaner.dll' (or one of its dependencies). Try using the full path with constructor syntax.
Is this part of vits, or where does it come from?

Error exporting ONNX on Colab (while installing dependencies)

fatal: destination path 'vits_web_demo' already exists and is not an empty directory.
/content/vits_web_demo/export
Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/
Requirement already satisfied: onnxruntime in /usr/local/lib/python3.8/dist-packages (1.14.0)
Requirement already satisfied: Cython in /usr/local/lib/python3.8/dist-packages (0.29.33)
Requirement already satisfied: packaging in /usr/local/lib/python3.8/dist-packages (from onnxruntime) (23.0)
Requirement already satisfied: coloredlogs in /usr/local/lib/python3.8/dist-packages (from onnxruntime) (15.0.1)
Requirement already satisfied: protobuf in /usr/local/lib/python3.8/dist-packages (from onnxruntime) (3.19.6)
Requirement already satisfied: sympy in /usr/local/lib/python3.8/dist-packages (from onnxruntime) (1.7.1)
Requirement already satisfied: flatbuffers in /usr/local/lib/python3.8/dist-packages (from onnxruntime) (23.1.21)
Requirement already satisfied: numpy>=1.21.6 in /usr/local/lib/python3.8/dist-packages (from onnxruntime) (1.22.4)
Requirement already satisfied: humanfriendly>=9.1 in /usr/local/lib/python3.8/dist-packages (from coloredlogs->onnxruntime) (10.0)
Requirement already satisfied: mpmath>=0.19 in /usr/local/lib/python3.8/dist-packages (from sympy->onnxruntime) (1.2.1)
Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/
Collecting pyopenjtalk
Using cached pyopenjtalk-0.3.0.tar.gz (1.5 MB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Collecting unidecode
Using cached Unidecode-1.3.6-py3-none-any.whl (235 kB)
Collecting pypinyin
Using cached pypinyin-0.48.0-py2.py3-none-any.whl (1.4 MB)
Requirement already satisfied: jieba in /usr/local/lib/python3.8/dist-packages (from -r requirements.txt (line 5)) (0.42.1)
Collecting cn2an
Using cached cn2an-0.5.19-py3-none-any.whl (19 kB)
Requirement already satisfied: tqdm in /usr/local/lib/python3.8/dist-packages (from -r requirements.txt (line 10)) (4.64.1)
Collecting pycodestyle==2.6.0
Using cached pycodestyle-2.6.0-py2.py3-none-any.whl (41 kB)
Collecting pyflakes==2.2.0
Using cached pyflakes-2.2.0-py2.py3-none-any.whl (66 kB)
Collecting sklearn
Using cached sklearn-0.0.post1.tar.gz (3.6 kB)
error: subprocess-exited-with-error

× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.
Preparing metadata (setup.py) ... error
error: metadata-generation-failed

× Encountered error while generating package metadata.
╰─> See above for output.

note: This is an issue with the package mentioned above, not pip.
hint: See above for details.

How do I fix the following error?

Traceback (most recent call last):
  File "D:...\vits_with_chatgpt-gpt3\main.py", line 6, in <module>
    from text import text_to_sequence
  File "D:...\vits_with_chatgpt-gpt3\text\__init__.py", line 2, in <module>
    from text import cleaners
  File "D:...\vits_with_chatgpt-gpt3\text\cleaners.py", line 3, in <module>
    from text.japanese import japanese_to_romaji_with_accent, japanese_to_ipa, japanese_to_ipa2, japanese_to_ipa3
  File "D:...\vits_with_chatgpt-gpt3\text\japanese.py", line 3, in <module>
    import pyopenjtalk
  File "D:...\vits_with_chatgpt-gpt3\venv\lib\site-packages\pyopenjtalk\__init__.py", line 20, in <module>
    from .htsengine import HTSEngine
  File "pyopenjtalk/htsengine.pyx", line 1, in init pyopenjtalk.htsengine
ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 96 from C header, got 88 from PyObject

transformers version 4.25.1

File "C:\Users\yuhua/.cache\huggingface\modules\transformers_modules\local\modeling_chatglm.py", line 29, in
from transformers.generation.utils import LogitsProcessorList, StoppingCriteriaList, GenerationConfig
ImportError: cannot import name 'GenerationConfig' from 'transformers.generation.utils' (C:\Users\yuhua\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\generation\utils.py)

How can I fix this?

Error

C:\Users\linh\Desktop\chatglm-voice>python chat.py
Explicitly passing a revision is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Explicitly passing a revision is encouraged when loading a configuration with custom code to ensure no malicious code has been contributed in a newer revision.
Explicitly passing a revision is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
No compiled kernel found.
Compiling kernels : C:\Users\linh\.cache\huggingface\modules\transformers_modules\local\quantization_kernels_parallel.c
Compiling gcc -O3 -fPIC -pthread -fopenmp -std=c99 C:\Users\linh\.cache\huggingface\modules\transformers_modules\local\quantization_kernels_parallel.c -shared -o C:\Users\linh\.cache\huggingface\modules\transformers_modules\local\quantization_kernels_parallel.so
Kernels compiled : C:\Users\linh\.cache\huggingface\modules\transformers_modules\local\quantization_kernels_parallel.so
Cannot load cpu kernel, don't use quantized model on cpu.
Using quantization cache
Applying quantization to glm layers
INFO:transformers_modules.local.modeling_chatglm:Already quantized, reloading cpu kernel.
No compiled kernel found.
Compiling kernels : C:\Users\linh\.cache\huggingface\modules\transformers_modules\local\quantization_kernels_parallel.c
Compiling gcc -O3 -fPIC -pthread -fopenmp -std=c99 C:\Users\linh\.cache\huggingface\modules\transformers_modules\local\quantization_kernels_parallel.c -shared -o C:\Users\linh\.cache\huggingface\modules\transformers_modules\local\quantization_kernels_parallel.so
Kernels compiled : C:\Users\linh\.cache\huggingface\modules\transformers_modules\local\quantization_kernels_parallel.so
Traceback (most recent call last):
  File "C:\Users\linh\Desktop\chatglm-voice\chat.py", line 93, in <module>
    model = AutoModel.from_pretrained(args.ChatGLM, trust_remote_code=True).half().quantize(4).cuda()
  File "C:\Users\linh\.cache\huggingface\modules\transformers_modules\local\modeling_chatglm.py", line 1397, in quantize
    load_cpu_kernel(**kwargs)
  File "C:\Users\linh\.cache\huggingface\modules\transformers_modules\local\quantization.py", line 394, in load_cpu_kernel
    cpu_kernels = CPUKernel(**kwargs)
  File "C:\Users\linh\.cache\huggingface\modules\transformers_modules\local\quantization.py", line 161, in __init__
    kernels = ctypes.cdll.LoadLibrary(kernel_file)
  File "C:\Users\linh\AppData\Local\conda\conda\envs\voice\lib\ctypes\__init__.py", line 452, in LoadLibrary
    return self._dlltype(name)
  File "C:\Users\linh\AppData\Local\conda\conda\envs\voice\lib\ctypes\__init__.py", line 374, in __init__
    self._handle = _dlopen(self._name, mode)
FileNotFoundError: Could not find module 'C:\Users\linh\.cache\huggingface\modules\transformers_modules\local\quantization_kernels_parallel.so' (or one of its dependencies). Try using the full path with constructor syntax.
The error is shown above.

Front-end Issues

Hi, thanks for your project. From what I can see, the current front end is a Windows program; are there any plans for a web front end?
