[BUG] 本地启动报错 #4020
Comments
To address the startup error you're encountering with
By following these steps, you should be able to diagnose and potentially resolve the startup error. If the problem persists, providing more specific error messages from the logs can help in further troubleshooting.
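A frequent cause of this particular traceback is a model path that does not resolve to a directory containing the tokenizer files. As a quick pre-flight check, something like the following hypothetical helper (the function name and the root-directory layout are assumptions, not part of the project) can confirm the files exist before launching the full stack:

```python
import os

def check_model_dir(model_root: str, model_name: str,
                    required=("config.json", "tokenizer_config.json")):
    """Return the required files missing from model_root/model_name, so the
    AutoTokenizer.from_pretrained call in the traceback can be checked up front."""
    model_dir = os.path.join(model_root, model_name)
    return [f for f in required
            if not os.path.isfile(os.path.join(model_dir, f))]

# Example (path is illustrative):
# check_model_dir("/Users/ken.li/models", "chatglm3-6b")
```

An empty return list means the tokenizer files are where the loader expects them; any other result points at a path or download problem rather than a code bug.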
Have you managed to solve this problem?
Not yet. I don't know where the problem is.
I resolved this by changing MODEL_ROOT_PATH in model_config.py to chatglm's base dir. However, a new problem has come up, about an outdated CUDA driver version, which I am working on now.
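For reference, the fix described in the comment above amounts to a one-line change along these lines (the path shown is illustrative, not the reporter's actual directory):

```python
# configs/model_config.py (excerpt)
# MODEL_ROOT_PATH should point at the directory that contains the downloaded
# model folders, i.e. <MODEL_ROOT_PATH>/chatglm3-6b/ must hold config.json,
# tokenizer_config.json, and the weight files.
MODEL_ROOT_PATH = "/Users/ken.li/models"
```

With this set, the model worker resolves 'chatglm3-6b' relative to MODEL_ROOT_PATH instead of treating it as a Hugging Face hub id.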
Problem Description
Local startup fails with an error. The error log is as follows:
2024-05-14 18:11:17 | INFO | model_worker | Register to controller
2024-05-14 18:11:17 | ERROR | stderr | INFO: Started server process [86837]
2024-05-14 18:11:17 | ERROR | stderr | INFO: Waiting for application startup.
2024-05-14 18:11:17 | ERROR | stderr | INFO: Application startup complete.
2024-05-14 18:11:17 | ERROR | stderr | INFO: Uvicorn running on http://0.0.0.0:20000 (Press CTRL+C to quit)
2024-05-14 18:11:20 | INFO | model_worker | Loading the model ['chatglm3-6b'] on worker ae64670c ...
2024-05-14 18:11:20 | ERROR | stderr | Process model_worker - chatglm3-6b:
2024-05-14 18:11:20 | ERROR | stderr | Traceback (most recent call last):
2024-05-14 18:11:20 | ERROR | stderr | File "/Users/ken.li/.pyenv/versions/3.11.6/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap
2024-05-14 18:11:20 | ERROR | stderr | self.run()
2024-05-14 18:11:20 | ERROR | stderr | File "/Users/ken.li/.pyenv/versions/3.11.6/lib/python3.11/multiprocessing/process.py", line 108, in run
2024-05-14 18:11:20 | ERROR | stderr | self._target(*self._args, **self._kwargs)
2024-05-14 18:11:20 | ERROR | stderr | File "/Users/ken.li/PycharmProjects/Langchain-Chatchat/startup.py", line 389, in run_model_worker
2024-05-14 18:11:20 | ERROR | stderr | app = create_model_worker_app(log_level=log_level, **kwargs)
2024-05-14 18:11:20 | ERROR | stderr | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-05-14 18:11:20 | ERROR | stderr | File "/Users/ken.li/PycharmProjects/Langchain-Chatchat/startup.py", line 217, in create_model_worker_app
2024-05-14 18:11:20 | ERROR | stderr | worker = ModelWorker(
2024-05-14 18:11:20 | ERROR | stderr | ^^^^^^^^^^^^
2024-05-14 18:11:20 | ERROR | stderr | File "/Users/ken.li/.pyenv/versions/chatglm/lib/python3.11/site-packages/fastchat/serve/model_worker.py", line 77, in __init__
2024-05-14 18:11:20 | ERROR | stderr | self.model, self.tokenizer = load_model(
2024-05-14 18:11:20 | ERROR | stderr | ^^^^^^^^^^^
2024-05-14 18:11:20 | ERROR | stderr | File "/Users/ken.li/.pyenv/versions/chatglm/lib/python3.11/site-packages/fastchat/model/model_adapter.py", line 348, in load_model
2024-05-14 18:11:20 | ERROR | stderr | model, tokenizer = adapter.load_model(model_path, kwargs)
2024-05-14 18:11:20 | ERROR | stderr | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-05-14 18:11:20 | ERROR | stderr | File "/Users/ken.li/.pyenv/versions/chatglm/lib/python3.11/site-packages/fastchat/model/model_adapter.py", line 816, in load_model
2024-05-14 18:11:20 | ERROR | stderr | tokenizer = AutoTokenizer.from_pretrained(
2024-05-14 18:11:20 | ERROR | stderr | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-05-14 18:11:20 | ERROR | stderr | File "/Users/ken.li/.pyenv/versions/chatglm/lib/python3.11/site-packages/transformers/models/auto/tokenization_auto.py", line 801, in from_pretrained
2024-05-14 18:11:20 | ERROR | stderr | return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
2024-05-14 18:11:20 | ERROR | stderr | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-05-14 18:11:20 | ERROR | stderr | File "/Users/ken.li/.pyenv/versions/chatglm/lib/python3.11/site-packages/transformers/tokenization_utils_base.py", line 2029, in from_pretrained
Steps to Reproduce
Environment Information
langchain-ChatGLM version / commit: v0.2.10
Deployed with Docker (yes/no): no
Model used (ChatGLM2-6B / Qwen-7B, etc.): ChatGLM3-6B
Embedding model used (moka-ai/m3e-base, etc.): BAAI/bge-large-zh
Operating system and version: macOS 13.6.4
Python version: 3.11.6
Other relevant environment information:
Additional Information