Does version 2.10 support BlueLM-7B-Chat and BlueLM-7B-Chat-32k? After setting the path with the default configuration, python startup.py -a fails to start #4037
To address this, confirm that you are using the correct version of the relevant dependencies, and ensure that your configuration points at the model correctly. If these steps do not resolve the issue, please check that the model is correctly available on the model hosting platform and that your environment can access it. If modifications were made to the configuration or code related to model loading, reverting to a known good configuration may also help isolate the issue.
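As a quick local sanity check (a sketch; the helper name and the inspected config.json fields are assumptions, not project code), you can verify that the model directory is readable and see which Auto* classes its bundled custom code registers:

```python
import json
import os

def check_model_dir(model_path):
    """Hypothetical helper: verify a local Hugging Face model directory
    contains a config.json, and report its model_type and auto_map.
    The auto_map entry shows which Auto* loader classes the repo's
    custom modeling code supports."""
    cfg_file = os.path.join(model_path, "config.json")
    if not os.path.isfile(cfg_file):
        raise FileNotFoundError(f"no config.json under {model_path}")
    with open(cfg_file) as f:
        cfg = json.load(f)
    return cfg.get("model_type"), cfg.get("auto_map", {})
```

If the reported auto_map only lists AutoModelForCausalLM, then loading the model through plain AutoModel (as the traceback in this issue shows happening) is expected to fail.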
I installed transformers==4.37.2 as you suggested, but I still get the same error.

==============================Langchain-Chatchat Configuration==============================
Current text splitter: ChineseRecursiveTextSplitter
2024-05-17 14:00:30,369 - startup.py[line:655] - INFO: Starting services:

I also checked the configuration files, and the model is downloaded locally. The configuration should be fine; switching to the local chatglm3 does not raise this error. The relevant parts of my model_config.py:

# Device for the embedding model. "auto" auto-detects (with a warning); it can also be set manually to one of "cuda", "mps", "cpu", "xpu".
EMBEDDING_DEVICE = "auto"
# Selected reranker model
RERANKER_MODEL = "bge-reranker-large"
# Whether to enable the reranker model
USE_RERANKER = False
# Configure this if you need to add custom keywords to EMBEDDING_MODEL
EMBEDDING_KEYWORD_FILE = "keywords.txt"
# Names of the LLMs to run; the list may include both local and online models. All local models in the list are loaded at project startup, and the first model in the list is the default for the API and WebUI. Here two mainstream offline models are used, with chatglm3-6b as the default loaded model. If your GPU memory is insufficient, you can use Qwen-1_8B-Chat, which needs only 3.8 GB of VRAM in FP16.
LLM_MODELS = ["BlueLM-7B-Chat", "zhipu-api", "openai-api"]
# Device for the LLM. "auto" auto-detects (with a warning); it can also be set manually to one of "cuda", "mps", "cpu", "xpu".
LLM_DEVICE = "cuda"
"llm_model": {
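For reference, the BlueLM entries in model_config.py would look roughly like this (an illustrative excerpt only; the local path is taken from the startup log in this issue, and the MODEL_PATH layout follows the 0.2.x default config, so adapt both to your checkout):

```python
# model_config.py -- illustrative excerpt, not a complete file
LLM_MODELS = ["BlueLM-7B-Chat", "zhipu-api", "openai-api"]
LLM_DEVICE = "cuda"

MODEL_PATH = {
    "llm_model": {
        # absolute path to the locally downloaded BlueLM weights
        "BlueLM-7B-Chat": "/home/tcarh/langchain-ChatGLM/BlueLM-7B-Chat",
    },
}
```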
(Langchain-Chatchat) tcarh@K5RCPVT45N2DX5K:~/Langchain-Chatchat-0.2.10$ python startup.py -a
==============================Langchain-Chatchat Configuration==============================
OS: Linux-5.15.133.1-microsoft-standard-WSL2-x86_64-with-glibc2.35.
Python version: 3.10.12 | packaged by conda-forge | (main, Jun 23 2023, 22:40:32) [GCC 12.3.0]
Project version: v0.2.10
langchain version: 0.0.354, fastchat version: 0.2.35
Current text splitter: ChineseRecursiveTextSplitter
LLM models being launched: ['BlueLM-7B-Chat', 'zhipu-api', 'openai-api'] @ cuda
{'device': 'cuda',
'host': '0.0.0.0',
'infer_turbo': False,
'model_path': '/home/tcarh/langchain-ChatGLM/BlueLM-7B-Chat',
'model_path_exists': True,
'port': 20002}
{'api_key': '',
'device': 'cuda',
'host': '0.0.0.0',
'infer_turbo': False,
'online_api': True,
'port': 21001,
'provider': 'ChatGLMWorker',
'version': 'glm-4',
'worker_class': <class 'server.model_workers.zhipu.ChatGLMWorker'>}
{'api_base_url': 'https://api.openai.com/v1',
'api_key': '',
'device': 'cuda',
'host': '0.0.0.0',
'infer_turbo': False,
'model_name': 'gpt-4',
'online_api': True,
'openai_proxy': '',
'port': 20002}
Current embeddings model: bge-large-zh-v1.5 @ cuda
==============================Langchain-Chatchat Configuration==============================
2024-05-17 09:32:41,222 - startup.py[line:655] - INFO: Starting services:
2024-05-17 09:32:41,222 - startup.py[line:656] - INFO: To view the llm_api logs, go to /home/tcarh/Langchain-Chatchat-0.2.10/logs
/home/tcarh/anaconda3/envs/Langchain-Chatchat/lib/python3.10/site-packages/langchain_core/_api/deprecation.py:117: LangChainDeprecationWarning: The model startup feature will be rewritten in Langchain-Chatchat 0.3.x to support more modes and faster startup; the related functionality in 0.2.x will be deprecated
warn_deprecated(
2024-05-17 09:32:47 | INFO | model_worker | Register to controller
2024-05-17 09:32:47 | ERROR | stderr | INFO: Started server process [211407]
2024-05-17 09:32:47 | ERROR | stderr | INFO: Waiting for application startup.
2024-05-17 09:32:47 | ERROR | stderr | INFO: Application startup complete.
2024-05-17 09:32:47 | ERROR | stderr | INFO: Uvicorn running on http://0.0.0.0:20000 (Press CTRL+C to quit)
2024-05-17 09:32:47 | INFO | model_worker | Loading the model ['BlueLM-7B-Chat'] on worker 4a9aa8ec ...
2024-05-17 09:32:48 | ERROR | stderr | Process model_worker - BlueLM-7B-Chat:
2024-05-17 09:32:48 | ERROR | stderr | Traceback (most recent call last):
2024-05-17 09:32:48 | ERROR | stderr | File "/home/tcarh/anaconda3/envs/Langchain-Chatchat/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
2024-05-17 09:32:48 | ERROR | stderr | self.run()
2024-05-17 09:32:48 | ERROR | stderr | File "/home/tcarh/anaconda3/envs/Langchain-Chatchat/lib/python3.10/multiprocessing/process.py", line 108, in run
2024-05-17 09:32:48 | ERROR | stderr | self._target(*self._args, **self._kwargs)
2024-05-17 09:32:48 | ERROR | stderr | File "/home/tcarh/Langchain-Chatchat-0.2.10/startup.py", line 389, in run_model_worker
2024-05-17 09:32:48 | ERROR | stderr | app = create_model_worker_app(log_level=log_level, **kwargs)
2024-05-17 09:32:48 | ERROR | stderr | File "/home/tcarh/Langchain-Chatchat-0.2.10/startup.py", line 217, in create_model_worker_app
2024-05-17 09:32:48 | ERROR | stderr | worker = ModelWorker(
2024-05-17 09:32:48 | ERROR | stderr | File "/home/tcarh/anaconda3/envs/Langchain-Chatchat/lib/python3.10/site-packages/fastchat/serve/model_worker.py", line 77, in __init__
2024-05-17 09:32:48 | ERROR | stderr | self.model, self.tokenizer = load_model(
2024-05-17 09:32:48 | ERROR | stderr | File "/home/tcarh/anaconda3/envs/Langchain-Chatchat/lib/python3.10/site-packages/fastchat/model/model_adapter.py", line 348, in load_model
2024-05-17 09:32:48 | ERROR | stderr | model, tokenizer = adapter.load_model(model_path, kwargs)
2024-05-17 09:32:48 | ERROR | stderr | File "/home/tcarh/anaconda3/envs/Langchain-Chatchat/lib/python3.10/site-packages/fastchat/model/model_adapter.py", line 826, in load_model
2024-05-17 09:32:48 | ERROR | stderr | model = AutoModel.from_pretrained(
2024-05-17 09:32:48 | ERROR | stderr | File "/home/tcarh/anaconda3/envs/Langchain-Chatchat/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 569, in from_pretrained
2024-05-17 09:32:48 | ERROR | stderr | raise ValueError(
2024-05-17 09:32:48 | ERROR | stderr | ValueError: Unrecognized configuration class <class 'transformers_modules.BlueLM-7B-Chat.configuration_bluelm.BlueLMConfig'> for this kind of AutoModel: AutoModel.
2024-05-17 09:32:48 | ERROR | stderr | Model type should be one of AlbertConfig, AlignConfig, AltCLIPConfig, ASTConfig, AutoformerConfig, BarkConfig, BartConfig, BeitConfig, BertConfig, BertGenerationConfig, BigBirdConfig, BigBirdPegasusConfig, BioGptConfig, BitConfig, BlenderbotConfig, BlenderbotSmallConfig, BlipConfig, Blip2Config, BloomConfig, BridgeTowerConfig, BrosConfig, CamembertConfig, CanineConfig, ChineseCLIPConfig, ClapConfig, CLIPConfig, CLIPVisionConfig, CLIPSegConfig, ClvpConfig, LlamaConfig, CodeGenConfig, ConditionalDetrConfig, ConvBertConfig, ConvNextConfig, ConvNextV2Config, CpmAntConfig, CTRLConfig, CvtConfig, Data2VecAudioConfig, Data2VecTextConfig, Data2VecVisionConfig, DebertaConfig, DebertaV2Config, DecisionTransformerConfig, DeformableDetrConfig, DeiTConfig, DetaConfig, DetrConfig, DinatConfig, Dinov2Config, DistilBertConfig, DonutSwinConfig, DPRConfig, DPTConfig, EfficientFormerConfig, EfficientNetConfig, ElectraConfig, EncodecConfig, ErnieConfig, ErnieMConfig, EsmConfig, FalconConfig, FlaubertConfig, FlavaConfig, FNetConfig, FocalNetConfig, FSMTConfig, FunnelConfig, GitConfig, GLPNConfig, GPT2Config, GPT2Config, GPTBigCodeConfig, GPTNeoConfig, GPTNeoXConfig, GPTNeoXJapaneseConfig, GPTJConfig, GPTSanJapaneseConfig, GraphormerConfig, GroupViTConfig, HubertConfig, IBertConfig, IdeficsConfig, ImageGPTConfig, InformerConfig, JukeboxConfig, Kosmos2Config, LayoutLMConfig, LayoutLMv2Config, LayoutLMv3Config, LEDConfig, LevitConfig, LiltConfig, LlamaConfig, LongformerConfig, LongT5Config, LukeConfig, LxmertConfig, M2M100Config, MarianConfig, MarkupLMConfig, Mask2FormerConfig, MaskFormerConfig, MaskFormerSwinConfig, MBartConfig, MCTCTConfig, MegaConfig, MegatronBertConfig, MgpstrConfig, MistralConfig, MixtralConfig, MobileBertConfig, MobileNetV1Config, MobileNetV2Config, MobileViTConfig, MobileViTV2Config, MPNetConfig, MptConfig, MraConfig, MT5Config, MvpConfig, NatConfig, NezhaConfig, NllbMoeConfig, NystromformerConfig, OneFormerConfig, 
OpenLlamaConfig, OpenAIGPTConfig, OPTConfig, Owlv2Config, OwlViTConfig, PatchTSMixerConfig, PatchTSTConfig, PegasusConfig, PegasusXConfig, PerceiverConfig, PersimmonConfig, PhiConfig, PLBartConfig, PoolFormerConfig, ProphetNetConfig, PvtConfig, QDQBertConfig, ReformerConfig, RegNetConfig, RemBertConfig, ResNetConfig, RetriBertConfig, RobertaConfig, RobertaPreLayerNormConfig, RoCBertConfig, RoFormerConfig, RwkvConfig, SamConfig, SeamlessM4TConfig, SeamlessM4Tv2Config, SegformerConfig, SEWConfig, SEWDConfig, Speech2TextConfig, SpeechT5Config, SplinterConfig, SqueezeBertConfig, SwiftFormerConfig, SwinConfig, Swin2SRConfig, Swinv2Config, SwitchTransformersConfig, T5Config, TableTransformerConfig, TapasConfig, TimeSeriesTransformerConfig, TimesformerConfig, TimmBackboneConfig, TrajectoryTransformerConfig, TransfoXLConfig, TvltConfig, TvpConfig, UMT5Config, UniSpeechConfig, UniSpeechSatConfig, UnivNetConfig, VanConfig, VideoMAEConfig, ViltConfig, VisionTextDualEncoderConfig, VisualBertConfig, ViTConfig, ViTHybridConfig, ViTMAEConfig, ViTMSNConfig, VitDetConfig, VitsConfig, VivitConfig, Wav2Vec2Config, Wav2Vec2ConformerConfig, WavLMConfig, WhisperConfig, XCLIPConfig, XGLMConfig, XLMConfig, XLMProphetNetConfig, XLMRobertaConfig, XLMRobertaXLConfig, XLNetConfig, XmodConfig, YolosConfig, YosoConfig.
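The ValueError above indicates that no BlueLM-specific adapter matched in FastChat, so the model was loaded through an adapter that calls AutoModel.from_pretrained, and AutoModel's mapping has no entry for the repo-custom BlueLMConfig. Models that ship their own modeling code are normally loaded through AutoModelForCausalLM with trust_remote_code=True. A minimal sketch of loading the model directly with transformers (the function name and keyword choices are assumptions, not the project's or FastChat's code):

```python
def load_remote_code_chat_model(model_path, device="cuda"):
    """Sketch: load a repo-custom chat model (e.g. BlueLM-7B-Chat)
    directly with transformers. Use AutoModelForCausalLM -- not
    AutoModel, whose class mapping has no BlueLMConfig entry -- and
    pass trust_remote_code=True so the repo's configuration_bluelm.py
    and modeling_bluelm.py are picked up."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        model_path,
        trust_remote_code=True,
        torch_dtype="auto",  # keep the dtype stored in the checkpoint
    )
    return model.to(device), tokenizer
```

If this direct load succeeds, the problem lies in how FastChat's adapter selects the loader class rather than in the model files themselves.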