I only have 16 GB of VRAM and want to process a batch of my own data. How can I load liuhaotian/llava-v1.6-34b quantized? I get this error: ValueError: Calling `cuda()` is not supported for 4-bit or 8-bit quantized models. Please use the model as it is, since the model has already been set to the correct devices and casted to the correct dtype. Is there any way to reduce VRAM usage? In other words, what is the minimum VRAM needed to run the data process part?
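The error itself points at the fix: when a model is loaded with a bitsandbytes quantization config, accelerate has already placed the weights on the right devices, so the explicit `.cuda()` (or `.to("cuda")`) call must simply be removed. A minimal sketch, assuming the Hugging Face `transformers`/`bitsandbytes` loading path and the transformers-format checkpoint `llava-hf/llava-v1.6-34b-hf` (the loading call is left as a comment; the runnable part only estimates the weight footprint, with 34B parameters as a round figure):

```python
def weight_gib(n_params: float, bytes_per_param: float) -> float:
    """Approximate VRAM for the weights alone; ignores activations,
    the KV cache, and LLaVA's vision tower."""
    return n_params * bytes_per_param / 2**30

# Rough parameter count for llava-v1.6-34b.
N = 34e9
for name, bpp in [("fp16", 2.0), ("8-bit", 1.0), ("4-bit", 0.5)]:
    print(f"{name}: ~{weight_gib(N, bpp):.0f} GiB")

# Loading sketch (requires transformers, accelerate, bitsandbytes):
#
# from transformers import BitsAndBytesConfig, LlavaNextForConditionalGeneration
# bnb = BitsAndBytesConfig(load_in_4bit=True)
# model = LlavaNextForConditionalGeneration.from_pretrained(
#     "llava-hf/llava-v1.6-34b-hf",   # assumed transformers checkpoint
#     quantization_config=bnb,
#     device_map="auto",
# )
# # Do NOT call model.cuda() afterwards -- that raises exactly the
# # ValueError quoted above; the weights are already on the GPU.
```

The arithmetic shows why 16 GB is tight: even at 4-bit, 34B parameters need roughly 16 GiB for the weights alone, before activations or the KV cache. Practical options are therefore a smaller checkpoint (e.g. the 7B/13B variants), CPU offload via `device_map`, or more VRAM; if you load through the original LLaVA repo instead, its model builder exposes 4-bit/8-bit flags for the same purpose, but check its current signature rather than taking that as given.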