How do I verify that my rook-ceph is using bluestore #14231
Rook only supports bluestore, so you can be sure the OSDs are all running bluestore. Rook creates OSDs with
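One way to confirm this yourself (a sketch, assuming the rook-ceph-tools toolbox deployment is running in the rook-ceph namespace):

```shell
# Query Ceph's metadata for OSD 0; each OSD reports its object store type.
# Assumes the rook-ceph-tools deployment exists in the rook-ceph namespace.
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- \
  ceph osd metadata 0 | grep osd_objectstore
# A BlueStore OSD reports: "osd_objectstore": "bluestore"
```

Repeat with each OSD id (or run `ceph osd metadata` with no id to dump all OSDs at once).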
A Ceph deployment requires a log disk and a data disk. How do I distinguish between them?
I tried lsblk /dev/sdb and created a new partition /dev/sdb1, but it did not affect Ceph's use of the disk. In that case, what happens if someone else uses this disk? They may not even realize the disk is already in use by Rook Ceph, because it is consumed at the raw-device level. Others cannot tell whether the data disk is in use, which does not look very friendly.
Which disks do you mean? Each OSD only requires one disk.
The documentation here says you can use multiple devices, storing the log on one device and the DB and other data on others.
Yes, that is an option; it is just not the default. Try searching the Rook docs for "metadataDevice".
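For reference, a minimal sketch of what that could look like in cluster.yaml (the node name and device names sdb/sdc are placeholders; metadataDevice puts the BlueStore metadata, i.e. RocksDB and WAL, on the named device):

```yaml
storage:
  useAllNodes: false
  useAllDevices: false
  nodes:
    - name: node1              # placeholder node name
      devices:
        - name: "sdb"          # data device for the OSD
          config:
            metadataDevice: "sdc"  # separate device for BlueStore DB/WAL
```

With this, the OSD's data lives on sdb while its metadata is placed on sdc; omitting metadataDevice (the default) colocates everything on the data device.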
I found the file here.
According to the comments in it, there is no explanation of how to distinguish between data disks, log disks, and DB disks; there is only an extra partition and a udev device.
Is this a bug report or feature request?
I used three nodes as Ceph nodes, with sdb as the data storage disk for Ceph. But why can't I see any partitions used by Ceph with lsblk after installing Rook?
I can see that the rook-ceph-osd-0 container is already set to the bluestore type and uses sdb.
I can see the status of the OSDs, and it does reflect the size of the disks on my three nodes, but why does lsblk show no trace of Ceph usage?
This is my cluster.yaml:
I am not sure how to manage Rook since I have three data disks. Is the current setup correct? Also, log disks are required for production Ceph deployments; how should I plan log disks for the three nodes and declare them in cluster.yaml?
Deviation from expected behavior:
I expect an explanation.
Expected behavior:
How to reproduce it (minimal and precise):
File(s) to submit:
cluster.yaml, if necessary
Logs to submit:
Operator's logs, if necessary
Crashing pod(s) logs, if necessary
To get logs, use
kubectl -n <namespace> logs <pod name>
When pasting logs, always surround them with backticks or use the
insert code
button from the GitHub UI. Read the GitHub documentation if you need help.
Cluster Status to submit:
Output of kubectl commands, if necessary
To get the health of the cluster, use
kubectl rook-ceph health
To get the status of the cluster, use
kubectl rook-ceph ceph status
For more details, see the Rook kubectl Plugin
Environment:
Kernel (uname -a):
Rook version (rook version inside of a Rook Pod): rook-1.10.12
Ceph version (ceph -v): ceph version 17.2.5
Kubernetes version (kubectl version): 1.28.6
Storage backend status (ceph health in the Rook Ceph toolbox):