rook rgw failed to apply for pvc #14219
StorageClasses used for OBCs cannot be mounted to pods as Kubernetes PVCs. OBCs are consumed using the S3 credentials present in the generated Secret. Please refer to the Ceph object bucket claim documentation for more details about how to create and consume bucket claims.
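To illustrate the consumption model, here is a minimal sketch of an ObjectBucketClaim. The names and StorageClass below are assumptions for illustration, not taken from this issue:

```yaml
# Hypothetical ObjectBucketClaim: requests a bucket, not a volume.
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: my-bucket-claim
  namespace: default
spec:
  generateBucketName: my-bucket
  # This StorageClass is consumed by the OBC controller only;
  # it cannot back a PVC.
  storageClassName: rook-ceph-bucket
```

Once the bucket is provisioned, Rook creates a Secret and a ConfigMap named after the claim, containing the S3 credentials and endpoint details, which applications read instead of mounting a volume.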
Accidentally closed, sorry. In the OBC, I see these 2 values, which are not compatible together. Use of both of these fields may be confusing the OBC controller, leading to this message:
Based on this operator log, it seems that the bucket was not created.
As for these log messages, it's possible (but unlikely) that trying to consume the OBC storageclass via PVC may have affected the OBC controller's ability to provision the bucket.
I would suggest deleting the OBC and then re-creating it without the conflicting field. I should also note that Rook v1.10 is now unsupported. If there is a bug here, we won't be making any code changes to resolve it. Please upgrade to v1.13 or v1.14 for support.
Does Rook not support using RGW to provision PVCs? I am still confused. Will you support RGW in the future? Is the current RGW in Rook only used to create buckets?
RGW provides object storage via an S3 interface. PVCs are for block and file storage only, so there is no way to add PVC support for RGW. Block storage and single-user file storage is provided by Ceph natively. CephFilesystem allows for shared-user file storage similarly to how RGW (CephObjectStore) provides S3 object storage. Rook integrated with the OBC project many years back to allow users to have a PVC-like experience with object storage.
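As a sketch of that "PVC-like experience": a pod consumes the OBC's generated Secret and ConfigMap as environment variables rather than mounting a volume. The claim name `my-bucket-claim` and the image are example assumptions, not details from this issue:

```yaml
# Hypothetical Pod reading an OBC's generated Secret/ConfigMap as env vars.
apiVersion: v1
kind: Pod
metadata:
  name: s3-client
spec:
  containers:
    - name: app
      image: amazon/aws-cli        # any S3-capable client image
      command: ["sleep", "infinity"]
      envFrom:
        - secretRef:
            name: my-bucket-claim  # AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY
        - configMapRef:
            name: my-bucket-claim  # BUCKET_HOST, BUCKET_NAME, BUCKET_PORT
```

The application then talks S3 over HTTP to the endpoint in `BUCKET_HOST`; no volume mount is involved.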
This project can be combined with CSI to mount buckets for use as PVCs, and it also uses Ceph storage underneath. Why not refer to this project for an implementation?
We weren't aware of that project until now. If you or any other users want to use that project to mount S3 storage to pods, you are welcome to, but the Rook project doesn't have time to vet every possible integration.

Currently, Rook uses the OBC project for bucket claims. Rook integrated with it early, but the OBC project was largely abandoned after the v1alpha1 release. Rook is now stuck maintaining support for it until we can replace it with COSI, which is our long-term strategy. The Container Object Storage Interface (COSI) Kubernetes Enhancement Project is outlined here: https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/1979-object-storage-support

Since COSI is on its way to becoming a Kubernetes standard, and is starting to have wider industry involvement, we don't see any reason for Rook to pivot to the yandex project. We believe COSI is the future of self-service object storage on Kubernetes.

Neither OBCs nor COSI support mounting buckets as a filesystem into pods. COSI considered this early in its development but opted not to do so, since nearly all object storage systems are mounted and consumed via HTTP-based APIs.
OK, thank you for your answer. I believe Rook will keep getting better.
Is this a bug report or feature request?
ceph status
According to
to create a storage class, but the PVC is stuck in a Pending state
my pod
describe pvc
describe po
rook-ceph-rgw logs
operator log
Deviation from expected behavior:
I checked the operator log, and it shows that no bucket was created. Does this need to be created in advance? Will the CR not create it automatically?
Expected behavior:
Normal use of the PVC
How to reproduce it (minimal and precise):
File(s) to submit:
cluster.yaml, if necessary
Logs to submit:
Operator's logs, if necessary
Crashing pod(s) logs, if necessary
To get logs, use
kubectl -n <namespace> logs <pod name>
When pasting logs, always surround them with backticks or use the
insert code
button from the GitHub UI. Read GitHub documentation if you need help.
Cluster Status to submit:
Output of kubectl commands, if necessary
To get the health of the cluster, use
kubectl rook-ceph health
To get the status of the cluster, use
kubectl rook-ceph ceph status
For more details, see the Rook kubectl Plugin
Environment:
uname -a:
rook version (inside of a Rook Pod): rook-1.10.12
ceph -v: ceph version 17.2.5
kubectl version: 1.28.6
ceph health (in the Rook Ceph toolbox):