Running different rgw daemons with the same cephx user

Hi all,

In an OCS (Rook) environment, the workflow for RGW daemons is as follows.

Normally, when creating a Ceph object store, Rook first creates the pools for the rgw daemon with the specified configuration.

Then, depending on the number of instances, Rook creates a cephx user, and the rgw daemon is spawned in the container (pod) using that ID,
with the following arguments passed to the radosgw binary:
    Args:
      --fsid=91501490-4b55-47db-b226-f9d9968774c1
      --keyring=/etc/ceph/keyring-store/keyring
      --log-to-stderr=true
      --err-to-stderr=true
      --mon-cluster-log-to-stderr=true
      --log-stderr-prefix=debug 
      --default-log-to-file=false
      --default-mon-cluster-log-to-file=false
      --mon-host=$(ROOK_CEPH_MON_HOST)
      --mon-initial-members=$(ROOK_CEPH_MON_INITIAL_MEMBERS)
      --id=rgw.my.store.a
      --setuser=ceph
      --setgroup=ceph
      --foreground
      --rgw-frontends=beast port=8080
      --host=$(POD_NAME)
      --rgw-mime-types-file=/etc/ceph/rgw/mime.types
      --rgw-realm=my-store
      --rgw-zonegroup=my-store
      --rgw-zone=my-store
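
For reference, the cephx user creation step before the spawn would look roughly like the sketch below; the exact capabilities Rook grants are my assumption, not taken from the Rook source:

    # Create a cephx user for the rgw daemon and write out its keyring
    # (the caps shown are illustrative; Rook may grant different ones)
    ceph auth get-or-create client.rgw.my.store.a \
        mon 'allow rw' \
        osd 'allow rwx' \
        -o /etc/ceph/keyring-store/keyring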

Here the cephx user will be "client.rgw.my.store.a", and all the pools for rgw will be created as my-store*. Normally, if
another instance is requested in the CephObjectStore config file for Rook [1], another user, "client.rgw.my.store.b",
will be created by Rook and will consume the same pools.
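
This is easy to verify with plain ceph commands (the names below match the example above and are illustrative; actual output varies per cluster):

    # List the cephx users created for the object store,
    # e.g. client.rgw.my.store.a and client.rgw.my.store.b
    ceph auth ls | grep 'client.rgw'

    # Both daemons consume the same set of pools
    ceph osd pool ls | grep 'my-store'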

Kubernetes has a feature known as autoscaling, in which pods can be scaled automatically based on specified metrics. If we apply that
feature to the rgw pods, Kubernetes will automatically scale the rgw pods (as clones of the existing pod) with the same "--id" argument,
based on those metrics, but Ceph cannot distinguish those as different rgw daemons even though multiple rgw pods are running simultaneously.
"ceph status" shows only one rgw daemon as well.
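
To make this concrete, autoscaling the rgw pods means something like the following (a hypothetical sketch; the deployment name follows Rook's usual rook-ceph-rgw-<store>-<letter> pattern, which is my assumption here):

    # Let Kubernetes scale the rgw deployment between 1 and 3 replicas
    # based on CPU usage; every replica runs with --id=rgw.my.store.a
    kubectl -n rook-ceph autoscale deployment rook-ceph-rgw-my-store-a \
        --min=1 --max=3 --cpu-percent=80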

In vstart or ceph-ansible (Ali helped me figure this out), I can see that a cephx user is created for each rgw daemon as well.
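
For comparison, with vstart that looks roughly like this (using the RGW environment variable to set the daemon count, as I understand vstart.sh; treat the exact invocation as an assumption):

    # Start a dev cluster with two rgw daemons from the build directory
    RGW=2 ../src/vstart.sh -n -d

    # Each rgw daemon gets its own cephx user
    ./bin/ceph auth ls | grep 'client.rgw'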

Is this behaviour intended, or am I hitting a corner case that was never tested before?

There is no point in autoscaling the rgw pods if they are all considered the same daemon: the S3 client will talk to only one of the
pods, and the metrics provided by ceph-mgr can report incorrect data as well, which can affect the autoscale feature.

I have also opened an issue in Rook for the time being [2].

[1] https://github.com/rook/rook/blob/master/cluster/examples/kubernetes/ceph/object-test.yaml
[2] https://github.com/rook/rook/issues/6943

Regards,
Jiffin