Re: Difficulty adding / using a non-default RGW placement target & storage class

Following up on my own post from last month, for posterity.

The trick was updating the period.  I'm not using multisite, but Rook seems to deploy so that one can.
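Because Rook sets up a realm, zonegroup and zone changes don't take effect until they're committed to a new period.  Roughly, the one command that mattered:

    radosgw-admin period update --commit

Once the period was committed, the gateways picked up the new placement target and bucket creation succeeded.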

-- aad

> On Nov 6, 2023, at 16:52, Anthony D'Atri <aad@xxxxxxxxxxxxxx> wrote:
> 
> I'm having difficulty adding and using a non-default placement target & storage class and would appreciate insights.  Am I going about this incorrectly?  Rook does not yet have the ability to do this, so I'm adding it by hand.
> 
> Following instructions on the net I added a second bucket pool, placement target, and storage class, and created a user defaulting to the new placement target and storage class, but I get an error when trying to create a bucket:
> 
> [rook@rook-ceph-tools-5ff8d58445-gkl5w .aws]$ s5cmd --endpoint-url http://rook-ceph-rgw-ceph-objectstore.rook-ceph.svc mb s3://foofoobars
> ERROR "mb s3://foofoobars": InvalidLocationConstraint:  status code: 400, request id: tx0000057b71002881d48ca-0065495d54-1abe555-ceph-objectstore, host id:
> 
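> Adding a placement target and storage class like mine would look roughly like this (a sketch based on the radosgw-admin docs, with pool names per the dumps below, not my exact history):
> 
>     radosgw-admin zonegroup placement add --rgw-zonegroup ceph-objectstore \
>         --placement-id HDD-EC --storage-class GLACIER
>     radosgw-admin zone placement add --rgw-zone ceph-objectstore \
>         --placement-id HDD-EC --storage-class GLACIER \
>         --data-pool ceph-objectstore.rgw.buckets.data.hdd \
>         --index-pool ceph-objectstore.rgw.buckets.index \
>         --data-extra-pool ceph-objectstore.rgw.buckets.non-ec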
> 
> I found an article suggesting that the placement target and/or storage class should have the api_name prepended, so I tried setting either or both to "ceph-objectstore:HDD-EC" / "ceph-objectstore:GLACIER" with no success.  I suspect that I'm missing something subtle -- or that Rook has provisioned these bits in an atypical fashion.
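> 
> For what it's worth, the documented way to select a placement at bucket creation is the LocationConstraint, in the form "<api_name>:<placement_id>", so something like:
> 
>     aws --endpoint-url http://rook-ceph-rgw-ceph-objectstore.rook-ceph.svc \
>         s3api create-bucket --bucket foofoobars \
>         --create-bucket-configuration LocationConstraint=ceph-objectstore:HDD-EC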
> 
> Log entry:
> 
> /var/log/ceph/ceph-client.rgw.ceph.objectstore.a.log-2023-11-06T21:40:36.543+0000 7f6573a9f700  1 ====== starting new request req=0x7f64818ba730 =====
> /var/log/ceph/ceph-client.rgw.ceph.objectstore.a.log-2023-11-06T21:40:36.546+0000 7f6570a99700  0 req 6320538205097380042 0.003000009s s3:create_bucket could not find user default placement id HDD-EC/GLACIER within zonegroup
> /var/log/ceph/ceph-client.rgw.ceph.objectstore.a.log-2023-11-06T21:40:36.546+0000 7f6570a99700  1 ====== req done req=0x7f64818ba730 op status=-2208 http_status=400 latency=0.003000009s ======
> /var/log/ceph/ceph-client.rgw.ceph.objectstore.a.log:2023-11-06T21:40:36.546+0000 7f6570a99700  1 beast: 0x7f64818ba730: 10.233.90.156 - aad [06/Nov/2023:21:40:36.543 +0000] "PUT /foofoobars HTTP/1.1" 400 266 - "aws-sdk-go/1.40.25 (go1.18.3; linux; amd64)" - latency=0.003000009s
> 
> 
> 
> [rook@rook-ceph-tools-5ff8d58445-gkl5w ~]$ ceph -v
> ceph version 17.2.5 (98318ae89f1a893a6ded3a640405cdbb33e08757) quincy (stable)
> 
> Here's the second buckets pool, constrained to HDDs.  AFAICT it can share the index and data_extra_pool created for the default / STANDARD pt/sc by Rook.  I initially omitted ec_overwrites but enabled it after creation.
> 
> pool 19 'ceph-objectstore.rgw.buckets.data' erasure profile ceph-objectstore.rgw.buckets.data_ecprofile size 6 min_size 5 crush_rule 10 object_hash rjenkins pg_num 8192 pgp_num 8192 autoscale_mode off last_change 165350 lfor 0/156300/165341 flags hashpspool,ec_overwrites stripe_width 16384 application rook-ceph-rgw
> pool 21 'ceph-objectstore.rgw.buckets.data.hdd' erasure profile ceph-objectstore.rgw.buckets.data_ecprofile_hdd size 6 min_size 5 crush_rule 11 object_hash rjenkins pg_num 8192 pgp_num 8192 autoscale_mode off last_change 167193 lfor 0/0/164453 flags hashpspool,ec_overwrites stripe_width 16384 application rook-ceph-rgw
> [rook@rook-ceph-tools-5ff8d58445-gkl5w ~]$
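> 
> For posterity, creating a pool like that would look roughly like this (k=4/m=2 inferred from size 6 and stripe_width 16384; a sketch, not my exact history):
> 
>     ceph osd erasure-code-profile set ceph-objectstore.rgw.buckets.data_ecprofile_hdd \
>         k=4 m=2 crush-device-class=hdd
>     ceph osd pool create ceph-objectstore.rgw.buckets.data.hdd 8192 8192 \
>         erasure ceph-objectstore.rgw.buckets.data_ecprofile_hdd
>     ceph osd pool set ceph-objectstore.rgw.buckets.data.hdd allow_ec_overwrites true
>     ceph osd pool application enable ceph-objectstore.rgw.buckets.data.hdd rook-ceph-rgw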
> 
> 
> [rook@rook-ceph-tools-5ff8d58445-gkl5w ~]$ radosgw-admin zonegroup get
> {
>    "id": "d994155c-2a9c-4e37-ae30-64fd2934ff99",
>    "name": "ceph-objectstore",
>    "api_name": "ceph-objectstore",
>    "is_master": "true",
>    "endpoints": [
>        "http://rook-ceph-rgw-ceph-objectstore.rook-ceph.svc:80";
>    ],
>    "hostnames": [],
>    "hostnames_s3website": [],
>    "master_zone": "72035401-a6d9-426b-8c89-9a17e268825f",
>    "zones": [
>        {
>            "id": "72035401-a6d9-426b-8c89-9a17e268825f",
>            "name": "ceph-objectstore",
>            "endpoints": [
>                "http://rook-ceph-rgw-ceph-objectstore.rook-ceph.svc:80";
>            ],
>            "log_meta": "false",
>            "log_data": "false",
>            "bucket_index_max_shards": 11,
>            "read_only": "false",
>            "tier_type": "",
>            "sync_from_all": "true",
>            "sync_from": [],
>            "redirect_zone": ""
>        }
>    ],
>    "placement_targets": [
>        {
>            "name": "HDD-EC",
>            "tags": [],
>            "storage_classes": [
>                "GLACIER"
>            ]
>        },
>        {
>            "name": "default-placement",
>            "tags": [],
>            "storage_classes": [
>                "STANDARD"
>            ]
>        }
>    ],
>    "default_placement": "default-placement",
>    "realm_id": "51fb8875-31ac-40ef-ab21-0ffd4e229f15",
>    "sync_policy": {
>        "groups": []
>    }
> }
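> 
> (In hindsight, the thing to compare against here is radosgw-admin period get: the running gateways serve from the committed period, which at this point did not yet include HDD-EC -- hence "could not find user default placement id" even though the zonegroup looks right above.)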
> 
> 
> 
> [rook@rook-ceph-tools-5ff8d58445-gkl5w ~]$ radosgw-admin zone get
> {
>    "id": "72035401-a6d9-426b-8c89-9a17e268825f",
>    "name": "ceph-objectstore",
>    "domain_root": "ceph-objectstore.rgw.meta:root",
>    "control_pool": "ceph-objectstore.rgw.control",
>    "gc_pool": "ceph-objectstore.rgw.log:gc",
>    "lc_pool": "ceph-objectstore.rgw.log:lc",
>    "log_pool": "ceph-objectstore.rgw.log",
>    "intent_log_pool": "ceph-objectstore.rgw.log:intent",
>    "usage_log_pool": "ceph-objectstore.rgw.log:usage",
>    "roles_pool": "ceph-objectstore.rgw.meta:roles",
>    "reshard_pool": "ceph-objectstore.rgw.log:reshard",
>    "user_keys_pool": "ceph-objectstore.rgw.meta:users.keys",
>    "user_email_pool": "ceph-objectstore.rgw.meta:users.email",
>    "user_swift_pool": "ceph-objectstore.rgw.meta:users.swift",
>    "user_uid_pool": "ceph-objectstore.rgw.meta:users.uid",
>    "otp_pool": "ceph-objectstore.rgw.otp",
>    "system_key": {
>        "access_key": "",
>        "secret_key": ""
>    },
>    "placement_pools": [
>        {
>            "key": "HDD-EC",
>            "val": {
>                "index_pool": "ceph-objectstore.rgw.buckets.index",
>                "storage_classes": {
>                    "GLACIER": {
>                        "data_pool": "ceph-objectstore.rgw.buckets.data.hdd"
>                    },
>                    "STANDARD": {}        # <------------- seems like this shouldn't be here?
>                },
>                "data_extra_pool": "ceph-objectstore.rgw.buckets.non-ec",
>                "index_type": 0
>            }
>        },
>        {
>            "key": "default-placement",
>            "val": {
>                "index_pool": "ceph-objectstore.rgw.buckets.index",
>                "storage_classes": {
>                    "STANDARD": {
>                        "data_pool": "ceph-objectstore.rgw.buckets.data"
>                    }
>                },
>                "data_extra_pool": "ceph-objectstore.rgw.buckets.non-ec",
>                "index_type": 0
>            }
>        }
>    ],
>    "realm_id": "",
>    "notif_pool": "ceph-objectstore.rgw.log:notif"
> }
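> 
> Re the stray empty STANDARD entry flagged above: I believe one way to drop it is to round-trip the zone JSON and commit again (untested on my part):
> 
>     radosgw-admin zone get > zone.json
>     # remove the empty "STANDARD": {} stanza under HDD-EC, then:
>     radosgw-admin zone set --infile zone.json
>     radosgw-admin period update --commit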
> 
> 
> 
> 
> [rook@rook-ceph-tools-5ff8d58445-gkl5w ~]$ radosgw-admin user info --uid=aad
> {
>    "user_id": "aad",
>    "display_name": "Anthony",
>    "email": "",
>    "suspended": 0,
>    "max_buckets": 1000,
>    "subusers": [],
>    "keys": [
>        {
>            "user": "aad",
>            "access_key": "xxxx",
>            "secret_key": "yyyyyy"
>        }
>    ],
>    "swift_keys": [],
>    "caps": [],
>    "op_mask": "read, write, delete",
>    "default_placement": "HDD-EC",
>    "default_storage_class": "GLACIER",
>    "placement_tags": [],
>    "bucket_quota": {
>        "enabled": false,
>        "check_on_raw": false,
>        "max_size": -1,
>        "max_size_kb": 0,
>        "max_objects": -1
>    },
>    "user_quota": {
>        "enabled": false,
>        "check_on_raw": false,
>        "max_size": -1,
>        "max_size_kb": 0,
>        "max_objects": -1
>    },
>    "temp_url_keys": [],
>    "type": "rgw",
>    "mfa_ids": []
> }
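> 
> For reference, the default_placement / default_storage_class above can be set with something like the following (flags from memory -- verify against "radosgw-admin help"):
> 
>     radosgw-admin user modify --uid aad --placement-id HDD-EC --storage-class GLACIER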
> 