s3cmd does have special handling for 'US' and 'us-east-1' that skips
the LocationConstraint on bucket creation:
https://github.com/s3tools/s3cmd/blob/master/S3/S3.py#L380
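That special case explains the behavior discussed below: when bucket_location is 'US' (or 'us-east-1'), s3cmd sends no CreateBucketConfiguration body at all, so rgw never sees a constraint to validate. A simplified Python sketch of that logic (modeled loosely on the linked S3.py, not copied from it):

```python
# Simplified sketch of the request-body decision s3cmd makes on bucket
# creation (illustrative; see the linked S3.py for the real code).
def create_bucket_body(bucket_location):
    """Return the PUT-bucket request body, or "" when the location is
    the implicit default and no LocationConstraint should be sent."""
    # 'US' and 'us-east-1' mean "no explicit constraint": the body is
    # left empty, so the gateway has nothing to validate.
    if bucket_location.lower() in ("", "us", "us-east-1"):
        return ""
    return (
        '<CreateBucketConfiguration '
        'xmlns="http://s3.amazonaws.com/doc/2006-03-01/">'
        "<LocationConstraint>%s</LocationConstraint>"
        "</CreateBucketConfiguration>" % bucket_location
    )
```

This matches the symptom in the thread: 'US' works against any realm because nothing is sent, while a value like 'cn' or 'gd1' goes over the wire verbatim and must name something the gateway recognizes.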
On 02/26/2018 05:16 PM, David Turner wrote:
I just realized the difference between the internal realm, local realm, and local-atl realm. local-atl is a Luminous cluster while the other 2 are Jewel. It looks like that option was completely ignored in Jewel and now Luminous is taking it into account (which is better imo). I think you're right that 'us' is probably some sort of default in s3cmd that doesn't actually send the variable to the gateway.

Unfortunately we only allow https for rgw in the environments I have set up, but I think we found the cause of the initial randomness of things. Thanks Yehuda.
I don't know why 'us' works for you, but it could be that s3cmd is just not sending any location constraint when 'us' is set. You can try looking at a capture of the request to confirm; wireshark works for the capture (assuming an http endpoint and not https).

Yehuda
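For https-only environments like the one David describes, a packet capture won't show the request body. One workaround is to temporarily point the S3 client at a small local HTTP sink that dumps whatever it receives; a stdlib-only Python sketch (the port and names here are arbitrary, not anything s3cmd or rgw requires):

```python
# Minimal HTTP "sink" that records and prints each request's method,
# path, and body, so you can see exactly what LocationConstraint an
# S3 client sends. Point the client's endpoint at it temporarily.
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

captured = []  # (method, path, body) tuples for later inspection

class DumpHandler(BaseHTTPRequestHandler):
    def _dump(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length).decode("utf-8", "replace")
        captured.append((self.command, self.path, body))
        print(self.command, self.path)
        print(body)
        self.send_response(200)
        self.end_headers()

    do_PUT = do_GET = do_POST = do_DELETE = _dump

    def log_message(self, *args):  # silence the default access log
        pass

def run_sink(port=8081):
    """Start the sink on localhost in a background thread."""
    server = HTTPServer(("127.0.0.1", port), DumpHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

With the sink running, pointing s3cmd's host settings at localhost and retrying the bucket creation shows whether a CreateBucketConfiguration body is sent and what value it carries.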
On Mon, Feb 26, 2018 at 1:21 PM, David Turner <drakonstein@xxxxxxxxx> wrote:
> I set it to that for randomness. I don't have a zonegroup named 'us' either, but that works fine. I don't see why 'cn' should be any different. The bucket_location that triggered me noticing this was 'gd1'. I don't know where that one came from, but I don't see why we should force people to set it to 'us' when that has nothing to do with the realm. If it needed to be set to 'local-atl' that would make sense, but 'us' works just fine. Perhaps 'us' working is what shouldn't work, as opposed to whatever else being allowed to work.
>
> I tested setting bucket_location to 'local-atl' and it did successfully create the bucket. So the question becomes: why do my other realms not care what that value is set to, and why does this realm allow 'us' to be used when it isn't correct?
>
> On Mon, Feb 26, 2018 at 4:12 PM Yehuda Sadeh-Weinraub <yehuda@xxxxxxxxxx> wrote:
>>
>> If that's what you set in the config file, I assume that's what's passed in. Why did you set that in your config file? You don't have a zonegroup named 'cn', right?
>>
>> On Mon, Feb 26, 2018 at 1:10 PM, David Turner <drakonstein@xxxxxxxxx> wrote:
>> > I'm also not certain how to do the tcpdump for this. Do you have any pointers to how to capture that for you?
>> >
>> > On Mon, Feb 26, 2018 at 4:09 PM David Turner <drakonstein@xxxxxxxxx> wrote:
>> >>
>> >> That's what I set it to in the config file. I probably should have mentioned that.
>> >>
>> >> On Mon, Feb 26, 2018 at 4:07 PM Yehuda Sadeh-Weinraub <yehuda@xxxxxxxxxx> wrote:
>> >>>
>> >>> According to the log here, it says that the location constraint it got is "cn". Can you take a look at a tcpdump and see if that's actually what's passed in?
>> >>>
>> >>> On Mon, Feb 26, 2018 at 12:02 PM, David Turner <drakonstein@xxxxxxxxx> wrote:
>> >>> > I ran with `debug rgw = 10` and was able to find these lines at the end of a request to create the bucket.
>> >>> >
>> >>> > Successfully creating a bucket with `bucket_location = US` looks like [1]this. Failing to create a bucket shows "ERROR: S3 error: 400 (InvalidLocationConstraint): The specified location-constraint is not valid" on the CLI and [2]this (excerpt from the end of the request) in the rgw log (debug level 10). "create bucket location constraint" was not found in the log for the successful bucket creation.
>> >>> >
>> >>> >
>> >>> > [1]
>> >>> > 2018-02-26 19:52:36.419251 7f4bc9bc8700 10 cache put: name=local-atl.rgw.data.root++.bucket.meta.testerton:bef43c26-daf3-47ef-a3a5-e1167e3f88ac.39099765.1 info.flags=0x17
>> >>> > 2018-02-26 19:52:36.419262 7f4bc9bc8700 10 adding local-atl.rgw.data.root++.bucket.meta.testerton:bef43c26-daf3-47ef-a3a5-e1167e3f88ac.39099765.1 to cache LRU end
>> >>> > 2018-02-26 19:52:36.419266 7f4bc9bc8700 10 updating xattr: name=user.rgw.acl bl.length()=141
>> >>> > 2018-02-26 19:52:36.423863 7f4bc9bc8700 10 RGWWatcher::handle_notify() notify_id 344855809097728 cookie 139963970426880 notifier 39099765 bl.length()=361
>> >>> > 2018-02-26 19:52:36.423875 7f4bc9bc8700 10 cache put: name=local-atl.rgw.data.root++testerton info.flags=0x17
>> >>> > 2018-02-26 19:52:36.423882 7f4bc9bc8700 10 adding local-atl.rgw.data.root++testerton to cache LRU end
>> >>> >
>> >>> > [2]
>> >>> > 2018-02-26 19:43:37.340289 7f466bbca700 2 req 428078:0.004204:s3:PUT /testraint/:create_bucket:executing
>> >>> > 2018-02-26 19:43:37.340366 7f466bbca700 5 NOTICE: call to do_aws4_auth_completion
>> >>> > 2018-02-26 19:43:37.340472 7f466bbca700 10 v4 auth ok -- do_aws4_auth_completion
>> >>> > 2018-02-26 19:43:37.340715 7f466bbca700 10 create bucket location constraint: cn
>> >>> > 2018-02-26 19:43:37.340766 7f466bbca700 0 location constraint (cn) can't be found.
>> >>> > 2018-02-26 19:43:37.340794 7f466bbca700 2 req 428078:0.004701:s3:PUT /testraint/:create_bucket:completing
>> >>> > 2018-02-26 19:43:37.341782 7f466bbca700 2 req 428078:0.005689:s3:PUT /testraint/:create_bucket:op status=-2208
>> >>> > 2018-02-26 19:43:37.341792 7f466bbca700 2 req 428078:0.005707:s3:PUT /testraint/:create_bucket:http status=400
>> >>> >
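The "location constraint (cn) can't be found" line in excerpt [2] reflects the validation Luminous performs on create_bucket: the constraint sent by the client has to match a known zonegroup (or a zonegroup's api_name), and an empty constraint falls through to the default placement. A rough Python model of that check (the function and data layout are invented for illustration; the real check lives in rgw's C++ create-bucket path):

```python
# Illustrative model of the Luminous-era check behind
# "location constraint (cn) can't be found." -- the names here are
# made up for the sketch, not rgw's actual identifiers.
def location_constraint_ok(constraint, zonegroups):
    """zonegroups: mapping of zonegroup name -> api_name."""
    if not constraint:           # no body sent (e.g. s3cmd with 'US')
        return True              # fall back to the default placement
    for name, api_name in zonegroups.items():
        if constraint in (name, api_name):
            return True
    return False                 # rgw answers 400 InvalidLocationConstraint
```

This also fits David's observation that Jewel realms accept anything: Jewel simply never performed the check, so the constraint value was ignored there.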
>> >>> > On Mon, Feb 26, 2018 at 2:36 PM Yehuda Sadeh-Weinraub <yehuda@xxxxxxxxxx> wrote:
>> >>> >>
>> >>> >> I'm not sure if the rgw logs (debug rgw = 20) specify explicitly why a bucket creation is rejected in these cases, but it might be worth trying to look at them. If not, then a tcpdump of the specific failed request might shed some light (it would be interesting to look at the generated LocationConstraint).
>> >>> >>
>> >>> >> Yehuda
>> >>> >>
>> >>> >> On Mon, Feb 26, 2018 at 11:29 AM, David Turner <drakonstein@xxxxxxxxx> wrote:
>> >>> >> > Our problem only appeared to be present in bucket creation. Listing, putting, etc. objects in a bucket work just fine regardless of the bucket_location setting. I ran this test on a few different realms to see what would happen and only 1 of them had a problem. There isn't an obvious thing that stands out about it. The 2 local realms do not have multi-site; the internal realm has multi-site and the operations were performed on the primary zone for the zonegroup.
>> >>> >> >
>> >>> >> > Worked with non-'US' bucket_location for s3cmd to create bucket:
>> >>> >> > realm=internal
>> >>> >> > zonegroup=internal-ga
>> >>> >> > zone=internal-atl
>> >>> >> >
>> >>> >> > Failed with non-'US' bucket_location for s3cmd to create bucket:
>> >>> >> > realm=local-atl
>> >>> >> > zonegroup=local-atl
>> >>> >> > zone=local-atl
>> >>> >> >
>> >>> >> > Worked with non-'US' bucket_location for s3cmd to create bucket:
>> >>> >> > realm=local
>> >>> >> > zonegroup=local
>> >>> >> > zone=local
>> >>> >> >
>> >>> >> > I was thinking it might have to do with all of the parts being named the same, but I made sure to do the last test to confirm. Interestingly, it's only bucket creation that has a problem, and it's fine as long as I put 'US' as the bucket_location.
>> >>> >> >
>> >>> >> > On Mon, Feb 19, 2018 at 6:48 PM F21 <f21.groups@xxxxxxxxx> wrote:
>> >>> >> >>
>> >>> >> >> I am using the official ceph/daemon docker image. It starts RGW and creates a zonegroup and zone with their names set to an empty string:
>> >>> >> >>
>> >>> >> >> https://github.com/ceph/ceph-container/blob/master/ceph-releases/luminous/ubuntu/16.04/daemon/start_rgw.sh#L36:54
>> >>> >> >>
>> >>> >> >> $RGW_ZONEGROUP and $RGW_ZONE are both empty strings by default:
>> >>> >> >>
>> >>> >> >> https://github.com/ceph/ceph-container/blob/master/ceph-releases/luminous/ubuntu/16.04/daemon/variables_entrypoint.sh#L46
>> >>> >> >>
>> >>> >> >> Here's what I get when I query RGW:
>> >>> >> >>
>> >>> >> >> $ radosgw-admin zonegroup list
>> >>> >> >> {
>> >>> >> >>     "default_info": "",
>> >>> >> >>     "zonegroups": [
>> >>> >> >>         "default"
>> >>> >> >>     ]
>> >>> >> >> }
>> >>> >> >>
>> >>> >> >> $ radosgw-admin zone list
>> >>> >> >> {
>> >>> >> >>     "default_info": "",
>> >>> >> >>     "zones": [
>> >>> >> >>         "default"
>> >>> >> >>     ]
>> >>> >> >> }
>> >>> >> >>
>> >>> >> >> On 20/02/2018 10:33 AM, Yehuda Sadeh-Weinraub wrote:
>> >>> >> >> > What is the name of your zonegroup?
>> >>> >> >> >
>> >>> >> >> > On Mon, Feb 19, 2018 at 3:29 PM, F21 <f21.groups@xxxxxxxxx> wrote:
>> >>> >> >> >> I've done some debugging and the LocationConstraint is not being set by the SDK by default.
>> >>> >> >> >>
>> >>> >> >> >> I do, however, need to set the region on the client to us-east-1 for it to work. Anything else will return an InvalidLocationConstraint error.
>> >>> >> >> >>
>> >>> >> >> >> Francis
>> >>> >> >> >>
>> >>> >> >> >>
>> >>> >> >> >> On 20/02/2018 8:40 AM, Yehuda Sadeh-Weinraub wrote:
>> >>> >> >> >>> Sounds like the go sdk adds a location constraint to requests that don't go to us-east-1. RGW itself definitely isn't tied to us-east-1, and does not know anything about it (unless you happen to have a zonegroup named us-east-1). Maybe there's a way to configure the sdk to avoid doing that?
>> >>> >> >> >>>
>> >>> >> >> >>> Yehuda
>> >>> >> >> >>>
>> >>> >> >> >>> On Sun, Feb 18, 2018 at 1:54 PM, F21 <f21.groups@xxxxxxxxx> wrote:
>> >>> >> >> >>>> I am using the AWS Go SDK v2 (https://github.com/aws/aws-sdk-go-v2) to talk to my RGW instance using the s3 interface. I am running ceph in docker using the ceph/daemon docker images in demo mode. The RGW is started with a zonegroup and zone with their names set to an empty string by the scripts in the image.
>> >>> >> >> >>>>
>> >>> >> >> >>>> I have ForcePathStyle for the client set to true, because I want to access all my buckets using the path: myrgw.instance:8080/somebucket.
>> >>> >> >> >>>>
>> >>> >> >> >>>> I noticed that if I set the region for the client to anything other than us-east-1, I get this error when creating a bucket: InvalidLocationConstraint: The specified location-constraint is not valid.
>> >>> >> >> >>>>
>> >>> >> >> >>>> If I set the region in the client to something made up, such as "ceph", and the LocationConstraint to "ceph", I still get the same error.
>> >>> >> >> >>>>
>> >>> >> >> >>>> The only way to get my buckets to create successfully is to set the client's region to us-east-1. I have grepped the ceph code base and cannot find any references to us-east-1. In addition, I looked at the AWS docs for calculating v4 signatures: us-east-1 is the default region, but I can see that the region string is used in the calculation (i.e. the region is not ignored when calculating the signature if it is set to us-east-1).
>> >>> >> >> >>>>
>> >>> >> >> >>>> Why do my buckets create successfully if I set the region in my s3 client to us-east-1, but not otherwise? If I do not want to use us-east-1 as my default region, for example, if I want us-west-1 as my default region, what should I be configuring in ceph?
>> >>> >> >> >>>>
>> >>> >> >> >>>> Thanks,
>> >>> >> >> >>>>
>> >>> >> >> >>>> Francis
>> >>> >> >> >>>>
>> >>> >> >> >>>> _______________________________________________
>> >>> >> >> >>>> ceph-users mailing list
>> >>> >> >> >>>> ceph-users@xxxxxxxxxxxxxx
>> >>> >> >> >>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com