Re: Problem with radosgw in 0.37

Thank you very much. That solved the problem. I was looking in the source
code for this today, and I also found what I think is the self-documenting
code for it :)
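
(For reference, roughly the following grep turns these up; the exact flags
here are a guess, not the exact command I ran:)

    grep -H "OPTION(.*rgw" ./src/common/config_opts.h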

./src/common/config_opts.h:OPTION(debug_rgw, OPT_INT, 20)                        // log level for the Rados gateway
./src/common/config_opts.h:OPTION(rgw_cache_enabled, OPT_BOOL, false)            // rgw cache enabled
./src/common/config_opts.h:OPTION(rgw_cache_lru_size, OPT_INT, 10000)            // num of entries in rgw cache
./src/common/config_opts.h:OPTION(rgw_socket_path, OPT_STR, "")                  // path to unix domain socket, if not specified, rgw will not run as external fcgi
./src/common/config_opts.h:OPTION(rgw_dns_name, OPT_STR, "")
./src/common/config_opts.h:OPTION(rgw_swift_url, OPT_STR, "")                    //
./src/common/config_opts.h:OPTION(rgw_swift_url_prefix, OPT_STR, "swift")        //
./src/common/config_opts.h:OPTION(rgw_print_continue, OPT_BOOL, true)            // enable if 100-Continue works
./src/common/config_opts.h:OPTION(rgw_remote_addr_param, OPT_STR, "REMOTE_ADDR") // e.g. X-Forwarded-For, if you have a reverse proxy
./src/common/config_opts.h:OPTION(rgw_op_thread_timeout, OPT_INT, 10*60)
./src/common/config_opts.h:OPTION(rgw_op_thread_suicide_timeout, OPT_INT, 60*60)
./src/common/config_opts.h:OPTION(rgw_thread_pool_size, OPT_INT, 100)
./src/common/config_opts.h:OPTION(rgw_maintenance_tick_interval, OPT_DOUBLE, 10.0)
./src/common/config_opts.h:OPTION(rgw_pools_preallocate_max, OPT_INT, 100)
./src/common/config_opts.h:OPTION(rgw_pools_preallocate_threshold, OPT_INT, 70)
./src/common/config_opts.h:OPTION(rgw_log_nonexistent_bucket, OPT_BOOL, false)
./src/common/config_opts.h:OPTION(rgw_log_object_name, OPT_STR, "%Y-%m-%d-%H-%i-%n")     // man date to see codes (a subset are supported)
./src/common/config_opts.h:OPTION(rgw_log_object_name_utc, OPT_BOOL, false)
./src/common/config_opts.h:OPTION(rgw_intent_log_object_name, OPT_STR, "%Y-%m-%d-%i-%n") // man date to see codes (a subset are supported)
./src/common/config_opts.h:OPTION(rgw_intent_log_object_name_utc, OPT_BOOL, false)
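
As far as I can tell, these OPTION names map to ceph.conf keys by swapping
the underscores for spaces. A minimal sketch of the settings relevant here
(the [client.radosgw.gateway] section name is just an assumption from my
setup; per the advice below they can also go under [global] or [client]):

    [client.radosgw.gateway]
            rgw socket path = /var/run/radosgw.sock   ; the unix socket nginx talks to
            rgw print continue = false                ; the workaround suggested below
            rgw cache enabled = true                  ; the cache capability mentioned above (default false)
            debug rgw = 20                            ; gateway log level (20 is the default above)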

In theory, 100-continue should work with nginx, but I will dig into this
soon. For now, development can go on again.
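
For anyone curious, the nginx side is roughly this (a minimal sketch; the
hostname is a placeholder and none of this comes from any Ceph documentation):

    server {
            listen 80;
            server_name s3.example.com;              # placeholder hostname

            location / {
                    # radosgw speaks FastCGI over the unix socket it was started with
                    fastcgi_pass  unix:/var/run/radosgw.sock;
                    # standard params; fastcgi_params already passes REMOTE_ADDR,
                    # which matches the rgw_remote_addr_param default above
                    include       fastcgi_params;
            }
    }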

But there is no info in the changelog (or maybe I missed it) about the rgw
cache capability, and probably about many more features that could be
useful in other pieces of Ceph in the future.

Thanks again for the quick help.


2011/11/8 Yehuda Sadeh Weinraub <yehuda.sadeh@xxxxxxxxxxxxx>:
> I haven't run rgw over nginx for quite a while, so I'm not sure
> whether it can actually work. The problem that you're seeing might be
> related to the 100-continue processing, which is now enabled by
> default. Try turning it off by setting the following under the global
> (or client) section in your ceph.conf:
>
>        rgw print continue = false
>
> If it doesn't help we'll need to dig deeper. Thanks,
> Yehuda
>
> 2011/11/8 Sławomir Skowron <szibis@xxxxxxxxx>:
>> Maybe I have forgotten something, but there is no documentation about this.
>>
>> I have created a configuration with nginx and radosgw for S3.
>>
>> On top of radosgw stands nginx with caching capability. Everything was
>> OK in version 0.32 of Ceph. I have created a new filesystem with the
>> newest 0.37 version, and now I have some problems.
>>
>> I run radosgw like this:
>>
>> radosgw --rgw-socket-path=/var/run/radosgw.sock --conf=/etc/ceph/ceph.conf
>>
>> In nginx I talk to the unix socket of radosgw. Everything looks good. In
>> radosgw-admin I create a user, and it's OK.
>>
>> { "user_id": "0",
>>  "rados_uid": 0,
>>  "display_name": "ocd",
>>  "email": "",
>>  "suspended": 0,
>>  "subusers": [],
>>  "keys": [
>>        { "user": "0",
>>          "access_key": "CFLZFEPYUAZV4EZ1P8OJ",
>>          "secret_key": "HrWN8SNfjXPhUELPLIbRIA3nCfppQjJ5xV6EnhNM"}],
>>
>> pool name       category       KB   objects   clones   degraded   unfound    rd   rd KB    wr   wr KB
>> .intent-log     -               1         1        0          0         0     0       0     1       1
>> .log            -              19        18        0          0         0     0       0    39      39
>> .rgw            -              20        25        0          0         0     1       0    49      46
>> .rgw.buckets    -              19        19        0          0         0    32      15    45      23
>> .users          -               2         2        0          0         0     0       0     2       2
>> .users.email    -               1         1        0          0         0     0       0     1       1
>> .users.uid      -               4         4        0          0         0     3       1     5       5
>> data            -               0         0        0          0         0     0       0     0       0
>> metadata        -               0         0        0          0         0     0       0     0       0
>> rbd             -               0         0        0          0         0     0       0     0       0
>>  total used          540000           71
>>  total avail      174920736
>>  total space      175460736
>>
>> But when I test this with a local s3lib it does not work. In the nginx access log:
>>
>> 127.0.0.1 - - - [08/Nov/2011:09:03:48 +0100] "PUT /nodejs/test01 HTTP/1.1" rlength: 377 bsent: 270 rtime: 0.006 urtime: 0.004 status: 403 bbsent: 103 httpref: "-" useragent: "Mozilla/4.0 (Compatible; s3; libs3 2.0; Linux x86_64)"
>> 127.0.0.1 - - - [08/Nov/2011:09:03:55 +0100] "PUT /nodejs/test01 HTTP/1.1" rlength: 377 bsent: 270 rtime: 0.006 urtime: 0.004 status: 403 bbsent: 103 httpref: "-" useragent: "Mozilla/4.0 (Compatible; s3; libs3 2.0; Linux x86_64)"
>>
>> The request gets a 100 response code and then waits for something in radosgw.
>>
>> s3 -u put nodejs/test01 < /usr/src/libs3-2.0/TODO
>>
>> ERROR: ErrorAccessDenied
>>
>> From an external S3 client I get something like this. Somehow the bucket
>> was created, but other operations do not work, failing with access
>> denied.
>>
>> <?xml version='1.0' encoding='UTF-8'?>
>> <Error>
>>  <Code>AccessDenied</Code>
>> </Error>
>>
>> I have some output from s3lib, from many tries:
>>
>>                         Bucket                                 Created
>> --------------------------------------------------------  --------------------
>> nodejs                                                    2011-11-07T13:11:42Z
>> root@vm-10-177-48-24:/usr/src# s3 -u getacl nodejs
>> OwnerID 0 ocd
>>  Type    User Identifier                                                                     Permission
>> ------  ------------------------------------------------------------------------------------------  ------------
>> UserID  0 (ocd)                                                                                     FULL_CONTROL
>> root@vm-10-177-48-24:/usr/src# s3 -u test nodejs
>>                         Bucket                                  Status
>> --------------------------------------------------------  --------------------
>> nodejs                                                    USA
>> root@vm-10-177-48-24:/usr/src# s3 -u getacl nodejs
>> OwnerID 0 ocd
>>  Type    User Identifier                                                                     Permission
>> ------  ------------------------------------------------------------------------------------------  ------------
>> UserID  0 (ocd)                                                                                     FULL_CONTROL
>> root@vm-10-177-48-24:/usr/src# s3 -u list
>>                         Bucket                                 Created
>> --------------------------------------------------------  --------------------
>> nodejs                                                    2011-11-07T13:11:42Z
>>
>>
>> Ceph.conf
>>
>> ; global
>> [global]
>>        ; enable secure authentication
>>        auth supported = cephx
>>        keyring = /etc/ceph/keyring.bin
>>
>> ; monitors
>> ;  You need at least one.  You need at least three if you want to
>> ;  tolerate any node failures.  Always create an odd number.
>> [mon]
>>        mon data = /vol0/data/mon.$id
>>
>>        ; some minimal logging (just message traffic) to aid debugging
>>
>>        debug ms = 1     ; see message traffic
>>        debug mon = 0   ; monitor
>>        debug paxos = 0 ; monitor replication
>>        debug auth = 0  ;
>>
>>        mon allowed clock drift = 2
>>
>> [mon.0]
>>        host = vm-10-177-48-24
>>        mon addr = 10.177.48.24:6789
>>
>> ; osd
>> ;  You need at least one.  Two if you want data to be replicated.
>> ;  Define as many as you like.
>> [osd]
>>        ; This is where the btrfs volume will be mounted.
>>        osd data = /vol0/data/osd.$id
>>
>>        ; Ideally, make this a separate disk or partition.  A few GB
>>        ; is usually enough; more if you have fast disks.  You can use
>>        ; a file under the osd data dir if need be
>>        ; (e.g. /data/osd$id/journal), but it will be slower than a
>>        ; separate disk or partition.
>>        osd journal = /vol0/data/osd.$id/journal
>>
>>        ; If the OSD journal is a file, you need to specify the size, in MB.
>>        osd journal size = 512
>>
>>        filestore journal writeahead = 1
>>        osd heartbeat grace = 5
>>
>>        debug ms = 1         ; message traffic
>>        debug osd = 0
>>        debug filestore = 0 ; local object storage
>>        debug journal = 0   ; local journaling
>>        debug monc = 0
>>        debug rados = 0
>>
>> [osd.0]
>>        host = vm-10-177-48-24
>>        osd data = /vol0/data/osd.0
>>        keyring = /vol0/data/osd.0/keyring
>>
>> [osd.1]
>>        host = vm-10-177-48-24
>>        osd data = /vol0/data/osd.1
>>        keyring = /vol0/data/osd.1/keyring
>>
>>
>> radosgw-admin bucket stats --bucket=nodejs
>> { "bucket": "nodejs",
>>  "pool": ".rgw.buckets",
>>  "id": 10,
>>  "marker": "10",
>>  "owner": "0",
>>  "usage": { "rgw.main": { "size_kb": 4,
>>          "num_objects": 1},
>>      "rgw.shadow": { "size_kb": 4,
>>          "num_objects": 1}}}
>>
>> ceph -s
>> 2011-11-08 09:07:12.546715    pg v594: 460 pgs: 460 active+clean; 68 KB data, 527 MB used, 166 GB / 167 GB avail
>> 2011-11-08 09:07:12.547555   mds e1: 0/0/1 up
>> 2011-11-08 09:07:12.547573   osd e12: 2 osds: 2 up, 2 in
>> 2011-11-08 09:07:12.547626   log 2011-11-08 08:31:46.761320 osd.0 10.177.48.24:6800/12063 244 : [INF] 10.3 scrub ok
>> 2011-11-08 09:07:12.547709   mon e1: 1 mons at {0=10.177.48.24:6789/0}
>>
>>
>> --
>> -----
>> Regards
>>
>> Sławek "sZiBis" Skowron
>>
>



-- 
-----
Regards

Sławek "sZiBis" Skowron


