Re: Two questions

OK, I will show an example:

rados df
pool name       KB           objects  clones  degraded  unfound  rd  rd KB  wr        wr KB
.log            558212       5        0       0         0        0   0      2844888   2844888
.pool           1            1        0       0         0        0   0      8         8
.rgw            0            6        0       0         0        0   0      1         0
.users          1            1        0       0         0        0   0      1         1
.users.email    1            1        0       0         0        0   0      1         1
.users.uid      2            2        0       0         0        1   0      2         2
data            0            0        0       0         0        0   0      0         0
metadata        0            0        0       0         0        0   0      0         0
rbd             0            0        0       0         0        0   0      0         0
sstest          32244226     2841055  0       653353    0        0   0      17066724  32370391
  total used    324792996    2841071
  total avail   31083452176
  total space   33043244460

That means I have almost 3 million objects in sstest.
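
A quick check with the figures above gives the average object size
(just Python arithmetic on the rados df numbers):

# Average object size in the sstest pool, from the rados df figures above.
sstest_kb      = 32244226   # KB column for sstest
sstest_objects = 2841055    # objects column for sstest
print("average object size: %.1f KB" % (sstest_kb / float(sstest_objects)))  # ~11.3 KB

which fits the 4-50 KB files I am uploading (see below).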

pg_pool 7 'sstest' pg_pool(rep pg_size 3 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 lpg_num 0 lpgp_num 0 last_change 21 owner 0)

So there are 3 copies of each object in this pool.

sstest uses 32,244,226 KB + .log uses 558,212 KB = 32,802,438 KB of data.

Total used is 324,792,996 KB, which is almost 10x more.
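
Spelling the arithmetic out (replica count taken from the pg_pool line,
KB figures from rados df; this is only back-of-the-envelope math):

# Rough sanity check of the usage numbers above.
sstest_kb = 32244226    # logical data in sstest
log_kb    = 558212      # logical data in .log
replicas  = 3           # rep pg_size from the pg_pool line

expected_raw_kb = (sstest_kb + log_kb) * replicas
reported_raw_kb = 324792996   # "total used" from rados df

print("expected with 3x replication: %d KB (~%.0f GB)" % (expected_raw_kb, expected_raw_kb / 1024.0**2))
print("reported total used:          %d KB (~%.0f GB)" % (reported_raw_kb, reported_raw_kb / 1024.0**2))
print("reported / expected:          %.1fx" % (reported_raw_kb / float(expected_raw_kb)))

So even with the 3 copies accounted for I would expect roughly 94 GB, while
the cluster reports about 310 GB used (the same 310 GB as in the status line
below), still more than 3x what I would expect.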

2011-07-27 12:57:35.541556    pg v54158: 6986 pgs: 8 active, 6978 active+clean; 32104 MB data, 310 GB used, 29642 GB / 31512 GB avail;

I'm putting files between 4 and 50 KB into RADOS via an S3 client and radosgw.
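
The uploads are done roughly like this (a minimal boto sketch; the endpoint,
credentials and bucket name are placeholders, not my real setup):

# Sketch of how the small (4-50 KB) objects are uploaded to radosgw over S3.
# Host, keys and bucket name below are placeholders.
import os
from boto.s3.connection import S3Connection, OrdinaryCallingFormat

conn = S3Connection(
    aws_access_key_id='ACCESS_KEY',        # placeholder
    aws_secret_access_key='SECRET_KEY',    # placeholder
    host='radosgw.example.com',            # radosgw behind apache2
    is_secure=False,
    calling_format=OrdinaryCallingFormat(),
)

bucket = conn.create_bucket('testbucket')  # placeholder bucket name

for i in range(100):
    key = bucket.new_key('small/object-%06d' % i)
    key.set_contents_from_string(os.urandom(16 * 1024))  # ~16 KB payload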


Can you explain this to me using this real-life example?


2011/7/27 Wido den Hollander <wido@xxxxxxxxx>:
> Hi,
>
> On Wed, 2011-07-27 at 12:19 +0200, Sławomir Skowron wrote:
>> Thanks.
>>
>> 2011/7/27 Wido den Hollander <wido@xxxxxxxxx>:
>> > Hi,
>> >
>> > On Wed, 2011-07-27 at 07:58 +0200, Sławomir Skowron wrote:
>> >> Hello. I have some questions.
>> >>
>> >> 1. Is there any chance to change the default 4MB object size to, for
>> >> example, 1MB or less?
>> >
>> > If you are using the filesystem, you can change the stripe-size per
>> > directory with the "cephfs" tool.
>>
>> Unfortunately I only use the RADOS layer in this case, and I don't even
>> have an MDS :) Is there any chance to change this at the RADOS layer?
>
> The RADOS gateway does NOT stripe over multiple objects. If you upload a
> 1GB object through the RADOS gateway it will be a 1GB object.
>
> Only the RBD (Rados Block Device) and the Ceph filesystem do striping
> over RADOS objects.
>
> RADOS itself doesn't stripe, and the RADOS gateway is an almost 1:1
> mapping of pools and objects.
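
(Just to picture the difference for myself: a striping client like RBD or the
filesystem would split a logical byte range over fixed-size 4 MB objects,
while radosgw stores each uploaded file as one object. A rough illustration,
names made up:

# Illustration only: how a striping client (RBD / Ceph fs) maps a logical
# offset onto fixed-size RADOS objects. radosgw does not do this; each
# uploaded file becomes a single RADOS object.
OBJECT_SIZE = 4 * 1024 * 1024   # default 4 MB object size

def locate(offset):
    """Return (object index, offset inside that object) for a logical offset."""
    return offset // OBJECT_SIZE, offset % OBJECT_SIZE

print(locate(0))                      # (0, 0)
print(locate(5 * 1024 * 1024))        # (1, 1048576)
print(locate(10 * 1024 * 1024 - 1))   # (2, 2097151)
)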
>
> Wido
>
>>
>> >>
>> >> 2. I have created a cluster of two mons and 32 OSDs (1 TB each) on two
>> >> machines, with radosgw and apache2 on top for testing. When I put data
>> >> from the S3 client into RADOS, everything is OK, but when one of the OSDs
>> >> goes down, all S3 clients freeze for a few seconds before Ceph marks that
>> >> OSD as down, and then everything starts working again with the degraded
>> >> OSD. How can I tune this marking-as-down to take milliseconds, or 0 :),
>> >> instead of a multi-second freeze?
>> >
>> > Normally an OSD should be marked down within about 10 seconds and shouldn't
>> > affect all I/O operations.
>> >
>> > By default it takes 300 seconds before an OSD gets marked as "out"; this
>> > is handled by "mon osd down out interval".
>> >
>> > If you want to change the "down" reporting behaviour you could do this
>> > with "osd min down reporters" and "osd min down reports". I'm not sure
>> > you really want to do this, as the default values should be ok.
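
In ceph.conf I guess that would look something like this (option names as
quoted above, the values are only illustrative, not a recommendation):

[global]
    ; time an OSD may stay "down" before it is marked "out" (300 s by default)
    mon osd down out interval = 300

    ; example values only -- how many reports/reporters are needed before
    ; a peer OSD is marked down
    osd min down reporters = 1
    osd min down reports = 3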
>>
>> Ok. I will check this.
>>
>> > Wido
>> >
>> >>
>> >> My setup is debian 6.0, kernel 2.6.37.6  x86_64 and ceph version
>> >> 0.31-10 from stable repo.
>> >>
>> >> Best regards
>> >>
>> >> Slawomir Skowron
>> >
>> >
>> >
>>
>
>
>



-- 
-----
Best regards

Sławek "sZiBis" Skowron
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

