Re: Two questions

On 27 Jul 2011, at 18:15, Gregory Farnum
<gregory.farnum@xxxxxxxxxxxxx> wrote:

> 2011/7/27 Sławomir Skowron <szibis@xxxxxxxxx>:
>> Ok, I will show example:
>>
>> rados df
>> pool name                 KB      objects       clones     degraded      unfound           rd        rd KB           wr        wr KB
>> .log                  558212            5            0            0            0            0            0      2844888      2844888
>> .pool                      1            1            0            0            0            0            0            8            8
>> .rgw                       0            6            0            0            0            0            0            1            0
>> .users                     1            1            0            0            0            0            0            1            1
>> .users.email               1            1            0            0            0            0            0            1            1
>> .users.uid                 2            2            0            0            0            1            0            2            2
>> data                       0            0            0            0            0            0            0            0            0
>> metadata                   0            0            0            0            0            0            0            0            0
>> rbd                        0            0            0            0            0            0            0            0            0
>> sstest              32244226      2841055            0       653353            0            0            0     17066724     32370391
>>  total used       324792996      2841071
>>  total avail    31083452176
>>  total space    33043244460
>>
>> That means I have almost 3 million objects in sstest.
>>
>> pg_pool 7 'sstest' pg_pool(rep pg_size 3 crush_ruleset 0 object_hash
>> rjenkins pg_num 8 pgp_num 8 lpg_num 0 lpgp_num 0 last_change 21 owner
>> 0)
>>
>> 3 copies in this pool.
>>
>> sstest uses 32,244,226 KB + log 558,212 KB = 32,802,438 KB, so with 3
>> copies I would expect roughly 3 x 32,802,438 KB, i.e. about 94 GB on disk.
>>
>> Total used is 324,792,996 KB (about 310 GB), which is almost 10x the raw data.
>>
>> 2011-07-27 12:57:35.541556    pg v54158: 6986 pgs: 8 active, 6978
>> active+clean; 32104 MB data, 310 GB used, 29642 GB / 31512 GB avail;
>>
>> I am putting files of between 4 and 50 KB into RADOS via an S3 client and radosgw.
>>
>>
>> Can you explain what is happening here using this real-life example?
>
> Hmm, what underlying filesystem are you using? Do you have any logging
> enabled, and what disk is it logging to? Are all your OSDs running
> under the same OS, or are they in virtual machines?

I use ext4. I have logging enabled for the OSDs and MONs at level 20 for almost everything.
Every OSD runs the same version of Debian 6, booted from the network.
In this test case I use two machines with identical configuration and
operating system. They are not VMs.
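
For reference, the debug settings in my ceph.conf are roughly along these
lines (the exact list of debug subsystems and log paths is simplified here):

[osd]
        debug osd = 20
        debug filestore = 20
        log file = /var/log/osd/$name.log

[mon]
        debug mon = 20
        log file = /var/log/mon/$name.log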

> If I remember correctly, that "total used" count is generated by
> looking at df or something for the drives in question -- if there's
> other data on the same drive as the OSD, it'll get (admittedly
> incorrectly) counted as part of the "total used" by RADOS even if
> RADOS can't touch it.
> -Greg

Every machine has the same setup, which looks like this:

2 x 300 GB SAS -> hardware RAID1 -> root filesystem and ceph logs in
/var/log/(osd,mon)
12 x 2 TB SATA -> hardware RAID5 + 1 spare -> carved into 16 x 1 TB LUNs
(the usable RAID space is larger), each holding a ceph OSD mounted as ext4
at /data/osd.(osd id)

For this test only, each journal lives on the same LUN as its OSD, because two
SAS drives are not enough. In the real scenario the system will run on one
SAS drive for the OS, with journals on one SSD and the rest on SATA drives, or
with the system and journals on RAID1 SSD drives.
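
The per-OSD part of ceph.conf is roughly the following (paths simplified; for
this test the journal file just sits inside each OSD's own data LUN):

[osd]
        osd data = /data/osd.$id
        osd journal = /data/osd.$id/journal
        osd journal size = 512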

If I count the journals on both machines, that is 32 x 512 MB, which only adds
about 16 GB to the rados df calculation. Where does the rest come from?
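
To check the df-based accounting you describe, something like this (assuming
the /data/osd.* mount points from the layout above) should show how much of
the used space is actual object data versus journal and filesystem overhead:

# what the OSD filesystems themselves report as used (this is what should
# feed the "total used" figure)
df -k /data/osd.* | awk 'NR>1 {sum += $3} END {print sum " KB used per df"}'

# what the OSD data directories (objects + journal files) actually consume
du -sk /data/osd.*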

With regards
Slawomir Skowron

