Re: Two questions

2011/7/28 Sławomir Skowron <szibis@xxxxxxxxx>:
> On 27 Jul 2011 at 18:15, Gregory Farnum
> <gregory.farnum@xxxxxxxxxxxxx> wrote:
>
>> 2011/7/27 Sławomir Skowron <szibis@xxxxxxxxx>:
>>> OK, I will show an example:
>>>
>>> rados df
>>> pool name            KB   objects  clones  degraded  unfound  rd  rd KB        wr      wr KB
>>> .log             558212         5       0         0        0   0      0   2844888    2844888
>>> .pool                 1         1       0         0        0   0      0         8          8
>>> .rgw                  0         6       0         0        0   0      0         1          0
>>> .users                1         1       0         0        0   0      0         1          1
>>> .users.email          1         1       0         0        0   0      0         1          1
>>> .users.uid            2         2       0         0        0   1      0         2          2
>>> data                  0         0       0         0        0   0      0         0          0
>>> metadata              0         0       0         0        0   0      0         0          0
>>> rbd                   0         0       0         0        0   0      0         0          0
>>> sstest         32244226   2841055       0    653353        0   0      0  17066724   32370391
>>>   total used      324792996   2841071
>>>   total avail   31083452176
>>>   total space   33043244460
>>>
>>> That means I have almost 3 million objects in sstest.
>>>
>>> pg_pool 7 'sstest' pg_pool(rep pg_size 3 crush_ruleset 0 object_hash
>>> rjenkins pg_num 8 pgp_num 8 lpg_num 0 lpgp_num 0 last_change 21 owner
>>> 0)
>>>
>>> 3 copies in this pool.
>>>
>>> sstest used 32,244,226 KB + log 558,212 KB = 32,802,438 KB.
>>>
>>> Total used is 324,792,996 KB, which is almost 10x more.
>>>
>>> 2011-07-27 12:57:35.541556    pg v54158: 6986 pgs: 8 active, 6978
>>> active+clean; 32104 MB data, 310 GB used, 29642 GB / 31512 GB avail;
>>>
>>> I am putting files between 4-50 KB into RADOS via an S3 client and radosgw.
>>>
>>>
>>> Can you explain this to me using this real-life example?
>>
>> Hmm, what underlying filesystem are you using? Do you have any logging
>> enabled, and what disk is it logging to? Are all your OSDs running
>> under the same OS, or are they in virtual machines?
>
> I use ext4. I have logging enabled on the OSDs and MONs for almost everything at level 20.
> Every OSD is running the same version of Debian 6, booted from the network.
> In my test case I use two machines with the same configuration and
> system.
> They are not VMs.
>
>> If I remember correctly, that "total used" count is generated by
>> looking at df or something for the drives in question -- if there's
>> other data on the same drive as the OSD, it'll get (admittedly
>> incorrectly) counted as part of the "total used" by RADOS even if
>> RADOS can't touch it.
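
Right, so "total used" is basically a df over the filesystems the OSDs sit on. Something like this (only a rough sketch, with a hypothetical /data/osd.0 path, not Ceph code) shows the difference between the filesystem-level "used" and what the OSD directory alone holds:

#!/usr/bin/env python
# Rough illustration (not Ceph code): compare filesystem-level "used" space,
# which is what a df-style "total used" reflects, with the space actually
# taken by the OSD's own data directory. Paths are hypothetical examples.
import os

OSD_DIR = "/data/osd.0"   # hypothetical OSD data directory

def fs_used_kb(path):
    """Used space of the whole filesystem containing 'path', in KB (like df)."""
    st = os.statvfs(path)
    return (st.f_blocks - st.f_bfree) * st.f_frsize // 1024

def dir_used_kb(path):
    """Space taken by files under 'path' only, in KB (like du -s)."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # a file may vanish while walking
    return total // 1024

if __name__ == "__main__":
    fs_kb = fs_used_kb(OSD_DIR)
    osd_kb = dir_used_kb(OSD_DIR)
    print("filesystem used: %d KB" % fs_kb)
    print("OSD data only:   %d KB" % osd_kb)
    print("other data on the same filesystem (e.g. logs): %d KB" % (fs_kb - osd_kb))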

When you wrote this I checked something, and I think it is what you suggested.

Because of my earlier tests I mounted the ext4 filesystems at
/data/osd.(osd id), but /data was a symlink to /var/data/, so I think
the total used space was inflated by the size of /var, which holds the
logs, lots of logs :). Ceph produces many logs at that verbosity. Tell me if I'm wrong.
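
A quick way to check the symlink theory (a sketch only, with hypothetical paths for my layout): if the OSD data directory and /var/log end up on the same filesystem, then the statfs that feeds "total used" counts the logs too.

#!/usr/bin/env python
# Quick check (not Ceph code): does /data really resolve into /var, and do the
# OSD data directory and the log directory live on the same filesystem?
# Paths are hypothetical examples for this setup.
import os

osd_dir = "/data/osd.0"    # hypothetical OSD data directory
log_dir = "/var/log/osd"   # hypothetical log directory

print("real path of /data: %s" % os.path.realpath("/data"))

# st_dev identifies the filesystem a path lives on; if these match, everything
# under log_dir is counted in the same "used" figure as the OSD data.
same_fs = os.stat(osd_dir).st_dev == os.stat(log_dir).st_dev
print("OSD data and logs on the same filesystem: %s" % same_fs)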

Now it looks like this, and it looks better :)

2011-07-28 11:44:08.227278    pg v110939: 6986 pgs: 8 active, 6978
active+clean; 42441 MB data, 223 GB used, 29457 GB / 31240 GB avail

rados df
pool name            KB   objects  clones  degraded  unfound  rd  rd KB        wr      wr KB
.log             694273         6       0         0        0   0      0   3539909    3539909
.pool                 1         1       0         0        0   0      0         8          8
.rgw                  0         6       0         0        0   0      0         1          0
.users                1         1       0         0        0   0      0         1          1
.users.email          1         1       0         0        0   0      0         1          1
.users.uid            2         2       0         0        0   1      0         2          2
data                  0         0       0         0        0   0      0         0          0
metadata              0         0       0         0        0   0      0         0          0
rbd                   0         0       0         0        0   0      0         0          0
sstest         42766318   3483690       0         0        0   0      0  20922736   42892546
  total used      234415408   3483707
  total avail   30888365376
  total space   32757780072
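
Rough arithmetic on the numbers above (only a back-of-the-envelope sketch: 3 replicas as in the pool settings, plus the 32 x 512 MB journals mentioned further down):

#!/usr/bin/env python
# Back-of-the-envelope check of the numbers above (all values in KB, taken
# from the rados df output and the journal sizing mentioned below).
replicas = 3                      # pg_size 3 for the sstest pool
sstest_kb = 42766318              # sstest pool
log_kb = 694273                   # .log pool
journals_kb = 32 * 512 * 1024     # 32 journals x 512 MB
reported_used_kb = 234415408      # "total used" from rados df

expected_kb = (sstest_kb + log_kb) * replicas + journals_kb
print("expected ~ %d KB (~%.0f GB)" % (expected_kb, expected_kb / 1024.0**2))
print("reported   %d KB (~%.0f GB)" % (reported_used_kb, reported_used_kb / 1024.0**2))
print("difference %d KB (~%.0f GB)" % (reported_used_kb - expected_kb,
                                       (reported_used_kb - expected_kb) / 1024.0**2))

Whatever is left over is not explained by data plus journals alone in this simple check.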

>> -Greg
>
> Every machine has the same setup, which looks like this:
>
> 2 x 300 GB SAS -> hardware RAID1 -> root filesystem and Ceph logs in
> /var/log/(osd,mon)
> 12 x 2 TB SATA -> hardware RAID5 + 1 spare -> 16 x 1 TB LUNs (the usable
> RAID space is bigger) -> Ceph OSDs mounted as ext4 at /data/osd.(osd id)
>
> For the test only, I run the journal on the same LUN as the OSD, because
> two SAS drives are not enough. In the real scenario the system will run
> with one SAS drive for the OS, journals on one SSD, and the rest on SATA
> drives, or with the system and journals on RAID 1 SSD drives.
>
> If I count the journals on the two machines, that is 32 x 512 MB, which
> adds only 16 GB to the rados df calculation. Where does the rest come from?
>
> With regards
> Slawomir Skowron
>



-- 
-----
Best regards

Sławek "sZiBis" Skowron