On 10/06/2013 10:49 AM, Leen Besselink wrote:
>> Maybe it's worth mentioning that my OSDs are formatted as btrfs. I
>> don't think that btrfs has 13% overhead. Or does it?
>
> I would suggest you look at btrfs df, not df (never use df with btrfs),
> and btrfs subvolume list to see what btrfs is doing.
Here is the btrfs df; it seems like there are 56GB of metadata, which is
56/3000 = 1.86%:
root@cephtest1:/home/ali# btrfs filesystem df /var/lib/ceph/osd/ceph-0
Data: total=2.59TB, used=2.07TB
System, DUP: total=8.00MB, used=292.00KB
System: total=4.00MB, used=0.00
Metadata, DUP: total=56.00GB, used=4.41GB
Metadata: total=8.00MB, used=0.00
root@cephtest1:/home/ali# btrfs filesystem df /var/lib/ceph/osd/ceph-1
Data: total=2.62TB, used=2.31TB
System, DUP: total=8.00MB, used=296.00KB
System: total=4.00MB, used=0.00
Metadata, DUP: total=56.00GB, used=5.56GB
Metadata: total=8.00MB, used=0.00
root@cephtest1:/home/ali# btrfs filesystem df /var/lib/ceph/osd/ceph-2
Data: total=2.62TB, used=2.18TB
System, DUP: total=8.00MB, used=296.00KB
System: total=4.00MB, used=0.00
Metadata, DUP: total=56.00GB, used=4.01GB
Metadata: total=8.00MB, used=0.00
root@cephtest1:/home/ali# btrfs filesystem df /var/lib/ceph/osd/ceph-3
Data: total=2.62TB, used=2.19TB
System, DUP: total=8.00MB, used=296.00KB
System: total=4.00MB, used=0.00
Metadata, DUP: total=56.00GB, used=4.47GB
Metadata: total=8.00MB, used=0.00
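(For the record, that percentage is just the metadata "total" from the output above divided by the raw size of one 3TB disk, roughly 3000GB. A quick bc one-liner, purely illustrative:

echo "scale=4; 56 / 3000 * 100" | bc
1.8600

I have not accounted for the DUP profile here, so the actual on-disk footprint of the metadata may be somewhat larger.)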
> If I'm not mistaken, Ceph with btrfs uses snapshots as a way to do
> transactions instead of using a journal.
And you may be right, there are some snapshots there:
root@cephtest1:/home/ali# btrfs subvolume list /var/lib/ceph/osd/ceph-0
ID 235758 top level 5 path current
ID 235838 top level 5 path snap_4521823
ID 235839 top level 5 path snap_4521828
root@cephtest1:/home/ali# btrfs subvolume list /var/lib/ceph/osd/ceph-1
ID 238673 top level 5 path current
ID 238765 top level 5 path snap_5100630
ID 238766 top level 5 path snap_5100632
root@cephtest1:/home/ali# btrfs subvolume list /var/lib/ceph/osd/ceph-2
ID 229720 top level 5 path current
ID 229819 top level 5 path snap_4751456
ID 229820 top level 5 path snap_4751459
ID 229821 top level 5 path snap_4751464
root@cephtest1:/home/ali# btrfs subvolume list /var/lib/ceph/osd/ceph-3
ID 238743 top level 5 path current
ID 238828 top level 5 path snap_4873312
ID 238829 top level 5 path snap_4873315
> Who knows, maybe something failed and they didn't get cleaned up or
> something like that. I've never had a look at how it is handled, so
> I don't know what it looks like normally. But post some information
> on the list if you see something unusual; someone probably knows.
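If some of them do turn out to be orphaned, I guess the cleanup would just be removing the old subvolumes, but I have not tried that, and I assume you would only ever do it with the OSD stopped, since the OSD manages the snap_* subvolumes itself. Purely as a sketch (paths taken from my ceph-0 above; the service syntax depends on how the OSDs were deployed):

# stop the OSD first -- never touch the subvolumes while it is running
service ceph stop osd.0
# remove only a snapshot that is clearly stale; I assume the newest one
# is still needed by the OSD
btrfs subvolume delete /var/lib/ceph/osd/ceph-0/snap_4521823
service ceph start osd.0

I will leave mine alone until someone confirms whether two snap_* subvolumes per OSD is normal.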
Any more info that I can provide?
I have increased the nearfull ratio to 90% and restarted everything.
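For the archives, this is roughly the command I used to raise the threshold (quoting from memory, so treat it as a sketch rather than the exact invocation):

ceph pg set_nearfull_ratio 0.90

As far as I understand, the matching full threshold is changed with ceph pg set_full_ratio; I left that one at its default of 95%.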
After the restart I waited until the cluster was back to HEALTH_OK (while I
was writing this email), and df -h now looks like this:
# df -h
Filesystem                  Size  Used Avail Use% Mounted on
/dev/mapper/cephtest1-root  181G   19G  153G  11% /
udev                         48G  4.0K   48G   1% /dev
tmpfs                        19G  592K   19G   1% /run
none                        5.0M     0  5.0M   0% /run/lock
none                         48G     0   48G   0% /run/shm
/dev/sde1                   228M   27M  189M  13% /boot
/dev/sda                    2.8T  1.9T  816G  70% /var/lib/ceph/osd/ceph-0
/dev/sdb                    2.8T  2.4T  312G  89% /var/lib/ceph/osd/ceph-1
/dev/sdc                    2.8T  1.9T  845G  69% /var/lib/ceph/osd/ceph-2
/dev/sdd                    2.8T  2.0T  742G  73% /var/lib/ceph/osd/ceph-3
and
# ceph df
GLOBAL:
    SIZE       AVAIL     RAW USED     %RAW USED
    11178G     3130G     7626G        68.23
POOLS:
    NAME         ID     USED       %USED     OBJECTS
    data         0      0          0         0
    metadata     1      40122K     0         30
    rbd          2      3703G      33.14     478583
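If I read these numbers right, the raw vs. pool usage more or less adds up, assuming the rbd pool keeps 2 replicas (which I believe is my setup): 3703G x 2 = 7406G, and the remaining ~220G of the 7626G raw used would presumably be btrfs metadata, journals and other per-OSD overhead, though that last part is just my guess.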
All disks now have less utilization, except osd.1, which is still at 89%!