Hi,
Inodes look OK, 9% used:
#v+
/dev/sdc1 69752420 6227392 63525028 9% /tmp/sdc1
touch /tmp/sdc1/swfdedefd
touch: cannot touch `/tmp/sdc1/swfdedefd': No space left on device
root@dfs-s2:~# xfs_db -r "-c freesp -s" /dev/sdc1
   from      to extents   blocks    pct
      1       1   68631    68631   0.22
      2       3  220424   548648   1.73
      4       7  426549  2370963   7.47
      8      15 2224898 28577194  89.99
     16      31    8496   189768   0.60
total free extents 2948998
total free blocks 31755204
average free extent size 10.7681
#v-
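For what it's worth, my current understanding (please correct me if this is wrong): XFS allocates new inodes in chunks of 64, which with the default 256-byte inodes is a 16 KiB allocation that must come from a single, suitably aligned free extent, and the free-inode count in `df -i` largely reflects inodes that could still be created rather than free slots in existing chunks. So with free space this fragmented, touch can fail with ENOSPC even though `df -i` looks healthy. A sketch of the checks, using the device and mount point from this box:
#v+
xfs_info /tmp/sdc1                   # isize, bsize, agcount - an inode chunk
                                     # is 64 inodes, i.e. 64 * isize bytes
grep sdc1 /proc/mounts               # current mount options (inode64 or not)
df -i /tmp/sdc1                      # inode usage as seen by statfs
xfs_db -r -c "freesp -s" /dev/sdc1   # free-extent size histogram
#v-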
2013/12/14 Sean Crosby <richardnixonshead@xxxxxxxxx>
Since you are using XFS, you may have run out of inodes on the device and need to enable the inode64 option. What does `df -i` say?
Sean
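In case it helps anyone hitting this later, a sketch of how the inode64 suggestion could be applied, assuming the OSD mount point used elsewhere in this thread; on older kernels the option cannot be changed via remount and needs stopping the OSD plus a full umount/mount:
#v+
grep ceph-1 /proc/mounts                            # is inode64 already set?
mount -o remount,inode64 /var/lib/ceph/osd/ceph-1   # newer kernels only
# and make it persistent, e.g. in /etc/fstab:
#   /dev/sdc1  /var/lib/ceph/osd/ceph-1  xfs  rw,noatime,inode64  0  0
#v-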
On 13 December 2013 00:51, Łukasz Jagiełło <jagiello.lukasz@xxxxxxxxx> wrote:
Hi,
72 OSDs (12 servers with 6 OSDs per server) and 2000 placement groups. Replica factor is 3.
2013/12/12 Pierre BLONDEAU <pierre.blondeau@xxxxxxxxxx>
Hi,
How many OSDs do you have?
It could be a placement group problem:
http://article.gmane.org/gmane.comp.file-systems.ceph.user/2261/match=pierre+blondeau
Regards.
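For the placement group sizing question: the usual rule of thumb from the Ceph docs is roughly (number of OSDs x 100) / replica count, rounded to a power of two, so 72 OSDs with replica 3 gives 72 * 100 / 3 = 2400, i.e. 2048 or 4096 PGs; the 2000 mentioned earlier in the thread is in that ballpark. A sketch of how to check, with <poolname> as a placeholder:
#v+
ceph osd pool get <poolname> pg_num   # current placement group count
ceph osd pool get <poolname> size     # replica count for the pool
#v-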
On 10/12/2013 23:23, Łukasz Jagiełło wrote:
Hi,
Today my ceph cluster suffered from the following problem:
#v+
root@dfs-s1:/var/lib/ceph/osd/ceph-1# df -h | grep ceph-1
/dev/sdc1 559G 438G 122G 79% /var/lib/ceph/osd/ceph-1
#v-
The disk reports 122 GB of free space, which looks OK, but:
#v+
root@dfs-s1:/var/lib/ceph/osd/ceph-1# touch aaa
touch: cannot touch `aaa': No space left on device
#v-
Some more data:
#v+
root@dfs-s1:/var/lib/ceph/osd/ceph-1# mount | grep ceph-1
/dev/sdc1 on /var/lib/ceph/osd/ceph-1 type xfs (rw,noatime)
root@dfs-s1:/var/lib/ceph/osd/ceph-1# xfs_db -r "-c freesp -s" /dev/sdc1
   from      to extents   blocks    pct
      1       1  366476   366476   1.54
      2       3  466928  1133786   4.76
      4       7  536691  2901804  12.18
      8      15 1554873 19423430  81.52
total free extents 2924968
total free blocks 23825496
average free extent size 8.14556
root@dfs-s1:/var/lib/ceph/osd/ceph-1# xfs_db -c frag -r /dev/sdc1
actual 9043587, ideal 8926438, fragmentation factor 1.30%
#v-
Any possible reason for that, and how can I avoid it in the future? Someone
earlier mentioned it is a fragmentation problem, but with 122 GB free?
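One distinction that may matter here: `xfs_db -c frag` measures fragmentation of allocated files, while `freesp` measures fragmentation of the free space, and the two can look very different, as above (1.30% file fragmentation versus no free extent larger than 15 blocks). A sketch of the two checks; note that xfs_fsr only defragments files and does not directly consolidate free space, so it is not guaranteed to help with this symptom:
#v+
xfs_db -c frag -r /dev/sdc1            # allocated-file fragmentation
xfs_db -r -c "freesp -s" /dev/sdc1     # free-extent size histogram
# xfs_fsr -v /var/lib/ceph/osd/ceph-1  # defragments files only
#v-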
Best Regards
--
Łukasz Jagiełło
lukasz<at>jagiello<dot>org
--
----------------------------------------------
Pierre BLONDEAU
Systems & Network Administrator
Université de Caen
GREYC Laboratory, Computer Science Department
tel: 02 31 56 75 42
office: Campus 2, Science 3, 406
----------------------------------------------
--
Łukasz Jagiełło
lukasz<at>jagiello<dot>org
Łukasz Jagiełło
lukasz<at>jagiello<dot>org
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com