Ceph experiences

Ceph newbie (three weeks).

Ceph 0.94.2, CentOS 6.6 x86_64, kernel 2.6.32. Twelve identical OSDs (1 TB each), three MONs, one active MDS, and two standby MDSes. 10GbE cluster network, 1GbE public network. I am using CephFS on a single client via the 4.1.1 kernel from elrepo, and using rsync to copy data to the Ceph file system (mostly small files). Only one client (me). All set up with ceph-deploy.
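(For context, the client is a plain kernel CephFS mount, something along these lines; "mon1" and the secret file path below are placeholders, not my actual values:

    # kernel CephFS mount; mon1 and the secretfile path are placeholders
    mount -t ceph mon1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
)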

For this test setup, the OSDs are on two quad-core 3.16 GHz hosts with 16 GB of memory each, six OSDs per node. Journals are on the OSD drives for now. The two hosts are not user-accessible, so they are doing mostly OSD duty only (they also carry some light-duty iSCSI targets).

First surprise: the OSD drives do not fill at the same rate. For example, when the Ceph file system was 71% full, one OSD went into a full state at 95%, while another OSD was only 51% full and a third was at 60%.
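For anyone comparing notes, the imbalance shows up in the per-OSD usage report, and my understanding from the docs (untested here so far) is that it can be nudged toward balance by reweighting:

    ceph osd df                            # per-OSD size, use, and %USE
    ceph osd reweight-by-utilization 110   # untried here: reweight OSDs above 110% of mean utilization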

Second surprise: one full OSD results in ENOSPC for *all* writes, even though there is plenty of space available on the other OSDs. I marked the full OSD as out to attempt a rebalance ("ceph osd out osd.0"). This appeared to be working, albeit very slowly. I stopped client writes.
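My reading of the docs (again, not something I have tried yet) is that the full threshold can be raised slightly, as a temporary measure, so that writes can proceed while the rebalance runs:

    ceph health detail            # identifies the full / near-full OSDs
    ceph pg set_full_ratio 0.97   # untried here: raise the full threshold from the default 0.95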

Third surprise: I restarted client writes after about an hour; data was still being written to the full OSD, but the full condition was no longer recognized; it reached 96% before I stopped client writes once more. That was yesterday evening; today it is down to 91%. The file system is not going to be usable until the rebalance completes (it looks like that will take days).
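For what it's worth, the rebalance progress can be followed with:

    ceph -w    # stream cluster events, including recovery/backfill progress
    ceph -s    # one-shot summary; the pgmap line shows degraded object counts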

I did not expect any of this. Any thoughts?

Steve
--
----------------------------------------------------------------------------
Steve Thompson                 E-mail:      smt AT vgersoft DOT com
Voyager Software LLC           Web:         http://www DOT vgersoft DOT com
39 Smugglers Path              VSW Support: support AT vgersoft DOT com
Ithaca, NY 14850
  "186,282 miles per second: it's not just a good idea, it's the law"
----------------------------------------------------------------------------