Re: Ceph experiences

Congratulations on getting your cluster up and running. Many of us
have seen this distribution issue on smaller clusters. More PGs and
more OSDs help: a 100-OSD configuration balances much better than a
12-OSD system.
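
As a rough sanity check (the pool name below is only a guess; use
"ceph df" to see yours), you can compare your PG count against the
common rule of thumb of roughly 100 PGs per OSD across all pools:

  ceph osd pool get cephfs_data pg_num
  ceph osd pool set cephfs_data pg_num 512
  ceph osd pool set cephfs_data pgp_num 512

Keep in mind that pg_num can only be increased, never decreased, and
raising it causes a fair amount of data movement of its own.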

Ceph tries to protect your data, so a single full OSD shuts off
writes. The CRUSH rules Ceph uses to distribute data are
deterministic, so unless you change something, such as the number of
PGs or the OSD weights, it will keep overfilling some OSDs and
under-filling others. Look at the ceph osd reweight-by-utilization
command.
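
Something along these lines is a reasonable starting point (the 110
threshold is only an example; it means only OSDs more than 10% above
the average utilization get their weights reduced):

  ceph osd df                            # per-OSD utilization, if your
                                         # release has this command
  ceph osd reweight-by-utilization 110

You can also nudge a single OSD by hand, e.g. "ceph osd reweight 0
0.85"; note that this adjusts the temporary override weight (0-1),
not the CRUSH weight.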

I am not sure what is going on with your third issue. In my
experience, writes stop every time any OSD hits 95% when using the
default Ceph config values.
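
If you need emergency head-room while the cluster recovers, the full
ratio can be raised temporarily. Treat it as a stopgap, not a fix,
and double-check the syntax against your release; on 0.94 it should
be something like:

  ceph health detail            # shows which OSDs are full/nearfull
  ceph pg set_full_ratio 0.97

and then drop it back to 0.95 once backfill has caught up.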

Eric

On Sat, Jul 18, 2015 at 5:53 AM, Steve Thompson <smt@xxxxxxxxxxxx> wrote:
>
> Ceph newbie (three weeks).
>
> Ceph 0.94.2, CentOS 6.6 x86_64, kernel 2.6.32. Twelve identical OSD's (1 TB
> each), three MON's, one active MDS and two standby MDS's. 10GbE cluster
> network, 1GbE public network. Using CephFS on a single client via the 4.1.1
> kernel from elrepo; using rsync to copy data to the Ceph file system (mostly
> small files). Only one client (me). All set up with ceph-deploy.
>
> For this test setup, the OSD's are present on two quad-core 3.16GHz hosts
> with 16GB memory each; six OSD's on each node. Journals are on the OSD
> drives for now. The two hosts are not user-accessible, and so are doing
> mostly OSD duty only (but they have light duty iSCSI targets on them).
>
> First surprise: I have noticed that the OSD drives do not fill at the same
> rate. For example, when the Ceph file system was 71% full, one OSD went
> into a full state at 95%, while another OSD was only 51% full, and yet
> another was at 60%.
>
> Second surprise: one full OSD results in ENOSPC for *all* writes, even
> though there is plenty of space available on other OSD's. I marked the full
> OSD as out to attempt to rebalance ("ceph osd out osd.0"). This appeared to
> be working, albeit very slowly. I stopped client writes.
>
> Third surprise: I restarted client writes after about an hour; data is still
> being written to the full OSD, but the full condition is no longer
> recognized; it went to 96% before I stopped the client writes once more. That
> was yesterday evening; today it is down to 91%. The file system is not going
> to be usable until the rebalance completes (it looks like it will take days).
>
> I did not expect any of this. Any thoughts?
>
> Steve
> --
> ----------------------------------------------------------------------------
> Steve Thompson                E-mail:      smt AT vgersoft DOT com
> Voyager Software LLC          Web:         http://www DOT vgersoft DOT com
> 39 Smugglers Path             VSW Support: support AT vgersoft DOT com
> Ithaca, NY 14850
>   "186,282 miles per second: it's not just a good idea, it's the law"
> ----------------------------------------------------------------------------
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


