ceph osd crush tunables optimal AND add new OSD at the same time

On Mon, 14 Jul 2014, Udo Lembke wrote:
> Hi,
> which values are all changed with "ceph osd crush tunables optimal"?

There are some brand new crush tunables that fix... I don't even remember 
offhand.

In general, you probably want to stay away from 'optimal' unless this is a 
fresh cluster and all clients are librados.  Using the 'firefly' tunables 
is probably the safest bet.
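
For reference, applying a profile and inspecting the values it sets both 
use the standard CLI; a minimal sketch, assuming a Firefly-era (0.80.x) 
cluster where these subcommands are available:

    # show the tunable values currently in effect
    $ ceph osd crush show-tunables

    # apply the conservative profile suggested above
    $ ceph osd crush tunables firefly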

Keep in mind that adjusting tunables is going to move a bunch of data and 
client performance will be heavily impacted.  If that's ok, go for it, 
otherwise just stick with bobtail tunables unless/until it becomes a 
problem.
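
One common way to soften that impact while data moves is to throttle 
backfill and recovery; a sketch using options that exist in this era (the 
values are illustrative, not a recommendation from this thread):

    # limit concurrent backfills and active recovery ops per OSD
    $ ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'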

sage

> 
> Is it perhaps possible to change some parameters on the weekends before
> the upgrade runs, to have more time?
> (That depends on whether the parameters are available in 0.72...)
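
Individual tunables can be staged ahead of time, if the running version 
supports them, by editing the decompiled CRUSH map; a sketch, assuming 
crushtool is installed on the admin host:

    $ ceph osd getcrushmap -o crushmap.bin
    $ crushtool -d crushmap.bin -o crushmap.txt
    # edit the 'tunable ...' lines at the top of crushmap.txt, then recompile
    $ crushtool -c crushmap.txt -o crushmap.new
    $ ceph osd setcrushmap -i crushmap.new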
> 
> The warning said it can take days... we have a cluster with 5 storage
> nodes and 12 4TB OSD disks each (60 OSDs), replica 2. The cluster is
> 60% filled.
> The network connection is 10Gb.
> Does "tunables optimal" take one, two, or more days in such a
> configuration?
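
As a rough back-of-envelope check (the fraction of PGs that get remapped 
depends on the cluster and the size of the profile jump; 50% here is purely 
an illustrative assumption):

    5 nodes x 12 OSDs x 4 TB         = 240 TB raw capacity
    60% filled                       = ~144 TB of raw data
    ~50% of PGs remapped (assumed)   = ~72 TB to migrate
    at ~1 GB/s sustained recovery    = ~72,000 s, or about 20 hours
    at ~200 MB/s (throttled)         = ~100 hours, or about 4 days

Either way, "days" is a plausible order of magnitude for this cluster.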
> 
> Udo
> 
> On 14.07.2014 18:18, Sage Weil wrote:
> > I've added some additional notes/warnings to the upgrade and release 
> > notes:
> >
> >  https://github.com/ceph/ceph/commit/fc597e5e3473d7db6548405ce347ca7732832451
> >
> > If there is somewhere else where you think a warning flag would be useful, 
> > let me know!
> >
> > Generally speaking, we want to be able to cope with huge data rebalances 
> > without interrupting service.  It's an ongoing process of improving the 
> > recovery vs client prioritization, though, and removing sources of 
> > overhead related to rebalancing... and it's clearly not perfect yet. :/
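
The knobs behind that prioritization (as they exist around this era) can 
be weighted in ceph.conf; a sketch with illustrative values:

    [osd]
    # favor client I/O over recovery work
    osd client op priority = 63
    osd recovery op priority = 1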
> >
> > sage
> >
> >
> >
> 
> _______________________________________________
> ceph-users mailing list
ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 
> 

