ceph osd crush tunables optimal AND add new OSD at the same time

Hi,
which values exactly does "ceph osd crush tunables optimal" change?

Would it perhaps be possible to change some of the parameters on the
weekends before the upgrade runs, to buy more time?
(That depends on whether the parameters are already available in 0.72...)
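For what it's worth, this is roughly what I was planning to do to see which
tunables the cluster currently uses and to change single values ahead of
time. It is only a sketch; I am not sure "ceph osd crush show-tunables"
exists on 0.72 yet, so decompiling the CRUSH map is the portable route:

    # dump the current tunables (newer releases)
    ceph osd crush show-tunables

    # portable route: export and decompile the CRUSH map; the "tunable ..."
    # lines at the top of crushmap.txt show the current values
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt

    # edit individual tunable lines in crushmap.txt, then recompile
    # and inject the map again
    crushtool -c crushmap.txt -o crushmap-new.bin
    ceph osd setcrushmap -i crushmap-new.bin

As far as I understand it, most of the data movement from "optimal" on
firefly comes from chooseleaf_vary_r=1, so the older (bobtail) tunables
could perhaps be applied beforehand; please treat that as my assumption,
not something from the release notes.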

The warning says it can take days... We have a cluster with 5 storage
nodes, each with 12 4TB OSD disks (60 OSDs in total), replica 2. The
cluster is 60% full and the nodes are connected via 10Gb network.
Would switching to the optimal tunables take one, two, or more days in
such a configuration?
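To limit the impact on clients while the rebalance runs, this is what we
were planning to inject on the OSDs; the values are only my guesses, not
recommendations from the docs:

    # throttle backfill/recovery so client I/O keeps priority during the
    # rebalance (runtime injection, not persistent across OSD restarts)
    ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1 --osd-recovery-op-priority 1'

    # to make the same settings survive restarts, add them to the [osd]
    # section of ceph.conf:
    #   osd max backfills = 1
    #   osd recovery max active = 1
    #   osd recovery op priority = 1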

Udo

On 14.07.2014 18:18, Sage Weil wrote:
> I've added some additional notes/warnings to the upgrade and release 
> notes:
>
>  https://github.com/ceph/ceph/commit/fc597e5e3473d7db6548405ce347ca7732832451
>
> If there is somewhere else where you think a warning flag would be useful, 
> let me know!
>
> Generally speaking, we want to be able to cope with huge data rebalances 
> without interrupting service.  It's an ongoing process of improving the 
> recovery vs client prioritization, though, and removing sources of 
> overhead related to rebalancing... and it's clearly not perfect yet. :/
>
> sage


