Re: Successful Upgrade from 14.2.22 to 15.2.14

Hi Rainer,

On Fri, Sep 24, 2021 at 8:33 AM Rainer Krienke <krienke@xxxxxxxxxxxxxx> wrote:
>
> Hello Dan,
>
> I am also running a production 14.2.22 cluster with 144 HDD OSDs and I
> am wondering whether I should stay with this release or upgrade to
> Octopus. So your info is very valuable...
>
> One more question: You described that the OSDs do an expected fsck and
> that this took roughly 10 minutes. I guess the fsck is done in parallel
> for all OSDs of one host? So the total downtime for one host regarding
> fsck should not be much more than, say, 15 minutes, should it?

The fsck is done internally by each ceph-osd process, so the fscks do
indeed run in parallel for all OSDs of one host when you restart
ceph-osd.target.
The bottleneck for the fsck is the speed of the block.db -- in our
case we have block on HDD and four SSDs, each holding six block.db's.
When we first started the upgrade, we restarted just one ceph-osd to
see how it went -- you can do the same to get a feeling for the timing
with your data.
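The single-OSD test described above can be sketched roughly as follows
(the OSD id 12 and the default log path are assumptions; substitute an
OSD id and path from your own host):

```shell
# Restart one OSD first to gauge the fsck duration (osd.12 is an example id).
# Note: systemctl may return before the fsck finishes, so watch the log
# and the OSD state rather than trusting the command's wall-clock time.
systemctl restart ceph-osd@12

# Follow the fsck progress in that OSD's log (default log location):
tail -f /var/log/ceph/ceph-osd.12.log

# Confirm the OSD has come back "up" before proceeding:
ceph osd tree | grep 'osd.12 '

# Once the timing looks acceptable, restart all OSDs on the host in parallel:
systemctl restart ceph-osd.target
```

These commands act on a live cluster, so there is no harmless dry-run;
the point is simply that one OSD bounds the per-OSD fsck time before
you commit the whole host.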

Cheers, Dan

>
> Are you using SSDs or HDDs in your cluster?
>
> Thanks
> Rainer
>
> On 21.09.21 at 12:09, Dan van der Ster wrote:
> > Dear friends,
> >
> > This morning we upgraded our pre-prod cluster from 14.2.22 to 15.2.14,
> > successfully, following the procedure at
> > https://docs.ceph.com/en/latest/releases/octopus/#upgrading-from-mimic-or-nautilus
> > It's a 400 TB cluster, 10% used, with 72 OSDs (block=hdd,
> > block.db=ssd) and 40M objects.
> >
> > * The mons upgraded cleanly as expected.
> > * One minor surprise was that the mgrs respawned themselves moments
> > after the leader restarted into octopus:
> >
>
> --
> Rainer Krienke, Uni Koblenz, Rechenzentrum, A22, Universitaetsstrasse  1
> 56070 Koblenz, Web: http://www.uni-koblenz.de/~krienke, Tel: +49261287 1312
> PGP: http://www.uni-koblenz.de/~krienke/mypgp.html,     Fax: +49261287
> 1001312
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


