Re: Running Jewel and Luminous mixed for a longer period

> On 6 December 2017 at 10:25, Yehuda Sadeh-Weinraub <yehuda@xxxxxxxxxx> wrote:
> 
> 
> Are you using rgw? There are certain compatibility issues that you
> might hit if you run mixed versions.
> 

Yes, we are. So would it hurt if the OSDs are running Luminous while the RGW is still on Jewel?

Multisite isn't used, it's just a local RGW.
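For what it's worth, once the MONs are on Luminous, `ceph versions` reports which release each daemon class is running, which makes it easy to spot stragglers like a Jewel RGW. A rough sketch of checking that output for mixed versions (the exact JSON layout and the sample below are assumptions based on a 12.2.x cluster, not taken from this thread):

```python
import json

# Hypothetical sample of `ceph versions -f json` output: daemon class ->
# {version string: daemon count}. Layout assumed; verify against your cluster.
sample = '''
{
  "mon": {"ceph version 12.2.2 luminous (stable)": 3},
  "osd": {"ceph version 12.2.2 luminous (stable)": 700,
          "ceph version 10.2.10 jewel": 68},
  "rgw": {"ceph version 10.2.10 jewel": 2}
}
'''

def mixed_daemons(versions_json):
    """Return the daemon classes that are running more than one Ceph version."""
    data = json.loads(versions_json)
    return sorted(t for t, versions in data.items() if len(versions) > 1)

print(mixed_daemons(sample))  # -> ['osd']
```

Note that `ceph versions` is only available once the MONs themselves run Luminous, so it is mainly useful for tracking the OSD/RGW upgrade after that first step.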

Wido

> Yehuda
> 
> On Tue, Dec 5, 2017 at 3:20 PM, Wido den Hollander <wido@xxxxxxxx> wrote:
> > Hi,
> >
> > I haven't tried this before and I expect it to work, but I wanted to check before proceeding.
> >
> > I have a Ceph cluster which is running with manually formatted FileStore XFS disks, Jewel, sysvinit and Ubuntu 14.04.
> >
> > I would like to upgrade this system to Luminous, but since I have to re-install all servers and re-format all disks I'd like to move it to BlueStore at the same time.
> >
> > This system however has 768 3TB disks and has a utilization of about 60%. As you can guess, it will take a long time before all the backfills complete.
> >
> > The idea is to take a machine down, wipe all disks, re-install it with Ubuntu 16.04 and Luminous and re-format the disks with BlueStore.
> >
> > The OSDs come back, start to backfill, and we wait.
> >
> > My estimate is that we can do one machine per day, but we have 48 machines to do. Realistically this will take ~60 days to complete.
> >
> > Afaik running Jewel (10.2.10) mixed with Luminous (12.2.2) should work just fine, but I wanted to check if there are any caveats I don't know about.
> >
> > I'll upgrade the MONs to Luminous first, before starting to upgrade the OSDs. Between machines I'll wait for HEALTH_OK before proceeding, allowing the MONs to trim their datastore.
> >
> > The question is: Does it hurt to run Jewel and Luminous mixed for ~60 days?
> >
> > I think it won't, but I wanted to double-check.
> >
> > Wido
> > _______________________________________________
> > ceph-users mailing list
> > ceph-users@xxxxxxxxxxxxxx
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
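For reference, the ~60-day estimate above is consistent with a quick back-of-the-envelope calculation of how much data each re-formatted machine has to backfill. This is just arithmetic on the numbers quoted in the post (768 disks, 3 TB each, ~60% utilization, 48 machines); actual backfill time depends on replication, throttling, and client load:

```python
# Numbers from the post above; disk sizes treated as decimal TB.
disks = 768          # total 3 TB disks in the cluster
disk_tb = 3.0        # per-disk capacity in TB
utilization = 0.60   # cluster is ~60% full
machines = 48        # servers, i.e. 16 disks per machine

total_data_tb = disks * disk_tb * utilization  # data stored cluster-wide
per_machine_tb = total_data_tb / machines      # data to backfill per re-formatted machine

print(round(total_data_tb, 1), round(per_machine_tb, 1))  # -> 1382.4 28.8
```

So each wiped machine has to re-acquire roughly 29 TB, which makes a day per machine (plus waiting for HEALTH_OK) a plausible pace.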