Re: Running Jewel and Luminous mixed for a longer period

> Op 5 december 2017 om 18:39 schreef Richard Hesketh <richard.hesketh@xxxxxxxxxxxx>:
> 
> 
> On 05/12/17 17:10, Graham Allan wrote:
> > On 12/05/2017 07:20 AM, Wido den Hollander wrote:
> >> Hi,
> >>
> >> I haven't tried this before but I expect it to work, but I wanted to
> >> check before proceeding.
> >>
> >> I have a Ceph cluster which is running with manually formatted
> >> FileStore XFS disks, Jewel, sysvinit and Ubuntu 14.04.
> >>
> >> I would like to upgrade this system to Luminous, but since I have to
> >> re-install all servers and re-format all disks I'd like to move it to
> >> BlueStore at the same time.
> > 
> > You don't *have* to update the OS in order to update to Luminous, do you? Luminous is still supported on Ubuntu 14.04 AFAIK.
> > 
> > Though obviously I understand your desire to upgrade; I only ask because I am in the same position (Ubuntu 14.04, xfs, sysvinit), though happily with a smaller cluster. Personally I was planning to upgrade ours entirely to Luminous while still on Ubuntu 14.04, before later going through the same process of decommissioning one machine at a time to reinstall with CentOS 7 and Bluestore. I too don't see any reason the mixed Jewel/Luminous cluster wouldn't work, but still felt less comfortable with extending the upgrade duration.
> > 

Well, the sysvinit part bothers me. This setup uses the 'devs' option in ceph.conf and the like; it's all a rather hacky setup.
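
For context, the per-OSD sections in those old ceph.conf files look roughly like this (host and device names made up), as far as I remember:

    [osd.12]
        host = ceph-osd03
        devs = /dev/sdc1

and the sysvinit script mounts and starts the OSDs based on those entries, rather than udev/ceph-disk or systemd taking care of it.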

Most of these systems started out with Dumpling on Ubuntu 12.04 and have been upgraded in place ever since. They are messy.

We'd like to reprovision all disks with ceph-volume while we are at it. Doing the OS and Ceph at the same time would make it a single step.
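
The idea per disk would be something along these lines (device name just an example):

    ceph-volume lvm zap /dev/sdb
    ceph-volume lvm create --bluestore --data /dev/sdb

so every OSD ends up as a plain LVM-backed BlueStore OSD managed by ceph-volume, instead of the hand-mounted XFS FileStore directories we have now.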

I've never tried running Luminous on 14.04. Looking at the DEB packages, there doesn't seem to be sysvinit support in Luminous anymore either.
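
That is only based on a quick look at the package contents, along the lines of (package filename just an example):

    dpkg-deb -c ceph-base_12.2.2-1trusty_amd64.deb | grep -E 'init\.d|systemd'

to see whether there is still an /etc/init.d/ceph script shipped or only systemd unit files. I may well be looking at the wrong package, so corrections are welcome.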

> > Graham
> 
> Yes, you can run luminous on Trusty; one of my clusters is currently Luminous/Bluestore/Trusty as I've not had time to sort out doing OS upgrades on it. I second the suggestion that it would be better to do the luminous upgrade first, retaining existing filestore OSDs, and then do the OS upgrade/OSD recreation on each node in sequence. I don't think there should realistically be any problems with running a mixed cluster for a while but doing the jewel->luminous upgrade on the existing installs first shouldn't be significant extra effort/time as you're already predicting at least two months to upgrade everything, and it does minimise the amount of change at any one time in case things do start going horribly wrong.
> 

I agree that doing fewer things at once is best. But we will at least automate the whole install/config using Salt, so that part is covered.

As for Luminous on Trusty: does that run with sysvinit or with Upstart?

> Also, at 48 nodes, I would've thought you could get away with cycling more than one of them at once. Assuming they're homogenous taking out even 4 at a time should only raise utilisation on the rest of the cluster to a little over 65%, which still seems safe to me, and you'd waste way less time waiting for recovery. (I recognise that depending on the nature of your employment situation this may not actually be desirable...)
> 
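
For reference, I read that as: with N of the 48 nodes drained, utilisation on the rest goes up by a factor of 48 / (48 - N). Assuming roughly 60% raw utilisation today, taking out 4 nodes gives 0.60 * 48 / 44 ≈ 0.655, so indeed a little over 65%.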

We can probably do more than one node at a time; however, I'm setting up a plan which the admins will execute, and we want to take the safe route. Uptime is important as well.

If we screw up a node the damage isn't that big.

But the main question remains: can you run a mix of Jewel and Luminous for a longer period?

If so, what are the caveats?
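
What I would expect, but please correct me if I'm wrong: during such a mixed period you mainly want to keep an eye on which daemons run which release, and hold off on the final flags until everything is on Luminous. Roughly:

    ceph versions          # per-daemon release overview, available once the mons run Luminous
    ceph features          # feature sets reported by clients and daemons
    # only once the last OSD runs Luminous:
    ceph osd require-osd-release luminous

plus the usual order of upgrading the mons first and the OSDs after that.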

As clusters keep growing, they will need to run a mix of versions. I have other clusters running Jewel with 400 nodes; upgrading all of those will take a lot of time as well.

Thanks,

Wido

> Rich
> 
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


