Re: ceph-ansible in Pacific and beyond?

Hello,

> Finer-grained ability to allocate resources to services. (This process
> gets 2g of RAM and 1 CPU.)

do you really believe this is a benefit? How can it be a benefit to have
crashing or slow OSDs? It sounds cool, but it doesn't work in most
environments I have ever had my hands on.
We often encounter clusters that fall apart or have a meltdown just because
they run out of memory, and we use tricks like zram to help them out and
recover their clusters. If I now enforce limits per container/OSD in an even
finer-grained way, it will just blow up even more.
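
To make this concrete, here is a rough, illustrative sketch (a hypothetical
helper, not anything shipped with Ceph; it only assumes the well-known
behaviour that osd_memory_target is a target the OSD can temporarily
overshoot, not a hard cap):

    # Back-of-the-envelope check: if the sum of per-OSD limits exceeds the
    # host's RAM minus headroom, the OSDs get OOM-killed no matter how
    # fine-grained the per-container limits are.
    def per_osd_memory_target_gb(host_ram_gb, num_osds,
                                 os_headroom_gb=4, overshoot=1.2):
        # osd_memory_target is a target, not a cap, so leave room for overshoot.
        usable_gb = host_ram_gb - os_headroom_gb
        return usable_gb / (num_osds * overshoot)

    # Example: a 64 GB host with 12 OSDs leaves roughly 4.2 GB per OSD --
    # a hard 2 GB container limit on top of that is a recipe for crashes.
    print(round(per_osd_memory_target_gb(64, 12), 1))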

--
Martin Verges
Managing director

Mobile: +49 174 9335695
E-Mail: martin.verges@xxxxxxxx
Chat: https://t.me/MartinVerges

croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263

Web: https://croit.io
YouTube: https://goo.gl/PGE1Bx


On Wed, 17 Mar 2021 at 18:59, Fox, Kevin M <Kevin.Fox@xxxxxxxx> wrote:

> There are a lot of benefits to containerization that are hard to get
> without it:
> Finer-grained ability to allocate resources to services. (This process
> gets 2g of RAM and 1 CPU.)
> Security is better because only minimal software is available within the
> container, so on service compromise it's harder to escape.
> Ability to run exactly what was tested / released by upstream. Fewer
> issues with version mismatches. Especially useful across different distros.
> Easier to implement orchestration on top, which enables some of the
> advanced features such as easy-to-allocate iSCSI/NFS volumes. Ceph is
> finally doing so now that it is focusing on containers.
> And much more.
>
> ________________________________________
> From: Teoman Onay <tonay@xxxxxxxxxx>
> Sent: Wednesday, March 17, 2021 10:38 AM
> To: Matthew H
> Cc: Matthew Vernon; ceph-users
> Subject:  Re: ceph-ansible in Pacific and beyond?
>
>
> A containerized environment just makes troubleshooting more difficult;
> getting access to and retrieving details on Ceph processes isn't as
> straightforward as with a non-containerized infrastructure. I am still not
> convinced that containerizing everything brings any benefits except the
> colocation of services.
>
> On Wed, Mar 17, 2021 at 6:27 PM Matthew H <matthew.heler@xxxxxxxxxxx>
> wrote:
>
> > There should not be any performance difference between an
> > un-containerized version and a containerized one.
> >
> > The shift to containers makes sense, as this is the general direction
> > that the industry as a whole is taking. I would suggest giving cephadm a
> > try; it's relatively straightforward and significantly faster for
> > deployments than ceph-ansible is.
> >
> > ________________________________
> > From: Matthew Vernon <mv3@xxxxxxxxxxxx>
> > Sent: Wednesday, March 17, 2021 12:50 PM
> > To: ceph-users <ceph-users@xxxxxxx>
> > Subject:  ceph-ansible in Pacific and beyond?
> >
> > Hi,
> >
> > I caught up with Sage's talk on what to expect in Pacific
> > (https://www.youtube.com/watch?v=PVtn53MbxTc) and there was no mention
> > of ceph-ansible at all.
> >
> > Is it going to continue to be supported? We use it (and uncontainerised
> > packages) for all our clusters, so I'd be a bit alarmed if it was going
> > to go away...
> >
> > Regards,
> >
> > Matthew
> >
> >
> > --
> >  The Wellcome Sanger Institute is operated by Genome Research
> >  Limited, a charity registered in England with number 1021457 and a
> >  company registered in England with number 2742969, whose registered
> >  office is 215 Euston Road, London, NW1 2BE.
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



