Re: SUSE POC - Dead in the water

Ehhh, I think they are using the standard Ceph versions. Your problem with CephFS is more a matter of configuration/setup, and you should be able to solve it (and get similar results) with any distribution.
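For reference, a standard CephFS setup looks roughly like the sketch below on any distribution; the pool names, PG counts, and the `nvme` device class rule are illustrative assumptions, not values from Chip's cluster:

```shell
# Hypothetical pool names and PG counts -- tune for the actual cluster.
# CephFS needs separate data and metadata pools; metadata is small but
# latency-sensitive, so it benefits from fast media.
ceph osd pool create cephfs_metadata 32
ceph osd pool create cephfs_data 512

# Optionally pin the metadata pool to NVMe OSDs via a CRUSH rule
# (assumes the OSDs report the "nvme" device class).
ceph osd crush rule create-replicated fast-nvme default host nvme
ceph osd pool set cephfs_metadata crush_rule fast-nvme

# Create the filesystem from the two pools.
ceph fs new cephfs cephfs_metadata cephfs_data
```

Putting the metadata pool on slow media is a common cause of poor CephFS performance, so that is usually the first thing to check.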


> -----Original Message-----
> From: Schweiss, Chip <chip@xxxxxxxxxxxxx>
> Sent: 16 February 2021 17:43
> To: Mark Nelson <mnelson@xxxxxxxxxx>
> Cc: ceph-users@xxxxxxx
> Subject:  Re: SUSE POC - Dead in the water
> 
> Mark,
> 
> We'll see if the problems follow me as I install Croit. They gave a
> very impressive impromptu presentation shortly after I sent this call
> for help.
> 
> I'll make sure I post some details about our CephFS endeavor as things
> progress; it will likely help others as they start their Ceph projects.
> 
> -Chip
> 
> On Tue, Feb 16, 2021 at 9:48 AM Mark Nelson <mnelson@xxxxxxxxxx> wrote:
> 
> > Hi Chip,
> >
> >
> > Regarding CephFS performance, it really depends on the I/O patterns
> > and what you are trying to accomplish.  Can you talk a little more
> > about what you are seeing?
> >
> >
> > Thanks,
> >
> > Mark
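The I/O patterns Mark is asking about can be characterized with fio against a mounted CephFS before tuning anything; the mount point and job parameters below are assumptions for illustration:

```shell
# Assumes CephFS is mounted at /mnt/cephfs -- substitute the real mount.
# Large sequential writes measure raw throughput to the data pool.
fio --name=seqwrite --directory=/mnt/cephfs --rw=write --bs=4M \
    --size=4G --numjobs=4 --ioengine=libaio --direct=1 --group_reporting

# Small random mixed I/O stresses the MDS and metadata pool far more,
# and is where badly placed metadata pools usually show up.
fio --name=randrw --directory=/mnt/cephfs --rw=randrw --bs=4k \
    --size=1G --numjobs=8 --iodepth=16 --ioengine=libaio --direct=1 \
    --group_reporting
```

Comparing these numbers against `rados bench` on the underlying pools helps separate a CephFS/MDS problem from a raw OSD problem.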
> >
> >
> >
> > On 2/16/21 8:42 AM, Schweiss, Chip wrote:
> > > For the past several months I have been building a sizable Ceph
> > > cluster that will be up to 10PB with between 20 and 40 OSD servers
> > > this year.
> > >
> > > A few weeks ago I was informed that SUSE is shutting down SES and
> > > will no longer be selling it.  We haven't licensed our proof of
> > > concept cluster that is currently at 14 OSD nodes, but it looks
> > > like SUSE is not going to be the answer here.
> > >
> > > I'm seeking recommendations for consulting help on this project
> > > since SUSE has let me down.
> > >
> > > I have Ceph installed and operating; however, I've been struggling
> > > to get the pools configured properly for CephFS and am getting very
> > > poor performance.  The OSD servers have TLC NVMe for the DB and
> > > Optane NVMe for the WAL, so I should be seeing decent performance
> > > with the current cluster.
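OSDs with the DB and WAL split onto separate fast devices, as described above, are usually created along these lines with `ceph-volume`; the device paths here are hypothetical and must be replaced with the real HDD and NVMe devices:

```shell
# Hypothetical device paths -- substitute the actual devices.
# Place the RocksDB on the TLC NVMe and the WAL on the Optane device.
ceph-volume lvm create \
    --bluestore \
    --data /dev/sda \
    --block.db /dev/nvme0n1p1 \
    --block.wal /dev/nvme1n1p1

# Verify where the DB and WAL actually landed.
ceph-volume lvm list
```

If the DB partitions are undersized, RocksDB can spill onto the slow data device, which would explain poor performance despite the fast hardware; `ceph-volume lvm list` and `ceph daemon osd.N perf dump` can confirm.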
> > >
> > > I'm not opposed to completely switching OS distributions.  Ceph on
> > > SUSE was our first SUSE installation.  Almost everything else we
> > > run is on CentOS, but that may change thanks to IBM cannibalizing
> > > CentOS.
> > >
> > > Please reach out to me if you can recommend someone to sell us
> > > consulting hours and/or a support contract.
> > >
> > > -Chip Schweiss
> > > chip.schweiss@xxxxxxxxx
> > > Washington University School of Medicine
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


