Hi Chip,
Glad to hear it! From an upstream perspective we've got a pretty good
idea of some of the bottlenecks in the MDS and others in the
OSD/Bluestore, but it's always nice to hear what folks are struggling
with out in the field to challenge our assumptions.
Best of luck!
Mark
On 2/16/21 10:42 AM, Schweiss, Chip wrote:
Mark,
We'll see if the problems follow me as I install Croit. They gave a
very impressive impromptu presentation shortly after I sent this call
for help.
I'll make sure I post some details about our CephFS endeavor as things
progress; it will likely help others as they start their Ceph projects.
-Chip
On Tue, Feb 16, 2021 at 9:48 AM Mark Nelson <mnelson@xxxxxxxxxx> wrote:
Hi Chip,
Regarding CephFS performance, it really depends on the I/O patterns
and what you are trying to accomplish. Can you talk a little bit more
about what you are seeing?
Thanks,
Mark
On 2/16/21 8:42 AM, Schweiss, Chip wrote:
> For the past several months I have been building a sizable Ceph
> cluster that will be up to 10 PB, with between 20 and 40 OSD servers
> this year.
>
> A few weeks ago I was informed that SUSE is shutting down SES and
> will no longer be selling it. We haven't licensed our proof-of-concept
> cluster, which is currently at 14 OSD nodes, but it looks like SUSE is
> not going to be the answer here.
>
> I'm seeking recommendations for consulting help on this project,
> since SUSE has let me down.
>
> I have Ceph installed and operating; however, I've been struggling
> with getting the pool configured properly for CephFS and am getting
> very poor performance. The OSD servers have TLC NVMe for the DB and
> Optane NVMe for the WAL, so I should be seeing decent performance
> with the current cluster.
>
> I'm not opposed to completely switching OS distributions. Ceph on
> SUSE was our first SUSE installation. Almost everything else we run
> is on CentOS, but that may change thanks to IBM cannibalizing CentOS.
>
> Please reach out to me if you can recommend someone to sell us
> consulting hours and/or a support contract.
>
> -Chip Schweiss
> chip.schweiss@xxxxxxxxx
> Washington University School of Medicine
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx