Re: Suggestion to build ceph storage

Thanks Christophe,

On Mon, Jun 20, 2022 at 11:45 AM Christophe BAILLON <cb@xxxxxxx> wrote:

> Hi
>
> We have 20 Ceph nodes, each with 12 x 18 TB HDDs and 2 x 1 TB NVMe drives.
>
> I tried this method to create the OSDs:
>
> ceph orch apply -i osd_spec.yaml
>
> with this config:
>
> osd_spec.yaml
> service_type: osd
> service_id: osd_spec_default
> placement:
>   host_pattern: '*'
> data_devices:
>   rotational: 1
> db_devices:
>   paths:
>     - /dev/nvme0n1
>     - /dev/nvme1n1
>
> This created 6 OSDs with WAL/DB on /dev/nvme0n1 and 6 on /dev/nvme1n1 per
> node.
>
>
Does cephadm automatically create the partitions for WAL/DB, or is it
something I have to define in the config? (Sorry, I am new to cephadm; we
are still using ceph-ansible, and I heard cephadm will replace ceph-ansible
soon. Is that correct?)
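
For context, this is roughly the spec I had in mind (an untested sketch; I
am assuming the OSD service spec's block_db_size field is the right knob
for pinning the DB size instead of letting ceph-volume split the NVMe,
so please correct me if that's not how cephadm handles it):

service_type: osd
service_id: osd_spec_default
placement:
  host_pattern: '*'
data_devices:
  rotational: 1
db_devices:
  paths:
    - /dev/nvme0n1
    - /dev/nvme1n1
# assumption: pin each DB LV size explicitly; with 6 DB LVs per 1 TB NVMe
# there is room for roughly 160 GB each
block_db_size: 150G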


> but when I do an lvs, I see only 6 x 80 GB LVs on each NVMe...
>
> I think this is dynamic sizing, but I'm not sure; I don't know how to
> check it...
>
> Our cluster will only host a couple of kinds of files, a small one and a
> big one (~2 GB), for CephFS use only, with only 8 users accessing the data.
>
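
I'm not sure about the dynamic sizing either, but as far as I know you can
see what each OSD actually got for its DB with something like the following
(commands I use on our ceph-ansible cluster, so hopefully the same under
cephadm; treat it as a sketch rather than a definitive answer):

# LV sizes that ceph-volume carved out of each NVMe
lvs -o lv_name,vg_name,lv_size

# per-OSD DB capacity/usage as BlueFS reports it (run on the OSD host)
ceph daemon osd.0 perf dump | grep -E 'db_total_bytes|db_used_bytes'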


How many MDS nodes do you have for your cluster size? Are your MDS daemons
dedicated, or do they share nodes with the OSDs?
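
In case it helps, I think something like this shows how many MDS daemons
there are and where they run (assuming a cephadm-managed cluster; again,
just a sketch from my side):

# active/standby MDS daemons and the hosts they run on
ceph fs status
ceph orch ps --daemon-type mds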


> I don't know if this is optimal; we are still in the testing process...
>
> ----- Original Message -----
> > From: "Stefan Kooman" <stefan@xxxxxx>
> > To: "Jake Grimmett" <jog@xxxxxxxxxxxxxxxxx>, "Christian Wuerdig"
> > <christian.wuerdig@xxxxxxxxx>, "Satish Patel" <satish.txt@xxxxxxxxx>
> > Cc: "ceph-users" <ceph-users@xxxxxxx>
> > Sent: Monday, June 20, 2022, 16:59:58
> > Subject: Re: Suggestion to build ceph storage
>
> > On 6/20/22 16:47, Jake Grimmett wrote:
> >> Hi Stefan
> >>
> >> We use CephFS for our 7200-CPU/224-GPU HPC cluster; for our use case
> >> (large-ish image files) it works well.
> >>
> >> We have 36 Ceph nodes, each with 12 x 12 TB HDDs and 2 x 1.92 TB NVMe,
> >> plus a 240 GB system disk. Four dedicated nodes have NVMe for the
> >> metadata pool and provide mon, mgr, and MDS services.
> >>
> >> I'm not sure you need 4% of the OSD for WAL/DB; search this mailing list
> >> archive for a definitive answer, but my personal notes are as follows:
> >>
> >> "If you expect lots of small files: go for a DB that's > ~300 GB
> >> For mostly large files you are probably fine with a 60 GB DB.
> >> 266 GB is the same as 60 GB, due to the way the cache multiplies at each
> >> level, spills over during compaction."
> >
> > There is (experimental ...) support for dynamic sizing in Pacific [1].
> > Not sure if it's stable yet in Quincy.
> >
> > Gr. Stefan
> >
> > [1]:
> >
> https://docs.ceph.com/en/quincy/rados/configuration/bluestore-config-ref/#sizing
> > _______________________________________________
> > ceph-users mailing list -- ceph-users@xxxxxxx
> > To unsubscribe send an email to ceph-users-leave@xxxxxxx
>
> --
> Christophe BAILLON
> Mobile :: +336 16 400 522
> Work :: https://eyona.com
> Twitter :: https://twitter.com/ctof
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



