Re: Performance of volume size, not a block size

Hi Anthony-san,

Thank you for your advice. I checked the settings of my Ceph cluster. The
autoscaler mode is on, so I had assumed the PG counts were already optimal. But
the autoscaler does not set the number of PGs per OSD directly; it only adjusts
pg_num on the storage pools. Is that right?
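
For reference, here is how I plan to double-check it (just a sketch; the target
value of 200 below is only an example, following your suggested range):

  # per-pool pg_num chosen by the autoscaler
  ceph osd pool autoscale-status

  # actual PG count per OSD (the PGS column), as you suggested
  ceph osd df

  # if PGs per OSD come out too low, raise the autoscaler's target, e.g.
  ceph config set global mon_target_pg_per_osd 200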

Regards,
--
Mitsumasa KONDO


On Mon, Apr 15, 2024 at 22:58 Anthony D'Atri <anthony.datri@xxxxxxxxx> wrote:

> If you're using SATA/SAS SSDs I would aim for 150-200 PGs per OSD as shown
> by `ceph osd df`.
> If NVMe, 200-300 unless you're starved for RAM.
>
>
> > On Apr 15, 2024, at 07:07, Mitsumasa KONDO <kondo.mitsumasa@xxxxxxxxx>
> wrote:
> >
> > Hi Menguy-san,
> >
> > Thank you for your reply. Users who run large I/O against tiny volumes are a
> > nuisance for cloud providers.
> >
> > I checked my Ceph cluster, which has 40 SSDs. Each OSD on a 1 TB SSD holds
> > about 50 placement groups, so each PG covers roughly 20 GB of space.
> > I had a feeling that a small 8 GB volume would not be distributed well, but
> > it turns out it is distributed well.
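
(A back-of-the-envelope check on that, assuming RBD's default 4 MB object size:
an 8 GB volume is split into 8 GB / 4 MB = 2048 objects, and each object name is
hashed to a placement group independently, so even a small volume gets spread
across most of the pool's PGs rather than landing on a few OSDs.)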
> >
> > Regards,
> > --
> > Mitsumasa KONDO
> >
> > On Mon, Apr 15, 2024 at 15:29 Etienne Menguy <etienne.menguy@xxxxxxxxxxx> wrote:
> >
> >> Hi,
> >>
> >> Volume size doesn't affect performance; cloud providers apply limits to
> >> ensure they can deliver the expected performance to all of their customers.
> >>
> >> Étienne
> >> ------------------------------
> >> *From:* Mitsumasa KONDO <kondo.mitsumasa@xxxxxxxxx>
> >> *Sent:* Monday, 15 April 2024 06:06
> >> *To:* ceph-users@xxxxxxx <ceph-users@xxxxxxx>
> >> *Subject:*  Performance of volume size, not a block size
> >>
> >>
> >> Hi,
> >>
> >> For AWS EBS gp3, AWS says that small volumes cannot achieve the best
> >> performance. I think this is a general tendency of distributed storage,
> >> including Ceph. Is that also true for Ceph block storage? I have read many
> >> docs from the Ceph community, but I have never seen this mentioned for Ceph.
> >>
> >>
> >> https://docs.aws.amazon.com/ebs/latest/userguide/general-purpose.html
> >>
> >> Regards,
> >> --
> >> Mitsumasa KONDO
>
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



