Re: What is the problem with many PGs per OSD

> But not, I suspect, nearly as many tentacles.

No, that's the really annoying part. It just works.
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14

________________________________________
From: Anthony D'Atri <anthony.datri@xxxxxxxxx>
Sent: Thursday, October 10, 2024 2:13 PM
To: Frank Schilder
Cc: Peter Grandi; list Linux fs Ceph
Subject: Re:  What is the problem with many PGs per OSD



I'm afraid nobody will build a 100PB cluster with 1TB drives; that would be on the order of 100,000 OSDs. That's just absurd.

Check the archives for the panoply of absurdity that I’ve encountered ;)

So the sharp increase in per-device capacity has to be taken into account, especially since the same development is happening with SSDs. There is no way around 100TB drives in the near future, and a system like Ceph is either able to handle that or it will die.

Agreed.  I expect 122TB QLC in 1H2025.  With NVMe and PCIe Gen 5 one might experiment with slicing each drive into two OSDs.  But for archival and object workloads latency usually isn’t so big a deal, so we may increasingly see strategies adapted to the workload.
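
A minimal sketch of that kind of split, assuming the ceph-volume lvm batch front end is used (the device path below is only an example, adjust for the actual hardware):

    # carve one NVMe device into two OSDs
    ceph-volume lvm batch --osds-per-device 2 /dev/nvme0n1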

10× higher aggregated sustained IOPS compared with a similarly sized Ceph cluster

But not, I suspect, nearly as many tentacles.



_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



