Re: Question if WAL/block.db partition will benefit us

I'm not sure how much backing the OSDs with SSD db and WAL devices would help performance. Even if you go this route with one SSD per 10 HDDs, you may want to set the failure domain to host in your CRUSH rules, since a failed SSD takes out every OSD behind it. In practice, a single SSD shared between 10 HDDs will not boost performance much. We use an NVMe db+WAL device per OSD and separate NVMe OSDs specifically for the metadata pools. There will be a lot of I/O on the bucket index pool and on the RGW pools that store user and bucket metadata, so you may want to put those on separate fast storage. Also, if you do not have too many objects (i.e. huge objects, but not tens or hundreds of millions of them), the bucket index will be under less pressure and plain SSDs may be fine for the metadata pools in that case.

Sent from a Galaxy device
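For example, something along these lines would pin those pools to SSD OSDs with a host failure domain (assuming the ssd device class is populated and that you use the default RGW pool names; adjust to your setup):

    # replicated CRUSH rule: one copy per host, ssd device class only
    ceph osd crush rule create-replicated rgw-meta-ssd default host ssd

    # move the metadata-heavy RGW pools onto that rule
    ceph osd pool set default.rgw.buckets.index crush_rule rgw-meta-ssd
    ceph osd pool set default.rgw.meta crush_rule rgw-meta-ssd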
-------- Original message --------
From: Boris Behrens <bb@xxxxxxxxx>
Date: 08.11.21 13:08 (GMT+02:00)
To: ceph-users@xxxxxxx
Subject: Question if WAL/block.db partition will benefit us

Hi,

we run a larger Octopus S3 cluster with only rotating disks: 1.3 PiB with 177 OSDs, some with an SSD block.db and some without.

We have a ton of spare 2TB SSDs and we wondered if we can bring them to good use. For every 10 spinning disks we could add one 2TB SSD and create two partitions per OSD (130GB for block.db and 20GB for block.wal). This would leave some empty space on the SSD for wear leveling.

The question now is: would we benefit from this? Most of the data that is written to the cluster is very large (50GB and above). This would take a lot of work to restructure the cluster, and also two other clusters.

And does it make a difference to have only a block.db partition, or both a block.db and a block.wal partition?

Cheers
Boris
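For reference, a layout like the one described above is usually provisioned per OSD with ceph-volume, roughly like this (device names are placeholders for one HDD and the two SSD partitions carved out for it):

    # one HDD-backed OSD with its block.db and block.wal on SSD partitions
    ceph-volume lvm create --bluestore \
        --data /dev/sdb \
        --block.db /dev/sdq1 \
        --block.wal /dev/sdq2

Note that if only --block.db is given, BlueStore keeps the WAL on the db device anyway, so a separate block.wal partition on the same SSD usually buys little.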
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



