Re: Question if WAL/block.db partition will benefit us


 



Hi Stefan,

for a 6:1 or 3:1 ratio we do not have enough slots (I think).
There is some read traffic, but I don't know whether it counts as a lot:
    client:   27 MiB/s rd, 289 MiB/s wr, 1.07k op/s rd, 261 op/s wr

Putting them to use for some dedicated RGW pools also came to my mind.
But would this make a lot of difference? For reference, the current pools:
POOL                             ID   PGS   STORED  OBJECTS     USED  %USED  MAX AVAIL
.rgw.root                         1    64  150 KiB      142   26 MiB      0     42 TiB
eu-central-1.rgw.control          2    64      0 B        8      0 B      0     42 TiB
eu-central-1.rgw.data.root        3    64  1.2 MiB    3.96k  743 MiB      0     42 TiB
eu-central-1.rgw.gc               4    64  329 MiB      128  998 MiB      0     42 TiB
eu-central-1.rgw.log              5    64  939 KiB      370  3.1 MiB      0     42 TiB
eu-central-1.rgw.users.uid        6    64   12 MiB    7.10k  1.2 GiB      0     42 TiB
eu-central-1.rgw.users.keys       7    64  297 KiB    7.40k  1.4 GiB      0     42 TiB
eu-central-1.rgw.meta             8    64  392 KiB       1k  191 MiB      0     42 TiB
eu-central-1.rgw.users.email      9    64     40 B        1  192 KiB      0     42 TiB
eu-central-1.rgw.buckets.index   10    64   22 GiB    2.55k   67 GiB   0.05     42 TiB
eu-central-1.rgw.buckets.data    11  2048  318 TiB  132.31M  961 TiB  88.38     42 TiB
eu-central-1.rgw.buckets.non-ec  12    64  467 MiB   13.28k  2.4 GiB      0     42 TiB
eu-central-1.rgw.usage           13    64  767 MiB       32  2.2 GiB      0     42 TiB

I would have put the rgw.buckets.index and maybe the rgw.meta pools on it,
but it looks like a waste of space: a 2TB OSD in every chassis that only
handles about 23GB of data.
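
If we did go that route, here is a minimal sketch of how the placement could
look, assuming the SSD OSDs come up with device class "ssd" (the rule name is
just an example):

    # replicated CRUSH rule that only selects OSDs with device class "ssd"
    ceph osd crush rule create-replicated rgw-meta-ssd default host ssd
    # move the small metadata pools onto that rule
    ceph osd pool set eu-central-1.rgw.buckets.index crush_rule rgw-meta-ssd
    ceph osd pool set eu-central-1.rgw.meta crush_rule rgw-meta-ssd

Changing the crush_rule would then rebalance those pools onto the SSD OSDs.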

On Mon, 8 Nov 2021 at 12:30, Stefan Kooman <stefan@xxxxxx> wrote:

> On 11/8/21 12:07, Boris Behrens wrote:
> > Hi,
> > we run a large Octopus S3 cluster with only rotating disks.
> > 1.3 PiB with 177 OSDs, some with an SSD block.db and some without.
> >
> > We have a ton of spare 2TB disks and we just wondered if we can bring
> > them to good use.
> > For every 10 spinning disks we could add one 2TB SSD and we would create
> > two partitions per OSD (130GB for block.db and 20GB for block.wal). This
> > would leave some empty space on the SSD for wear leveling.
>
> A 10:1 ratio looks rather high. Discussions on this list indicate this
> ratio is normally in the 3:1 to 6:1 range (for high-end NVMe / SSD).
>
> >
> > The question now is: would we benefit from this? Most of the data that is
> > written to the cluster is very large (50GB and above). This would take a
> > lot of work to restructure the cluster, and also two other clusters.
> >
> > And does it make a difference to have only a block.db partition, or a
> > block.db and a block.wal partition?
>
> Does this cluster also get a lot of reads? I wonder if using the SSD
> drives for S3 metadata pools would make more sense. It would also be a
> lot less work.
>
> Gr. Stefan
>
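
For the partition layout from the quoted question (a shared SSD with separate
block.db and block.wal partitions per HDD OSD), a minimal ceph-volume sketch;
the device paths below are placeholders:

    # HDD as the data device, two partitions on the shared SSD for DB and WAL
    ceph-volume lvm create --bluestore --data /dev/sdX \
        --block.db /dev/sdY1 --block.wal /dev/sdY2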


-- 
This time, as an exception, the "UTF-8 problems" self-help group will meet in
the large hall.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



