Storage-class split objects

Hello all!

We have a cluster with HDDs for data and NVMe drives for journals and
indexes. We recently added SSD-only hosts and created an SSD-backed storage
class. To do this, we created a default.rgw.hot.data pool, associated it with
a CRUSH rule that targets the SSD device class, and created a HOT storage
class in the placement target. The problem is that when we upload an object
using the HOT storage class, it ends up in both the STANDARD storage class
pool and the HOT pool.
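
For context, the setup was done roughly like this (a sketch only: the CRUSH
rule name, PG counts and bucket name are placeholders, and we use the default
zone/zonegroup with the default-placement target):

# ceph osd crush rule create-replicated hot_ssd default host ssd
# ceph osd pool create default.rgw.hot.data 64 64 replicated hot_ssd
# ceph osd pool application enable default.rgw.hot.data rgw
# radosgw-admin zonegroup placement add --rgw-zonegroup default \
      --placement-id default-placement --storage-class HOT
# radosgw-admin zone placement add --rgw-zone default \
      --placement-id default-placement --storage-class HOT \
      --data-pool default.rgw.hot.data

After that (and restarting the gateways so they pick up the new placement),
an upload along these lines reproduces the issue (the bucket name is just an
example):

# s3cmd put LICENSE s3://mybucket/LICENSE --storage-class=HOT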

STANDARD pool:
# rados -p default.rgw.buckets.data ls
d86dade5-d401-427b-870a-0670ec3ecb65.385198.4_LICENSE

# rados -p default.rgw.buckets.data stat d86dade5-d401-427b-870a-0670ec3ecb65.385198.4_LICENSE
default.rgw.buckets.data/d86dade5-d401-427b-870a-0670ec3ecb65.385198.4_LICENSE mtime 2021-02-09 14:54:14.000000, size 0


HOT pool:
# rados -p default.rgw.hot.data ls
d86dade5-d401-427b-870a-0670ec3ecb65.385198.4__shadow_.rmpla1NTgArcUQdSLpW4qEgTDlbhn9f_0


# rados -p default.rgw.hot.data stat d86dade5-d401-427b-870a-0670ec3ecb65.385198.4__shadow_.rmpla1NTgArcUQdSLpW4qEgTDlbhn9f_0
default.rgw.hot.data/d86dade5-d401-427b-870a-0670ec3ecb65.385198.4__shadow_.rmpla1NTgArcUQdSLpW4qEgTDlbhn9f_0 mtime 2021-02-09 14:54:14.000000, size 15220

The object data itself is in the HOT pool, but RGW also creates this other,
zero-size object, similar to an index, in the STANDARD pool. Monitoring with
iostat, we noticed that this behavior generates unnecessary I/O on disks that
should not need to be touched.
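
In case it helps, the same split can be seen in the object's manifest via
radosgw-admin (again, the bucket name is a placeholder):

# radosgw-admin object stat --bucket=mybucket --object=LICENSE

The JSON output should show the head placed in default.rgw.buckets.data and
the tail (_shadow_) parts placed in default.rgw.hot.data, matching the rados
output above.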

Why does this happen? Is there any way around it?

Thanks, Marcelo