Re: Mixed mode ssd and hdd issue

Hi,

We need more info to be able to help you.

What CPU and network in nodes?
What model of SSD?
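If it helps, most of that can be pulled straight from the nodes and from the cluster itself; a quick sketch (osd.9 is just an example ID from your map, where the 1.819-weight OSDs look like the SSDs, and the interface/device names are assumptions to replace with yours):

# CPU and NIC link speed on each node
lscpu
ethtool eth0            # replace eth0 with your public/cluster interface

# Ceph's own view of an OSD: host, device model, NUMA, network interfaces
ceph osd metadata osd.9

# SSD model and health straight from the drive (device name is an assumption)
smartctl -i /dev/sdX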

Cheers

On 13/3/23 at 16:27, xadhoom76@xxxxxxxxx wrote:
Hi, we have a cluster with 3 nodes. Each node has 4 HDDs and 1 SSD.
We would like to have one pool only on SSDs and one pool only on HDDs, using the device class feature.
Here is the setup:
# buckets
host ceph01s3 {
         id -3           # do not change unnecessarily
         id -4 class hdd         # do not change unnecessarily
         id -21 class ssd                # do not change unnecessarily
         # weight 34.561
         alg straw2
         hash 0  # rjenkins1
         item osd.0 weight 10.914
         item osd.5 weight 10.914
         item osd.8 weight 10.914
         item osd.9 weight 1.819
}
host ceph02s3 {
         id -5           # do not change unnecessarily
         id -6 class hdd         # do not change unnecessarily
         id -22 class ssd                # do not change unnecessarily
         # weight 34.561
         alg straw2
         hash 0  # rjenkins1
         item osd.1 weight 10.914
         item osd.3 weight 10.914
         item osd.7 weight 10.914
         item osd.10 weight 1.819
}
host ceph03s3 {
         id -7           # do not change unnecessarily
         id -8 class hdd         # do not change unnecessarily
         id -23 class ssd                # do not change unnecessarily
         # weight 34.561
         alg straw2
         hash 0  # rjenkins1
         item osd.2 weight 10.914
         item osd.4 weight 10.914
         item osd.6 weight 10.914
         item osd.11 weight 1.819
}
root default {
         id -1           # do not change unnecessarily
         id -2 class hdd         # do not change unnecessarily
         id -24 class ssd                # do not change unnecessarily
         # weight 103.683
         alg straw2
         hash 0  # rjenkins1
         item ceph01s3 weight 34.561
         item ceph02s3 weight 34.561
         item ceph03s3 weight 34.561
}

# rules
rule replicated_rule {
         id 0
         type replicated
         min_size 1
         max_size 10
         step take default class hdd
         step chooseleaf firstn 0 type host
         step emit
}
rule erasure-code {
         id 1
         type erasure
         min_size 3
         max_size 4
         step take default class hdd
         step set_chooseleaf_tries 5
         step set_choose_tries 100
         step chooseleaf indep 0 type host
         step emit
}
rule erasure2_1 {
         id 2
         type erasure
         min_size 3
         max_size 3
         step take default class hdd
         step set_chooseleaf_tries 5
         step set_choose_tries 100
         step chooseleaf indep 0 type host
         step emit
}
rule erasure-pool.meta {
         id 3
         type erasure
         min_size 3
         max_size 3
         step take default class hdd
         step set_chooseleaf_tries 5
         step set_choose_tries 100
         step chooseleaf indep 0 type host
         step emit
}
rule erasure-pool.data {
         id 4
         type erasure
         min_size 3
         max_size 3
         step take default class hdd
         step set_chooseleaf_tries 5
         step set_choose_tries 100
         step chooseleaf indep 0 type host
         step emit
}
rule replicated_rule_ssd {
         id 5
         type replicated
         min_size 1
         max_size 10
         step take default class ssd
         step chooseleaf firstn 0 type host
         step emit
}

# end crush map
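For reference, a rule like replicated_rule_ssd above is normally generated with the standard CLI rather than by hand-editing the map, and the per-class shadow hierarchy can then be inspected to confirm which OSDs each class contains. A minimal sketch (rule, root and class names taken from the map above):

# create a replicated rule restricted to the ssd device class, failure domain = host
ceph osd crush rule create-replicated replicated_rule_ssd default host ssd

# show the shadow trees (default~hdd, default~ssd) built from the device classes
ceph osd crush tree --show-shadow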

pool 1 'device_health_metrics' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 1669 flags hashpspool stripe_width 0 pg_num_max 32 pg_num_min 1 application mgr_devicehealth
pool 5 'Datapool' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 2749 lfor 0/0/321 flags hashpspool,selfmanaged_snaps stripe_width 0 application rbd
pool 7 'erasure-pool.data' erasure profile k2m1 size 3 min_size 2 crush_rule 4 object_hash rjenkins pg_num 128 pgp_num 126 pgp_num_target 128 autoscale_mode on last_change 2780 lfor 0/0/1676 flags hashpspool,ec_overwrites stripe_width 8192 application cephfs
pool 8 'erasure-pool.meta' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 344 flags hashpspool stripe_width 0 pg_autoscale_bias 4 pg_num_min 16 recovery_priority 5 application cephfs
pool 9 '.rgw.root' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 592 flags hashpspool stripe_width 0 application rgw
pool 10 'brescia-ovest.rgw.log' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 595 flags hashpspool stripe_width 0 application rgw
pool 11 'brescia-ovest.rgw.control' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 597 flags hashpspool stripe_width 0 application rgw
pool 12 'brescia-ovest.rgw.meta' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode on last_change 719 lfor 0/719/717 flags hashpspool stripe_width 0 pg_autoscale_bias 4 pg_num_min 8 application rgw
pool 13 'brescia-ovest.rgw.buckets.index' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode on last_change 845 lfor 0/845/843 flags hashpspool stripe_width 0 pg_autoscale_bias 4 pg_num_min 8 application rgw
pool 14 'brescia-ovest.rgw.buckets.data' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 739 flags hashpspool stripe_width 0 application rgw
pool 15 'brescia-ovest.rgw.buckets.non-ec' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 849 flags hashpspool stripe_width 0 application rgw
pool 17 'ssd_pool' replicated size 3 min_size 2 crush_rule 5 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 2774 lfor 0/0/2653 flags hashpspool stripe_width 0 application rbd
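Before looking at the drives themselves, it is worth confirming that ssd_pool really maps only onto the SSD OSDs; a quick check could look like this (pool and rule names are from the dump above; that osd.9, osd.10 and osd.11 are the SSDs is my assumption based on their 1.819 weight):

# which CRUSH rule the pool is using
ceph osd pool get ssd_pool crush_rule

# per-OSD utilisation grouped by host; only the SSD OSDs should be gaining data
ceph osd df tree

# acting sets of the pool's PGs; each set should contain only osd.9, osd.10 and osd.11
ceph pg ls-by-pool ssd_pool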


rados bench -p ssd_pool 10 write --no-cleanup
Object prefix: benchmark_data_ceph01s3.itservicenet.net_268
   sec Cur ops   started  finished  avg MB/s  cur MB/s last lat(s)  avg lat(s)
     0       0         0         0         0         0           -           0
     1      16        32        16   63.9957        64    0.937457    0.917873
     2      16        48        32   63.9936        64    0.882778    0.898646
     3      16        64        48   63.9929        64    0.949337    0.903045
     4      16        81        65   64.9925        68    0.515819    0.897401
     5      16        97        81   64.7919        64     1.00908    0.918797
     6      16       114        98   65.3248        68     0.99787    0.922301
     7      16       130       114   65.1339        64    0.794492    0.903341
     8      16       147       131   65.4909        68    0.770237    0.892833
     9      16       173       157   69.7677       104    0.976005    0.878237
    10      16       195       179   71.5891        88    0.755363    0.869603

That is very poor!
Why?
Thanks
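To narrow down whether the ~70 MB/s comes from the SSDs, the replication traffic or the client link, it may help to re-run the benchmark with more parallelism and add a read pass; a sketch using stock rados bench options (the values are just examples, default object size of 4 MB kept):

# write test with 32 concurrent ops instead of the default 16
rados bench -p ssd_pool 10 write -t 32 --no-cleanup

# sequential read of the objects left behind by --no-cleanup
rados bench -p ssd_pool 10 seq -t 32

# remove the benchmark objects afterwards
rados -p ssd_pool cleanup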
Eneko Lacunza
Technical Director
Binovo IT Human Project

Tel. +34 943 569 206 | https://www.binovo.es
Astigarragako Bidea, 2 - 2º izda. Oficina 10-11, 20180 Oiartzun

https://www.youtube.com/user/CANALBINOVO
https://www.linkedin.com/company/37269706/
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



