Re: Performance optimization

Hello

Thanks for this first input. I have already found that at least one of those 6TB disks is a WD Blue WD60EZAZ, which according to WD uses SMR.
I will replace every disk that uses SMR, but while I am replacing hardware anyway: should I standardize on a single size, for example all 3TB disks?
And what do you think about putting the OS on one of the disks that Ceph also uses?
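
For reference, a minimal sketch of how I am checking the remaining drives; it assumes smartmontools is installed and that the data disks show up as /dev/sd? on each node, so the reported model numbers can be compared against WD's published SMR model list:

# Print model and rotation rate of every SATA disk,
# to compare against the vendor's SMR model list.
for dev in /dev/sd?; do
    echo "== $dev =="
    smartctl -i "$dev" | grep -E 'Device Model|Rotation Rate'
done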

Thanks in advance,
Simon


________________________________
From: Kai Börnert <kai.boernert@xxxxxxxxx>
Sent: Monday, 6 September 2021 10:54:24
To: ceph-users@xxxxxxx
Subject: Re: Performance optimization

Hi,

are any of those old disks SMR ones? They will absolutely destroy any
kind of performance: Ceph does not use the drives' volatile write caches
because of power-loss concerns, so an SMR drive has to do its whole
shingled-rewrite magic for every single write request.
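
If you want to see the effect on a single suspect drive directly, a rough
sketch is a short synchronous random-write test (assuming fio is installed
and /dev/sdX is a disk you can safely overwrite, since this is destructive);
an SMR drive usually collapses once its CMR cache region fills, while a
conventional drive stays fairly steady:

# WARNING: destructive, only point this at a disk with no data on it.
fio --name=smr-check --filename=/dev/sdX \
    --rw=randwrite --bs=64k --direct=1 --sync=1 \
    --iodepth=1 --time_based --runtime=120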

Greetings

On 9/6/21 10:47 AM, Simon Sutter wrote:
> Hello everyone!
>
> I have built two clusters out of old hardware that was lying around; upgrading is a possibility.
> The clusters' main use case is hot backup, which means they are written to 24/7: roughly 99% writes and 1% reads.
>
>
> The setup should be based on hard disks.
>
> At the moment, the nodes look like this:
> 8 Nodes
> Worst CPU: i7-3930K (up to i7-6850K)
>
> Worst amount of RAM: 24GB (up to 64GB)
> HDD Layout:
> 1x 1TB
> 4x 2TB
> 1x 6TB
> all SATA, some just 5400rpm
>
> I had to put the OS on the 6TB HDDs because there are no more SATA connections on the motherboard.
>
> The servers that have to be backed up mount the cluster via CephFS.
> 99% of the files to be backed up are hard disk images, so sizes range from 5GB to 1TB.
>
> All files are written to an erasure-coded pool with k=6 m=2; compression is set to passive with snappy, everything else at default settings.
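>
> For context, a minimal sketch of how such a pool can be created (the profile name, pool name and PG count below are placeholders, not necessarily what I used):
>
> # Erasure-code profile with k=6 data chunks and m=2 coding chunks.
> ceph osd erasure-code-profile set ec_6_2 k=6 m=2 crush-failure-domain=host
> # Pool using that profile, with passive snappy compression.
> ceph osd pool create ec_backup 128 128 erasure ec_6_2
> ceph osd pool set ec_backup compression_mode passive
> ceph osd pool set ec_backup compression_algorithm snappy
> # Needed to use an EC pool as a CephFS data pool.
> ceph osd pool set ec_backup allow_ec_overwrites true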
>
> I'm getting really bad performance with this setup.
> This is a benchmark run with "rados -p ec_test bench -b 524288 60 write" during normal operations:
>
> Total time run:         63.4957
> Total writes made:      459
> Write size:             524288
> Object size:            524288
> Bandwidth (MB/sec):     3.61442
> Stddev Bandwidth:       3.30073
> Max bandwidth (MB/sec): 16
> Min bandwidth (MB/sec): 0
> Average IOPS:           7
> Stddev IOPS:            6.6061
> Max IOPS:               32
> Min IOPS:               0
> Average Latency(s):     2.151
> Stddev Latency(s):      2.3661
> Max latency(s):         14.0916
> Min latency(s):         0.0420954
> Cleaning up (deleting benchmark objects)
> Removed 459 objects
> Clean up completed and total clean up time :35.6908
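>
> To see how much of this is the EC profile versus the disks themselves, I could run the same bench against a small throwaway replicated pool for comparison, something like this (deleting the pool afterwards needs mon_allow_pool_delete enabled):
>
> ceph osd pool create rep_test 32 32 replicated
> rados -p rep_test bench -b 524288 60 write
> ceph osd pool delete rep_test rep_test --yes-i-really-really-mean-it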
>
> [root@testnode01 ~]# ceph osd perf
> osd  commit_latency(ms)  apply_latency(ms)
>    6                 655                655
>    9                  13                 13
>   11                  15                 15
>    7                  17                 17
>   10                  19                 19
>    8                  12                 12
>   24                 153                153
>   25                  22                 22
>   47                  20                 20
>   46                  23                 23
>   45                  43                 43
>   44                   8                  8
>   16                  26                 26
>   15                  18                 18
>   14                  14                 14
>   13                  23                 23
>   12                  47                 47
>   18                 595                595
>    1                  20                 20
>   38                  25                 25
>   17                  17                 17
>    0                 317                317
>   37                  19                 19
>   19                  14                 14
>    2                  16                 16
>   39                   9                  9
>   20                  16                 16
>    3                  18                 18
>   40                  10                 10
>   21                  23                 23
>    4                  17                 17
>   41                  29                 29
>    5                  18                 18
>   42                  16                 16
>   22                  16                 16
>   23                  13                 13
>   26                  20                 20
>   27                  10                 10
>   28                  28                 28
>   29                  13                 13
>   30                  34                 34
>   31                  10                 10
>   32                  31                 31
>   33                  44                 44
>   34                  21                 21
>   35                  22                 22
>   36                 295                295
>   43                   9                  9
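>
> A few OSDs (6, 18, 0, 36 and 24) show much higher latency than the rest. To map those OSD IDs back to a host and physical device, something along these lines should work (assuming jq is available; the plain "ceph osd metadata <id>" output shows the same fields without it):
>
> for id in 6 18 0 36 24; do
>     # Hostname and backing device of each slow OSD.
>     ceph osd metadata $id | jq -r '"osd.\(.id): \(.hostname) \(.devices)"'
> done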
>
>
>
> What do you think is the most obvious problem?
>
> - The one 6TB disk, per node?
> - The OS on the 6TB disk?
>
> What would you suggest?
>
> What I hope to replace with this setup:
> 6 servers, each with 4x 3TB disks on LVM, no redundancy (two such setups, which is why I have set up two clusters).
>
> Thanks in advance
>
> Simon
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



