Re: ceph 12.2.5 - atop DB/WAL SSD usage 0%

Could we infer from this that, if the usage model is large objects rather than small I/Os, the benefit of offloading WAL/DB is questionable? Given that failure of the SSD (assuming it is shared among several HDDs) could take down a number of OSDs, would the best practice in that case be to collocate?

-----Original Message-----
From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Serkan Çoban
Sent: Friday, April 27, 2018 10:05 AM
To: Steven Vacaroaia <stef97@xxxxxxxxx>
Cc: ceph-users <ceph-users@xxxxxxxxxxxxxx>
Subject: Re:  ceph 12.2.5 - atop DB/WAL SSD usage 0%

rados bench uses a 4MB block size for I/O. Try with an I/O size of 4KB and you will see the SSD being used for write operations.
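For example (a sketch only; the pool name, runtime and thread count here are illustrative): rados bench accepts a -b/--block-size option, so a small-block write test could look like

    rados bench -p rbd 60 write -b 4096 -t 32 --no-cleanup

With 4KB writes, atop or iostat on the SSD should then show write traffic on the WAL partitions while the HDDs stay comparatively idle.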
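Regarding the question below about confirming the WAL/DB placement, a couple of read-only checks can help (osd.0 is just a placeholder id, and the second command has to run on the host carrying that OSD):

    ceph osd metadata 0 | grep -E 'bluefs|bdev'
    ceph daemon osd.0 perf dump | grep -A 25 '"bluefs"'

The metadata output lists the bluefs/bluestore device paths the OSD was built with, and the bluefs section of the perf dump carries counters such as db_used_bytes and bytes_written_wal that grow when the DB/WAL device is actually being written.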

On Fri, Apr 27, 2018 at 4:54 PM, Steven Vacaroaia <stef97@xxxxxxxxx> wrote:
> Hi
>
> During rados bench tests, I noticed that HDD usage goes to 100% but 
> SSD stays at ( or very close to 0)
>
> Since I created the OSD with the DB/WAL on the SSD, shouldn't I see some
> "activity" on the SSD?
>
> How can I be sure Ceph is actually using the SSD for the WAL/DB?
>
>
> Note
> I only have 2 HDDs and one SSD per server for now
>
>
> Commands used:
>
> rados bench -p rbd 50 write -t 32 --no-cleanup && rados bench -p rbd -t 32 50 rand
>
>
> /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/sdc \
>     --block.wal /dev/disk/by-partuuid/32ffde6f-7249-40b9-9bc5-2b70f0c3f7ad \
>     --block.db /dev/disk/by-partuuid/2d9ab913-7553-46fc-8f96-5ffee028098a
>
> (the partitions are on the SSD; see below)
>
>  sgdisk -p /dev/sda
> Disk /dev/sda: 780140544 sectors, 372.0 GiB
> Logical sector size: 512 bytes
> Disk identifier (GUID): 5FE0EA74-7E65-45B8-A356-62240333491E
> Partition table holds up to 128 entries
> First usable sector is 34, last usable sector is 780140510
> Partitions will be aligned on 2048-sector boundaries
> Total free space is 520093629 sectors (248.0 GiB)
>
> Number  Start (sector)    End (sector)  Size       Code  Name
>    1       251660288       253757439   1024.0 MiB  FFFF  ceph WAL
>    2            2048        62916607   30.0 GiB    FFFF  ceph DB
>    3       253757440       255854591   1024.0 MiB  FFFF  ceph WAL
>    4        62916608       125831167   30.0 GiB    FFFF  ceph DB
>    5       255854592       257951743   1024.0 MiB  FFFF  ceph WAL
>    6       125831168       188745727   30.0 GiB    FFFF  ceph DB
>    7       257951744       260048895   1024.0 MiB  FFFF  ceph WAL
>    8       188745728       251660287   30.0 GiB    FFFF  ceph DB
> [root@osd04 ~]# ls -al /dev/disk/by-partuuid/
> total 0
> drwxr-xr-x 2 root root 200 Apr 26 15:39 .
> drwxr-xr-x 8 root root 160 Apr 27 08:45 ..
> lrwxrwxrwx 1 root root  10 Apr 27 09:38 0baf986d-f786-4c1a-8962-834743b33e3a -> ../../sda8
> lrwxrwxrwx 1 root root  10 Apr 27 09:38 2d9ab913-7553-46fc-8f96-5ffee028098a -> ../../sda2
> lrwxrwxrwx 1 root root  10 Apr 27 09:38 32ffde6f-7249-40b9-9bc5-2b70f0c3f7ad -> ../../sda3
> lrwxrwxrwx 1 root root  10 Apr 27 09:38 3f4e2d47-d553-4809-9d4e-06ba37b4c384 -> ../../sda6
> lrwxrwxrwx 1 root root  10 Apr 27 09:38 3fc98512-a92e-4e3b-9de7-556b8e206786 -> ../../sda1
> lrwxrwxrwx 1 root root  10 Apr 27 09:38 64b8ae66-cf37-4676-bf9f-9c4894788a7f -> ../../sda7
> lrwxrwxrwx 1 root root  10 Apr 27 09:38 96254af9-7fe4-4ce0-886e-2e25356eff81 -> ../../sda5
> lrwxrwxrwx 1 root root  10 Apr 27 09:38 ae616b82-35ab-4f7f-9e6f-3c65326d76a8 -> ../../sda4
>
>
>
>
>
>
> LVM |  dm-0 | busy  90% | read  2516 | write   0 | KiB/r 512 | KiB/w  0 | MBr/s 125.8 | MBw/s 0.0 | avq 10.65 | avio 3.57 ms |
> LVM |  dm-1 | busy  80% | read  2406 | write   0 | KiB/r 512 | KiB/w  0 | MBr/s 120.3 | MBw/s 0.0 | avq 12.59 | avio 3.30 ms |
> DSK |   sdc | busy  90% | read  5044 | write   0 | KiB/r 256 | KiB/w  0 | MBr/s 126.1 | MBw/s 0.0 | avq 19.53 | avio 1.78 ms |
> DSK |   sdd | busy  80% | read  4805 | write   0 | KiB/r 256 | KiB/w  0 | MBr/s 120.1 | MBw/s 0.0 | avq 23.97 | avio 1.65 ms |
> DSK |   sda | busy   0% | read     0 | write   7 | KiB/r   0 | KiB/w 10 | MBr/s   0.0 | MBw/s 0.0 | avq  0.00 | avio 0.00 ms |
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



