Re: Mixing SSD and HDD disks for data in ceph cluster deployment

Hi,

it appears that the SSDs were used as db devices (/dev/sd[efgh]). According to [1] (I don't use ansible myself), in the simple case:

[...] most of the decisions on how devices are configured to provision an OSD are made by the Ceph tooling (ceph-volume lvm batch in this case).

And I assume that this is exactly what happened: ceph-volume lvm batch saw the non-rotational drives and deployed the SSDs as RocksDB (block.db) devices for the HDD OSDs instead of as standalone OSDs. The numbers match, too: 3 hosts x 16 HDDs = 48 OSDs, and each SSD in your lsblk output carries four db LVs. I'm not sure off the top of my head how to prevent ansible from doing that, but there are probably several threads out there that explain it.
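For what it's worth, you can confirm that on one of the OSD nodes with "ceph-volume lvm list" or "ceph osd metadata <id> | grep bluefs_db" (put a real OSD id in there), which should show the db LVs living on /dev/sd[efgh]. And [1] also describes the lvm_volumes scenario, where you declare every OSD explicitly instead of letting ceph-volume lvm batch sort the devices by their rotational flag. A rough, untested sketch of group_vars/osds.yml in that direction (device names taken from your lsblk output; depending on the ceph-ansible version the data key may need to be a pre-created logical volume rather than a raw device):

osd_objectstore: bluestore
lvm_volumes:
  - data: /dev/sda      # HDD, standalone bluestore OSD
  # ... one entry per HDD (/dev/sdb-/dev/sdd, /dev/sdi-/dev/sdt) ...
  - data: /dev/sde      # SSD, standalone OSD instead of a shared db device
  - data: /dev/sdf
  - data: /dev/sdg
  - data: /dev/sdh

With no db key on any entry, nothing should get carved up into RocksDB LVs. If I remember correctly, ceph-volume lvm batch also has a --no-auto flag that disables the rotational/non-rotational split, but I don't know whether ceph-ansible exposes it. Either way you would have to wipe the existing db LVs on the SSDs before redeploying.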

Regards,
Eugen

[1] https://docs.ceph.com/projects/ceph-ansible/en/latest/osds/scenarios.html



Quoting Michel Niyoyita <micou12@xxxxxxxxx>:

Hello team,

I have an issue with a Ceph deployment using ceph-ansible. We have two
categories of disks, HDD and SSD, but after deploying Ceph only the HDDs
show up, no SSDs. The cluster is running on Ubuntu 20.04 and unfortunately
no errors appear. Did I miss something in the configuration?
hdd: 7.2936 T
ssd: 7 T

Kindly advise and help.

Below is how the cluster behaves. We actually have 20 disks per host
(16 HDD and 4 SSD), from /dev/sda to /dev/sdt, but after deployment we have
48 OSDs instead of 60; the missing ones are the SSDs, according to the
ceph osd crush class ls command.

root@ceph-mon1:~# ceph osd crush class ls
[
    "hdd"
]

root@ceph-mon1:~# ceph -s
  cluster:
    id:     02786875-6dca-46e6-8590-dba92c27e6f8
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph-mon1,ceph-mon2,ceph-mon3 (age 36m)
    mgr: ceph-mon1(active, since 5m), standbys: ceph-mon3, ceph-mon2
    osd: 48 osds: 48 up (since 30m), 48 in (since 31m)
    rgw: 3 daemons active (3 hosts, 1 zones)

  data:
    pools:   5 pools, 105 pgs
    objects: 195 objects, 7.7 KiB
    usage:   21 TiB used, 349 TiB / 370 TiB avail
    pgs:     105 active+clean



And below is the output of the lsblk command, which shows that LVM volumes
were created on the SSDs, but their capacity does not show up. /dev/sde to
/dev/sdh are the SSDs.

sda     8:0    0   7.3T  0 disk
└─ceph--d856437a--0af5--48e1--a99d--b9f8f5b74165-osd--block--2856935f--a293--47b2--a917--7671e91e207d  253:0    0   7.3T  0 lvm
sdb     8:16   0   7.3T  0 disk
└─ceph--60119ee2--6537--4129--88b2--82bc0994766f-osd--block--c87a88ca--2c34--4a71--bffa--55789475e950  253:2    0   7.3T  0 lvm
sdc     8:32   0   7.3T  0 disk
└─ceph--1667e64f--8327--4a13--886e--23bbd8d4fb73-osd--block--1b9da829--4210--4956--b3c3--32042c719879  253:5    0   7.3T  0 lvm
sdd     8:48   0   7.3T  0 disk
└─ceph--4b9a897f--253e--4078--85fc--0133baad23c6-osd--block--fc9d3bb0--0d63--4201--a49a--6204a928bbd4  253:7    0   7.3T  0 lvm
sde     8:64   0     7T  0 disk
├─ceph--54bc00ad--d575--4269--b637--a15e4aed88d2-osd--db--21ccd262--0488--49b4--9994--9cf8a61b7bbd     253:26   0 447.1G  0 lvm
├─ceph--54bc00ad--d575--4269--b637--a15e4aed88d2-osd--db--404cefe2--c24c--4955--89c9--28e8d7c44db9     253:28   0 447.1G  0 lvm
├─ceph--54bc00ad--d575--4269--b637--a15e4aed88d2-osd--db--6bea87e8--d883--46a8--b9de--e4b7b311ce20     253:30   0 447.1G  0 lvm
└─ceph--54bc00ad--d575--4269--b637--a15e4aed88d2-osd--db--6767b370--a63f--402e--864c--0085bc589baa     253:32   0 447.1G  0 lvm
sdf     8:80   0     7T  0 disk
├─ceph--7461bded--a2c2--4874--a08f--fca940f3a511-osd--db--b62845ed--ce46--4afd--8c48--79afd89409ca     253:18   0 447.1G  0 lvm
├─ceph--7461bded--a2c2--4874--a08f--fca940f3a511-osd--db--a40a1938--ea8b--4c9b--bf10--01ca097719ca     253:20   0 447.1G  0 lvm
├─ceph--7461bded--a2c2--4874--a08f--fca940f3a511-osd--db--abfa052f--07e4--4504--9cf8--eee3b527489c     253:22   0 447.1G  0 lvm
└─ceph--7461bded--a2c2--4874--a08f--fca940f3a511-osd--db--5eef2536--829f--4893--9b94--51ae55bfd742     253:24   0 447.1G  0 lvm
sdg     8:96   0     7T  0 disk
├─ceph--fb912c50--2c94--403b--b690--73ef6f4eda69-osd--db--bd3fff8a--40d2--4ed9--b15e--52812da3a15d     253:10   0 447.1G  0 lvm
├─ceph--fb912c50--2c94--403b--b690--73ef6f4eda69-osd--db--1bd422bf--5d7e--468f--a7ca--3c70d38dcd5b     253:12   0 447.1G  0 lvm
├─ceph--fb912c50--2c94--403b--b690--73ef6f4eda69-osd--db--cecccf90--6b03--47a1--9ce1--8b6d403816e1     253:14   0 447.1G  0 lvm
└─ceph--fb912c50--2c94--403b--b690--73ef6f4eda69-osd--db--b8181faf--595b--4118--a35a--e51a527d6e57     253:16   0 447.1G  0 lvm
sdh     8:112  0     7T  0 disk
├─ceph--bc347c07--8636--462e--82fd--d81ed149ed2c-osd--db--9d14b346--ff65--4045--95db--4c5cb43e7829     253:1    0 447.1G  0 lvm
├─ceph--bc347c07--8636--462e--82fd--d81ed149ed2c-osd--db--ec81234e--f12f--414d--8210--bf9f6ccd5d7e     253:3    0 447.1G  0 lvm
├─ceph--bc347c07--8636--462e--82fd--d81ed149ed2c-osd--db--c8135b3f--42c5--44f9--885c--6413a16b86b4     253:6    0 447.1G  0 lvm
└─ceph--bc347c07--8636--462e--82fd--d81ed149ed2c-osd--db--14ba1fee--a896--4cdf--8ded--ba2e08a4c0d5     253:8    0 447.1G  0 lvm
sdi     8:128  0   7.3T  0 disk
└─ceph--5737bd78--e275--44f0--af42--20c70f6ecce0-osd--block--3aa9e13f--fee6--4e6f--8696--4dfa855def35  253:9    0   7.3T  0 lvm
sdj     8:144  0   7.3T  0 disk
└─ceph--5420bf20--6fa4--4d70--8020--3df67e6a664a-osd--block--74157cb4--74f9--43dc--af00--9f180cda1dc9  253:11   0   7.3T  0 lvm
sdk     8:160  0   7.3T  0 disk
└─ceph--726e1a55--a71b--4fe7--baa2--c48e80a7cabd-osd--block--461ca84d--1bdc--4df8--8235--7b8d7cfd9c86  253:13   0   7.3T  0 lvm
sdl     8:176  0   7.3T  0 disk
└─ceph--f4dfe3a2--6494--4ccf--9062--991524995bfb-osd--block--f8bd6329--9c62--4b75--8bb9--c34271b4a09b  253:15   0   7.3T  0 lvm
sdm     8:192  0   7.3T  0 disk
└─ceph--53e9c97b--714b--415c--90bd--7f5faf7d7389-osd--block--8917684c--129a--4754--a0d7--3e347e07de21  253:17   0   7.3T  0 lvm
sdn     8:208  0   7.3T  0 disk
└─ceph--9f14efa0--208a--4b6c--9063--04b73151eb02-osd--block--a9d289d8--ed05--4e41--8d52--598d3cbaa61d  253:19   0   7.3T  0 lvm
sdo     8:224  0   7.3T  0 disk
└─ceph--c396bc2b--bf57--4096--816a--58f383dda801-osd--block--cba8794d--8ab4--4765--9df1--9510927d02eb  253:21   0   7.3T  0 lvm
sdp     8:240  0   7.3T  0 disk
└─ceph--97b6b983--a0e7--4521--b2b4--c8ab8d251ef7-osd--block--616e2606--aa7b--4d36--9996--3ac188b1fb39  253:23   0   7.3T  0 lvm
sdq    65:0    0   7.3T  0 disk
└─ceph--e36dad3b--b6d2--4949--9e18--920f65807a6a-osd--block--58d102ba--bad8--47eb--8c3f--4ece53ab5d6a  253:25   0   7.3T  0 lvm
sdr    65:16   0   7.3T  0 disk
└─ceph--9a24f150--d275--456c--8d85--f5db51800557-osd--block--5e751e55--00ed--497c--b6e7--b802108bbb1e  253:27   0   7.3T  0 lvm
sds    65:32   0   7.3T  0 disk
└─ceph--2daaf0ec--1483--4ffa--b831--846bc2ed3fe5-osd--block--e2a23118--41f4--4285--8b1c--de25419ff56e  253:29   0   7.3T  0 lvm
sdt    65:48   0   7.3T  0 disk
└─ceph--660fb368--1dad--4275--8868--522f4e60354a-osd--block--e8731791--4560--4b22--aa4f--a650893b100d  253:31   0   7.3T  0 lvm

Best Regards

Michel


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



