Re: Luminous can't seem to provision more than 32 OSDs per server


 



 
What about not using ceph-deploy?




-----Original Message-----
From: Sean Sullivan [mailto:lookcrabs@xxxxxxxxx] 
Sent: Thursday, 19 October 2017 2:28
To: ceph-users@xxxxxxxxxxxxxx
Subject: Luminous can't seem to provision more than 32 OSDs per server

I am trying to install Ceph Luminous (ceph version 12.2.1) on 4 Ubuntu 16.04 
servers, each with 74 disks, 60 of which are HGST 7200 RPM SAS drives::


HGST HUS724040AL sdbv  sas
root@kg15-2:~# lsblk --output MODEL,KNAME,TRAN | grep HGST | wc -l
60
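
To turn that into a usable device list (rather than just a count), I am doing 
roughly the following; treat the field order and matching as a sketch based on 
what my lsblk prints::

# Sketch: collect the kernel names of the HGST SAS drives into a shell variable.
# KNAME is requested first so awk's $1 is the device name even though MODEL has spaces.
disks=$(lsblk -dn -o KNAME,TRAN,MODEL | awk '$2 == "sas" && /HGST/ {print $1}')
echo "$disks" | wc -l    # should come back as 60 on these hosts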

I am trying to deploy them all with commands like the following::
ceph-deploy osd zap kg15-2:(sas_disk)
ceph-deploy osd create --dmcrypt --bluestore --block-db (ssd_partition) kg15-2:(sas_disk)

This didn't seem to work at all, so I am now trying to troubleshoot by just 
provisioning the SAS disks without the separate DB partition::
ceph-deploy osd create --dmcrypt --bluestore kg15-2:(sas_disk)
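
Strung together, the loop driving this looks roughly like the sketch below; 
kg15-2 stands in for each of the four hosts and $disks is the list from the 
sketch above, copied to wherever ceph-deploy runs::

# Sketch only: zap and create a bluestore + dmcrypt OSD for each SAS disk.
# $disks is the device list gathered on the OSD host; kg15-2 is one example host.
for disk in $disks; do
    ceph-deploy osd zap kg15-2:$disk
    ceph-deploy osd create --dmcrypt --bluestore kg15-2:$disk
done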

On each of the 4 hosts I can only seem to get 32 OSDs up, and after that the 
rest fail::
root@kg15-1:~# ps faux | grep '[c]eph-osd' | wc -l
32
root@kg15-2:~# ps faux | grep '[c]eph-osd' | wc -l
32
root@kg15-3:~# ps faux | grep '[c]eph-osd' | wc -l
32
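
In case ps is misleading, the same count can be cross-checked against the 
systemd units; a sketch (output omitted here)::

# Sketch: count the running ceph-osd systemd units on one host
systemctl list-units --state=running --no-legend 'ceph-osd@*' | wc -l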

The ceph-deploy tool doesn't log or report any failure, but the host itself 
shows the following in the OSD log::


2017-10-17 23:05:43.121016 7f8ca75c9e00  0 set uid:gid to 64045:64045 
(ceph:ceph)
2017-10-17 23:05:43.121040 7f8ca75c9e00  0 ceph version 12.2.1 
(3e7492b9ada8bdc9a5cd0feafd42fbca27f9c38e) luminous (stable), process 
(unknown), pid 69926
2017-10-17 23:05:43.123939 7f8ca75c9e00  1 
bluestore(/var/lib/ceph/tmp/mnt.8oIc5b) mkfs path 
/var/lib/ceph/tmp/mnt.8oIc5b
2017-10-17 23:05:43.124037 7f8ca75c9e00  1 bdev create path 
/var/lib/ceph/tmp/mnt.8oIc5b/block type kernel
2017-10-17 23:05:43.124045 7f8ca75c9e00  1 bdev(0x564b7a05e900 
/var/lib/ceph/tmp/mnt.8oIc5b/block) open path 
/var/lib/ceph/tmp/mnt.8oIc5b/block
2017-10-17 23:05:43.124231 7f8ca75c9e00  1 bdev(0x564b7a05e900 
/var/lib/ceph/tmp/mnt.8oIc5b/block) open size 4000668520448 
(0x3a37a6d1000, 3725 GB) block_size 4096 (4096 B) rotational
2017-10-17 23:05:43.124296 7f8ca75c9e00  1 
bluestore(/var/lib/ceph/tmp/mnt.8oIc5b) _set_cache_sizes max 0.5 < ratio 
0.99
2017-10-17 23:05:43.124313 7f8ca75c9e00  1 
bluestore(/var/lib/ceph/tmp/mnt.8oIc5b) _set_cache_sizes cache_size 
1073741824 meta 0.5 kv 0.5 data 0
2017-10-17 23:05:43.124349 7f8ca75c9e00 -1 
bluestore(/var/lib/ceph/tmp/mnt.8oIc5b) _open_db 
/var/lib/ceph/tmp/mnt.8oIc5b/block.db link target doesn't exist
2017-10-17 23:05:43.124368 7f8ca75c9e00  1 bdev(0x564b7a05e900 
/var/lib/ceph/tmp/mnt.8oIc5b/block) close
2017-10-17 23:05:43.402165 7f8ca75c9e00 -1 
bluestore(/var/lib/ceph/tmp/mnt.8oIc5b) mkfs failed, (2) No such file or 
directory
2017-10-17 23:05:43.402185 7f8ca75c9e00 -1 OSD::mkfs: ObjectStore::mkfs 
failed with error (2) No such file or directory
2017-10-17 23:05:43.402258 7f8ca75c9e00 -1  ** ERROR: error creating 
empty object store in /var/lib/ceph/tmp/mnt.8oIc5b: (2) No such file or 
directory
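
Since the mkfs failure is about the block.db link target, my next step is to 
check the block/block.db symlink targets and the device-mapper nodes on the 
affected host by hand; roughly the following, assuming the default 
/var/lib/ceph paths::

# Sketch: list the block and block.db symlinks for every provisioned OSD and
# flag dangling targets; default ceph-disk paths are assumed here.
for link in /var/lib/ceph/osd/ceph-*/block /var/lib/ceph/osd/ceph-*/block.db; do
    [ -L "$link" ] || continue                 # skip anything that isn't a symlink
    target=$(readlink "$link")
    [ -e "$target" ] && state=ok || state=MISSING
    echo "$link -> $target ($state)"
done
ls /dev/mapper | wc -l    # rough count of device-mapper (incl. dm-crypt) nodes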


I am not sure where to start troubleshooting, so I have a few questions.

1.) Does anyone have any idea why it stops at 32?
2.) Is there a good guide / outline on how to get the benefit of storing the 
dmcrypt keys in the monitor, while still having Ceph more or less manage the 
drives, but provisioning them without ceph-deploy? I looked at the manual 
deployment docs (long and short form) and they don't mention dmcrypt or 
bluestore at all. I know I can use crypttab and cryptsetup to do this myself 
and then give ceph-disk the path to the mapped device, but I would prefer to 
keep as much management in Ceph as possible. (Mailing list thread: 
https://www.mail-archive.com/ceph-users@xxxxxxxxxxxxxx/msg38575.html) What I 
have in mind is roughly the ceph-disk sketch after this list.

3.) Ideally I would like to provision the drives with the DB on the SSD. (Or 
would it be better to make a cache tier? I read in a reddit thread that cache 
tiering in Ceph isn't being actively developed any more; is it still worth it?)
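
For 2.) and 3.), what I am picturing is something along these lines with 
ceph-disk run directly on the OSD host; a sketch only, with placeholder device 
paths, and I have not verified every flag against 12.2.1::

# Sketch: prepare one bluestore OSD with dmcrypt and its DB on an SSD partition.
# /dev/sdX (data disk) and /dev/nvme0n1p1 (DB partition) are placeholders.
ceph-disk prepare --bluestore --dmcrypt --block.db /dev/nvme0n1p1 /dev/sdX
ceph-disk activate /dev/sdX1    # normally triggered automatically by udev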

Sorry for the bother and thanks for all the help!!!


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


