Re: ceph bootstrap initialization :: nvme drives not empty after >12h


On 3/12/21 1:26 PM, Andrew Walker-Brown wrote:
> Hi Adrian,
Hi!

> If you’re just using this for test/familiarity and performance isn’t an issue, then I’d create 3 x VMs on your host server and use them for Ceph.
Why? I kind of want to stay close to a deployment scenario, which most certainly will not involve virtual machines (I'm very satisfied with the current workings of cephadm/podman).

Moreover, it's also a matter of resources: I have 2 NVMe drives bought specifically for this purpose (as I will also want to deploy VMs on the RBD block devices).

> It’ll work fine, just don’t expect Gb/s in transfer speeds 😊
Well, over virtio, why not? (The VMs would be hosted on the same host. In the end my desktop has only a 1 Gbit connection, so of course I will not do things over the network, at least at this point.) The MDS testing will also be done mostly on the same host (well, I will also try external clients, but of course I will be capped at the theoretical ~120 MiB/s, which I'm curious whether I can actually reach with a 9k MTU).
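
(Quick back-of-the-envelope on that cap: 1 Gbit/s is 10^9 / 8 = 125 MB/s ≈ 119 MiB/s on the wire; with Ethernet, IP and TCP headers at a 9000-byte MTU the usable payload works out to roughly 118 MiB/s, so ~120 MiB/s really is the practical ceiling.)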

Thanks!!
Adrian





From: Adrian Sevcenco <mailto:Adrian.Sevcenco@xxxxxxx>
Sent: 12 March 2021 11:22
To: ceph-users@xxxxxxx <mailto:ceph-users@xxxxxxx>
Subject: Re: ceph bootstrap initialization :: nvme drives not empty after >12h

On 3/12/21 12:31 PM, Eneko Lacunza wrote:
 > Hi Adrian,
Hi!

 > El 12/3/21 a las 11:26, Adrian Sevcenco escribió:
 >> Hi! yesterday i bootstrapped (with cephadm) my first ceph installation
 >> and things looked somehow ok .. but today the osds are not yet ready
 >> and i have in dashboard this warnings:
 >> MDS_SLOW_METADATA_IO: 1 MDSs report slow metadata IOs
 >> PG_AVAILABILITY: Reduced data availability: 64 pgs inactive
 >> PG_DEGRADED: Degraded data redundancy: 2/14 objects degraded
 >> (14.286%), 66 pgs undersized
 >> TOO_FEW_OSDS: OSD count 2 < osd_pool_default_size 3
 >
 > This is the issue. You only have 2 OSDs, but the pool default size is 3.
It should not be, as I changed the values (see the note after the listing below):
ceph osd pool ls detail
pool 1 'NVME' replicated size 2 min_size 1 crush_rule 0 object_hash
rjenkins pg_num 128 pgp_num 1 pgp_num_target 128 autoscale_mode on
last_change 69 lfor 0/0/54 flags hashpspool,selfmanaged_snaps
stripe_width 0 pg_num_min 64 application cephfs,rbd
pool 2 'device_health_metrics' replicated size 2 min_size 1 crush_rule 0
object_hash rjenkins pg_num 2 pgp_num 1 pgp_num_target 2 autoscale_mode
on last_change 76 lfor 0/0/60 flags hashpspool stripe_width 0 pg_num_min
2 application mgr_devicehealth
pool 3 'cephfs.sev-ceph.meta' replicated size 2 min_size 1 crush_rule 0
object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change
77 flags hashpspool stripe_width 0 pg_autoscale_bias 4 pg_num_min 16
recovery_priority 5 application cephfs
pool 4 'cephfs.sev-ceph.data' replicated size 2 min_size 1 crush_rule 0
object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change
79 flags hashpspool stripe_width 0 application cephfs
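
(For reference, since the TOO_FEW_OSDS warning compares the OSD count against the osd_pool_default_size config option rather than the per-pool sizes, a sketch of the usual way to bring both in line on a 2-OSD setup, using the pool name from the listing above as an example, would be:

  ceph config set global osd_pool_default_size 2
  ceph config set global osd_pool_default_min_size 1
  ceph osd pool set NVME size 2

with the same "size 2" applied to the other pools as needed.)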

 >>
 >> and in logs:
 >> 3/12/21 12:18:19 PM
 >> [INF]
 >> OSD <1> is not empty yet. Waiting a bit more
 >>
 >> 3/12/21 12:18:19 PM
 >> [INF]
 >> OSD <0> is not empty yet. Waiting a bit more
 >>
 >> 3/12/21 12:18:19 PM
 >> [INF]
 >> Can't even stop one OSD. Cluster is probably busy. Retrying later..
 >>
 >> 3/12/21 12:18:19 PM
 >> [ERR]
 >> cmd: osd ok-to-stop failed with: 31 PGs are already too degraded,
 >> would become too degraded or might become unavailable. (errno:-16)
 >>
 >> this is a single node, whole package ceph install with 2 local nvme
 >> drives as osds (to be used 2x replicated like a raid1 array)
 >>
 >> So, can anyone tell me what is going on?
 > I don't think you should use Ceph for this config. The bare minimum you
 > should use is 3 nodes, because default failure domain is host.
Ooooh... how can I change this to the device (OSD) level?
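
(For the record, a sketch of one way to do that, with "replicated-osd" being just an example rule name: create a replicated CRUSH rule whose failure domain is "osd" instead of "host" and point the pools at it:

  ceph osd crush rule create-replicated replicated-osd default osd
  ceph osd pool set NVME crush_rule replicated-osd

and the same crush_rule change for the remaining pools from the listing above.)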

 > Maybe you can explain what your goal is, so people can recommend setups.
So, this is my first encounter with Ceph, and I just want a single-node installation so I can get familiar with both server administration and with client RBD and MDS usage.

Thank you!
Adrian



--
----------------------------------------------
Adrian Sevcenco, Ph.D.                       |
Institute of Space Science - ISS, Romania    |
adrian.sevcenco at {cern.ch,spacescience.ro} |
----------------------------------------------


