Re: Add ssd's to hdd cluster, crush map class hdd update necessary?

Yes, thanks, I know. I will change it when I get an extra node.



-----Original Message-----
From: Paul Emmerich [mailto:paul.emmerich@xxxxxxxx] 
Sent: Wednesday, June 13, 2018 16:33
To: Marc Roos
Cc: ceph-users; k0ste
Subject: Re: Add ssd's to hdd cluster, crush map class hdd update necessary?


2018-06-13 7:13 GMT+02:00 Marc Roos <M.Roos@xxxxxxxxxxxxxxxxx>:


	I just added here 'class hdd'
	
	rule fs_data.ec21 {
	        id 4
	        type erasure
	        min_size 3
	        max_size 3
	        step set_chooseleaf_tries 5
	        step set_choose_tries 100
	        step take default class hdd
	        step choose indep 0 type osd
	        step emit
	}
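
For readers following along: instead of hand-editing the decompiled map as above, recent Ceph releases (Luminous and later) can create class-restricted rules directly from the CLI. A sketch, with illustrative profile and rule names (not the poster's actual names):

```shell
# Illustrative sketch: create a rule that, like the edited rule above,
# only selects OSDs with device class "hdd".
ceph osd crush rule create-replicated replicated_hdd default host hdd

# For erasure-coded pools the device class is set on the EC profile,
# and a rule is derived from that profile:
ceph osd erasure-code-profile set ec21_hdd k=2 m=1 crush-device-class=hdd
ceph osd crush rule create-erasure fs_data.ec21_hdd ec21_hdd
```

These commands require a running cluster, so they are shown here only as a sketch of the CLI path.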
	


Somewhat off-topic, but: 2/1 erasure coding is usually a bad idea for the 
same reasons that size = 2 replicated pools are a bad idea: one more 
failure while a PG is degraded can mean data loss.
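
If erasure coding is wanted on a small cluster anyway, a profile with m=2 survives an overlapping second failure, analogous to size = 3 replication. A hedged sketch (profile and pool names are illustrative):

```shell
# Illustrative: k=2, m=2 tolerates two simultaneous failures,
# comparable in safety to 3-way replication at 2x storage overhead.
ceph osd erasure-code-profile set ec22 k=2 m=2 crush-device-class=hdd
ceph osd pool create fs_data.ec22 64 64 erasure ec22
```

As with any ceph CLI example, this needs a live cluster and is shown only as a sketch.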



Paul

 

	
	
	-----Original Message-----
	From: Konstantin Shalygin [mailto:k0ste@xxxxxxxx] 
	Sent: Wednesday, June 13, 2018 12:30
	To: Marc Roos; ceph-users
	Subject: Re: Add ssd's to hdd cluster, crush map class hdd update 
	necessary?
	
	On 06/13/2018 12:06 PM, Marc Roos wrote:
	> Shit, I added this class and now everything starts backfilling 
	> (10%). How is this possible? I only have hdd's.
	
	This is normal when you change your CRUSH map or placement rules.
	Post the output of the following and I will take a look:
	
	ceph osd crush tree
	ceph osd crush dump
	ceph osd pool ls detail
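	
	[Editor's note: the backfill after tagging OSDs with a device class 
	is typically caused by the class-specific "shadow" hierarchy that 
	Ceph builds (buckets like "default~hdd" with their own internal IDs 
	and weights). A sketch of how to inspect it, assuming Luminous or 
	later:]
	
	```shell
	# Shadow buckets such as "default~hdd" appear alongside the real tree
	ceph osd crush tree --show-shadow
	```
	
	[This requires a running cluster, so it is shown only as a sketch.]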
	





	k
	
	
	_______________________________________________
	ceph-users mailing list
	ceph-users@xxxxxxxxxxxxxx
	http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com 
	




-- 

Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90



_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



