Re: testing ceph

Hi Karan/All,

Thanks. I guess Joao Eduardo misunderstood me too.


1. Yes, there is 1 SSD on each server. There are also 4 data drives on each server, so I will have 4 OSDs on each server.

2. I want to know whether it is a good idea to use one partition on the SSD for the journals of all 4 OSDs (see the sketch after this list for the alternative I have in mind).

3. And considering that the SSD could fail, what happens to the 4 OSDs behind it? If I have 3 replicas set, what is a good placement group number for the data pool that ensures the 3 replicas don't end up on the same node (my rough working is after this list)? Because if they do end up on one node, the CRUSH map etc. can never recover my data.
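
To make question 2 concrete, here is a minimal sketch of the per-OSD-partition alternative on server1, assuming the 60 GB SSD is split into four roughly 15 GB partitions (sdf1-sdf4); the partition layout and the OSD ids are just illustrative:

[osd.0]
host = server1
devs = /dev/sdb
osd journal = /dev/sdf1
[osd.1]
host = server1
devs = /dev/sdc
osd journal = /dev/sdf2
[osd.2]
host = server1
devs = /dev/sdd
osd journal = /dev/sdf3
[osd.3]
host = server1
devs = /dev/sde
osd journal = /dev/sdf4

versus all four OSDs on a host pointing "osd journal" at one shared partition (sdf1).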
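
For question 3, my rough working for the PG count uses the commonly quoted (number of OSDs x 100) / replica count rule of thumb, rounded to a power of two; please correct me if this is off:

24 OSDs x 100 / 3 replicas = 800, so pg_num = 1024 (or 512 to stay conservative)

ceph osd pool set data pg_num 1024
ceph osd pool set data pgp_num 1024
ceph osd pool set data size 3

My assumption is that keeping replicas on different nodes is not really a function of the PG count, but of the CRUSH rule: the default "step chooseleaf firstn 0 type host" (or "osd crush chooseleaf type = 1" in ceph.conf) should be what stops two replicas landing on the same host. Is that right?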

Any idea?

/regards

Charles.

________________________________
> Date: Thu, 31 Oct 2013 16:27:02 +0200 
> From: ksingh@xxxxxx 
> To: charlesboy009@xxxxxxxxxxx 
> CC: majordomo@xxxxxxxxxxxxxxx; ceph-users@xxxxxxxx 
> Subject: Re:  testing ceph 
> 
> Hello Charles 
> 
> I need some more clarification on your setup. Did you mean: 
> 
> 1) There is 1 SSD (60 GB) on each server, i.e. 6 SSDs across all 6 servers? 
> 
> 2) Your osd.3, osd.4, osd.5 use the same journal (/dev/sdf2)? 
> 
> Regards 
> Karan Singh 
> 
> ________________________________ 
> From: "charles L" <charlesboy009@xxxxxxxxxxx> 
> To: "ceph dev" <majordomo@xxxxxxxxxxxxxxx>, ceph-users@xxxxxxxx 
> Sent: Thursday, 31 October, 2013 6:24:13 AM 
> Subject:  testing ceph 
> 
> Hi, 
> Please, is this a good setup for a production-environment test of Ceph? My 
> focus is on the SSD: should it be partitioned (sdf1, 2, 3, 4) and 
> shared by the four OSDs on a host, or is it a better configuration 
> for the SSD to be just one partition (sdf1) that all four OSDs use? 
> My setup: 
> - 6 servers, each with one 250 GB boot disk for the OS (sda), 
> four 2 TB disks for the OSDs, i.e. total disks = 6x4 = 24 (sdb-sde), 
> and one 60 GB SSD for the OSD journals (sdf). 
> - RAM = 32 GB on each server, with a 2 GB network link. 
> Hostnames: Server1-Server6 
> 
> [osd.0] 
> host = server1 
> devs = /dev/sdb 
> osd journal = /dev/sdf1 
> [osd.1] 
> host = server1 
> devs = /dev/sdc 
> osd journal = /dev/sdf2 
> 
> [osd.3] 
> host = server1 
> devs = /dev/sdd 
> osd journal = /dev/sdf2 
> 
> [osd.4] 
> host = server1 
> devs = /dev/sde 
> osd journal = /dev/sdf2 
> [osd.5] 
> host = server2 
> devs = /dev/sdb 
> osd journal = /dev/sdf2 
> ... 
> [osd.23] 
> host = server6 
> devs = /dev/sde 
> osd journal = /dev/sdf2 
> 
> Thanks. 
> 
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




