Hello Charles
I need some more clarification on your setup. Did you mean:
1) There is one 60 GB SSD in each server, i.e. 6 SSDs across all 6 servers?
2) Your osd.3, osd.4, and osd.5 use the same journal partition (/dev/sdf2)?
Regards
Karan Singh
From: "charles L" <charlesboy009@xxxxxxxxxxx>
To: "ceph dev" <majordomo@xxxxxxxxxxxxxxx>, ceph-users@xxxxxxxx
Sent: Thursday, 31 October, 2013 6:24:13 AM
Subject: [ceph-users] testing ceph
Hi,
Please, is this a good setup for a production-environment test of Ceph? My focus is on the SSD: should it be partitioned (sdf1, sdf2, sdf3, sdf4), with one partition per OSD on a host? Or is it a better configuration for the SSD to be a single partition (sdf1) that all the OSDs share?
My setup:
- 6 servers, each with one 250 GB boot disk for the OS (sda),
  four 2 TB disks for the OSDs (sdb-sde), i.e. 6 x 4 = 24 OSD disks in total,
  and one 60 GB SSD for the OSD journals (sdf).
- RAM: 32 GB per server, with a 2 Gb network link.
- Hostnames: server1-server6.
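As a rough sanity check on the 60 GB SSD, the Ceph docs give a journal sizing guideline of osd journal size = 2 * (expected throughput * filestore max sync interval). A minimal sketch of that arithmetic, using assumed (not measured) throughput numbers:

```python
# Journal sizing guideline from the Ceph docs:
#   osd journal size = 2 * (expected throughput * filestore max sync interval)
# The throughput figure below is an assumption for illustration only.

disk_throughput_mb_s = 100          # assumed sustained throughput of one 2 TB spinner
filestore_max_sync_interval_s = 5   # Ceph's default filestore max sync interval

journal_size_mb = 2 * disk_throughput_mb_s * filestore_max_sync_interval_s
print(journal_size_mb)              # 1000 MB needed per OSD journal

# Splitting the 60 GB SSD evenly across the four OSDs on a host:
ssd_gb = 60
osds_per_host = 4
partition_gb = ssd_gb / osds_per_host
print(partition_gb)                 # 15.0 GB per journal partition
```

Under those assumptions each OSD needs roughly a 1 GB journal, so a 60 GB SSD split four ways leaves ample headroom per partition.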
[osd.0]
host = server1
devs = /dev/sdb
osd journal = /dev/sdf1
[osd.1]
host = server1
devs = /dev/sdc
osd journal = /dev/sdf2
[osd.3]
host = server1
devs = /dev/sdd
osd journal = /dev/sdf2
[osd.4]
host = server1
devs = /dev/sde
osd journal = /dev/sdf2
[osd.5]
host = server2
devs = /dev/sdb
osd journal = /dev/sdf2
...
[osd.23]
host = server6
devs = /dev/sde
osd journal = /dev/sdf2
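If the intent is one dedicated journal partition per OSD, a sketch of how server1's stanzas might look with the SSD split into four partitions (sdf1-sdf4) is below; the partition layout is an assumption, and the OSD numbering follows the original snippet:

```ini
[osd.0]
host = server1
devs = /dev/sdb
osd journal = /dev/sdf1
[osd.1]
host = server1
devs = /dev/sdc
osd journal = /dev/sdf2
[osd.3]
host = server1
devs = /dev/sdd
osd journal = /dev/sdf3
[osd.4]
host = server1
devs = /dev/sde
osd journal = /dev/sdf4
```

With a single shared partition, concurrent journal writes from all four OSDs would contend for the same block device region; separate partitions keep each OSD's journal I/O isolated.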
Thanks.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com