Thanks for your reply. In your case you deploy the 3 OSDs on one server; in my case the 3 OSDs are on 3 different servers. How should I do it?

2014-07-21 17:59 GMT+07:00 Iban Cabrillo <cabrillo at ifca.unican.es>:

> Dear,
> I am not an expert, but yes, this is possible.
> I have a RAID1 SAS disk holding the journals for 3 SATA OSDs (maybe this
> is not the smartest solution).
>
> When you prepare the OSDs, for example:
>
>   ceph-deploy --verbose osd prepare cephosd01:/dev/"sdd_device":"path_to_journal_ssddisk_X"
>
> path_to_journal_ssddisk_X must exist (mkdir -p /var/ceph/osd1; touch
> /var/ceph/osd1/journal), for example:
>
>   ceph-deploy --verbose osd prepare cephosd01:/dev/sdg:/var/ceph/osd1/journal
>   ceph-deploy --verbose osd prepare cephosd01:/dev/sdf:/var/ceph/osd2/journal
>   ceph-deploy --verbose osd prepare cephosd01:/dev/sdh:/var/ceph/osd3/journal
>
> Then activate the OSDs:
>
>   ceph-deploy --verbose osd activate cephosd01:/dev/sdg1:/var/ceph/osd1/journal
>   ceph-deploy --verbose osd activate cephosd01:/dev/sdf1:/var/ceph/osd2/journal
>   ceph-deploy --verbose osd activate cephosd01:/dev/sdh1:/var/ceph/osd3/journal
>
> Regards, I
>
>
> 2014-07-21 12:30 GMT+02:00 ???? <onlydebian at gmail.com>:
>
>> I have only one SSD and want to improve Ceph performance.
>> Is it possible to use one SSD journal disk for 3 OSDs?
>>
>> If it is possible, how do I configure it?
>> Many thanks
>>
>> _______________________________________________
>> ceph-users mailing list
>> ceph-users at lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>
> --
> ############################################################################
> Iban Cabrillo Bartolome
> Instituto de Fisica de Cantabria (IFCA)
> Santander, Spain
> Tel: +34942200969
> PGP PUBLIC KEY:
> http://pgp.mit.edu/pks/lookup?op=get&search=0xD9DF0B3D6C8C08AC
> ############################################################################
> Bertrand Russell:
> "The problem with the world is that the stupid are sure of everything and
> the intelligent are full of doubts."
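
Since an OSD's journal has to be a local device or file on the same host as that OSD, a single SSD cannot be shared by OSDs sitting in three different servers: each server needs its own SSD (or journal partition/file), and you repeat the same ceph-deploy steps once per host. A minimal sketch of that per-host variant, assuming hypothetical hostnames cephosd01/cephosd02/cephosd03, a data disk /dev/sdc in each server, and a local SSD journal partition /dev/sdb1 on each host (all device names are only illustrative):

  # prepare one OSD per server, each journaling to that server's local SSD partition
  ceph-deploy --verbose osd prepare  cephosd01:/dev/sdc:/dev/sdb1
  ceph-deploy --verbose osd prepare  cephosd02:/dev/sdc:/dev/sdb1
  ceph-deploy --verbose osd prepare  cephosd03:/dev/sdc:/dev/sdb1

  # then activate them, pointing at the data partition created by prepare
  ceph-deploy --verbose osd activate cephosd01:/dev/sdc1:/dev/sdb1
  ceph-deploy --verbose osd activate cephosd02:/dev/sdc1:/dev/sdb1
  ceph-deploy --verbose osd activate cephosd03:/dev/sdc1:/dev/sdb1

If instead you want one SSD to carry the journals of three OSDs inside the same box (the original question), the idea is to partition that SSD into three (e.g. sdb1, sdb2, sdb3) and give each prepare line its own journal partition, analogous to the file-based journals shown above.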