Wido,

Thanks for the great information. I do have a follow-up question. When I create my 3rd and 4th OSD nodes, how do I go about mounting that data? So for example, I am on a client and want to mount the data on the 3rd and 4th nodes (different placement groups) and not the data on the 1st and 2nd OSDs. Am I looking at this incorrectly? To do what I am looking for, I would probably have to create a new FS with an mds and mon, etc.?

I was able to mount the subdirectory as you suggested on a Fedora 14 box (mount -t ceph x.x.x.x:/subdir1 /mnt/ceph), but when I do an ls I get permission denied.

Thank you,

Mark Nigh

-----Original Message-----
From: Wido den Hollander [mailto:wido@xxxxxxxxx]
Sent: Friday, February 18, 2011 8:25 AM
To: Mark Nigh
Cc: ceph-devel@xxxxxxxxxxxxxxx
Subject: Re: A Few Questions about Ceph

Hi Mark,

On Fri, 2011-02-18 at 04:39 +0000, Mark Nigh wrote:
> I have been doing significant research and testing of Ceph as a long-term
> solution for our Cloud Storage Solution.

While every new user is welcome to Ceph, please do understand that it is still under heavy development and is NOT production ready.

> 1. I have a 3 node cluster. The first node is running the mon, mds and osd0
> daemons. The second node is running the osd1 daemon only. I am able to mount
> x.x.x.x:/ /mnt/ceph just fine. I have added a 3rd node and am running the osd
> daemon only. I don't want to add this node to the placement group of the first 2
> nodes, but rather add a 4th and have the 3rd and 4th in a placement group. This
> would be helpful for different customers or different applications that may or
> may not need replication to a second data center.

Yes, that is possible. You can create a CRUSH map ( http://ceph.newdream.net/wiki/Custom_data_placement_with_CRUSH ). Every RADOS pool has a crushrule attribute, where you can decide which rule to use.
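[Editor's note: the per-pool placement Wido describes could be sketched roughly as below. The rule and bucket names ("customer-b", "rack-b") are made-up placeholders, not part of the thread; see the wiki page he links for the actual CRUSH map syntax.]

```
# Hypothetical CRUSH rule confining a pool to a bucket ("rack-b")
# that would contain only the 3rd and 4th OSDs.
rule customer-b {
    ruleset 3
    type replicated
    min_size 1
    max_size 10
    step take rack-b                      # descend into the bucket holding osd3/osd4
    step choose firstn 0 type device      # pick replicas among those devices
    step emit
}
```

A pool pointed at this rule via its crushrule attribute would then place its data only on those OSDs, leaving the 1st and 2nd OSDs to other pools.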
When using the Ceph filesystem you can also specify a CRUSH rule per directory; this can be done with the "cephfs" command. I haven't used the tool myself yet, but I think something like this should be possible:

$ cephfs -p data-on-ssd /mnt/ceph/dir1/sub2

The "data-on-ssd" pool then has a CRUSH rule which only uses OSDs with SSDs in them.

> 2. Is there any way to create sub-directories in the client mount, for example,
> mount -t ceph x.x.x.x:/subdirectory /mnt/ceph

Yes, you can mount every subdirectory you want. Simply mount the root first, create the directories you want, and then start mounting those new subdirectories.

> I am looking forward to placing ceph into our production network as our backup
> target.

I'm just going to repeat myself: Ceph is great, but it is still under heavy development. Testing and feedback are needed and very welcome, but right now it's not suitable for production.

Regards,

Wido

> Thank you,
>
> Mark Nigh
>
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majordomo@xxxxxxxxxxxxxxx
> More majordomo info at http://vger.kernel.org/majordomo-info.html
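[Editor's note: the subdirectory workflow Wido outlines can be sketched as follows. The monitor address and directory name are placeholders; these commands require root and a running Ceph cluster, so they are illustrative only.]

```shell
# Mount the root of the filesystem first.
mount -t ceph x.x.x.x:/ /mnt/ceph

# Create the subdirectory you want to expose to clients.
mkdir /mnt/ceph/subdir1
umount /mnt/ceph

# A client can now mount just that subdirectory.
mount -t ceph x.x.x.x:/subdir1 /mnt/ceph
```

If `ls` on the mounted subdirectory returns "permission denied" (as Mark reports above), checking the directory's ownership and mode from a root mount would be a reasonable first step.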