If your intent is to learn Ceph, then I suggest that you set up three or four VMs to learn how all the components work together (a rough sketch is below). Then you will have a better idea of how the pieces fit, and you can decide which combination works best for you. I don't like running those components in the same OS because they can interfere with each other pretty badly. Putting them in VMs gets around some of the possible deadlocks, but then there is usually not enough disk IO.
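For example, with four VMs and ceph-deploy the bootstrap is only a handful of commands. This is just a sketch: the hostnames (node1-node4) and the /dev/sdb data disks are placeholders for whatever your VMs actually have, and it assumes ceph-deploy is installed on an admin box with passwordless SSH to each VM:

# declare node1-node3 as the initial monitors and generate ceph.conf
ceph-deploy new node1 node2 node3
# install the ceph packages on all four VMs
ceph-deploy install node1 node2 node3 node4
# bring up the monitors
ceph-deploy mon create-initial
# turn one spare disk on each of three VMs into an OSD (this wipes that disk)
ceph-deploy osd create node2:sdb node3:sdb node4:sdb
# push the config and admin keyring so you can run ceph commands anywhere
ceph-deploy admin node1 node2 node3 node4

That gives you three mons and three OSDs to experiment with, and node1 can double as the admin/client box.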
That is my $0.02.
Robert LeBlanc
Sent from a mobile device please excuse any typos.
On Dec 23, 2014 6:12 AM, "Debashish Das" <deba.daz@xxxxxxxxx> wrote:
Hi,

Thanks for the replies. I have some more queries now :-)

1. I have one 64-bit physical server (4 GB RAM, quad-core & 250 GB HDD) & one VM (not a high-end one). I want to install ceph-mon, ceph-osd & Ceph RBD (RADOS Block Device). Can you please tell me if it is possible to install only ceph-mon & Ceph RBD in the VM & ceph-osd on the physical machine? Or do you have any other idea of how to proceed with my current hardware resources?

Please also let me know any reference links which I can refer to for this kind of installation. I am not sure which component (mon/osd/RBD) I should install on which setup (VM/physical server). Your expert opinion would be of great help for me.

Thank You.

Kind Regards
Debashish Das

On Sat, Dec 20, 2014 at 12:00 AM, Craig Lewis <clewis@xxxxxxxxxxxxxxxxxx> wrote:

I've done single nodes. I have a couple of VMs for RadosGW federation testing. It has a single virtual network, with both "clusters" on the same network.

Because I'm only using a single OSD on a single host, I had to update the crushmap to handle that. My Chef recipe runs:

ceph osd getcrushmap -o /tmp/compiled-crushmap.old
crushtool -d /tmp/compiled-crushmap.old -o /tmp/decompiled-crushmap.old
sed -e '/step chooseleaf firstn 0 type/s/host/osd/' /tmp/decompiled-crushmap.old > /tmp/decompiled-crushmap.new
crushtool -c /tmp/decompiled-crushmap.new -o /tmp/compiled-crushmap.new
ceph osd setcrushmap -i /tmp/compiled-crushmap.new

Those are the only extra commands I run for a single-node cluster. Otherwise, it looks the same as my production nodes that run mon, osd, and rgw.

Here's my single node's ceph.conf:

[global]
fsid = a7798848-1d31-421b-8f3c-5a34d60f6579
mon initial members = test0-ceph0
mon host = 172.16.205.143:6789
auth client required = none
auth cluster required = none
auth service required = none
mon warn on legacy crush tunables = false
osd crush chooseleaf type = 0
osd pool default flag hashpspool = true
osd pool default min size = 1
osd pool default size = 1
public network = 172.16.205.0/24

[osd]
osd journal size = 1000
osd mkfs options xfs = -s size=4096
osd mkfs type = xfs
osd mount options xfs = rw,noatime,nodiratime,nosuid,noexec,inode64
osd_scrub_sleep = 1.0
osd_snap_trim_sleep = 1.0

[client.radosgw.test0-ceph0]
host = test0-ceph0
rgw socket path = /var/run/ceph/radosgw.test0-ceph0
keyring = /etc/ceph/ceph.client.radosgw.test0-ceph0.keyring
log file = /var/log/ceph/radosgw.log
admin socket = /var/run/ceph/radosgw.asok
rgw dns name = test0-ceph
rgw region = us
rgw region root pool = .us.rgw.root
rgw zone = us-west
rgw zone root pool = .us-west.rgw.root
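If you want to sanity-check the crushmap edit before loading it back in, a quick grep of the decompiled map (same temporary file names as above) should show the rule now choosing osd instead of host:

grep 'step chooseleaf' /tmp/decompiled-crushmap.new

Expect something like "step chooseleaf firstn 0 type osd" in the output; if it still says "type host", the sed didn't match.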
On Thu, Dec 18, 2014 at 11:23 PM, Debashish Das <deba.daz@xxxxxxxxx> wrote:

Hi Team,

Thanks for the insight & the replies. As I understood from the mails, running a Ceph cluster on a single node is possible but definitely not recommended.

The challenge which I see is that there is no clear documentation for single-node installation. So I would request, if anyone has installed Ceph on a single node, please share the link or document which I can refer to, to install Ceph on my local server.

Again thanks guys !!

Kind Regards
Debashish Das

On Fri, Dec 19, 2014 at 6:08 AM, Robert LeBlanc <robert@xxxxxxxxxxxxx> wrote:

Thanks, I'll look into these.

On Thu, Dec 18, 2014 at 5:12 PM, Craig Lewis <clewis@xxxxxxxxxxxxxxxxxx> wrote:

I think this is it: https://engage.redhat.com/inktank-ceph-reference-architecture-s-201409080939

You can also check out a presentation on CERN's Ceph cluster: http://www.slideshare.net/Inktank_Ceph/scaling-ceph-at-cern

At large scale, the biggest problem will likely be network I/O on the inter-switch links.

On Thu, Dec 18, 2014 at 3:29 PM, Robert LeBlanc <robert@xxxxxxxxxxxxx> wrote:

I'm interested to know if there is a reference to this reference architecture. It would help alleviate some of the fears we have about scaling this thing to a massive scale (10,000s of OSDs).

Thanks,
Robert LeBlanc

On Thu, Dec 18, 2014 at 3:43 PM, Craig Lewis <clewis@xxxxxxxxxxxxxxxxxx> wrote:

On Thu, Dec 18, 2014 at 5:16 AM, Patrick McGarry <patrick@xxxxxxxxxxx> wrote:
> 2. What should be the minimum hardware requirement of the server (CPU,
> Memory, NIC etc)
There is no real "minimum" to run Ceph, it's all about what your
workload will look like and what kind of performance you need. We have
seen Ceph run on Raspberry Pis.

Technically, the smallest cluster is a single node with a 10 GiB disk. Anything smaller won't work.

That said, Ceph was envisioned to run on large clusters. IIRC, the reference architecture has 7 rows, each row having 10 racks, all full.

Those of us running small clusters (fewer than 10 nodes) are noticing that it doesn't work quite as well. We have to significantly scale back the amount of backfilling and recovery that is allowed; I try to keep all backfill/recovery operations touching less than 20% of my OSDs. The reference architecture could lose a whole row and still stay under that limit. My 5-node cluster is noticeably better than the 3-node cluster: it's faster, has lower latency, and latency doesn't increase as much during recovery operations.
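For anyone wondering which knobs that scaling back involves, the usual candidates are the OSD backfill and recovery throttles. As an illustration only (the exact values are workload-dependent; these are just conservative starting points for a small cluster):

[osd]
osd max backfills = 1
osd recovery max active = 1
osd recovery op priority = 1

The same settings can be changed on a running cluster with something like:

ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'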
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com