RE: [ceph-users] Deploy a Ceph cluster to play around with

If you are just playing around, you can roll everything onto a single server or, if you prefer, put the MON and OSDs on one server and the radosgw on another. You can do this in virtual machines if you don't have all the hardware you would like to test with. Rough sketches for both of your questions are below.
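
On question 1: no, five hosts are not required. Here is a minimal single-node sketch, assuming a Dumpling-era ceph-deploy run from an admin box that has passwordless SSH and sudo access to the target; the hostname "node1" is a placeholder:

    # Generate ceph.conf and the initial monitor map for node1.
    ceph-deploy new node1

    # For a one-host cluster, add these to [global] in the generated
    # ceph.conf so CRUSH will place replicas across OSDs on one host:
    #   osd crush chooseleaf type = 0
    #   osd pool default size = 2

    # Install the packages and bring up a single monitor, then
    # gather the keys it generates.
    ceph-deploy install node1
    ceph-deploy mon create node1
    ceph-deploy gatherkeys node1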
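
On question 2: you don't strictly need another disk for testing. ceph-deploy can prepare an OSD from a directory path rather than a whole device, so an ext4 filesystem carved out of your LVM volume group, or even a plain directory on the root filesystem, is enough to play with. A sketch continuing the example above (the directory paths are placeholders):

    # Back two test OSDs with plain directories instead of raw disks.
    ssh node1 sudo mkdir -p /var/local/osd0 /var/local/osd1
    ceph-deploy osd prepare node1:/var/local/osd0 node1:/var/local/osd1
    ceph-deploy osd activate node1:/var/local/osd0 node1:/var/local/osd1

    # Push the config and admin key, then check that PGs go active+clean.
    ceph-deploy admin node1
    ssh node1 sudo ceph status

Performance will be nothing like a real deployment, but it is plenty for exercising the S3 API through radosgw, which can then live on the same box or a separate one.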

> -----Original Message-----
> From: ceph-users-bounces@xxxxxxxxxxxxxx [mailto:ceph-users-
> bounces@xxxxxxxxxxxxxx] On Behalf Of Guang
> Sent: Monday, September 16, 2013 6:14 AM
> To: ceph-users@xxxxxxxxxxxxxx; Ceph Development
> Subject: [ceph-users] Deploy a Ceph cluster to play around with
> 
> Hello ceph-users, ceph-devel,
> Nice to meet you in the community!
> Today I tried to deploy a Ceph cluster to play around with the API, and during
> the deployment I ran into a couple of questions where I need your help:
>   1) How many hosts do I need if I want to deploy a cluster with RadosGW (so
> that I can try the S3 API)? Is it 3 OSDs + 1 MON + 1 GW = 5 hosts at
> minimum?
> 
>   2) I have a list of hardware; however, my host only has one disk with two
> partitions, one for boot and the other an LVM member. Is it possible to
> deploy an OSD on such hardware (e.g. by making an ext4 partition)? Or will I
> need another disk to do so?
> 
> -bash-4.1$ ceph-deploy disk list myserver.com
> [ceph_deploy.osd][INFO  ] Distro info: RedHatEnterpriseServer 6.3 Santiago
> [ceph_deploy.osd][DEBUG ] Listing disks on myserver.com...
> [repl101.mobstor.gq1.yahoo.com][INFO  ] Running command: ceph-disk list
> [repl101.mobstor.gq1.yahoo.com][INFO  ] /dev/sda :
> [repl101.mobstor.gq1.yahoo.com][INFO  ]  /dev/sda1 other, ext4, mounted on /boot
> [repl101.mobstor.gq1.yahoo.com][INFO  ]  /dev/sda2 other, LVM2_member
> 
> Thanks,
> Guang