Re: Single node cluster

Hey,

I just updated your command to: crushtool -d /tmp/cm -o /tmp/cm.txt (instead of -i).
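
For anyone finding this thread in the archives, the decompile/compile pair therefore ends up as (same /tmp paths as in the steps quoted below):

    crushtool -d /tmp/cm -o /tmp/cm.txt      # -d decompiles the binary CRUSH map to text
    crushtool -c /tmp/cm.txt -o /tmp/cm.new  # -c compiles the edited text back to binary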

It works fine, thank you so much.

Cheers,
k.


> On 18 Mar 2015, at 00:42, LOPEZ Jean-Charles <jelopez@xxxxxxxxxx> wrote:
> 
> Hi,
> 
> Just make sure you modify your CRUSH map so that each copy of an object is placed on a different OSD rather than on a different host.
> 
> Follow these steps:
> ceph osd getcrushmap -o /tmp/cm
> crushtool -i /tmp/cm -o /tmp/cm.txt
> 
> Edit the /tmp/cm.txt file: locate the CRUSH rule with ID 0 near the end of the file and replace "chooseleaf firstn 0 type host" with "chooseleaf firstn 0 type osd".
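> 
> For reference, once that edit is made, the rule near the end of /tmp/cm.txt should look roughly like the sketch below, with "type osd" in place of "type host" (the rule name and min/max sizes may differ on your cluster; "replicated_ruleset" is just the usual default):
> 
>     rule replicated_ruleset {
>         ruleset 0
>         type replicated
>         min_size 1
>         max_size 10
>         step take default
>         step chooseleaf firstn 0 type osd
>         step emit
>     }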
> 
> crushtool -c /tmp/cm.txt -o /tmp/cm.new
> ceph osd setcrushmap -i /tmp/cm.new
> 
> And this should do the trick.
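> 
> To check that the new map has been applied, something along these lines should work (standard ceph CLI):
> 
>     ceph osd crush rule dump   # the rule's chooseleaf step should now show "type": "osd"
>     ceph -s                    # PGs should go active+clean once peering finishes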
> 
> Cheers
> JC
> 
>> On 18 Mar 2015, at 09:57, Khalid Ahsein <kahsein@xxxxxxxxx> wrote:
>> 
>> Hello everybody,
>> 
>> I want to build a new architecture with Ceph as the storage backend.
>> For the moment I've got only one server, with these specs:
>> 
>> 1x RAID-1 SSD: OS + OSD journals
>> 12x 4 TB HDD: OSD daemons.
>> 
>> I have never reached a "clean" state on my cluster; it is always in HEALTH_WARN, like this:
>> 	health HEALTH_WARN 25 pgs degraded; 24 pgs incomplete; 24 pgs stuck inactive; 64 pgs stuck unclean; 25 pgs undersized
>> 
>> I tried adding anywhere from 3 to 12 OSDs, but it's always the same problem.
>> 
>> What is the right configuration to get a healthy cluster, please?
>> 
>> # cat ceph.conf
>> [global]
>> fsid = 588595a0-3570-44bb-af77-3c0eaa28fbdb
>> mon_initial_members = drt-marco
>> mon_host = 172.16.21.4
>> auth_cluster_required = cephx
>> auth_service_required = cephx
>> auth_client_required = cephx
>> filestore_xattr_use_omap = true
>> public network = 172.16.21.0/24
>> 
>> [osd]
>> osd journal size = 10000
>> osd crush chooseleaf type = 0
>> osd pool default size = 1
>> 
>> NB: I use ceph-deploy on Debian wheezy to deploy the services.
>> 
>> Thank you so much for your help!
>> k.
>> _______________________________________________
>> ceph-users mailing list
>> ceph-users@xxxxxxxxxxxxxx
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




