Single node cluster

Hello everybody,

I want to build a new architecture with Ceph as the storage backend.
For the moment I've got only one server, with these specs:

1x RAID-1 SSD pair: OS + OSD journals
12x 4 TB disks: OSD daemons.

I have never reached the "clean" state on my cluster; it is always in HEALTH_WARN, like this:
	health HEALTH_WARN 25 pgs degraded; 24 pgs incomplete; 24 pgs stuck inactive; 64 pgs stuck unclean; 25 pgs undersized
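
For more detail, the stuck PGs can be listed with the standard commands (output omitted here):

# ceph health detail
# ceph osd tree
# ceph pg dump_stuck inactive
# ceph pg dump_stuck unclean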

I tried adding anywhere from 3 to 12 OSDs, but it's always the same problem.

What is the right configuration to get a healthy cluster, please?

# cat ceph.conf
[global]
fsid = 588595a0-3570-44bb-af77-3c0eaa28fbdb
mon_initial_members = drt-marco
mon_host = 172.16.21.4
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
public network = 172.16.21.0/24

[osd]
osd journal size = 10000
osd crush chooseleaf type = 0
osd pool default size = 1
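
One assumption on my side: as far as I understand, these defaults only apply to pools created after the change, so pools that already exist keep their old size and CRUSH rule. If that is right, existing pools would have to be adjusted at runtime, roughly like this ("rbd" is only an example pool name):

# ceph osd pool set rbd size 1
# ceph osd pool set rbd min_size 1

and the CRUSH rule can be checked with "ceph osd crush rule dump" to confirm its failure domain is "osd" rather than "host".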

NB: I use ceph-deploy on Debian Wheezy to deploy the services.
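
For reference, a typical ceph-deploy command for the OSDs looks roughly like this, once per data disk (the disk and journal names are placeholders, not the real devices):

# ceph-deploy osd create drt-marco:<data-disk>:<journal-partition>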

Thank you so much for your help!
k.