On 05/06/2013 04:51 PM, Guido Winkelmann wrote:
On Monday, 6 May 2013 at 16:41:43, Wido den Hollander wrote:
On 05/06/2013 04:15 PM, Guido Winkelmann wrote:
On Monday, 6 May 2013 at 16:05:31, Wido den Hollander wrote:
On 05/06/2013 04:00 PM, Guido Winkelmann wrote:
Hi,
How do I run a 1-node cluster with no replication?
I'm trying to run a small 1-node cluster on my local workstation and
another on my notebook for experimentation/development purposes, but
since I only have one OSD, I'm always getting HEALTH_WARN as the cluster
status from ceph -s. Can I somehow tell ceph to just not bother with
replication for this cluster?
Have you set min_size to 1 for all the pools?
You mean in the crushmap?
No, it's a pool setting.
See: http://ceph.com/docs/master/rados/operations/pools/#set-pool-values
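The general form described there is roughly the following (a sketch; <key> here would be e.g. size or min_size):

$ ceph osd pool get <pool-name> <key>
$ ceph osd pool set <pool-name> <key> <value>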
Hm, I set that to 1 now, and nothing changed:
Have you also set "size" to 1? Meaning no replication.
Both size and min_size should be set to 1.
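Roughly, assuming the three default pools from your listing below (data, metadata, rbd), that would be something like:

$ ceph osd pool set data size 1
$ ceph osd pool set metadata size 1
$ ceph osd pool set rbd size 1
$ # verify, then the PGs should go active+clean after a short while:
$ ceph osd pool get data size
$ ceph -s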
Wido
$ ceph osd lspools
0 data,1 metadata,2 rbd,
$ ceph osd pool set data min_size 1
set pool 0 min_size to 1
$ ceph osd pool set rbd min_size 1
set pool 2 min_size to 1
$ ceph osd pool set metadata min_size 1
set pool 1 min_size to 1
$ ceph -s
health HEALTH_WARN 384 pgs degraded; 384 pgs stuck unclean; recovery 1552/3104 degraded (50.000%)
monmap e1: 1 mons at {alpha=x.x.x.x:6789/0}, election epoch 1, quorum 0 alpha
osdmap e16: 1 osds: 1 up, 1 in
pgmap v104: 384 pgs: 384 active+degraded; 6112 MB data, 7144 MB used, 456 GB / 465 GB avail; 1552/3104 degraded (50.000%)
mdsmap e14: 1/1/1 up {0=a=up:active}
Are you sure this is a different setting than what you can see and change in
the crushmap? Because that would be quite confusing...
Guido
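For what it's worth, they are separate settings, and a rough way to compare the two (assuming the default pool names) is sketched below: the pool's size/min_size live in the OSD map, while the crushmap rules only carry min_size/max_size bounds that say which pool sizes a given rule applies to.

$ ceph osd pool get data size
$ ceph osd pool get data min_size
$ # dump and decompile the crushmap to inspect the rules:
$ ceph osd getcrushmap -o crushmap.bin
$ crushtool -d crushmap.bin -o crushmap.txt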
--
Wido den Hollander
42on B.V.
Phone: +31 (0)20 700 9902
Skype: contact42on