Hi,
I noticed this message after shutting down the other node. You might
be right that I need 3 monitors.
2015-01-01 15:47:35.990260 7f22858dd700 0 monclient: hunting for new mon
But what is quite unexpected is that you cannot even run "ceph status" on
the running node to find out the state of the cluster.
Thx Jiri
On 1/01/2015 15:46, Jiri Kanicky wrote:
Hi,
I have:
- 2 monitors, one on each node
- 4 OSDs, two on each node
- 2 MDS, one on each node
Yes, all pools are set with size=2 and min_size=1
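For reference, the same layout can be read back from the cluster itself (a
quick sketch using standard commands, run while both nodes were still up):

    ceph mon dump    # lists the two monitors and their addresses
    ceph osd tree    # shows the four OSDs and which host each one sits on
    ceph mds stat    # shows the MDS state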
cephadmin@ceph1:~$ ceph osd dump
epoch 88
fsid bce2ff4d-e03b-4b75-9b17-8a48ee4d7788
created 2014-12-27 23:38:00.455097
modified 2014-12-30 20:45:51.343217
flags
pool 0 'rbd' replicated size 2 min_size 1 crush_ruleset 0
object_hash rjenkins pg_num 256 pgp_num 256
last_change 86 flags hashpspool stripe_width 0
pool 1 'media' replicated size 2 min_size 1 crush_ruleset
0 object_hash rjenkins pg_num 256 pgp_num 256
last_change 60 flags hashpspool stripe_width 0
pool 2 'data' replicated size 2 min_size 1 crush_ruleset 0
object_hash rjenkins pg_num 256 pgp_num 256
last_change 63 flags hashpspool stripe_width 0
pool 3 'cephfs_test' replicated size 2 min_size 1
crush_ruleset 0 object_hash rjenkins pg_num 256
pgp_num 256 last_change 71 flags hashpspool
crash_replay_interval 45 stripe_width 0
pool 4 'cephfs_metadata' replicated size 2 min_size 1
crush_ruleset 0 object_hash rjenkins pg_num 256
pgp_num 256 last_change 69 flags hashpspool stripe_width 0
max_osd 4
osd.0 up in weight 1 up_from 55 up_thru 86 down_at 51
last_clean_interval [39,50)
192.168.30.21:6800/17319 10.1.1.21:6800/17319 10.1.1.21:6801/17319
192.168.30.21:6801/17319 exists,up
4f3172e1-adb8-4ca3-94af-6f0b8fcce35a
osd.1 up in weight 1 up_from 57 up_thru 86 down_at 53
last_clean_interval [41,52)
192.168.30.21:6803/17684 10.1.1.21:6802/17684 10.1.1.21:6804/17684
192.168.30.21:6805/17684 exists,up
1790347a-94fa-4b81-b429-1e7c7f11d3fd
osd.2 up in weight 1 up_from 79 up_thru 86 down_at 74
last_clean_interval [13,73)
192.168.30.22:6801/3178 10.1.1.22:6800/3178 10.1.1.22:6801/3178
192.168.30.22:6802/3178 exists,up
5520835f-c411-4750-974b-34e9aea2585d
osd.3 up in weight 1 up_from 81 up_thru 86 down_at 72
last_clean_interval [20,71)
192.168.30.22:6804/3414 10.1.1.22:6802/3414 10.1.1.22:6803/3414
192.168.30.22:6805/3414 exists,up
25e62059-6392-4a69-99c9-214ae335004
Thx Jiri
On 1/01/2015 15:21, Lindsay Mathieson wrote:
On Thu, 1 Jan 2015 02:59:05 PM Jiri Kanicky wrote:
I would expect that if I shut down one node, the system will keep
running. But when I tested it, I cannot even execute "ceph status"
command on the running node.
2 OSD nodes, 3 mon nodes here, works perfectly for me.
How many monitors do you have?
Maybe you need a third monitor-only node for quorum?
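If you add one, a rough sketch with ceph-deploy (assuming ceph-deploy is in
use and a hypothetical third host called ceph3 that is reachable; its
address may also need to be added to ceph.conf):

    # create and start mon.ceph3 and add it to the monmap
    ceph-deploy mon add ceph3
    # verify that all three monitors have joined the quorum
    ceph quorum_status --format json-pretty

With three monitors, any single node can go down and the remaining two still
form a majority.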
I set "osd_pool_default_size = 2" (min_size=1) on all pools, so I
thought that each copy will reside on each node. Which means that if 1
node goes down the second one will be still operational.
Does:
ceph osd pool get {pool name} size
return 2?
ceph osd pool get {pool name} min_size
return 1?
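For the pools in the osd dump above it would look like this (a sketch using
the 'rbd' pool as the example; the dump already shows size 2 and min_size 1
for every pool):

    ceph osd pool get rbd size        # expect: size: 2
    ceph osd pool get rbd min_size    # expect: min_size: 1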
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com