VM shutdown because of PG increase

Hi, Cephers.

Our Ceph version is Hammer (0.94.7).

I deployed Ceph with OpenStack; all instances use Ceph block storage as their local volumes.

After I increased the PG count of the pool from 256 to 768, many VMs shut down.
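For reference, the increase was done roughly like this (the pool name below is a placeholder):

    # grow the placement-group count, then the placement target count
    ceph osd pool set <pool> pg_num 768
    ceph osd pool set <pool> pgp_num 768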

This was a very strange case for me.

Below is the libvirt error log from one of the VMs. Note that the trace reports ceph version 0.94.1 even though the cluster runs 0.94.7, so the qemu process appears to link an older librados/librbd.

osd/osd_types.cc: In function 'bool pg_t::is_split(unsigned int, unsigned int, std::set<pg_t>*) const' thread 7fc4c01b9700 time 2016-06-28 14:17:35.004480
osd/osd_types.cc: 459: FAILED assert(m_seed < old_pg_num)
 ceph version 0.94.1 (e4bfad3a3c51054df7e537a724c8d0bf9be972ff)
 1: (()+0x15374b) [0x7fc4d1ca674b]
 2: (()+0x222f01) [0x7fc4d1d75f01]
 3: (()+0x222fdd) [0x7fc4d1d75fdd]
 4: (()+0xc5339) [0x7fc4d1c18339]
 5: (()+0xdc3e5) [0x7fc4d1c2f3e5]
 6: (()+0xdcc4a) [0x7fc4d1c2fc4a]
 7: (()+0xde1b2) [0x7fc4d1c311b2]
 8: (()+0xe3fbf) [0x7fc4d1c36fbf]
 9: (()+0x2c3b99) [0x7fc4d1e16b99]
 10: (()+0x2f160d) [0x7fc4d1e4460d]
 11: (()+0x80a5) [0x7fc4cd7aa0a5]
 12: (clone()+0x6d) [0x7fc4cd4d7cfd]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
terminate called after throwing an instance of 'ceph::FailedAssertion'
2016-06-28 05:17:36.557+0000: shutting down
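
As far as I can tell, the failed assert checks that a PG's seed is smaller than the old pg_num handed to pg_t::is_split(). Here is a minimal stand-alone sketch of that invariant (my own simplification, not the actual Ceph source), assuming a client that still holds the old pg_num of 256 while handling a PG derived from the new 768-PG map:

    #include <cassert>
    #include <cstdint>
    #include <iostream>

    // Simplified stand-in for Ceph's pg_t (not the real definition).
    struct pg_t {
        uint32_t m_seed;  // placement seed; always < pg_num of the map it came from

        // Sketch of the split check: does this PG gain children when the
        // pool grows from old_pg_num to new_pg_num?
        bool is_split(uint32_t old_pg_num, uint32_t new_pg_num) const {
            // The line from the trace: the seed must predate old_pg_num.
            assert(m_seed < old_pg_num);
            return new_pg_num > old_pg_num;  // the real code also computes the children
        }
    };

    int main() {
        pg_t pg{500};  // a seed that can only come from a map with pg_num > 500
        std::cout << pg.is_split(768, 768) << "\n";  // ok: 500 < 768
        // A client still holding pg_num = 256 while seeing a 768-map PG:
        std::cout << pg.is_split(256, 768) << "\n";  // 500 >= 256 -> FAILED assert, abort
        return 0;
    }

If such an abort happens inside the librados linked by qemu, the whole qemu process dies and libvirt records the domain as shutting down, which would match what we saw.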


Could anybody explain this?

Thank you.
