OSD down due to disk full

Hello,

Two OSDs in our small two-node cluster suddenly filled up completely.
Following the docs, I moved two PG directories to another disk to free some space.
Unfortunately, after this the OSD no longer starts.
Please advise! This happened before the 2:2 replication had finished, so we absolutely need to get the data back.
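
For reference, this is roughly what I did to free the space (the PG names and the spare-disk path below are only examples, not the exact directories I moved):

  # stop the full OSD (osd.3 here) before touching its data directory
  service ceph stop osd.3

  # move a couple of PG directories to a spare disk to free some space
  mkdir -p /mnt/spare/osd.3-pgs
  mv /var/lib/ceph/osd/ceph-3/current/2.7f_head /mnt/spare/osd.3-pgs/
  mv /var/lib/ceph/osd/ceph-3/current/2.3a_head /mnt/spare/osd.3-pgs/

  # try to start it again
  service ceph start osd.3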

Thank you very much!

Here's the log from the OSD from which I moved the directories:

ceph version 0.67.4 (ad85b8bfafea6232d64cb7ba76a8b6e8252fa0c7)
 1: /usr/bin/ceph-osd() [0x8fe702]
 2: (()+0xf030) [0x7f90be6d3030]
 3: (gsignal()+0x35) [0x7f90bcb7d475]
 4: (abort()+0x180) [0x7f90bcb806f0]
 5: (__gnu_cxx::__verbose_terminate_handler()+0x11d) [0x7f90bd3d289d]
 6: (()+0x63996) [0x7f90bd3d0996]
 7: (()+0x639c3) [0x7f90bd3d09c3]
 8: (()+0x63bee) [0x7f90bd3d0bee]
 9: (ceph::buffer::list::iterator::copy(unsigned int, char*)+0x127) [0x9c16a7]
 10: (PG::peek_map_epoch(ObjectStore*, coll_t, hobject_t&, ceph::buffer::list*)+0x11a) [0x7f7fda]
 11: (OSD::load_pgs()+0x57d) [0x79b73d]
 12: (OSD::init()+0xd96) [0x79f396]
 13: (main()+0x2251) [0x6bc1c1]
 14: (__libc_start_main()+0xfd) [0x7f90bcb69ead]
 15: /usr/bin/ceph-osd() [0x6bf0e9]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

--- logging levels ---
   0/ 5 none
   0/ 1 lockdep
   0/ 1 context
   1/ 1 crush
   1/ 5 mds
   1/ 5 mds_balancer
   1/ 5 mds_locker
   1/ 5 mds_log
   1/ 5 mds_log_expire
   1/ 5 mds_migrator
   0/ 1 buffer
   0/ 1 timer
   0/ 1 filer
   0/ 1 striper
   0/ 1 objecter
   0/ 5 rados
   0/ 5 rbd
   0/ 5 journaler
   0/ 5 objectcacher
   0/ 5 client
   0/ 5 osd
   0/ 5 optracker
   0/ 5 objclass
   1/ 3 filestore
   1/ 3 journal
   0/ 5 ms
   1/ 5 mon
   0/10 monc
   1/ 5 paxos
   0/ 5 tp
   1/ 5 auth
   1/ 5 crypto
   1/ 1 finisher
   1/ 5 heartbeatmap
   1/ 5 perfcounter
   1/ 5 rgw
   1/ 5 hadoop
   1/ 5 javaclient
   1/ 5 asok
   1/ 1 throttle
  -2/-2 (syslog threshold)
  -1/-1 (stderr threshold)
  max_recent     10000
  max_new         1000
  log_file /var/log/ceph/ceph-osd.3.log
--- end dump of recent events ---

Best regards,
Kalin.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
