1 pg unknown (from cephfs data pool)

I had a 1x replicated CephFS data test pool. When an OSD died I was left
with '1 pg stale+active+clean' on that pool[1]; after a cluster reboot
this turned into '1 pg unknown'.
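
For the archives, roughly how to find the affected pgid (from memory, so
double-check; 2.0 below is a made-up pgid standing in for the real one):

ceph health detail        # names the pgid behind '1 pg unknown'
ceph pg dump_stuck stale  # listed the pg while it was still stale
ceph pg 2.0 query         # for an unknown pg this errors out, since no OSD reports it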

ceph pg repair did not fix anything (neither in the stale nor in the 
unknown state).

I recreated the PG with:
ceph osd force-create-pg <pgid> --yes-i-really-mean-it
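
For completeness, the sequence with the same made-up pgid 2.0 and a check
afterwards (I believe the recreated pg should come back empty as
active+clean):

ceph osd force-create-pg 2.0 --yes-i-really-mean-it
ceph pg 2.0 query | grep state   # should now report active+clean
ceph -s                          # the '1 pg unknown' warning should be gone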

The question now is: say I had one or two files in this pool/PG, is 
their metadata still tracked by the MDS? Do I need to fix something on 
the MDS side?
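
My guess is that a forward scrub from the MDS would at least surface any
objects that disappeared with the pg; assuming Nautilus or later (where
scrub moved into 'ceph tell'), and with <fsname> standing in for the
actual filesystem name, something like:

ceph tell mds.<fsname>:0 scrub start / recursive
ceph tell mds.<fsname>:0 scrub status
ceph tell mds.<fsname>:0 damage ls   # lists backtrace/object damage found

But I would be happy to hear whether that is the right approach.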

PS. This was just a performance-testing pool, so at most there could be 
a few test images on it, nothing important.




[1]
https://www.mail-archive.com/ceph-users@xxxxxxx/msg03147.html



