rbd: still 10 TB used after image removal.

Hello,
I have an issue to report. It is with Ceph Emperor 0.72; I don't know whether it is solved in Firefly, as I did not see anything related to it in the changelog.

First of all, Ceph is a storage ogre. Let me explain. Out of 40 TB of raw disk, the data I can actually use is about 40 TB / 2 - 25% = around 15 TB. But as I delete data and store new data, I notice that the replicas are never overwritten. Logically, a PG has its mirror, so if the PG is updated, its corresponding mirror PG is updated too; or, better said, if a PG is overwritten with new data, its mirror PG is overwritten with that new data as well. Real-life experience has shown me that this is not the case: the PG is simply overwritten and a new mirror PG is created to hold the new data, while the old mirror PG remains. So as I delete and overwrite data on my RBD image, I see this ever-growing effect, which leads to PGs stuck forever backfilling OSDs. Slowly Ceph stops accepting new data, as more and more OSDs reach the near_full ratio and then the full ratio.
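
To make the capacity arithmetic above explicit, here is a quick sketch (the 40 TB raw, replica size 2 and roughly 25% headroom figures are the ones above; the headroom fraction is a rule of thumb to account for near_full margins and backfill room, not an official Ceph constant):

# Rough usable-capacity estimate for a replicated Ceph pool.
# 40 TB raw, replica size 2, ~25% kept free as headroom (near_full margin,
# room for backfill/rebalancing). The headroom fraction is a rule of thumb.

def usable_capacity_tb(raw_tb, replica_size, headroom_fraction):
    """Capacity actually available for user data, in TB."""
    after_replication = raw_tb / float(replica_size)  # each object is stored replica_size times
    return after_replication * (1.0 - headroom_fraction)

print(usable_capacity_tb(40, 2, 0.25))  # -> 15.0, matching the ~15 TB above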

Even after I do an rbd rm myimagename, I notice that, because some PGs were stuck backfilling, I still have 10 TB locked. The only way to get that space back is to completely wipe the Ceph cluster and reinstall a new version. I don't think most people using Ceph can afford an ever-growing system, because, believe it or not, new disks and new nodes have a cost, and it is not wise for the replicas to occupy more than three times the data they back up in a replica = 2 environment.
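
As an illustration, one way to watch whether space is really released after an rbd rm is to poll ceph df in JSON form, something like the sketch below. The field names ("stats", "total_used") are assumptions that may differ between releases, so adjust them to whatever your own ceph df -f json output contains.

#!/usr/bin/env python
# Watch cluster usage after "rbd rm" to see whether space is actually reclaimed.
# Deletion is asynchronous, so the used figure should drift down over time
# if the cluster is really freeing the objects.

import json
import subprocess
import time

def cluster_used_kb():
    out = subprocess.check_output(["ceph", "df", "--format", "json"])
    report = json.loads(out)
    return report["stats"]["total_used"]  # kilobytes here; field names may vary by release

while True:
    print("%s %s KB used" % (time.strftime("%Y-%m-%d %H:%M:%S"), cluster_used_kb()))
    time.sleep(300)  # poll every 5 minutes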


At a time when Ceph is looking to expand, it seems we are all oblivious to its core problems: no data trimming, and no trimming of data replicas.

Trimming data is probably resource-consuming, so we should at least have a "let's plan and do a trim on that day" option. The other way around would be better replica management, since that is where the problem originates.
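
To illustrate the "plan a trim for that day" idea on the client side, here is a rough sketch, assuming the image is mapped with the kernel client, mounted, and that the filesystem's discards actually reach the OSDs (that discard support is an assumption, and the mount point below is hypothetical). Something like this could be run from cron at a quiet hour:

#!/usr/bin/env python
# Sketch of a scheduled trim for a kernel-mapped, mounted rbd image.
# Assumptions: the image is mapped via "rbd map", mounted at MOUNT_POINT,
# and the rbd driver passes discards through so fstrim frees RADOS objects.
# MOUNT_POINT is hypothetical; schedule this from cron during off-peak hours.

import subprocess

MOUNT_POINT = "/mnt/myimage"  # hypothetical mount point of the mapped image

def trim(mount_point):
    # -v makes fstrim report how many bytes were discarded
    subprocess.check_call(["fstrim", "-v", mount_point])

trim(MOUNT_POINT)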

Best regards

-------
Alphe Salas



