CephFS/Hadoop/HBase

Hi,
I'm experimenting with running HBase on top of the hadoop-ceph Java
filesystem implementation, and I'm having an issue with space usage.

With the HBase daemons running, the amount of data in the 'data' pool
grows continuously, at a much higher rate than expected. Running du or
ls -lh against a mounted copy shows usage of ~16GB, but the data pool
has grown to consume ~160GB at times. When I restart the daemons, the
data pool shrinks rapidly shortly afterwards; if I restart all of them,
it comes back down to match the actual space usage.
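For concreteness, this is how I'm comparing the two views (the mount
point here is just an example):

    # per-pool usage and object counts as RADOS sees them
    rados df
    # apparent usage through the mounted filesystem
    du -sh /mnt/cephfs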

My current hypothesis is that the MDS isn't deleting the objects for
some reason, possibly because there are still open filehandles.

My question is: how can I get a report from the MDS on which objects
aren't visible from the filesystem, why it hasn't deleted them yet,
what open filehandles there are, and so on?
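The closest I've found so far is poking at the MDS admin socket; this
assumes an MDS with id 'a' and the default socket path, and the exact
counter names may vary by release:

    # list client sessions, i.e. who might still hold capabilities
    ceph --admin-daemon /var/run/ceph/ceph-mds.a.asok session ls
    # dump the MDS perf counters and look for stray/purge figures
    ceph --admin-daemon /var/run/ceph/ceph-mds.a.asok perf dump

but nothing there seems to map objects back to open filehandles.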

Cheers
Mike

--
Mike Bryant | Systems Administrator | Ocado Technology
mike.bryant@xxxxxxxxx | 01707 382148 | www.ocado.com
