Re: repeated writes of same file uses all space in an osd?

Hi Joe,

> Health is HEALTH_OK. I created an rbd0 device on a client machine. I moved the journal to another device (by symlinking).

I think you're expected to specify the location of the journal in ceph.conf instead of symlinking.
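For example, something along these lines in ceph.conf (the OSD id and device path below are just placeholders, adjust to your setup):

    [osd.0]
    osd journal = /dev/sdb1

That way the OSD is pointed at the journal device directly, rather than relying on a symlink in the data directory.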

> 2013-02-19 15:40:23.632048 mon.0 [INF] pgmap v604: 768 pgs: 768 active+clean; 197 MB data, 1016 MB used, 13524 MB / 15308 MB avail
> (So my first question is: why is 1016 MB used? It's not the journal, right?)

I assume there's some overhead associated with initializing the OSDs. It just seems like a lot because you have a very small amount of disk space.
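If you want to see where that space actually went, you could look at the OSD data directories on the OSD hosts (this assumes the default data path; adjust if yours differ):

    du -sh /var/lib/ceph/osd/ceph-*

Comparing that with the "data" figure in the pgmap line gives you a rough idea of how much is just per-OSD overhead rather than object data.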

> Is this expected? I don’t think most systems run out of space by repeatedly writing the same file over and over?

You're not just writing the same file over and over.

What you have actually done is create a 4 GB block device on top of Ceph. On that device you have created an ext4 file system. Formatting alone takes up disk space.
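Roughly, the sequence will have looked something like this (image name and mount point are just examples):

    rbd create test --size 4096      # 4 GB image
    rbd map test                     # shows up as /dev/rbd0 on the client
    mkfs.ext4 /dev/rbd0
    mount /dev/rbd0 /mnt

The mkfs step already scatters superblocks, inode tables and other metadata across the device, so some blocks count as used before you write a single file.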

You are then writing a file again and again to that ext4 file system. I haven't read up on the details of how ext4 does block allocation, but there is no guarantee at all that writing the same data to the same filename means the same blocks will be written to.
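You can see this for yourself with filefrag from e2fsprogs, for example (file name and sizes are just illustrative):

    dd if=/dev/zero of=/mnt/testfile bs=1M count=100
    filefrag -v /mnt/testfile
    rm /mnt/testfile
    dd if=/dev/zero of=/mnt/testfile bs=1M count=100
    filefrag -v /mnt/testfile

If the physical extents differ between the two runs, ext4 placed the recreated file on different blocks, and from the rbd device's point of view those are new blocks being touched.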

All in all this means that when you create a 4 GB file system and exercise it, you should expect that at some point all blocks have been touched. This means it will show up in Ceph as 4 GB of data, corresponding to, for example, roughly 8 GB of on-disk space with replication set to 2.

If you want to avoid this, you will want to enable TRIM support. This means mounting the ext4 file system with the discard option, making it possible for the filesystem to tell the underlying block device that it has stopped using specific blocks for data storage.
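As a sketch (mount point is just an example, and it only helps if the rbd device actually honours the discards):

    mount -o discard /dev/rbd0 /mnt

or, instead of continuous discard, an occasional batch trim with fstrim:

    fstrim /mnt

Either way, blocks that ext4 frees can then be released on the Ceph side instead of staying allocated.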

I don't know if the kernel rbd mounter actually supports TRIM at the moment. It is supported by librbd (for example from qemu).

--
Jens Kristian Søgaard, Mermaid Consulting ApS,
jens@xxxxxxxxxxxxxxxxxxxx,
http://www.mermaidconsulting.com/


