Re: Disk allocation

Sorry to jump into the conversation, but how slow can the deletion of
files actually be?

One of the tests I ran a few weeks ago had me generating files,
deleting them, and then writing them again from a number of clients. I
noticed that the space was never freed up again. I have my OSDs and
their journals on dedicated partitions.

I had planned on asking more on this once I had a stable system again.
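For reference, here is roughly how I've been watching for the space to come back. This is just a sketch, assuming the ceph and rados CLI tools are installed on the client and that ceph.conf points at the cluster:

```shell
# Check whether deleted space is eventually reclaimed on the OSDs.
# Guarded so it degrades gracefully if the tools or cluster are absent.
if command -v ceph >/dev/null 2>&1; then
    # Overall cluster status, including total used/available space.
    ceph -s || echo "cluster not reachable from this host"
    # Per-pool breakdown of objects and KB used.
    rados df || true
else
    echo "ceph CLI not installed on this host"
fi
```

Running `rados df` right after the deletes and again a few minutes later should show the per-pool KB-used numbers dropping as the OSDs catch up, if delayed deletion is what's going on.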



On Mon, Mar 21, 2011 at 3:17 PM, Gregory Farnum
<gregory.farnum@xxxxxxxxxxxxx> wrote:
> On Sat, Mar 19, 2011 at 11:43 PM, Martin Wilderoth
> <martin.wilderoth@xxxxxxxxxx> wrote:
>> I have a small ceph cluster with 4 osd ( 2 disks on 2 hosts).
>>
>> I have been adding and removing files from the file system, mounted as ceph on another host.
>>
>> Now I have removed most of the data on the file system, so I only have 300 MB left plus two snapshots.
>>
>> The problem is that, looking at the disks, 88G of data is allocated
>> on the ceph filesystem.
> There are a few possibilities:
> 1) You've hosted your OSDs on a partition that's shared with the rest
> of the computer. In that case the reported used space will include
> whatever else is on the partition, not just the Ceph files. (This can
> include Ceph debug logs, so even if nothing used to be there but you
> were logging on that partition that can build up pretty quickly.)
> 2) You deleted the files quickly and just haven't given enough time
> for the file deletion to propagate to the OSDs. Because the POSIX
> filesystem is layered over an object store, this can take some time.
> 3) Your snapshots contain a lot of files, so nothing (or very little)
> actually got deleted. Snapshots are pretty cool but they aren't
> miraculous disk space!
> Given the uneven distribution of disk space I suspect option #2, but I
> could be mistaken. :) Let us know!
> -Greg
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majordomo@xxxxxxxxxxxxxxx
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>

