Re: Ceph copy-on-write

Hi Tommi,

Thank you for your answer and for filing the bug.

I think the performance is already great. Of course, copy-on-write would be even better.

The extra free-space loss is easy to explain: I am using 2 copies for redundancy, and df updates slowly.

I have another question. It looks like the 0.37 repository for Ubuntu Oneiric has broken dependencies
for ceph-client-tools.
Can you advise how I can install it?

Max

On 11/04/2011 01:10 PM, Tommi Virtanen wrote:
On Thu, Nov 3, 2011 at 20:09, Maxim Mikheev <mikhmv@xxxxxxxxx> wrote:
Is Ceph a copy-on-write system?
Ceph uses btrfs's copy-on-write properties internally, for cheap
snapshots and journaling speed.

As far as I know, Ceph does not currently expose reflink-style
functionality to clients, and there's no common API either.
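
For context, here is a minimal sketch (not part of any Ceph API) of what a
reflink-style clone looks like locally on btrfs: the BTRFS_IOC_CLONE ioctl
makes the destination share the source's extents instead of copying the data.
The ioctl number assumes the usual Linux ABI, and the paths and helper name
are made up for illustration.

import fcntl
import os

BTRFS_IOC_CLONE = 0x40049409  # _IOW(0x94, 9, int)

def reflink_copy(src_path, dst_path):
    # Open the source read-only and create/truncate the destination.
    src = os.open(src_path, os.O_RDONLY)
    dst = os.open(dst_path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        # After this ioctl the two files share extents; blocks are only
        # duplicated later, when one side is modified (copy-on-write).
        fcntl.ioctl(dst, BTRFS_IOC_CLONE, src)
    finally:
        os.close(src)
        os.close(dst)

This is the kind of operation that currently has no equivalent for Ceph
clients; a plain copy always moves the data.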

I created a ticket http://tracker.newdream.net/issues/1680 to track
this feature request.

If so, I think I am doing something wrong.
Copying files takes too much time and disk space in my Ceph installation.
Could you help me resolve my problem?
It seems you copied a 3075 MB file from Ceph to Ceph, saw the free
space in df drop by 3395 MB, and it took 109 seconds without sync, and
you're asking why?

First of all, the cephfs df statistics are allowed to update slowly,
to improve performance, so they're not a reliable measure in the first
place. Second, df reports the free space on the underlying filesystem
that ceph-osd sees, and that can be affected by other things such as
journaling and unrelated files stored on the same filesystem. Also
remember that Ceph normally stores multiple copies of your data.
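
As a rough back-of-envelope check using the numbers from this thread
(a 3075 MB file and the 2 replicas Max mentioned): once the statistics
catch up, the raw space used across the OSDs should settle near twice
the file size, plus journaling overhead, so the immediate 3395 MB drop
in df is not the final figure.

file_mb = 3075          # size of the copied file
replicas = 2            # redundancy level Max is using
expected_raw_mb = file_mb * replicas
print(expected_raw_mb)  # 6150, before journaling and other overhead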

As for writing at 31 MB/s, that's about 250 Mbit/s, and if you have an OSD
with just a single GigE network interface, the outgoing replication
data stream uses the same network link. I've seen plenty of cheap GigE
adapters max out at 600 Mbit/s, plus there's TCP and IP overhead. Lots
of disks also max out at 40 MB/s. To troubleshoot further, you'll need
to describe your cluster: how many OSDs, what kind of hardware and
networking, and what kind of performance you see from the disks when
accessed directly.
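
If it helps, here is a rough sketch of one way to time raw sequential
writes on an OSD's data disk, outside Ceph; the path and sizes are just
placeholders, and a tool like dd would do the same job.

import os
import time

path = "/data/osd0/throughput-test"   # hypothetical mount point of one OSD disk
chunk = b"\0" * (4 << 20)             # write in 4 MiB pieces
total = 1 << 30                       # 1 GiB overall

fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
start = time.time()
written = 0
while written < total:
    written += os.write(fd, chunk)
os.fsync(fd)                          # flush so the figure reflects the disk, not RAM
os.close(fd)
elapsed = time.time() - start
print("%.1f MB/s" % (written / elapsed / 1e6))
os.unlink(path)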

