Re: CephFS and Samba hang on copy of large file

On Tue, Aug 16, 2016 at 6:44 AM, Wido den Hollander <wido@xxxxxxxx> wrote:
>
>> Op 16 augustus 2016 om 12:38 schreef Ira Cooper <ira@xxxxxxxxxxx>:
>>
>>
>> On Tue, Aug 16, 2016 at 6:32 AM, Wido den Hollander <wido@xxxxxxxx> wrote:
>> >
>> >> Op 15 augustus 2016 om 23:36 schreef Milosz Tanski <milosz@xxxxxxxxx>:
>> >>
>> >>
>> >> On Mon, Aug 15, 2016 at 9:58 AM, Wido den Hollander <wido@xxxxxxxx> wrote:
>> >> >
>> >> >> Op 15 augustus 2016 om 15:43 schreef Ira Cooper <ira@xxxxxxxxxxx>:
>> >> >>
>> >> >>
>> >> >> On Mon, Aug 15, 2016 at 9:35 AM, Wido den Hollander <wido@xxxxxxxx> wrote:
>> >> >> >
>> >> >> >> Op 15 augustus 2016 om 15:27 schreef Ira Cooper <ira@xxxxxxxxxxx>:
>> >> >> >>
>> >> >> >>
>> >> >> >> Have you tried the ceph VFS module in Samba yet?
>> >> >> >>
>> >> >> >
>> >> >> > Yes. That works, but the performance is a lot lower. So for that we are testing/using the CephFS kernel client.
>> >> >> >
>> >> >> > I would also like to see the VFS module upstream in Samba; you still need to manually patch it in.
>> >> >>
>> >> >> It is in upstream Samba.
>> >> >>
>> >> >> https://git.samba.org/?p=samba.git;a=blob;f=source3/modules/vfs_ceph.c;h=59e9b9cf9b3e8e5313a20823994fcacf9e4b4168;hb=f1b42ec778e08875e076df7fdf67dd69bf9b2757
>> >> >>
>> >> >> It's been there a while now.
>> >> >>
>> >> >> I'm curious what you are patching in, and what the performance numbers are. :)
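
For reference, a minimal vfs_ceph share definition usually looks
something like this (the share name, path and cephx user below are
placeholders; exact options depend on the cluster setup):

    [cephfs]
        # path is relative to the CephFS root as seen by libcephfs
        path = /
        vfs objects = ceph
        # cluster config and cephx user that vfs_ceph should connect with
        ceph:config_file = /etc/ceph/ceph.conf
        ceph:user_id = samba
        # vfs_ceph bypasses the kernel, so kernel share modes don't apply
        kernel share modes = no
        read only = no
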
>> >> >
>> >> > No, sorry for the confusion. I meant DEB and/or RPM packages. You have to compile it manually, which not all companies like.
>> >> >
>> >> > With the kernel client we see about 200MB/sec and with VFS about 50MB/sec.
>> >>
>> >> I'm willing to bet that at least some of it is page cache & read ahead.
>> >>
>> >
>> > Probably indeed. Tested with Jewel and VFS and we see a much higher throughput right now.
>> >
>> > We had to disable sendfile in Samba though.
>> >
>> > Still, it's not good that Samba locked up and stayed in D state (uninterruptible sleep). That should not happen.
>>
>> Sendfile doesn't make much sense with a userspace filesystem like vfs_ceph.
>>
>
> True, but it was still in the config. Had to debug that.
>
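
In case anyone else runs into this: sendfile is controlled per share
(or globally) in smb.conf, so disabling it for the CephFS-backed share
is a one-liner, e.g.:

    # placeholder share name, same share as in the earlier example
    [cephfs]
        use sendfile = no
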
>> You said performance improved; by how much?
>>
>
> We went from 50MB/sec to 150 ~ 200MB/sec write speed.
>
> Writing directly to CephFS (kernel client) runs at 900MB/sec.
>
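
For comparison, that 900MB/sec number is with CephFS mounted directly
through the kernel client, i.e. something along the lines of (monitor
address, cephx user and secret file are placeholders):

    # placeholder monitor address, cephx user and secret file
    mount -t ceph 192.0.2.1:6789:/ /mnt/cephfs \
        -o name=samba,secretfile=/etc/ceph/samba.secret
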
>> Also, are you using copy, robocopy, or Windows Explorer? And which
>> version of Windows?
>>
>
> A wide range of operating systems. In this case it was an Ubuntu 16.04 desktop, but the clients also include Windows 7, 8 and 10. On Windows it's just Explorer.

Were you talking about read or write performance before?

Also, any chance you could run these tests from a Windows 7+ machine?

I know it is an odd request on a Linux list, but for debugging Samba,
it helps at times.

Thanks,

-Ira / ira@(samba.org|redhat.com|wakeful.net)

Technical Lead / Red Hat Storage - SMB (Samba) Team


