Hi Lucian,
I've now copied 32TB of data from a Windows server to our test Ceph
cluster, without any crashes.
Average speed over one 16TB copy was 468MB/s, i.e. just under 10 hours
to transfer the 16TB, across a data set of 1,144,495 files.
The transfer path was: Windows 10 server (hardware RAID) > Windows 10 VM
(with the cephfs driver installed) > cephfs test cluster.
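For reference, the robocopy runs look something like the below (paths are
placeholders, and the switches are just one reasonable choice, not
necessarily exactly what we ran):

  :: mirror the source tree with 32 copy threads; keep retries short so
  :: a single bad file can't stall a ten-hour job
  robocopy D:\data X:\data /MIR /MT:32 /R:1 /W:1 /NP /LOG:C:\logs\copy.log

With ~1.1 million files, the /MT thread count matters at least as much as
raw bandwidth for much of the run.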
If you can get the user option working, I'll put the driver on a
physical Windows 10 system and see how fast a direct transfer is.
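For context, the "user option" means mounting as a restricted cephx
client rather than client.admin. The cluster-side credentials already
work today; the Windows-side syntax below is hypothetical, pending
Lucian's patch:

  # cluster side: create a cephx user limited to the filesystem
  # (client name is a placeholder)
  ceph fs authorize cephfs client.winuser / rw

  # Windows side, hypothetically, assuming the driver ends up
  # mirroring the --id convention of the other Ceph clients:
  ceph-dokan.exe -l X --id winuser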
One other thing that would be useful is over-quota error handling.
If I try writing and exceed the quota, the mount just hangs.
If I increase the quota, the mount recovers, but it would be nice if the
client surfaced a quota error in Windows.
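For anyone reproducing this: cephfs quotas are plain extended attributes,
so raising one from a Linux client looks like this (path and size are
examples):

  # set / raise a 20TB quota on the directory (a value of 0 removes it)
  setfattr -n ceph.quota.max_bytes -v 20000000000000 /mnt/cephfs/share

  # confirm the current value
  getfattr -n ceph.quota.max_bytes /mnt/cephfs/share

On the Linux kernel client an over-quota write fails with EDQUOT; having
something equivalent surface as a disk-full/quota error in Explorer would
be ideal.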
Test cluster consists of 6 Dell C2100 nodes, bought in 2012.
Each node has 10 x 900GB 10k HDD; we use EC 4+2 for the cephfs data pool,
with metadata on 3 x NVMe. Dual Xeon X5650, 12 threads, 96GB RAM, 2 x 10Gb bond.
Ceph 15.2.8, Scientific Linux 7.9.
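In case anyone wants to build the same layout, one way to do it looks
roughly like this (a sketch; profile, rule and pool names are
placeholders, not necessarily what we used):

  # EC 4+2 data pool, one shard per host (needs >= 6 hosts)
  ceph osd erasure-code-profile set ec42 k=4 m=2 crush-failure-domain=host
  ceph osd pool create cephfs_data erasure ec42
  ceph osd pool set cephfs_data allow_ec_overwrites true

  # replicated metadata pool pinned to the NVMe device class
  ceph osd crush rule create-replicated nvme_rule default host nvme
  ceph osd pool create cephfs_metadata replicated nvme_rule

  # --force is required when the default data pool is erasure-coded
  ceph fs new cephfs cephfs_metadata cephfs_data --force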
best regards,
Jake
On 2/23/21 11:49 AM, Lucian Petrut wrote:
Hi,
That’s great, thanks for the confirmation.
I hope I’ll get to push a larger change by the end of the week, covering
the user option as well as a few other fixes.
Regards,
Lucian
From: Jake Grimmett <jog@xxxxxxxxxxxxxxxxx>
Sent: Tuesday, February 23, 2021 11:48 AM
To: dev@xxxxxxx; Lucian Petrut <lpetrut@xxxxxxxxxxxxxxxxxxxxxx>
Subject: Re: Windows port
Hi Lucian,
Good news - your fix works; I was able to write 7TB from a Windows 10 VM
to cephfs last night.
I'll carry on testing the Windows client with large workloads using
robocopy.
Thanks again for working on this; a cephx "user" option should provide
the core requirements we need for a usable system :)
best regards,
Jake
Note: I am working from home until further notice.
For help, contact unixadmin@xxxxxxxxxxxxxxxxx
--
Dr Jake Grimmett
Head Of Scientific Computing
MRC Laboratory of Molecular Biology
Francis Crick Avenue,
Cambridge CB2 0QH, UK.
Phone 01223 267019
Mobile 0776 9886539
_______________________________________________
Dev mailing list -- dev@xxxxxxx
To unsubscribe send an email to dev-leave@xxxxxxx