Hi,
Thanks for trying it out. Indeed, this option is currently missing from ceph-dokan, but it's an easy thing to add. We'll take care of it as soon as possible; hopefully it will be included in the Pacific release.
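Once it lands, we expect mounting as a non-admin user to look roughly like this (the exact flag is still to be decided, so treat this as a sketch):

ceph-dokan.exe -l X --id x_lab -c C:\ProgramData\ceph\ceph.conf

where --id would select the cephx user (client.x_lab) and -c points at the Windows ceph.conf, mirroring how other Ceph clients pick their credentials.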
Please let us know if you have any other suggestions.
Regards,
Lucian Petrut
Hi,
We have been testing ceph-dokan, based on the guide here:
<https://documentation.suse.com/ses/7/single-html/ses-windows/index.html#windows-cephfs>
and watching the demo at <https://www.youtube.com/watch?v=BWZIwXLcNts&ab_channel=SUSE>.
Initial tests on a Windows 10 VM show good write speeds of around 600 MB/s,
which is faster than our Samba server.
What worries us is using the "root" ceph.client.admin.keyring on a
Windows system, as it gives access to the entire CephFS cluster - which
in our case is 5 PB.
I'd really like this to work, as it would let user-administered Windows
systems that control microscopes save data directly to CephFS, so that
we can process the data on our HPC cluster.
I'd normally use cephx and create a key that allows access to a directory
off the root.
e.g.
[root@ceph-s1 users]# ceph auth get client.x_lab
exported keyring for client.x_lab
[client.x_lab]
key = xXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX==
caps mds = "allow r path=/users/, allow rw path=/users/x_lab"
caps mon = "allow r"
caps osd = "allow class-read object_prefix rbd_children, allow rw pool=ec82pool"
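For anyone wanting to reproduce this, a key with these caps can be created along these lines:

ceph auth get-or-create client.x_lab \
    mds 'allow r path=/users/, allow rw path=/users/x_lab' \
    mon 'allow r' \
    osd 'allow class-read object_prefix rbd_children, allow rw pool=ec82pool'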
The real key works fine on Linux, but when we try it with ceph-dokan,
specifying the CephFS directory (x_lab) as the path, there is no option
to specify the user - is this hard-coded to admin?
Have I just missed something, or is this a missing feature?
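For comparison, this is roughly how we mount on Linux with the kernel client, where the user is passed explicitly (mount point and secret file paths are illustrative):

mount -t ceph :/users/x_lab /mnt/x_lab -o name=x_lab,secretfile=/etc/ceph/x_lab.secret

An equivalent way of passing the client name seems to be all that ceph-dokan is missing.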
Anyhow, ceph-dokan looks like it could be quite useful -
thank you, Cloudbase :)
best regards,
Jake
--
Dr Jake Grimmett
Head Of Scientific Computing
MRC Laboratory of Molecular Biology
Francis Crick Avenue,
Cambridge CB2 0QH, UK.
On 4/30/20 1:54 PM, Lucian Petrut wrote:
> Hi,
>
> We’ve just pushed the final part of the Windows PR series[1], allowing
> RBD images as well as CephFS to be mounted on Windows.
>
> There’s a comprehensive guide[2] describing the build, installation,
> configuration and usage steps.
>
> 2 out of 12 PRs have been merged already; we look forward to merging the
> others as well.
>
> Lucian Petrut
>
> Cloudbase Solutions
>
> [1] https://github.com/ceph/ceph/pull/34859
>
> [2] https://github.com/petrutlucian94/ceph/blob/windows.12/README.windows.rst
>
> *From: *Lucian Petrut
> *Sent: *Monday, December 16, 2019 10:12 AM
> *To: *dev@xxxxxxx
> *Subject: *Windows port
>
> Hi,
>
> We're happy to announce that a couple of weeks ago we submitted a few
> GitHub pull requests[1][2][3] adding initial Windows support. A big
> thank you to the people who have already reviewed the patches.
>
> To bring some context about the scope and current status of our work:
> we're mostly targeting the client side, allowing Windows hosts to
> consume rados, rbd and cephfs resources.
>
> We have Windows binaries capable of writing to rados pools[4]. We're
> using mingw to build the Ceph components, mostly because it requires
> the fewest changes to cross-compile Ceph for Windows. However, we're
> soon going to switch to MSVC/Clang due to mingw limitations and
> long-standing bugs[5][6]. Porting the unit tests is also something
> we're currently working on.
>
> The next step will be implementing a virtual miniport driver so that RBD
> volumes can be exposed to Windows hosts and Hyper-V guests. We're hoping
> to leverage librbd as much as possible as part of a daemon that will
> communicate with the driver. We're also targeting cephfs and are
> considering Dokan, which is FUSE-compatible.
>
> Merging the open PRs would allow us to move forward, focusing on the
> drivers and avoiding rebase issues. Any help on that is greatly appreciated.
>
> Last but not least, I'd like to thank SUSE, who's sponsoring this effort!
>
> Lucian Petrut
>
> Cloudbase Solutions
>
> [1] https://github.com/ceph/ceph/pull/31981
>
> [2] https://github.com/ceph/ceph/pull/32027
>
> [3] https://github.com/ceph/rocksdb/pull/42
>
> [4] http://paste.openstack.org/raw/787534/
>
> [5] https://sourceforge.net/p/mingw-w64/bugs/816/
>
> [6] https://sourceforge.net/p/mingw-w64/bugs/527/
>
>
> _______________________________________________
> Dev mailing list -- dev@xxxxxxx
> To unsubscribe send an email to dev-leave@xxxxxxx
>