RE: Windows port

Hi,

 

Never mind, I managed to reproduce the crash after increasing the number of threads: http://paste.openstack.org/raw/802827/

 

I’ll try to get a fix as soon as possible. Thanks again for all the info!

 

Lucian

 

From: Lucian Petrut
Sent: Friday, February 19, 2021 3:56 PM
To: Jake Grimmett; dev@xxxxxxx
Subject: RE: Windows port

 

Hi,

 

I haven’t been able to reproduce the issue yet. Judging by the exception message, it seems to be related to libcephfs’s inode handling.

 

The latest msi installer includes debug symbols. Could you please send us an archive with a crash dump [1] and ideally the msi installer as well?

 

Thanks,

Lucian

 

[1] https://docs.microsoft.com/en-us/windows/win32/wer/collecting-user-mode-dumps
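In case it helps, per that doc [1], user-mode dumps can be enabled with a couple of registry values, e.g. (the dump folder is just an example path; create it first and run from an elevated prompt):

```shell
:: Enable full user-mode crash dumps for ceph-dokan.exe.
:: DumpType 2 = full dump, per the "Collecting User-Mode Dumps" doc [1];
:: C:\dumps is an arbitrary example folder.
reg add "HKLM\SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps\ceph-dokan.exe" /v DumpFolder /t REG_EXPAND_SZ /d "C:\dumps" /f
reg add "HKLM\SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps\ceph-dokan.exe" /v DumpType /t REG_DWORD /d 2 /f
```

After the next crash, the .dmp file should appear in that folder.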

 

From: Jake Grimmett
Sent: Wednesday, February 17, 2021 4:42 PM
To: Lucian Petrut; dev@xxxxxxx
Subject: Re: Windows port

 

Hi Lucian,

Many thanks for looking into this. I completely understand that it is just a PoC, but just to let you know that the crash still occurs with your nightly build.

C:\Users\unixadmin>ceph-dokan -l x -o
ceph_conf_read_file OK
2021-02-17T12:40:59.637GMT Standard Time 1 -1 asok(0xb36ef20) AdminSocketConfigObs::init: failed: AdminSocket::bind_and_listen: failed to bind the UNIX domain socket to 'C:/ProgramData/ceph/client.admin.4136.asok': (13) Permission denied
ceph_mount OK
ceph_getcwd [/]
../src/include/interval_set.h: In function 'void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [with T = short unsigned int; C = std::map]' thread 11 time 2021-02-17T13:58:21.387427GMT Standard Time
../src/include/interval_set.h: 538: FAILED ceph_assert(p->first <= start)
 ceph version 15.0.0-21905-g8f11930301 (8f11930301a9ecd5315a8a69a9874eb4428f9366) pacific (rc)
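(The asok warning near the top appears to just be a permissions issue on C:/ProgramData/ceph; presumably the socket could be pointed at a user-writable directory in ceph.conf, assuming the standard admin socket option behaves the same on Windows, e.g.:)

```ini
[client]
; Example only: relocate the admin socket to a user-writable path
; (or run the shell elevated instead). $name and $pid are the
; standard ceph.conf metavariables.
admin socket = C:/Users/unixadmin/ceph/$name.$pid.asok
```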

best regards,

Jake

On 17/02/2021 10:56, Lucian Petrut wrote:

Hi,

 

That crash is indeed a concern. We’ll look into it as soon as possible.

 

A small disclaimer: our main focus was RBD; ceph-dokan was more of a PoC, as mentioned in the docs. Nevertheless, it seems to be in high demand, so we’re going to invest more time in it.

 

FWIW, here’s our nightly Ceph MSI build: https://cloudbase.it/downloads/ceph_v16_0_0_beta.msi. It provides some important RBD fixes (though you’re probably more interested in cephfs).

 

I’ll follow up as soon as those issues are fixed.

 

Thanks,

Lucian

 

From: Jake Grimmett
Sent: Wednesday, February 17, 2021 12:25 PM
To: Lucian Petrut; dev@xxxxxxx
Subject: Re: Windows port

 

Hi Lucian,

 

Thanks for your reply :)

 

Our two main requirements are:

 

1) Security (...it's great news you can fix this.)

 

2) Stability (which is understandably harder)

 

Our testing so far has been on a Windows 10 Pro VM (8 cores, 8 GB RAM).

 

We mount one share from a real Windows 10 system as Z: (hardware RAID, battery-backed, 400TB)

Then we use ceph-dokan to mount /cephfs on the VM as X:

 

We then use robocopy to copy data from the Windows server to the cephfs mount:

 

C:\>robocopy z:\TestDatasets "X:\jog\Kates Data" /np /mt:128 /log:c:/Users/unixadmin/desktop/robolog.txt /E

 

After copying 5TB from the Windows server to /cephfs, the ceph-dokan mount crashes.

 

C:\WINDOWS\system32>ceph-dokan -l x -o
ceph_conf_read_file OK
ceph_mount OK
ceph_getcwd [/]
/home/abuild/rpmbuild/BUILD/ceph/src/include/interval_set.h: In function 'void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [with T = short unsigned int; C = std::map]' thread 12 time 2021-02-11T16:56:04.874874GMT Standard Time
/home/abuild/rpmbuild/BUILD/ceph/src/include/interval_set.h: 527: FAILED ceph_assert(p->first <= start)
 ceph version IT-NOTFOUND (f762f7c3be560c11ea0dd51896c976c45137f5ed) pacific (dev)

 

Restarting robocopy with a lower thread count results in another crash.
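For reference, the thread count is set via robocopy's /MT flag (the run above used /mt:128; robocopy's default is 8), e.g.:

```shell
:: Same copy as above with a modest thread count; paths as in the original run.
robocopy z:\TestDatasets "X:\jog\Kates Data" /E /MT:8 /np /log:c:\Users\unixadmin\Desktop\robolog.txt
```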

 

Finally, one other feature request: ceph-dokan reports its version as "16.0.0"; perhaps a "-V" or "--version" switch would be useful?

 

We are using https://github.com/dokan-dev/dokany/releases/download/v1.4.1.1000/DokanSetup.exe

 

best regards,

 

Jake

 

On 15/02/2021 15:09, Lucian Petrut wrote:

Hi,

 

Thanks for trying it out. Indeed, this option is currently missing from ceph-dokan but it’s an easy thing to add. We’ll take care of it as soon as possible, hopefully it will be included in the Pacific release.

 

Please let us know if you have any other suggestions.

 

Regards,

Lucian Petrut

 

From: Jake Grimmett
Sent: Thursday, February 11, 2021 3:00 PM
To: Lucian Petrut; Lucian Petrut; dev@xxxxxxx
Subject: Re: Windows port

 

Hi,

We have been testing ceph-dokan, based on the guide here:
<https://documentation.suse.com/ses/7/single-html/ses-windows/index.html#windows-cephfs>

And watching <https://www.youtube.com/watch?v=BWZIwXLcNts&ab_channel=SUSE>

Initial tests on a Windows 10 VM show good write speed (around 600 MB/s),
which is faster than our Samba server.

What worries us is using the "root" ceph.client.admin.keyring on a
Windows system, as it gives access to the entire cephfs cluster, which
in our case is 5PB.

I'd really like this to work, as it would let user-administered Windows
systems that control microscopes save data directly to cephfs, so that
we can process the data on our HPC cluster.

I'd normally use cephx, and make a key that allows access to a directory
off the root.

e.g.

[root@ceph-s1 users]# ceph auth get client.x_lab
exported keyring for client.x_lab
[client.x_lab]
        key = xXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX==
        caps mds = "allow r path=/users/, allow rw path=/users/x_lab"
        caps mon = "allow r"
        caps osd = "allow class-read object_prefix rbd_children, allow rw pool=ec82pool"
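(A key like this can be created with ceph auth get-or-create, run on a cluster admin node, along these lines:)

```shell
# Create (or fetch) a restricted cephx user with the same caps as the
# keyring above.
ceph auth get-or-create client.x_lab \
  mon 'allow r' \
  mds 'allow r path=/users/, allow rw path=/users/x_lab' \
  osd 'allow class-read object_prefix rbd_children, allow rw pool=ec82pool'
```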

The real key works fine on Linux, but when we try it with ceph-dokan
and specify the ceph directory (x_lab) as the path, there is no option
to specify the user. Is this hard-coded as admin?

Have I just missed something? Or is this a missing feature?
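For comparison, on Linux I select the client identity explicitly at mount time, e.g. with ceph-fuse:

```shell
# Mount only the x_lab subtree as client.x_lab
# (ceph-fuse's -r restricts the mounted root).
ceph-fuse -n client.x_lab -r /users/x_lab /mnt/x_lab
# Kernel client equivalent:
#   mount -t ceph mon-host:/users/x_lab /mnt/x_lab -o name=x_lab,secretfile=/etc/ceph/x_lab.secret
```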

Anyhow, ceph-dokan looks like it could be quite useful.
Thank you, Cloudbase :)

best regards,

Jake

--
Dr Jake Grimmett
Head Of Scientific Computing
MRC Laboratory of Molecular Biology
Francis Crick Avenue,
Cambridge CB2 0QH, UK.

On 4/30/20 1:54 PM, Lucian Petrut wrote:
> Hi,
>
> We’ve just pushed the final part of the Windows PR series[1], allowing
> RBD images as well as CephFS to be mounted on Windows.
>
> There’s a comprehensive guide[2], describing the build, installation,
> configuration and usage steps.
>
> 2 out of 12 PRs have been merged already, we look forward to merging the
> others as well.
>
> Lucian Petrut
>
> Cloudbase Solutions
>
> [1] https://github.com/ceph/ceph/pull/34859
> <https://github.com/ceph/ceph/pull/34859>
>
> [2]
> https://github.com/petrutlucian94/ceph/blob/windows.12/README.windows.rst <https://github.com/petrutlucian94/ceph/blob/windows.12/README.windows.rst>
>
> *From: *Lucian Petrut
> *Sent: *Monday, December 16, 2019 10:12 AM
> *To: *dev@xxxxxxx <mailto:dev@xxxxxxx>
> *Subject: *Windows port
>
> Hi,
>
> We're happy to announce that a couple of weeks ago we submitted a few
> GitHub pull requests[1][2][3] adding initial Windows support. A big
> thank you to the people who have already reviewed the patches.
>
> To bring some context about the scope and current status of our work:
> we're mostly targeting the client side, allowing Windows hosts to
> consume rados, rbd and cephfs resources.
>
> We have Windows binaries capable of writing to rados pools[4]. We're
> using mingw to build the ceph components, mostly because it requires
> the fewest changes to cross-compile ceph for Windows. However, we're
> soon going to switch to MSVC/Clang due to mingw limitations and
> long-standing bugs[5][6]. Porting the unit tests is also something
> we're currently working on.
>
> The next step will be implementing a virtual miniport driver so that RBD
> volumes can be exposed to Windows hosts and Hyper-V guests. We're hoping
> to leverage librbd as much as possible as part of a daemon that will
> communicate with the driver. We're also aiming at cephfs and considering
> using Dokan, which is FUSE compatible.
>
> Merging the open PRs would allow us to move forward, focusing on the
> drivers and avoiding rebase issues. Any help on that is greatly appreciated.
>
> Last but not least, I'd like to thank SUSE, who is sponsoring this effort!
>
> Lucian Petrut
>
> Cloudbase Solutions
>
> [1] https://github.com/ceph/ceph/pull/31981
>
> [2] https://github.com/ceph/ceph/pull/32027
>
> [3] https://github.com/ceph/rocksdb/pull/42
>
> [4] http://paste.openstack.org/raw/787534/
>
> [5] https://sourceforge.net/p/mingw-w64/bugs/816/
>
> [6] https://sourceforge.net/p/mingw-w64/bugs/527/
>
>
> _______________________________________________
> Dev mailing list -- dev@xxxxxxx
> To unsubscribe send an email to dev-leave@xxxxxxx
>

 

Note: I am working from home until further notice.
For help, contact unixadmin@xxxxxxxxxxxxxxxxx
-- 
Dr Jake Grimmett
Head Of Scientific Computing
MRC Laboratory of Molecular Biology
Francis Crick Avenue,
Cambridge CB2 0QH, UK.
Phone 01223 267019
Mobile 0776 9886539

 


 

 

_______________________________________________
Dev mailing list -- dev@xxxxxxx
To unsubscribe send an email to dev-leave@xxxxxxx
