Hi,

> “The clients run a program written in-house, which generates files of various sizes, from 1 KB to 200 GB”

If the clients are running custom software on Windows, and if it is at all possible, I would consider using librados. The library is available for C/C++, Java, PHP and Python. The object API is fairly simple and would lift the CephFS requirement. Using librados, your clients will be able to talk directly to the cluster (OSDs).
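As a rough illustration of how simple the object API is (the pool and object names below are made up, not from this thread), a write/read round trip with the Python rados bindings looks roughly like this:

    import rados

    # Connect using a local ceph.conf and keyring (paths are placeholders).
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('mypool')        # hypothetical pool name
        try:
            # Write an object straight to the OSDs, then read it back.
            ioctx.write_full('result-0001', b'some payload')
            data = ioctx.read('result-0001')
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()

Very large files (the 200 GB case) would be split across several objects, or written incrementally with write()/append() rather than write_full().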
There are other options to access Ceph from Windows, but they require a gateway (CephFS to NFS/Samba, or RBD to NFS/Samba), which usually ends up being a bottleneck and a SPOF.

Regarding performance, you mentioned 160 GB/min, which is about 2.7 GB/s. That shouldn't be too difficult to reach with journals on SSDs. In a previous thread you mentioned 468 OSDs. Doing a quick napkin calculation with a journal:OSD ratio of 1:6 (usually 1:4 to 1:6), that gives 78 journals. If you estimate 400 MB/s journal write speed (like the Intel S3710 series) and a replication factor of 3, you get a maximum theoretical write speed of roughly 10 GB/s (napkin math spelled out below).
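To make the napkin math explicit (same assumptions as above: 468 OSDs, a 1:6 journal:OSD ratio, ~400 MB/s per journal SSD, 3x replication), a quick Python sketch:

    # All figures are the assumptions stated above, not measurements.
    num_osds = 468
    osds_per_journal = 6            # 1:6 journal:OSD ratio
    journal_mb_s = 400              # per-SSD sequential write, S3710 class
    replicas = 3

    journals = num_osds / osds_per_journal            # 78 journal SSDs
    aggregate_gb_s = journals * journal_mb_s / 1000   # ~31.2 GB/s raw journal bandwidth
    theoretical_gb_s = aggregate_gb_s / replicas      # ~10.4 GB/s client-visible writes
    at_50_percent = theoretical_gb_s * 0.5            # ~5.2 GB/s, still above 2.7 GB/s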
Say you get ~50% of the theoretical write speed (I usually reach 50-60%), you are still comfortably above your target of 2.7 GB/s.

Regards,
Maxime G.
From: ceph-users <ceph-users-bounces@xxxxxxxxxxxxxx> on behalf of Nick Fisk <nick@xxxxxxxxxx>

I'm not sure how stable that ceph-dokan is; I would imagine the best way to present CephFS to Windows users would be through Samba.
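For illustration, a minimal Samba re-export of a kernel-mounted CephFS could look something like this (the share name and mount point are examples only, not from this thread):

    # /etc/samba/smb.conf (excerpt): re-export a CephFS kernel mount
    [cephfs]
        # /mnt/cephfs is a hypothetical CephFS mount point on the gateway
        path = /mnt/cephfs
        browseable = yes
        read only = no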
From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of ????????? ????????

Hello,

I continue to design a high-performance, petabyte-scale Ceph cluster. We plan to purchase a high-performance server for the clients, running Windows Server 2016. The clients run in Docker containers. The clients run a program written in-house, which generates files of various sizes, from 1 KB to 200 GB (yes, a scary single file size). We plan to use 40 Gbit/s InfiniBand between the clients and Ceph. The clients always work with Ceph one at a time, and each only ever writes or only ever reads. What I do not yet understand is which Ceph access method is appropriate: object storage, block storage, or the CephFS file system.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com