I am not sure why you want to layer a clustered file system (OCFS2) on top of
Ceph RBD; it seems like a huge amount of overhead and complexity.
Better to use CephFS if you want Ceph at the bottom, or to just use iSCSI LUNs
under OCFS2.
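For example, if you went the CephFS route, each client could mount the
filesystem directly. A rough sketch (it assumes an MDS is deployed and uses
your monitor address from earlier in the thread; the secretfile path is only
an example):

# Mount CephFS on a client; needs mount.ceph from ceph-common and a running MDS.
mount -t ceph 192.168.112.192:6789:/ /data/downloads \
    -o name=admin,secretfile=/etc/ceph/admin.secret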
Regards,
Ric
On 01/04/2016 10:28 AM, Srinivasula Maram wrote:
My point is that the rbd device would need to support SCSI reservations, so that
OCFS2 could take a write lock while a particular client writes, to avoid corruption.
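As a quick check of whether a block device actually exposes SCSI persistent
reservations, sg_persist from sg3_utils can be used. The device name and key
below are just examples, and /dev/rbd devices have no SCSI layer, so this
would not work on them:

# Read existing registration keys; fails on devices without a SCSI layer.
sg_persist --in --read-keys --device=/dev/sdb
# Register a (made-up) reservation key from one client.
sg_persist --out --register --param-sark=0x1234abcd --device=/dev/sdb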
Thanks,
Srinivas
*From:*gjprabu [mailto:gjprabu@xxxxxxxxxxxx]
*Sent:* Monday, January 04, 2016 1:40 PM
*To:* Srinivasula Maram
*Cc:* Somnath Roy; ceph-users; Siva Sokkumuthu
*Subject:* RE: OSD size and performance
Hi Srinivas,
In our case OCFS2 is not interacting with SCSI directly. Here we have Ceph
storage that is mounted on many client systems using OCFS2. Moreover, OCFS2
supports SCSI.
https://blogs.oracle.com/wim/entry/what_s_up_with_ocfs2
http://www.linux-mag.com/id/7809/
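For context, OCFS2 coordinates its locking through its own o2cb cluster stack
and DLM over the network (TCP port 7777 by default) rather than through SCSI
reservations. A minimal /etc/ocfs2/cluster.conf sketch for a setup like ours
might look like this (cluster name, node names and IPs are made up, and every
client mounting the volume would be listed as a node):

cluster:
        node_count = 2
        name = ocfs2cluster

node:
        ip_port = 7777
        ip_address = 192.168.112.201
        number = 0
        name = client1
        cluster = ocfs2cluster

node:
        ip_port = 7777
        ip_address = 192.168.112.202
        number = 1
        name = client2
        cluster = ocfs2cluster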
Regards
Prabu
---- On Mon, 04 Jan 2016 12:46:48 +0530 *Srinivasula Maram
<Srinivasula.Maram@xxxxxxxxxxx>*wrote ----
I doubt the rbd driver supports the SCSI reservations needed to mount the same
rbd across multiple clients with OCFS2. Generally the underlying device (here
rbd) should have SCSI reservation support for a cluster file system.
Thanks,
Srinivas
*From:*ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] *On Behalf Of *Somnath Roy
*Sent:* Monday, January 04, 2016 12:29 PM
*To:* gjprabu
*Cc:* ceph-users; Siva Sokkumuthu
*Subject:* Re: OSD size and performance
Hi Prabu,
Check the krbd (and libceph) version running in the kernel. You can try
building the latest krbd source for the 7.1 kernel if that is an option
for you.
As I mentioned in my earlier mail, please isolate the problem the way I
suggested, if that seems reasonable to you.
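For instance, the module versions and the running kernel can be checked on
each client along these lines (a generic check; exact output varies by
distribution):

uname -r                           # running kernel, e.g. 3.10.x
modinfo rbd | head                 # rbd kernel module details
modinfo libceph | head             # libceph kernel module details
dmesg | grep -i 'rbd\|libceph'     # recent rbd/libceph kernel messages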
Thanks & Regards
Somnath
*From:*gjprabu [mailto:gjprabu@xxxxxxxxxxxx]
*Sent:* Sunday, January 03, 2016 10:53 PM
*To:* gjprabu
*Cc:* Somnath Roy; ceph-users; Siva Sokkumuthu
*Subject:* Re: OSD size and performance
Hi Somnath,
Please check the details below and let us know if you need any other information.
Regards
Prabu
---- On Sat, 02 Jan 2016 08:47:05 +0530 *gjprabu <gjprabu@xxxxxxxxxxxx>*wrote ----
Hi Somnath,
Please check the details and help me with this issue.
Regards
Prabu
---- On Thu, 31 Dec 2015 12:50:36 +0530 *gjprabu <gjprabu@xxxxxxxxxxxx>*wrote ----
Hi Somnath,
We are using RBD; please find the Linux and rbd versions below. I agree
this is related to a client-side issue. My thought went to the backup
because we take a full (not incremental) backup once a week, and we noticed
the issue once at that time, but I am not sure.
*Linux version*
CentOS Linux release 7.1.1503 (Core)
Kernel: 3.10.91

*rbd --version*
ceph version 0.94.5 (9764da52395923e0b32908d83a9f7304401fee43)

*rbd showmapped*
id pool image     snap device
1  rbd  downloads -    /dev/rbd1

*rbd ls*
downloads

*Client server RBD mounted using the ocfs2 file system*
/dev/rbd1 ocfs2 9.6T 2.6T 7.0T 27% /data/downloads
Client-level cluster configuration is done with 5 clients, and we use the
following procedure on each client node.
1) rbd map downloads --pool rbd --name client.admin -m
192.168.112.192,192.168.112.193,192.168.112.194 -k
/etc/ceph/ceph.client.admin.keyring
2) Formatting rbd with ocfs2
mkfs.ocfs2 -b4K -C 4K -L label -T mail -N5 /dev/rbd/rbd/downloads
3) We do the OCFS2 client-level configuration and start the ocfs2 service.
4) mount /dev/rbd/rbd/downloads /data/downloads
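As a sanity check after step 4, the ocfs2-tools utilities can show the cluster
state and which nodes see the mount (a rough sketch, assuming the standard
o2cb init script is in use):

service o2cb status    # o2cb cluster stack / heartbeat state on this node
mounted.ocfs2 -f       # which cluster nodes have the OCFS2 volume mounted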
Please let me know if you need any other information.
Regards
Prabu
---- On Thu, 31 Dec 2015 01:04:39 +0530 *Somnath Roy <Somnath.Roy@xxxxxxxxxxx>*wrote ----
Prabu,
I assume you are using krbd then. Could you please let us know the Linux
version/flavor you are using?
krbd had some hang issues that are supposed to be fixed in the latest
versions available. It could also be due to the OCFS2 -> krbd integration (?).
Handling data consistency is the responsibility of OCFS2, as krbd doesn't
guarantee that. So, I would suggest the following to root-cause the issue,
if your cluster is not in production:
1. Do a synthetic fio run on krbd alone (or after creating a filesystem on
top) and see if you can reproduce the hang.
2. Try building the latest krbd, or upgrade your Linux version to get a
newer krbd, and see if it is still happening.
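A minimal fio invocation for step 1 might look like the sketch below; the
parameters are arbitrary, the device name is made up, and it must be run
against a scratch rbd image (a raw random-write run would destroy the OCFS2
filesystem on the mapped device):

# Random 4k writes straight to a scratch krbd device, 10 minutes, queue depth 32.
fio --name=krbd-hang-test --filename=/dev/rbd2 --ioengine=libaio \
    --direct=1 --rw=randwrite --bs=4k --iodepth=32 --numjobs=4 \
    --time_based --runtime=600 --group_reporting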
<< Also we are taking backup from client, we feel that could be the reason
for this hang
I assume this is a regular filesystem backup? Why do you think this could
be a problem?
I think it is a client-side issue; I doubt it is because of the large OSD size.
Thanks & Regards
Somnath
*From:*gjprabu [mailto:gjprabu@xxxxxxxxxxxx]
*Sent:* Wednesday, December 30, 2015 4:29 AM
*To:* gjprabu
*Cc:* Somnath Roy; ceph-users; Siva Sokkumuthu
*Subject:* Re: OSD size and performance
Hi Somnath,
Thanks for your reply. In the current setup we have a client hang issue:
clients hang frequently and work again after a reboot. The clients mount
the volume with the OCFS2 file system so that multiple clients can access
the same data concurrently. Also, we are taking backups from a client, and
we feel that could be the reason for this hang.
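When the hang happens again, it may be worth saving the kernel log on the
affected client before rebooting; stuck tasks usually show up like this
(a generic check, not specific to our setup):

dmesg | grep -i 'blocked for more than'    # hung-task warnings, if any
dmesg | grep -i 'rbd\|libceph\|ocfs2'      # rbd/libceph/ocfs2 kernel messages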
Regards
Prabu
---- On Wed, 30 Dec 2015 11:33:20 +0530 *Somnath Roy <Somnath.Roy@xxxxxxxxxxx>*wrote ----
FYI, we are using 8TB SSD drives as OSDs and have not seen any problems so
far. Failure domain could be a concern with bigger OSDs.
Thanks & Regards
Somnath
*From:*ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] *On Behalf Of *gjprabu
*Sent:* Tuesday, December 29, 2015 9:38 PM
*To:* ceph-users
*Cc:* Siva Sokkumuthu
*Subject:* Re: OSD size and performance
Hi Team,
Could anybody please clarify the queries below?
Regards
Prabu
---- On Tue, 29 Dec 2015 13:03:45 +0530 *gjprabu <gjprabu@xxxxxxxxxxxx>*wrote ----
Hi Team,
We are using Ceph with 3 OSDs and 2 replicas. Each OSD is 13TB, and the
data on each OSD has currently reached 2.5TB. Will we face any problems
because of this huge OSD size?
OSD server configuration
Hard disk -- 13TB
RAM -- 96GB
CPU -- 2 CPUs with 8 cores each.
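For reference, overall usage and per-OSD utilization can be checked with the
standard commands below (assuming a Hammer-era 0.94.x cluster; output columns
vary slightly between releases):

ceph -s          # overall cluster health and status
ceph df          # cluster-wide and per-pool usage
ceph osd df      # per-OSD utilization and weight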
Regards
Prabu
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com