Re: iscsi doubt

ESGLinux wrote:

    On Mon, 20 Apr 2009 18:18:22 +0200, ESGLinux <esggrupos@xxxxxxxxx> wrote:
     > Hello,
     >
     > first, thanks for your answer,
     >
     > I suspected it, but why can I do it with NFS?

    Not sure I understand your question. NFS is a network file system, like
    CIFS, specifically designed to be mounted from multiple clients
    simultaneously. ext3 is only designed with a single accessor in mind.


I'll try to explain myself.

I have a partition /dev/sda

/dev/sda on /iscsivol type ext3 (rw)

but this partition is an iSCSI target on another server. I formatted the partition with ext3, but it's not a local disk, it's an iSCSI target.

With this configuration the filesystem ends up corrupted.

Second scenario:

I have
192.168.1.198:/nfsexport/ 6983168 2839168 3783552 43% /mnt

but the partition 192.168.1.198:/nfsexport/ is again ext3; the difference is that I use NFS as the network protocol instead of iSCSI.

You first need to understand what iSCSI is and how it works. It is a block device (like a physical hard disk, only virtualized). There is none of the file access arbitration that NFS provides; it is up to the file system layer to sort that out. ext3 (or any other non-shared, non-clustered file system) cannot do this, because it wasn't designed to. What your iSCSI volume is backed by (a file or a raw partition) is irrelevant; you can export a raw partition using iSCSI, and it's all the same to it.
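
To make that concrete, here is roughly what exporting a raw partition over iSCSI looks like, with tgtadm on the target and open-iscsi on the initiator. The IQN, IP address and device names below are made up; adjust them to your setup:

    # on the target server (assumed backing device /dev/sdb1, made-up IQN):
    tgtadm --lld iscsi --op new --mode target --tid 1 \
        --targetname iqn.2009-04.com.example:storage.disk1
    tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 \
        -b /dev/sdb1
    tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL

    # on the initiator:
    iscsiadm -m discovery -t sendtargets -p 192.168.1.198
    iscsiadm -m node -T iqn.2009-04.com.example:storage.disk1 \
        -p 192.168.1.198 --login

After the login, the initiator's kernel just sees another SCSI disk (e.g. /dev/sdX). Nothing at that layer knows or cares whether another machine has it mounted.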

     > the nodes are never going to be active at the same time, so I can
     > mount the shares via NFS. With NFS, when I create a file in a share,
     > I automatically get it in the share mounted by all the clients.

    I still don't understand your question - that is what NFS is
    designed for.


Yes, I agree with you, but I thought that with iSCSI I could do the same as with NFS.

No. The two are about as different in concept as you can get.

     > In this case I don't need to write to the share concurrently
     >
     > can this configuration be a problem?

    No, it's fundamentally impossible. In order to have a FS that can be
    mounted simultaneously from multiple nodes, it has to be aware of
    multiple nodes accessing it, which means that it needs coherent caching.
    Local file systems like ext3 don't have this. When one node writes to
    the ext3 file system, the other node will have cached the inodes as they
    were originally, and it won't bother hitting the disk to re-read the
    contents, it'll just return what it already has cached. And almost
    certainly corrupt the file system in the process.

    You cannot have a shared ext3 volume with writing enabled. Period.


OK, I understand it,

but (there is always a but...)

I only want to share a directory to which one node writes at a time, and when that node fails, the other node already has the directory mounted with the data and can write to it.

You can't share it simultaneously. You can have it fail over - the primary node gets fenced (powered off), and the secondary can then mount the FS on its own. But they can't both mount it at the same time. That will instantly trash your file system.
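
For illustration, the failover sequence on the surviving node boils down to something like this (a sketch, assuming Red Hat Cluster's fence_node tool and your /dev/sda device; normally rgmanager drives these steps for you):

    # on node2, once node1 is considered dead:
    fence_node node1                    # power-fence the failed node first
    mount -t ext3 /dev/sda /iscsivol    # only now is it safe to mount

The fencing step is not optional. If node1 were merely unreachable but still running, mounting on node2 would put you straight back in the two-writers situation described above.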

Before I knew about clustering, my decision would have been to mount the shares with NFS. Now I want to be more sophisticated and use the cluster tools, so I thought of mounting with iSCSI instead of NFS, but still with ext3 as the underlying filesystem.

If you want shared storage, you can export NFS from a single node (backed with whatever you want, including ext3) or use GFS on top of iSCSI.
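
Roughly, those two options look like this (the export path, network and cluster/FS names are invented, and the GFS2 variant additionally requires a working cluster with fencing):

    # option 1: plain NFS served from one node (example /etc/exports line):
    echo '/nfsexport 192.168.1.0/24(rw,sync)' >> /etc/exports
    exportfs -ra

    # option 2: a cluster file system on top of the iSCSI block device
    # (-j 2 = one journal per node; "mycluster:mygfs" is made up):
    mkfs.gfs2 -p lock_dlm -t mycluster:mygfs -j 2 /dev/sda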

If you are doing this for redundancy, ask yourself what the point is in shifting the single point of failure around. If you don't have mirrored SANs (and even most of the high-end SANs with a 6-figure price tag can't handle that feature), you might want to consider something like DRBD+GFS. If you just want performance and redundancy is less relevant, you'll probably find that NFS beats any other solution out there for use cases involving lots of small files and lots of read/write load.
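
If you go the DRBD+GFS route, bringing up the mirror is along these lines (a rough sketch; it assumes a resource named r0 is already defined in drbd.conf with allow-two-primaries enabled, which dual-primary GFS needs):

    drbdadm create-md r0    # initialise metadata (run on both nodes)
    drbdadm up r0           # bring the resource up (run on both nodes)
    # once, on one node only, to kick off the initial sync:
    drbdadm -- --overwrite-data-of-peer primary r0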

Gordan

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
