Re: can not mount GFS, "no such device"

I have started clvmd on all nodes and changed locking_type; the steps are
sketched below for the archives. Anyway, I will keep an eye on this random
error.
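
Roughly, what that amounts to on each node (stock RHEL 5 paths; the
lvmconf helper from lvm2-cluster can make the lvm.conf edit for you):

    # /etc/lvm/lvm.conf: switch LVM from local file-based locking
    # to cluster-wide DLM locking
    locking_type = 3

    # or let the helper script make the same edit:
    # lvmconf --enable-cluster

    # then start clustered LVM on every node
    service clvmd start
    chkconfig clvmd on    # start on boot from now on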

On Wed, Dec 30, 2009 at 4:07 PM, Ian Hayes <cthulhucalling@xxxxxxxxx> wrote:
>
>
> On Tue, Dec 29, 2009 at 11:23 PM, Gordan Bobic <gordan@xxxxxxxxxx> wrote:
>>
>> Ian Hayes wrote:
>>
>>> I had a similar problem in my Red Hat Clustering and Storage Management
>>> class the other week. I believe the problem was with a couple of mistakes I
>>> made while playing around in one of the labs. I know once it was because I
>>> was trying to mount the block device instead of the logical volume.
>>
>> I'm assuming you mean that you were mkfs-ing one and then trying to mount
>> the other. I'm vehemently against putting everything on lvm just for the
>> sake of it, but I've never had a problem with mkfs-ing or mount-ing either,
>> as long as it's consistent. I tend not to partition iSCSI and DRBD volumes,
>> so I know that working directly with the whole block device works just fine.
>
> Well, the good thing about being in an RH class is that you can do all kinds
> of sick, twisted evil things just to see what happens. I've also made the
> mistake of doing things like not changing the locking_type in lvm.conf to 3
> and forgetting to start clvmd. Any of those can lead to strange and exciting
> times with GFS.
>
>
>>> IIRC, gfs2 is still under development and considered experimental.
>>> There's tons of documentation for production-quality GFS and I imagine once
>>> gfs2 gets more mainlined, this will be the case also.
>>
>> Don't quote me on this, but I'm pretty sure GFS2 is deemed stable as of
>> RHEL 5.4 (or was it 5.3?). Having said that, I haven't yet deployed any GFS2
>> volumes in production, and don't plan on doing so imminently, so draw
>> whatever conclusions you see fit from that. ;)
>
> We're fine with GFS where we are. I've done some benchmarking on GFS2 and
> its performance didn't come anywhere near what we could do with GFS.
>
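
For anyone else who lands here with the "no such device" error: mount
usually reports that when the kernel can't find the filesystem or lock
module it needs, so check that the gfs and lock_dlm modules are loaded,
and make sure you run mkfs and mount against the same device node. A
minimal sequence looks something like this (the cluster name, volume,
journal count, and mount point below are only placeholders; the -t
cluster name has to match cluster.conf):

    # one journal per node that will mount the filesystem (-j 3 here)
    gfs_mkfs -p lock_dlm -t mycluster:gfs01 -j 3 /dev/vg0/gfs01

    # mount the SAME device node that gfs_mkfs was run against
    mount -t gfs /dev/vg0/gfs01 /mnt/gfs01

    # if mount still says "no such device", check the modules
    lsmod | grep gfs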

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
