Re: Re: Strange Problem with dm-0




On Mon, 2008-01-07 at 17:22 -0500, James B. Byrne wrote:
> # ls -l /dev/mapper /dev/dm*
> brw-r-----  1 root root 253, 0 Jan  7 16:42 /dev/dm-0
> brw-r-----  1 root root 253, 1 Jan  7 16:42 /dev/dm-1
> brw-r-----  1 root root 253, 2 Jan  7 16:42 /dev/dm-2
> brw-r-----  1 root root 253, 3 Jan  7 16:42 /dev/dm-3
> brw-r-----  1 root root 253, 4 Jan  7 16:42 /dev/dm-4
> brw-r-----  1 root root 253, 5 Jan  7 16:42 /dev/dm-5
> brw-r-----  1 root root 253, 6 Jan  7 16:42 /dev/dm-6
> 
> /dev/mapper:
> total 0
> crw-------  1 root root  10, 63 Jan  7 16:42 control
> brw-rw----  1 root disk 253,  0 Jan  7 16:42 VolGroup00-LogVol00
> brw-rw----  1 root disk 253,  2 Jan  7 16:42 VolGroup00-LogVol01
> brw-rw----  1 root disk 253,  1 Jan  7 16:42 VolGroup00-LogVol02
> brw-rw----  1 root disk 253,  3 Jan  7 16:42 VolGroup00-lv--IMAP
> brw-rw----  1 root disk 253,  6 Jan  7 16:42 VolGroup00-lv--IMAP--2
> brw-rw----  1 root disk 253,  4 Jan  7 16:42 VolGroup00-lv--MailMan
> brw-rw----  1 root disk 253,  5 Jan  7 16:42 VolGroup00-lv--webfax
> 
> I infer that dm-0 ===> VolGroup00-LogVol00 and that
> VolGroup00-LogVol00 ===> /
> 
> so df / gives
> 
> # df /
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/mapper/VolGroup00-LogVol00
>                        8256952   6677880   1159644  86% /
> 
> 
> I am guessing that the yum update caused the file system to fill and to
> precipitate this problem.  Is this the probable cause?
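
Your mapping inference is right. If you ever want to confirm it rather
than match minor numbers by eye, dmsetup (it ships with the
device-mapper package, which LVM pulls in) prints each device-mapper
device with its (major, minor) pair:

    # dmsetup ls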

With approximately 1.19 GB available, by itself I don't think so. There
are underlying tmpfs file systems associated with the LVs' device
nodes. A stat on those will show a different set of numbers. Maybe one
of those filled?

    # stat --filesystem /dev/mapper/VolGroup00-LogVol00
      File: "/dev/mapper/VolGroup00-LogVol00"
        ID: 0        Namelen: 255     Type: tmpfs
    Blocks: Total: 194473     Free: 194415     Available: 194415     Size: 4096
    Inodes: Total: 194473     Free: 194075
    # stat --filesystem /dev/mapper/VolGroup01-Home01
      File: "/dev/mapper/VolGroup01-Home01"
        ID: 0        Namelen: 255     Type: tmpfs
    Blocks: Total: 194473     Free: 194415     Available: 194415     Size: 4096
    Inodes: Total: 194473     Free: 194075
    # df -H /
    Filesystem             Size   Used  Avail Use% Mounted on
    /dev/mapper/VolGroup00-LogVol00
                            19G    11G   7.4G  59% /

I *guess* that when the update was being done, some components were in
use and could not be *truly* deleted. In addition to the temporary
high-water marks reached while transactions ran and rpms were shuffled,
there was probably additional space not yet released by some component
that was "replaced" but could not yet be deleted.
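
One way to check for that (a sketch; it assumes lsof is installed) is
to list open files whose on-disk link count has dropped below one,
i.e. files that were unlinked but are still held open. The path
argument limits lsof to the file system mounted at /:

    # lsof +L1 /

Anything listed there is space that df counts as used but ls can no
longer see; it is freed when the owning process exits or restarts.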

There is also the possibility that your i-nodes were used up. Since I
set my file systems to 4K blocks, I use fewer i-nodes than normal.

Do "df -i" on mounted FSs.

I *suspect* that your orphaned i-node came out of this same underlying
situation: some component that couldn't be released was still active
when the file system had to be unmounted. It should not recur unless it
really is some other problem.
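
If you want the orphaned i-node cleaned up and the file system checked
for certain, you can force a full fsck of / at the next boot. On the
stock CentOS init scripts (an assumption about your setup), touching
/forcefsck does exactly that:

    # touch /forcefsck
    # shutdown -r now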

-- 
Bill

