Re: weird quota issue

Thanks for replying. The project-quota part is a red herring and I have abandoned it; the only reason project quotas came up at all was the winbind/quota issue. UID lookup is fine.
The more interesting part is that /proc/self/mounts and mtab/fstab do not agree.

Two filesystems have identical (cut-and-paste) settings in fstab. The results below were taken after setting forcefsck and rebooting.

mount <enter>
/dev/mapper/irphome_vg-home_lv on /home type xfs (rw,delaylog,inode64,nobarrier,logbsize=256k,uquota,prjquota)
/dev/mapper/irphome_vg-imap_lv on /mail type xfs (rw,delaylog,inode64,nobarrier,logbsize=256k,uquota,prjquota)

cat /proc/self/mounts
/dev/mapper/irphome_vg-home_lv /home xfs rw,relatime,attr2,delaylog,nobarrier,inode64,logbsize=256k,sunit=32,swidth=32768,noquota 0 0
/dev/mapper/irphome_vg-imap_lv /mail xfs rw,relatime,attr2,delaylog,nobarrier,inode64,logbsize=256k,sunit=32,swidth=32768,usrquota,prjquota 0 0

cat /etc/mtab
/dev/mapper/irphome_vg-home_lv /home xfs rw,delaylog,inode64,nobarrier,logbsize=256k,uquota,prjquota 0 0
/dev/mapper/irphome_vg-imap_lv /mail xfs rw,delaylog,inode64,nobarrier,logbsize=256k,uquota,prjquota 0 0
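
A minimal sketch of the comparison being made here: which options mount(8)/mtab claim for /home versus what the kernel actually applied per /proc/self/mounts. The option strings are copied from the listings above; the loop just reports each requested option missing from the kernel's view.

```shell
# Options mtab claims for /home vs. what the kernel reports
# (strings copied from the listings above).
mtab_opts='rw,delaylog,inode64,nobarrier,logbsize=256k,uquota,prjquota'
kern_opts='rw,relatime,attr2,delaylog,nobarrier,inode64,logbsize=256k,sunit=32,swidth=32768,noquota'

missing=''
for o in $(echo "$mtab_opts" | tr ',' ' '); do
  case ",$kern_opts," in
    *",$o,"*) ;;                          # active in the kernel view
    *) missing="$missing $o" ;;           # requested but not applied
  esac
done
echo "requested but not active:$missing"  # uquota prjquota
```

On /home the kernel dropped both quota options and substituted noquota, which matches the "Failed to initialize disk quotas" message below.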

List of details per the XFS FAQ wiki page:
#############
The most interesting thing in the dmesg output was this:
XFS (dm-7): Failed to initialize disk quotas.
dm-7 is my problem logical volume; from /dev/disk/by-id: dm-name-irphome_vg-home_lv -> ../../dm-7
#############
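
For reference, a small sketch of how the dm-N name in a dmesg line can be pulled out and mapped back to the LV (the sample line is the one quoted above; on the live system, /sys/block/<dev>/dm/name gives the LVM name, matching the by-id symlink).

```shell
# Extract the dm-N device from the failing dmesg line.
line='XFS (dm-7): Failed to initialize disk quotas.'
dev=$(echo "$line" | sed -n 's/^XFS (\(dm-[0-9]*\)).*/\1/p')
echo "$dev"   # dm-7
# On the live system:  cat /sys/block/$dev/dm/name
# would print irphome_vg-home_lv per the by-id listing above.
```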

2.6.32-504.3.3.el6.x86_64

xfs_repair version 3.1.1

24 CPUs with hyperthreading, so 12 physical cores

/proc/meminfo
MemTotal:       49410148 kB
MemFree:          269628 kB
Buffers:          144256 kB
Cached:         47388884 kB
SwapCached:            0 kB
Active:           731016 kB
Inactive:       46871512 kB
Active(anon):       2976 kB
Inactive(anon):    71740 kB
Active(file):     728040 kB
Inactive(file): 46799772 kB
Unevictable:        5092 kB
Mlocked:            5092 kB
SwapTotal:      14331900 kB
SwapFree:       14331900 kB
Dirty:           3773708 kB
Writeback:             0 kB
AnonPages:         75696 kB
Mapped:           190092 kB
Shmem:               312 kB
Slab:            1012580 kB
SReclaimable:     875160 kB
SUnreclaim:       137420 kB
KernelStack:        5512 kB
PageTables:         9332 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:    39036972 kB
Committed_AS:     293324 kB
VmallocTotal:   34359738367 kB
VmallocUsed:      191424 kB
VmallocChunk:   34334431824 kB
HardwareCorrupted:     0 kB
AnonHugePages:      2048 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:        6384 kB
DirectMap2M:     2080768 kB
DirectMap1G:    48234496 kB

/proc/mounts
rootfs / rootfs rw 0 0
proc /proc proc rw,relatime 0 0
sysfs /sys sysfs rw,relatime 0 0
devtmpfs /dev devtmpfs rw,relatime,size=24689396k,nr_inodes=6172349,mode=755 0 0
devpts /dev/pts devpts rw,relatime,gid=5,mode=620,ptmxmode=000 0 0
tmpfs /dev/shm tmpfs rw,relatime 0 0
/dev/mapper/VolGroup-lv_root / ext4 rw,relatime,barrier=1,data="" 0 0
/proc/bus/usb /proc/bus/usb usbfs rw,relatime 0 0
/dev/sda1 /boot ext4 rw,relatime,barrier=1,data="" 0 0
/dev/mapper/irphome_vg-home_lv /home xfs rw,relatime,attr2,delaylog,nobarrier,inode64,logbsize=256k,sunit=32,swidth=32768,noquota 0 0
/dev/mapper/irphome_vg-imap_lv /mail xfs rw,relatime,attr2,delaylog,nobarrier,inode64,logbsize=256k,sunit=32,swidth=32768,usrquota,prjquota 0 0
none /proc/sys/fs/binfmt_misc binfmt_misc rw,relatime 0 0
/dev/mapper/homesavelv-homesavelv /homesave xfs rw,relatime,attr2,delaylog,sunit=32,swidth=32768,noquota 0 0
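
A quick way to read the quota state out of a /proc/self/mounts line (the sample line is the /home entry above); the fourth field is the option list, and noquota there means the kernel is not accounting quotas regardless of what fstab asked for.

```shell
# Check the kernel-side quota state for /home from its mounts line.
mnt_line='/dev/mapper/irphome_vg-home_lv /home xfs rw,relatime,attr2,delaylog,nobarrier,inode64,logbsize=256k,sunit=32,swidth=32768,noquota 0 0'
opts=$(echo "$mnt_line" | awk '{print $4}')   # field 4 = option list
case ",$opts," in
  *,noquota,*)             state='quotas OFF in kernel' ;;
  *,uquota,*|*,usrquota,*) state='user quota ON' ;;
  *)                       state='no quota option' ;;
esac
echo "/home: $state"   # /home: quotas OFF in kernel
```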

 /proc/partitions
major minor  #blocks  name

   8        0  143338560 sda
   8        1     512000 sda1
   8        2  142825472 sda2
   8       32 17179869184 sdc
   8       96 17179869184 sdg
   8      128 17179869184 sdi
   8       48 17179869184 sdd
   8      112 17179869184 sdh
   8       64 17179869184 sde
 253        0   52428800 dm-0
 253        1   14331904 dm-1
   8      160 17179869184 sdk
   8      176 17179869184 sdl
   8      192 17179869184 sdm
   8      224 17179869184 sdo
   8      240 17179869184 sdp
  65        0 17179869184 sdq
 253        3 17179869184 dm-3
 253        4 17179869184 dm-4
 253        5 17179869184 dm-5
 253        6 5368709120 dm-6
 253        7 42949672960 dm-7
   8       16 2147483648 sdb
   8       80 2147483648 sdf
 253        2 2147483648 dm-2
   8      144 2147483648 sdj
   8      208 2147483648 sdn
 253        8 2147467264 dm-8

 

Raid layout
3Par SAN, RAID 6 across 12 4TB SAS disks (more or less; 3Par does some non-classic RAID layouts)

 

 mpathd (360002ac000000000000000080000bf12) dm-4 3PARdata,VV
size=16T features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 1:0:3:12 sdi 8:128 active ready running
  |- 2:0:0:12 sdm 8:192 active ready running
  |- 1:0:2:12 sde 8:64  active ready running
  `- 2:0:5:12 sdq 65:0  active ready running
mpathc (360002ac000000000000000070000bf12) dm-5 3PARdata,VV
size=16T features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 1:0:2:11 sdd 8:48  active ready running
  |- 2:0:0:11 sdl 8:176 active ready running
  |- 1:0:3:11 sdh 8:112 active ready running
  `- 2:0:5:11 sdp 8:240 active ready running
mpathb (360002ac000000000000000060000bf12) dm-3 3PARdata,VV
size=16T features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 1:0:2:10 sdc 8:32  active ready running
  |- 2:0:0:10 sdk 8:160 active ready running
  |- 1:0:3:10 sdg 8:96  active ready running
  `- 2:0:5:10 sdo 8:224 active ready running
mpathg (360002ac000000000000000110000bf12) dm-2 3PARdata,VV
size=2.0T features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 1:0:2:1  sdb 8:16  active ready running
  |- 2:0:5:1  sdj 8:144 active ready running
  |- 1:0:3:1  sdf 8:80  active ready running
  `- 2:0:0:1  sdn 8:208 active ready running

pvscan
  PV /dev/mapper/mpathd   VG irphome_vg   lvm2 [16.00 TiB / 3.00 TiB free]
  PV /dev/mapper/mpathb   VG irphome_vg   lvm2 [16.00 TiB / 0    free]
  PV /dev/mapper/mpathc   VG irphome_vg   lvm2 [16.00 TiB / 0    free]
  PV /dev/mapper/mpathg   VG homesavelv   lvm2 [2.00 TiB / 0    free]
  PV /dev/sda2            VG VolGroup     lvm2 [136.21 GiB / 72.54 GiB free]
  Total: 5 [50.13 TiB] / in use: 5 [50.13 TiB] / in no VG: 0 [0   ]
 vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "irphome_vg" using metadata type lvm2
  Found volume group "homesavelv" using metadata type lvm2
  Found volume group "VolGroup" using metadata type lvm2
 lvscan
  ACTIVE            '/dev/irphome_vg/imap_lv' [5.00 TiB] inherit
  ACTIVE            '/dev/irphome_vg/home_lv' [40.00 TiB] inherit
  ACTIVE            '/dev/homesavelv/homesavelv' [2.00 TiB] inherit
  ACTIVE            '/dev/VolGroup/lv_root' [50.00 GiB] inherit
  ACTIVE            '/dev/VolGroup/lv_swap' [13.67 GiB] inherit

  

  lvdisplay irphome_vg/home_lv
  --- Logical volume ---
  LV Path                /dev/irphome_vg/home_lv
  LV Name                home_lv
  VG Name                irphome_vg
  LV UUID                8wLM12-e43p-UhIh-YTXn-kMBx-RffN-yNz2V5
  LV Write Access        read/write
  LV Creation host, time nuhome.irp.nia.nih.gov, 2014-12-01 17:53:47 -0500
  LV Status              available
  # open                 1
  LV Size                40.00 TiB
  Current LE             10485760
  Segments               3
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:7

Disks, write cache, etc. are controlled by the 3Par SAN; I just define volumes of up to 16TB and export them to the host over FC or iSCSI.
In this case I am using FC.

 xfs_info /dev/irphome_vg/home_lv 
meta-data="" isize=256    agcount=40, agsize=268435452 blks
         =                       sectsz=512   attr=2, projid32bit=1
data     =                       bsize=4096   blocks=10737418080, imaxpct=5
         =                       sunit=4      swidth=4096 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=521728, version=2
         =                       sectsz=512   sunit=4 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

xfs_info /dev/irphome_vg/imap_lv 
meta-data="" isize=256    agcount=32, agsize=41943036 blks
         =                       sectsz=512   attr=2, projid32bit=1
data     =                       bsize=4096   blocks=1342177152, imaxpct=5
         =                       sunit=4      swidth=4096 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=521728, version=2
         =                       sectsz=512   sunit=4 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
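
As a sanity check, the xfs_info geometry (reported in 4096-byte blocks) and the mount options (reported in 512-byte sectors) describe the same stripe layout; the arithmetic:

```shell
# xfs_info reports sunit/swidth in filesystem blocks (bsize=4096);
# the mount options report them in 512-byte sectors.
bsize=4096; sectsz=512
xi_sunit=$((4 * bsize))        # xfs_info: sunit=4 blks
xi_swidth=$((4096 * bsize))    # xfs_info: swidth=4096 blks
mo_sunit=$((32 * sectsz))      # mount opts: sunit=32
mo_swidth=$((32768 * sectsz))  # mount opts: swidth=32768
echo "sunit=${xi_sunit}B swidth=${xi_swidth}B"   # sunit=16384B swidth=16777216B
```

Both views agree: 16 KiB stripe unit, 16 MiB stripe width.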

dmesg output
SGI XFS with ACLs, security attributes, large block/inode numbers, no debug enabled
SGI XFS Quota Management subsystem
XFS (dm-7): delaylog is the default now, option is deprecated.
XFS (dm-7): Mounting Filesystem
XFS (dm-7): Ending clean mount
XFS (dm-7): Failed to initialize disk quotas.
XFS (dm-6): delaylog is the default now, option is deprecated.
XFS (dm-6): Mounting Filesystem
XFS (dm-6): Ending clean mount


scsi 2:0:0:0: Direct-Access     3PARdata VV               3210 PQ: 0 ANSI: 6
scsi 2:0:0:10: Direct-Access     3PARdata VV               3210 PQ: 0 ANSI: 6
sd 2:0:0:0: [sdj] 4194304 512-byte logical blocks: (2.14 GB/2.00 GiB)
scsi 2:0:0:11: Direct-Access     3PARdata VV               3210 PQ: 0 ANSI: 6
sd 2:0:0:10: [sdk] 34359738368 512-byte logical blocks: (17.5 TB/16.0 TiB)
sd 2:0:0:0: [sdj] Write Protect is off
sd 2:0:0:0: [sdj] Mode Sense: 8b 00 10 08
scsi 2:0:0:12: Direct-Access     3PARdata VV               3210 PQ: 0 ANSI: 6
sd 2:0:0:11: [sdl] 34359738368 512-byte logical blocks: (17.5 TB/16.0 TiB)
sd 2:0:0:0: [sdj] Write cache: disabled, read cache: enabled, supports DPO and FUA
scsi 2:0:0:254: Enclosure         3PARdata SES              3210 PQ: 0 ANSI: 6
sd 2:0:0:12: [sdm] 34359738368 512-byte logical blocks: (17.5 TB/16.0 TiB)
scsi 2:0:1:0: RAID              HP       HSV400           0005 PQ: 0 ANSI: 5
scsi 2:0:2:0: RAID              HP       HSV400           0005 PQ: 0 ANSI: 5
 sdj:
sd 2:0:0:10: [sdk] Write Protect is off
sd 2:0:0:10: [sdk] Mode Sense: 8b 00 10 08
sd 2:0:0:11: [sdl] Write Protect is off
sd 2:0:0:11: [sdl] Mode Sense: 8b 00 10 08
scsi 2:0:3:0: RAID              HP       HSV400           0005 PQ: 0 ANSI: 5
 unknown partition table
sd 2:0:0:10: [sdk] Write cache: disabled, read cache: enabled, supports DPO and FUA
sd 2:0:0:11: [sdl] Write cache: disabled, read cache: enabled, supports DPO and FUA
scsi 2:0:4:0: RAID              HP       HSV400           0005 PQ: 0 ANSI: 5
sd 2:0:0:12: [sdm] Write Protect is off
sd 2:0:0:12: [sdm] Mode Sense: 8b 00 10 08
sd 2:0:0:12: [sdm] Write cache: disabled, read cache: enabled, supports DPO and FUA
scsi 2:0:5:0: Direct-Access     3PARdata VV               3210 PQ: 0 ANSI: 6
sd 2:0:5:0: [sdn] 4194304 512-byte logical blocks: (2.14 GB/2.00 GiB)
scsi 2:0:5:10: Direct-Access     3PARdata VV               3210 PQ: 0 ANSI: 6
sd 2:0:0:0: [sdj] Attached SCSI disk
scsi 2:0:5:11: Direct-Access     3PARdata VV               3210 PQ: 0 ANSI: 6
sd 2:0:5:0: [sdn] Write Protect is off
sd 2:0:5:0: [sdn] Mode Sense: 8b 00 10 08
SGI XFS with ACLs, security attributes, large block/inode numbers, no debug enabled
SGI XFS Quota Management subsystem
XFS (dm-7): delaylog is the default now, option is deprecated.
XFS (dm-7): Mounting Filesystem
XFS (dm-7): Ending clean mount
XFS (dm-7): Failed to initialize disk quotas.
XFS (dm-6): delaylog is the default now, option is deprecated.
XFS (dm-6): Mounting Filesystem
XFS (dm-6): Ending clean mount
Adding 14331900k swap on /dev/mapper/VolGroup-lv_swap.  Priority:-1 extents:1 across:14331900k 
device-mapper: table: 253:9: multipath: error getting device
device-mapper: ioctl: error adding target to table
pcc-cpufreq: (v1.00.00) driver loaded with frequency limits: 1600 MHz, 2400 MHz
sd 1:0:2:10: Warning! Received an indication that the LUN assignments on this target have changed. The Linux SCSI layer does not automatically remap LUN assignments.
sd 2:0:0:10: Warning! Received an indication that the LUN assignments on this target have changed. The Linux SCSI layer does not automatically remap LUN assignments.
sd 1:0:3:10: Warning! Received an indication that the LUN assignments on this target have changed. The Linux SCSI layer does not automatically remap LUN assignments.
sd 2:0:5:10: Warning! Received an indication that the LUN assignments on this target have changed. The Linux SCSI layer does not automatically remap LUN assignments.
device-mapper: multipath: Failing path 8:80.
device-mapper: multipath: Failing path 8:208.
device-mapper: multipath: Failing path 8:144.
end_request: I/O error, dev dm-2, sector 4194176
Buffer I/O error on device dm-2, logical block 524272
end_request: I/O error, dev dm-2, sector 4194176
Buffer I/O error on device dm-2, logical block 524272
end_request: I/O error, dev dm-2, sector 4194288
Buffer I/O error on device dm-2, logical block 524286
end_request: I/O error, dev dm-2, sector 4194288
Buffer I/O error on device dm-2, logical block 524286
end_request: I/O error, dev dm-2, sector 0
Buffer I/O error on device dm-2, logical block 0
end_request: I/O error, dev dm-2, sector 0
Buffer I/O error on device dm-2, logical block 0
end_request: I/O error, dev dm-2, sector 8
Buffer I/O error on device dm-2, logical block 1
end_request: I/O error, dev dm-2, sector 4194296
Buffer I/O error on device dm-2, logical block 524287
end_request: I/O error, dev dm-2, sector 4194296
Buffer I/O error on device dm-2, logical block 524287
end_request: I/O error, dev dm-2, sector 4194296
device-mapper: table: 253:2: multipath: error getting device
device-mapper: ioctl: error adding target to table
device-mapper: table: 253:2: multipath: error getting device
device-mapper: ioctl: error adding target to table
sd 1:0:3:10: Warning! Received an indication that the LUN assignments on this target have changed. The Linux SCSI layer does not automatically remap LUN assignments.
sd 1:0:2:10: Warning! Received an indication that the LUN assignments on this target have changed. The Linux SCSI layer does not automatically remap LUN assignments.
sd 2:0:5:10: Warning! Received an indication that the LUN assignments on this target have changed. The Linux SCSI layer does not automatically remap LUN assignments.
sd 2:0:0:10: Warning! Received an indication that the LUN assignments on this target have changed. The Linux SCSI layer does not automatically remap LUN assignments.
scsi 1:0:2:1: Direct-Access     3PARdata VV               3210 PQ: 0 ANSI: 6
sd 1:0:2:1: Attached scsi generic sg4 type 0
sd 1:0:2:1: [sdb] 4294967296 512-byte logical blocks: (2.19 TB/2.00 TiB)
scsi 1:0:3:1: Direct-Access     3PARdata VV               3210 PQ: 0 ANSI: 6
sd 1:0:3:1: Attached scsi generic sg9 type 0
sd 1:0:3:1: [sdf] 4294967296 512-byte logical blocks: (2.19 TB/2.00 TiB)
sd 1:0:2:1: [sdb] Write Protect is off
sd 1:0:2:1: [sdb] Mode Sense: 8b 00 10 08
sd 1:0:2:1: [sdb] Write cache: disabled, read cache: enabled, supports DPO and FUA
sd 1:0:3:1: [sdf] Write Protect is off
sd 1:0:3:1: [sdf] Mode Sense: 8b 00 10 08
sd 1:0:3:1: [sdf] Write cache: disabled, read cache: enabled, supports DPO and FUA
 sdb: unknown partition table
 sdf: unknown partition table
sd 1:0:2:1: [sdb] Attached SCSI disk
sd 1:0:3:1: [sdf] Attached SCSI disk
scsi 2:0:5:1: Direct-Access     3PARdata VV               3210 PQ: 0 ANSI: 6
sd 2:0:5:1: Attached scsi generic sg16 type 0
sd 2:0:5:1: [sdj] 4294967296 512-byte logical blocks: (2.19 TB/2.00 TiB)
scsi 2:0:0:1: Direct-Access     3PARdata VV               3210 PQ: 0 ANSI: 6
sd 2:0:0:1: Attached scsi generic sg25 type 0
sd 2:0:0:1: [sdn] 4294967296 512-byte logical blocks: (2.19 TB/2.00 TiB)
sd 2:0:5:1: [sdj] Write Protect is off
sd 2:0:5:1: [sdj] Mode Sense: 8b 00 10 08
sd 2:0:5:1: [sdj] Write cache: disabled, read cache: enabled, supports DPO and FUA
sd 2:0:0:1: [sdn] Write Protect is off
sd 2:0:0:1: [sdn] Mode Sense: 8b 00 10 08
sd 2:0:0:1: [sdn] Write cache: disabled, read cache: enabled, supports DPO and FUA
 sdj: unknown partition table
 sdn: unknown partition table
sd 2:0:5:1: [sdj] Attached SCSI disk
sd 2:0:0:1: [sdn] Attached SCSI disk
XFS (dm-8): Mounting Filesystem
XFS (dm-8): Ending clean mount
sd 1:0:2:10: Warning! Received an indication that the LUN assignments on this target have changed. The Linux SCSI layer does not automatically remap LUN assignments.
sd 1:0:3:10: Warning! Received an indication that the LUN assignments on this target have changed. The Linux SCSI layer does not automatically remap LUN assignments.
sd 2:0:5:10: Warning! Received an indication that the LUN assignments on this target have changed. The Linux SCSI layer does not automatically remap LUN assignments.
sd 2:0:0:10: Warning! Received an indication that the LUN assignments on this target have changed. The Linux SCSI layer does not automatically remap LUN assignments.
 rport-2:0-17: blocked FC remote port time out: removing rport
 rport-2:0-2: blocked FC remote port time out: removing rport

On Dec 22, 2014, at 3:48 PM, Dave Chinner <david@xxxxxxxxxxxxx> wrote:

On Fri, Dec 19, 2014 at 09:26:12PM +0000, Weber, Charles (NIH/NIA/IRP) [E] wrote:
Hi everyone, long-time xfs/quota user with a new server and a problem.
Hardware is an HP BL460 G7 blade, QLogic fibre channel, and 3Par 7200 storage.
Three 16TB volumes are exported from the 3Par to the server via FC. These are thin volumes, but there is plenty of available backing storage.

Server runs current patched CentOS 6.6
kernel 2.6.32-504.3.3.el6.x86_64
xfsprogs 3.1.1-16.el6
Default mkfs.xfs options for volumes

Mount options for the logical volumes home_lv (39TB) and imap_lv (4.6TB):
/dev/mapper/irphome_vg-home_lv on /home type xfs (rw,delaylog,inode64,nobarrier,logbsize=256k,uquota,prjquota)
/dev/mapper/irphome_vg-imap_lv on /mail type xfs (rw,delaylog,inode64,nobarrier,logbsize=256k,uquota,prjquota)

Users come from a large AD via winbind, set to not enumerate. I have seen the bug where xfs_quota report does not list winbind-defined user names; yes, this happens to me.

So just enumerate them by uid. (report -un)

I can assign a project quota on the smaller volume, but xfs_quota will not report it. I cannot assign a project quota on the larger volume at all; I get this error: xfs_quota: cannot set limits: Function not implemented.

You need to be more specific and document all your quota setup.

http://xfs.org/index.php/XFS_FAQ#Q:_What_information_should_I_include_when_reporting_a_problem.3F

xfs_quota -x -c 'report -uh' /mail
User quota on /mail (/dev/mapper/irphome_vg-imap_lv)
                       Blocks
User ID      Used   Soft   Hard Warn/Grace
---------- ---------------------------------
root         2.2G      0      0  00 [------]

xfs_quota -x -c 'report -uh' /home

Nothing is returned.

I can set user and project quotas on /mail but cannot see them; I have not tested enforcement yet.
I cannot set user or project quotas on /home.
At one time I could definitely set user quotas on /home; I did so and verified that it worked.

Any ideas what is messed up on the /home volume?

Not without knowing a bunch more about your project quota setup.

Cheers,

Dave.
--
Dave Chinner
david@xxxxxxxxxxxxx

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs
