Re: [PATCH] xfs: checksum log record ext headers based on record size

On Mon, Jul 20, 2015 at 01:28:08PM -0400, Brian Foster wrote:
> The first 4 bytes of every basic block in the physical log are stamped
> with the current lsn. To support this mechanism, the log record header
> (first block of each new log record) contains space for the original
> first byte of each log record block before it is replaced with the lsn.
> The log record header has space for 32k worth of blocks. The version 2
> log adds new extended record headers for each additional 32k worth of
> blocks beyond what is supported by the record header.
> 
> The log record checksum incorporates the log record header, the extended
> headers and the record payload. xlog_cksum() checksums the extended
> headers based on log->l_iclog_heads, which specifies the number of
> extended headers in a log record based on the log buffer size mount
> option. The log buffer size is variable, however, which means the
> checksum can be calculated differently based on how a filesystem is
> mounted. This is problematic if a filesystem crashes and recovery occurs
> on a subsequent mount using a different log buffer size. For example,
> crash an active filesystem that is mounted with the default (32k)
> logbsize, attempt remount/recovery using '-o logbsize=64k' and the mount
> fails on or warns about log checksum failures.
> 
> To avoid this problem, update xlog_cksum() to calculate the checksum
> based on the size of the log buffer according to the log record. The
> size is already included in the h_size field of the log record header
> and thus is available at log recovery time. Extended log record headers
> are also only written when the log record is large enough to require
> them. This makes checksum calculation of log records consistent with the
> extended record header mechanism as well as how on-disk records are
> checksummed with various log buffer size mount options.

Hmmm - I thought that case was handled, but I guess not...
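
For concreteness, a rough userspace sketch of the arithmetic behind the
mismatch follows. It is not kernel code; it just applies the same
ceil(size / 32k) round-up that (as far as I can tell) sizes
l_iclog_heads at mount time and that the patch now applies to the
record itself, and the 24k record length is made up for illustration:

/*
 * Rough userspace sketch only (not kernel code): the ceil(size / 32k)
 * header-count arithmetic, applied to the example from the commit
 * message above.
 */
#include <stdio.h>

#define XLOG_HEADER_CYCLE_SIZE	(32 * 1024)	/* one header covers 32k of log blocks */

static int heads_for(int size)
{
	int xheads = size / XLOG_HEADER_CYCLE_SIZE;

	if (size % XLOG_HEADER_CYCLE_SIZE)
		xheads++;
	return xheads;
}

int main(void)
{
	/* old scheme: header count comes from the recovery mount's logbsize */
	printf("logbsize=32k mount: %d header(s) checksummed\n", heads_for(32 * 1024));
	printf("logbsize=64k mount: %d header(s) checksummed\n", heads_for(64 * 1024));

	/* proposed scheme: header count comes from the record's own length */
	printf("24k record itself:  %d header(s) checksummed\n", heads_for(24 * 1024));
	return 0;
}

A record written under logbsize=32k is covered by a single header, so a
64k recovery mount that folds two headers into the CRC computes a
different checksum; deriving the count from the record keeps it the
same on any mount.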

> --- a/fs/xfs/xfs_log.c
> +++ b/fs/xfs/xfs_log.c
> @@ -1652,8 +1652,13 @@ xlog_cksum(
>  	if (xfs_sb_version_haslogv2(&log->l_mp->m_sb)) {
>  		union xlog_in_core2 *xhdr = (union xlog_in_core2 *)rhead;
>  		int		i;
> +		int		xheads;
>  
> -		for (i = 1; i < log->l_iclog_heads; i++) {
> +		xheads = size / XLOG_HEADER_CYCLE_SIZE;
> +		if (size % XLOG_HEADER_CYCLE_SIZE)
> +			xheads++;
> +
> +		for (i = 1; i < xheads; i++) {
>  			crc = crc32c(crc, &xhdr[i].hic_xheader,
>  				     sizeof(struct xlog_rec_ext_header));
>  		}

rhead->h_len is an untrusted value during log recovery. i.e. at this
point we haven't validated that the record size is within a sane
range. See xlog_valid_rec_header() - it only checks that 0 <
rhead->h_len < INT_MAX. Realistically, the extended header count
derived from it should be checked to be no more than:

	(XLOG_MAX_RECORD_BSIZE / XLOG_HEADER_CYCLE_SIZE) + 1

as it is now being used as an array index into a dynamically
allocated buffer....
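
For illustration, one possible shape of that bound, assuming it is
folded into the new xheads calculation in xlog_cksum(); rejecting the
record outright in xlog_valid_rec_header() may well be the better
place for it:

		xheads = size / XLOG_HEADER_CYCLE_SIZE;
		if (size % XLOG_HEADER_CYCLE_SIZE)
			xheads++;

		/*
		 * Sketch only: h_len comes straight off disk during
		 * recovery, so cap the derived header count before it
		 * is used to index into xhdr[].
		 */
		if (xheads > (XLOG_MAX_RECORD_BSIZE / XLOG_HEADER_CYCLE_SIZE) + 1)
			xheads = (XLOG_MAX_RECORD_BSIZE / XLOG_HEADER_CYCLE_SIZE) + 1;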

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs


