On Fri, 18 Mar 2011, Tim Soderstrom wrote:
On Mar 18, 2011, at 10:08 AM, Justin Piszcz wrote:
Hi,
I can write to just about the entire USB stick, with no errors:
atom:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2             5.8G  1.5G  4.3G  26% /
tmpfs                 2.0G     0  2.0G   0% /lib/init/rw
udev                   10M  140K  9.9M   2% /dev
tmpfs                 2.0G     0  2.0G   0% /dev/shm
atom:~# cd /
atom:/# ls
bin cdrom etc lib media nfs proc sbin srv tmp var
boot dev home lib64 mnt opt root selinux sys usr
atom:/# dd if=/dev/zero of=bigfile bs=1M count=4000
4000+0 records in
4000+0 records out
4194304000 bytes (4.2 GB) copied, 135.536 s, 30.9 MB/s
atom:/# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2             5.8G  5.4G  350M  95% /
tmpfs                 2.0G     0  2.0G   0% /lib/init/rw
udev                   10M  140K  9.9M   2% /dev
tmpfs                 2.0G     0  2.0G   0% /dev/shm
atom:/# rm bigfile
However, after some amount of time, the errors shown below occur. Is this USB
stick failing? Since it has no SMART, is there any other way to verify
the 'health' of a USB stick?
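One rough way to check a stick that has no SMART is a full read pass, or a
non-destructive read-write pass with badblocks; a sketch, assuming the stick
is /dev/sdX (a placeholder) and is unmounted:

# full read pass; watch dmesg for I/O errors while it runs
dd if=/dev/sdX of=/dev/null bs=1M conv=noerror
# or a non-destructive read-write pass (preserves existing data)
badblocks -nsv /dev/sdX

If either pass turns up read or write errors in dmesg, the stick is likely on
its way out.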
What prompted you to go with XFS over, say, ext2? The journal will generally cause quite a few more writes to your USB device. I use ext2 on my CF card in my NAS for that reason (the spinning media is on XFS, of course). I know that's not an answer to your problem, but I thought I would add it as a suggestion :)
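A rough way to see that extra journal traffic (a sketch; sdX is a placeholder
for the stick's device name) is to sample the sectors-written counter in
/proc/diskstats before and after the same workload on each filesystem:

awk '$3 == "sdX" { print $10, "sectors written" }' /proc/diskstats
# run the identical workload on XFS and then on ext2, and compare how much
# the counter grows in each case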
Hi,
Just habit, I suppose (XFS). Looks like ext2 is the correct solution here,
or ext4 without a journal (if Google's patch is in the kernel). I have to read
the lwn.net article first, though.
Justin
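For reference, a journal-less ext4 can be created with mkfs.ext4's feature
flags; a sketch, assuming /dev/sdX1 is the stick's partition (this reformats
it and destroys the data on it):

mkfs.ext4 -O ^has_journal /dev/sdX1
# confirm has_journal is absent from the feature list
dumpe2fs -h /dev/sdX1 | grep -i features

Mounting it still depends on the kernel-side no-journal support mentioned
above.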