On Sat, Jan 10, 2009 at 3:08 AM, Sandeep K Sinha <sandeepksinha@xxxxxxxxx> wrote:
> Hi Greg,
>
> On Fri, Jan 9, 2009 at 5:49 AM, Greg Freemyer <greg.freemyer@xxxxxxxxx> wrote:
>> Both a top post and a bottom post.
>>
>> == Top Post
>>
>> Lost the context for this, but looking at your site:
>> http://code.google.com/p/fscops/
>>
>> I see a very valuable HSM goal, but I don't see the biggest user of
>> future HSM implementations.
>>
> Do you mean the futuristic aspects of this project in the market, or
> something to be mentioned on the project webpage?

I mean the web page is missing any discussion of SSDs in the hierarchy.
I see it as RAM / SSD / rotational / tape:

RAM = $50/GB or so today
SSD = $2-10/GB today
rotational (HDD) = $0.10-0.50/GB today
tape < $0.10/GB today

Random I/O speed goes up as price goes up, so an HSM that managed
SSD / rotational / tape would be great.  Also, an 80 GB SSD for $500
would be a cheap investment for most servers if there were an HSM
that allowed it to fit into the existing filesystem architecture.

>> Namely, servers that add SSD drives as even faster / more expensive
>> storage than rotating disks.
>>
>> The block layer right now is exposing a flag to user space for
>> rotating / non-rotating.  I think it will be in 2.6.29, but I don't
>> believe it has been accepted yet.
>>
>> Since non-rotating media is about 50x the cost of rotating media, I
>> can envision a lot of users wanting to put files that are about to be
>> heavily used onto SSD for processing / searching / etc., then moved
>> back to rotating disk until needed again.
>>
>> I know my company's data usage patterns would benefit greatly from an
>> HSM that supports SSD drives.  We typically work with a couple hundred
>> datasets a year, but at any given time only a couple of them are
>> actively in use.
>>
> Thanks for your insight. Well, if I am getting it right, does that
> mean that such utilities will be of great use in the future?

I think so.  I would love for my processing servers to have an HSM
with SSD and rotational disk in use.  I'm not sure I would use the HSM
to handle tape.  (A sketch of reading that rotational flag from user
space is at the end of this mail.)

> We already have several use cases for such tiered storage, database
> environments and search engines being among the major ones.
>
>>>> copy_inode_info()
>>>
>>> Well, currently I don't intend to move the inode to a new location. I
>>> would prefer to leave the original inode intact, just updating the
>>> data block pointers. It is still under debate whether to relocate the
>>> inode or not.
>>
>> I know I pseudo-coded it up based on a new inode.
>>
>> If your goal is to do the HSM re-org on live files, then I would try
>> really hard not to allocate a new inode.
>>
> Allocating a new inode will also cause several issues for hardlinks.
> Manish, does ext2 have support for hardlinks, any idea?
>

By hardlinks you mean multiple directory entries for one inode?
Almost all Linux filesystems support that; ext2/3 definitely does.
(A quick demo is at the end of this mail.)

>> That way, after the re-org, all of the file I/O will continue to work
>> exactly as it did.
>>
> What if the file being copied is several TBs? We would have to
> allocate a ghost inode of the same size, which might consume double
> the disk space on the filesystem for some time, failing applications
> that tend to allocate / need a large number of data blocks.
> Is there any way to avoid such situations?
>

The more I think about this, the more I think you should just do the
job one block at a time, and after each block leave the file in a
consistent, operational state.  I'm not sure how that can be done, but
it is worth looking into; a rough user-space analogue of the idea is
sketched at the end of this mail.
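On the rotational flag: here is a minimal user-space sketch, assuming
the attribute lands as queued, i.e. /sys/block/<dev>/queue/rotational
reading "1" for spinning media and "0" otherwise.  Double-check the
path against whatever actually ships in 2.6.29.

/* Sketch: ask sysfs whether a block device is rotational.
 * Assumes the queued 2.6.29 attribute
 * /sys/block/<dev>/queue/rotational ("1" = spinning, "0" = SSD).
 */
#include <stdio.h>

/* Returns 1 for rotational, 0 for non-rotational, -1 on error. */
static int is_rotational(const char *dev)          /* e.g. "sda" */
{
        char path[256], buf[8];
        FILE *f;

        snprintf(path, sizeof(path),
                 "/sys/block/%s/queue/rotational", dev);
        f = fopen(path, "r");
        if (!f)
                return -1;
        if (!fgets(buf, sizeof(buf), f)) {
                fclose(f);
                return -1;
        }
        fclose(f);
        return buf[0] == '1';
}

int main(void)
{
        int rot = is_rotational("sda");

        if (rot < 0)
                perror("rotational");
        else
                printf("sda is %s\n",
                       rot ? "rotational" : "non-rotational (SSD?)");
        return 0;
}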
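On hardlinks: a hard link really is just a second directory entry
pointing at the same inode, which you can see from the link count.
Nothing here is fscops-specific; run it in a scratch directory.

/* Create a file, hard-link it, and show that both names share one
 * inode whose link count went from 1 to 2.
 */
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
        struct stat st;
        FILE *f = fopen("orig", "w");

        if (!f)
                return 1;
        fclose(f);

        if (link("orig", "alias") < 0) { /* second dirent, same inode */
                perror("link");
                return 1;
        }
        if (stat("orig", &st) < 0)
                return 1;
        printf("inode %lu, link count %lu\n",
               (unsigned long)st.st_ino, (unsigned long)st.st_nlink);

        unlink("alias");                 /* back down to 1 */
        unlink("orig");
        return 0;
}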
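And on the one-block-at-a-time idea: in the kernel I would expect you
to copy one data block to the new tier, swap that single pointer in
the inode, free the old block, and only then move on, so the extra
space in flight never exceeds one block.  I can only sketch the
discipline from user space -- copy a block, make it durable, then
advance -- and every name below is mine, nothing is from fscops.

/* User-space analogue of per-block relocation: copy BLKSZ bytes at a
 * time and fsync() after each block, so an interrupted run leaves the
 * destination consistent up to the last completed block.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#define BLKSZ 4096

static int relocate(const char *src_path, const char *dst_path)
{
        char buf[BLKSZ];
        ssize_t n;
        int ret = -1;
        int src, dst;

        src = open(src_path, O_RDONLY);
        if (src < 0)
                return -1;
        dst = open(dst_path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (dst < 0) {
                close(src);
                return -1;
        }

        while ((n = read(src, buf, sizeof(buf))) > 0) {
                if (write(dst, buf, n) != n)
                        goto out;
                /* make this block durable before touching the next */
                if (fsync(dst) < 0)
                        goto out;
        }
        if (n == 0)
                ret = 0;
out:
        close(src);
        close(dst);
        return ret;
}

int main(int argc, char **argv)
{
        if (argc != 3) {
                fprintf(stderr, "usage: %s <src> <dst>\n", argv[0]);
                return 1;
        }
        if (relocate(argv[1], argv[2]) < 0) {
                perror("relocate");
                return 1;
        }
        return 0;
}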
Greg

--
Greg Freemyer
Litigation Triage Solutions Specialist
http://www.linkedin.com/in/gregfreemyer
First 99 Days Litigation White Paper -
http://www.norcrossgroup.com/forms/whitepapers/99%20Days%20whitepaper.pdf

The Norcross Group
The Intersection of Evidence & Technology
http://www.norcrossgroup.com