Re: Improving real world performance by moving files closer to their target workloads

Luke McGregor wrote:
 - we would also like to retain the previous copy as long as there is
free space on the system, and have references to both files if this is
possible. The idea is that over time the nodes which use files
regularly would all end up with copies of the particular file
(obviously there is a synchronisation problem here, but this could be
worked around). When space is needed, the least-read copy should be
deleted (assuming that it isn't the last copy). Does this make sense?
I'm not sure I have explained it very well.

It makes perfect sense. If this is always going to be in a LAN environment, you could probably get away with broadcasting a request for a file and seeing how many nodes respond. If you specify that you want redundancy of X (X copies of files in the network), you could then delete the least recently used local file when you are running out of space, provided that at least X other nodes have the file in their local stores.
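To make the eviction rule concrete, here is a minimal sketch of the policy described above: delete the least recently used local file only when at least X other nodes report holding a copy. The function names and the modelling of peers as in-memory sets are assumptions for illustration, not any real GlusterFS API.

```python
REDUNDANCY = 3  # desired number of copies in the network, the "X" above


def count_remote_copies(filename, peers):
    """Stand-in for the LAN broadcast: count peers reporting the file."""
    return sum(1 for peer in peers if filename in peer)


def evict_lru(local_store, access_times, peers):
    """Delete the least-recently-used local file, provided at least
    REDUNDANCY other nodes still hold it; return the evicted name."""
    # Walk local files from oldest to newest last-access time.
    for filename in sorted(local_store, key=lambda f: access_times[f]):
        if count_remote_copies(filename, peers) >= REDUNDANCY:
            local_store.remove(filename)
            return filename
    return None  # nothing can be evicted without dropping below X copies


# Usage: peers modelled as sets of filenames they hold.
peers = [{"a", "b"}, {"a", "b"}, {"a", "c"}, {"a"}]
local = {"a", "b", "c"}
times = {"a": 1, "b": 2, "c": 3}  # "a" is least recently used
print(evict_lru(local, times, peers))  # "a" is evicted: 4 remote copies
```

Note that when no file meets the redundancy threshold, the sketch refuses to evict anything, which is exactly the "don't delete the last copy" guarantee.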

In theory, as files migrate to local nodes, the number of broadcasts will reduce, as most requests will be satisfiable locally. The downside is that for each write, you'd have to find the peers with a copy of the file and send them the delta to apply locally. Locking may become difficult; while doable, there may be some nasty race conditions to overcome.
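The write path described above can be sketched roughly as follows, with the "delta" simplified to an (offset, bytes) overwrite and peers again modelled as in-memory stores. This ignores the locking and race-condition issues entirely; all names here are hypothetical.

```python
def apply_delta(data: bytes, offset: int, patch: bytes) -> bytes:
    """Overwrite `patch` into `data` at `offset` (a toy delta format)."""
    return data[:offset] + patch + data[offset + len(patch):]


def write(filename, offset, patch, peers):
    """Fan the delta out to every peer holding a copy of the file,
    as the message describes; return how many replicas were updated."""
    updated = 0
    for store in peers:
        if filename in store:
            store[filename] = apply_delta(store[filename], offset, patch)
            updated += 1
    return updated


# Usage: two peers hold "f", one does not.
peers = [{"f": b"hello world"}, {"f": b"hello world"}, {"g": b"other"}]
n = write("f", 6, b"there", peers)
print(n)              # 2 replicas updated
print(peers[0]["f"])  # b'hello there'
```

The race condition is easy to see even in this toy: two concurrent writers fanning out overlapping deltas in different orders would leave the replicas divergent, which is why some locking or versioning scheme is needed.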

Gordan



