From: Carl Beame (beame@mail1.tinet.ie)
Date: 05/15/99-05:14:54 AM Z
From: Carl Beame <beame@mail1.tinet.ie>
Subject: Re: New file handle section (hard links)
Message-Id: <1999May15.111557+0100@games>
Date: 15 May 1999 11:14:54 +0100

Again, I don't believe I got any reasonable answers to the question:
can you explain WHY we need this restriction?

> The first problem I can envision is a client with two applications
> that open the two different path names and receive the two different
> file handles and then start duplicating file data cache
> unnecessarily.

I would rather see the ability for many NFS servers to be compliant
with the NFS V4 specification than worry about the small number of
cases where efficiency is lost through duplication of the file data
cache!

> There may be issues with file locking but I would have to think
> about it a little more. The server will at least need to determine
> that these two paths refer to the same file so that it can respond
> properly to the file lock requests.

Locking and sharing are built into the Windows NT filesystems, and
access to those filesystems is by pathname. Because of this, two
different paths which point to the same file object must be handled
by the filesystem itself (and they are).
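To make that concrete, here is a rough Win32 sketch (the pathnames,
and the hard link assumed to exist between them, are made up for
illustration). Sharing is declared per open, by pathname, and the
filesystem resolves both names to the same file object before it
arbitrates access:

    /* Sketch: NT share modes are enforced by the filesystem itself,
     * keyed on the file object, not on the name used to reach it. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* First opener allows others to read, but not to write. */
        HANDLE h1 = CreateFileA("C:\\export\\orig.txt",
                                GENERIC_READ | GENERIC_WRITE,
                                FILE_SHARE_READ,
                                NULL, OPEN_EXISTING,
                                FILE_ATTRIBUTE_NORMAL, NULL);
        if (h1 == INVALID_HANDLE_VALUE) {
            fprintf(stderr, "first open failed: %lu\n", GetLastError());
            return 1;
        }

        /* A second open for writing through a different name (assumed
         * here to be a hard link to orig.txt) fails with a sharing
         * violation, because the filesystem has already resolved both
         * names to the same file object. */
        HANDLE h2 = CreateFileA("C:\\export\\link.txt",
                                GENERIC_WRITE, FILE_SHARE_READ,
                                NULL, OPEN_EXISTING,
                                FILE_ATTRIBUTE_NORMAL, NULL);
        if (h2 == INVALID_HANDLE_VALUE)
            printf("second open refused: error %lu\n", GetLastError());
        else
            CloseHandle(h2);

        CloseHandle(h1);
        return 0;
    }

The arbitration happens below the pathname interface, so an NFS
server layered on top of it inherits correct lock and share behaviour
whatever the filehandles look like.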
> I know that this doesn't
> require the client receive the same file handle for both paths but
> there are some clients that may need to fold locking semantics into
> the NFSv4 requests and knowledge that the two paths are the same
> file may ease or allow correct implementation at the client.

To me this is a breaking of semantics. Locking and sharing are
server-based functions, and it is only the server which decides
whether locks and shares are granted. Can you provide an actual
example?

> With the servers you mention, are we going to have problems with
> correctness with respect to file locking in this dual-path case?

No.

> A table server like this could even work around the "multiple
> filehandle" problem - multiple filehandles pointing to the same
> object. When the server is about to enter a new entry into the
> table it could check the fsid/fileid against the table entries
> and if it found a match it could avoid creating a duplicate for
> the same file (but different pathname).

So EVERY time a LOOKUPFH occurs we have to search COMPLETELY through
a potentially huge table to see if the fsid/fileid matches?
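For the record, here is roughly what that duplicate check amounts to.
The types and names below (fh_entry, fh_for_path, the fixed table
size) are mine, invented for illustration:

    #include <stdint.h>
    #include <string.h>

    /* One table entry: a server-minted filehandle, the pathname it
     * was minted for, and the fsid/fileid of the underlying file. */
    struct fh_entry {
        uint64_t fsid;      /* filesystem id */
        uint64_t fileid;    /* per-filesystem file id (inode number) */
        uint32_t fh;        /* filehandle handed to the client */
        char     path[256]; /* pathname the handle was minted for */
    };

    static struct fh_entry table[65536];
    static size_t          table_len;
    static uint32_t        next_fh = 1;

    /* Mint (or reuse) a filehandle for a path whose fsid/fileid the
     * server has already obtained.  The loop is the objection above:
     * every lookup that misses walks the WHOLE table before a new
     * entry can be added. */
    uint32_t fh_for_path(const char *path, uint64_t fsid, uint64_t fileid)
    {
        for (size_t i = 0; i < table_len; i++)
            if (table[i].fsid == fsid && table[i].fileid == fileid)
                return table[i].fh;      /* same object: reuse handle */

        if (table_len == sizeof table / sizeof table[0])
            return 0;                    /* table full; needs eviction */

        struct fh_entry *e = &table[table_len++];
        e->fsid   = fsid;
        e->fileid = fileid;
        e->fh     = next_fh++;
        strncpy(e->path, path, sizeof e->path - 1);
        e->path[sizeof e->path - 1] = '\0';
        return e->fh;
    }

A second index hashed on (fsid, fileid) would remove the scan, but it
is one more structure the server has to maintain and, as noted below,
one more thing that has to survive a server crash.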
> Hey, what would it take to get Microsoft and the UNIX vendors
> to include a new system call to map filenames to filehandles
> and vice versa ?

I always like a bit of levity injected into these conversations.

> If you don't have this restriction, the caching on an individual
> client can become inconsistent.

If the cache becomes inconsistent just because of different file
handles, then it will become inconsistent between hosts accessing
normal files. As far as I can see, you are arguing the following: an
application run by two different users on the same host against an
NFS server will work, yet the same application run on two different
hosts against the same NFS server will fail because of cache
inconsistencies. Because of my suggestion of different filehandles
for hard links, this same application MAY now also fail when run on a
single host. I find this not to be a valid argument for requiring the
same file handle for a hard link.

> I am curious why people are so set on using filenames as part of
> the file handle when they are known to have these negative
> properties? Is there no other way to manage the file handle space?

The filesystems under NT can only be accessed by filename.

> Have other solutions been explored and rejected?

You bet!

> Why wouldn't a table used to map file handles to filenames and
> vice-versa work?

It might work, but definitely not under NFS V2 or NFS V3, as crash
recovery would not work. It would also make it very difficult to
create an NFS server which supports both NFS V3 and NFS V4 if one
used tables and the other filenames.

--------------------------------------------

Again, is there any valid reason to make the requirement that
filehandles to hard links be the same? Cache efficiency (in my
opinion) is not a good enough reason. Cache inconsistency is a bogus
problem, as an application which manipulates shared information
already needs to handle sharing between hosts.

- Carl Beame

This archive was generated by hypermail 2.1.2 : 03/04/05-01:47:02 AM Z CST