From: Brent (brent@eng.sun.com)
Date: 02/05/99-09:32:45 PM Z
Message-ID: <36BBB7DD.5429B206@eng.sun.com>
Date: Fri, 05 Feb 1999 19:32:45 -0800
From: Brent <brent@eng.sun.com>
Subject: Re: NFSv4 and Caching

"Theodore Y. Ts'o" wrote:

> I believe one of the bugs with CIFS is that it only checks to do
> op-locking when the file is first opened. If a long-running process
> opens a file, once some other host breaks the op-lock because it needs
> to write to that file, all future references by the original
> long-running process will never again be cached. Perhaps that's more
> of an implementation bug than a protocol design problem, but it's
> probably something we would want to avoid.

Ah, this is where the server's callback tells the clients "caching's
over now, folks - everybody must do sync writes to the server." I think
what you're saying is: there's no corresponding "go back to caching"
call from the server when the conflict is over.

> There are plenty of other reasons which one could ascribe to the failure
> of DCE/DFS besides the details of DFS's actual filesystem model.
> :

Thanks for the observations on AFS -> DCE/DFS migration problems. A
lesson to us, I suppose, that we should be careful not to place hurdles
in the way of folks migrating to v4. The move from v2 to v3 was pretty
smooth - I doubt many users or administrators could tell the difference,
except in a positive way. I hope we can say the same for v4.

> As far as AFS's cacheing model is concerned, while it's not perfect, in
> the ten or more years that I've been using AFS at MIT, I've really never
> missed the lack of a more highly consistent cacheing model. For most
> day-to-day activities, it's perfectly adequate. True, there are certain
> things that don't work well, but those aren't common cases, and are
> usually ones where you'd probably want to use a local filesystem to
> store that kind of data for other reasons anyway.

An interesting comment. I think a lot of NFS users would say the same
about NFS.
You can always find some applications that find gaps in the NFS
approximation to cache consistency, but in practice it seems to work
remarkably well despite the warts. Is it an indication that the Holy
Grail of 100% cache consistency just isn't worth the bother? Keep in
mind that we're dealing with filesystem APIs that were designed well
before the advent of distributed filesystems. The kind of cache
consistency you get on a multi-user box is hard to extend over the
highly latent, packet-dropping, vast reaches of the Internet.

> One final note about the NFS protocol. I've really never been terribly
> excited about NFS's claim of being a stateless protocol. Given that
> what NFS attempts to model is an inherently stateful idea --- a
> filesystem --- requiring the server to keep a little bit of extra state
> beyond all of the state of the files in the filesystem perhaps isn't a
> bad thing.

NFS isn't a stateless protocol - most servers keep some state around
for better performance. There's even a correctness requirement in NFS
v3: the exclusive CREATE operation keeps client cookies around so that
the server can detect a retransmitted request.

There's no absolute requirement that NFS servers be stateless, though I
think we all expect NFS server recovery to be fast and reliable. The
kind of state a file server might keep is highly volatile, perhaps
changing thousands of times per second. If you keep this state on disk
then it directly impacts the server's I/O bandwidth. If you must keep
state, then do it in a way that can be volatile (server recovers state
from clients after reboot) or at least keep state that changes slowly
(perhaps a list of the "current" clients).

Brent
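The exclusive-CREATE point above can be sketched in a few lines. This
is a toy model, not code from any real NFS server: the class name,
method names, and return strings are invented for illustration. The
idea it shows is the one the message describes - the server remembers
the cookie (verifier) a client sent with an exclusive CREATE, so that a
retransmission of the same request can be told apart from a genuine
collision with an existing file.

```python
class ExclusiveCreateServer:
    """Toy model of NFSv3 exclusive CREATE (names are illustrative).

    The verifier the client sends with its CREATE is stored alongside
    the file. If the same CREATE arrives again (a retransmission after
    the first reply was lost), the stored verifier matches and the
    server reports success instead of a spurious EEXIST.
    """

    def __init__(self):
        # filename -> verifier that created it
        self.files = {}

    def create_exclusive(self, filename, verifier):
        if filename in self.files:
            if self.files[filename] == verifier:
                # Same client, same request: a retransmit, not a conflict.
                return "OK"
            # A different client (or request) raced us to the name.
            return "EEXIST"
        self.files[filename] = verifier
        return "OK"


srv = ExclusiveCreateServer()
print(srv.create_exclusive("a.txt", 0x1234))  # first attempt succeeds
print(srv.create_exclusive("a.txt", 0x1234))  # retransmit also succeeds
print(srv.create_exclusive("a.txt", 0x9999))  # genuine conflict fails
```

Note that this per-file verifier is exactly the kind of state the
message argues a "stateless" server ends up keeping anyway: small,
slowly changing, and recoverable, rather than state that churns
thousands of times per second.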
This archive was generated by hypermail 2.1.2 : 03/04/05-01:46:39 AM Z CST