RE: New file handle section (hard links)

Message-ID: <7F608EC0BDE6D111B53A00805FA7F7DA033035B6@TAHOE.netapp.com>
From: "Noveck, Dave" <dave.noveck@netapp.com>
Subject: RE: New file handle section (hard links)
Date: Sat, 15 May 1999 16:48:55 -0700

 
> > A table server like this could even work around the "multiple
> > filehandle"
> > problem - multiple filehandles pointing to the same object.
> > When the server is about to enter a new entry into the 
> table it could
> > check the fsid/fileid against the table entries and if it found a
> > match
> > it could avoid creating a duplicate for the same file (but different
> > pathname).
> > 
> 
> So EVERY time a LOOKUPFH occurs we have to search COMPLETELY through a
> potentially huge table to see if the fsid/fileid matches. 

Rather than searching the table COMPLETELY, I'd hash by fsid/fileid
and search through a much smaller list, albeit completely.
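Just to make that concrete, here is a rough sketch (purely illustrative;
the names handle_table, handle_entry, handle_hash, and
handle_lookup_or_insert are mine, not anything from the draft) of a
server-side table hashed by fsid/fileid, where reaching the same object
through a second hard link finds the existing entry and reuses its
filehandle instead of minting a new one:

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define HANDLE_BUCKETS 4096

/* Hypothetical per-object entry in the server's handle table. */
struct handle_entry {
    uint64_t fsid;
    uint64_t fileid;
    unsigned char fh[64];        /* filehandle handed out to clients */
    size_t fh_len;
    struct handle_entry *next;   /* chain for this hash bucket */
};

static struct handle_entry *handle_table[HANDLE_BUCKETS];

static unsigned int handle_hash(uint64_t fsid, uint64_t fileid)
{
    /* Mix fsid and fileid; any reasonable hash will do. */
    uint64_t h = (fsid * 1099511628211ULL) ^ fileid;
    return (unsigned int)(h % HANDLE_BUCKETS);
}

/*
 * Look up (fsid, fileid).  If an entry already exists -- e.g. the
 * object was previously reached through another hard link -- return
 * it so the same filehandle is reused; otherwise insert a new entry.
 */
struct handle_entry *
handle_lookup_or_insert(uint64_t fsid, uint64_t fileid,
                        const unsigned char *fh, size_t fh_len)
{
    unsigned int bucket = handle_hash(fsid, fileid);
    struct handle_entry *e;

    for (e = handle_table[bucket]; e != NULL; e = e->next)
        if (e->fsid == fsid && e->fileid == fileid)
            return e;            /* different pathname, same object */

    e = malloc(sizeof(*e));
    if (e == NULL)
        return NULL;
    e->fsid = fsid;
    e->fileid = fileid;
    e->fh_len = fh_len < sizeof(e->fh) ? fh_len : sizeof(e->fh);
    memcpy(e->fh, fh, e->fh_len);
    e->next = handle_table[bucket];
    handle_table[bucket] = e;
    return e;
}

The point is just that the per-bucket chain is what gets walked on each
lookup, not the whole table.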

 
> Again, is there any valid reason to make the requirement that
> filehandles to hard links are the same? Cache efficiency (in my
> opinion) is not a good enough reason. Cache inconsistency is a bogus
> problem as the application which is manipulating the information needs
> to handle sharing between hosts already.
 
Suppose I have a set of applications that work fine on a local filesystem
(no cache inconsistency) and then I move them to an nfs fileserver
without distributing the applications among multiple hosts.  This is
something that people do every day.  That they don't have to change
their applications to deal with nfs's inter-host cache consistency
is a Good Thing.  They can deal with that when they choose to allow
distributed access to their data (and get the benefits) but they 
don't have to do so.

You may consider this of insufficient importance, compared to the 
convenience of the server writer, but I can't see how you can describe 
it as "bogus".

If we tell people that they have to modify their applications to use 
nfs-v4, when they now work fine on a local filesystem or on nfs-v3,
they are not going to be happy.  If there were people out there
who would prefer that people use a different protocol from nfs, one 
that is under their sole control (who could I possibly mean?), then 
this is just the kind of FUD they'd love to have.  

I don't have a specific protocol requirement in mind here.  As Uresh
Vahalia points out, the typical encoding of export information within 
a handle means that, strictly speaking, the handle-object mapping 
requirement we are discussing is not satisfied under v2 and v3.
Nevertheless, if you have a single mount of a single export tree 
and all your applications are on a single host, you can use 
an nfs server without worrying about cache consistency.  To allow
users to think otherwise would be to shoot ourselves in the foot.
There may be many ways to deal with this issue, but the issue is not
bogus.
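To illustrate the point about export information (again, just a
sketch of a typical layout; struct nfs_fh_layout and the two
comparison routines are hypothetical, not any particular server's
handle format): if the handle embeds an export identifier next to
fsid/fileid, the same object reached through two different exports
produces handles that compare unequal as opaque byte strings, even
though they name the same object.

#include <stdint.h>

/* Hypothetical v2/v3-style handle layout with export info baked in. */
struct nfs_fh_layout {
    uint32_t export_id;   /* which export the client mounted */
    uint64_t fsid;
    uint64_t fileid;
    uint32_t generation;
};

/*
 * Object-level comparison the server could do internally: ignore the
 * export encoding and compare only the fields naming the object.
 */
int same_object(const struct nfs_fh_layout *a,
                const struct nfs_fh_layout *b)
{
    return a->fsid == b->fsid &&
           a->fileid == b->fileid &&
           a->generation == b->generation;
}

/*
 * What a client effectively sees when it treats handles as opaque:
 * handles for the same object differ if it is reachable through two
 * exports, because export_id differs.
 */
int same_handle(const struct nfs_fh_layout *a,
                const struct nfs_fh_layout *b)
{
    return a->export_id == b->export_id && same_object(a, b);
}

Which of those two notions of equality clients are entitled to rely on
is exactly the requirement we are discussing.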

