RE: Creds and open files


From: Noveck, Dave (Dave.Noveck@netapp.com)
Date: 06/06/01-12:58:41 PM Z


Message-ID: <8C610D86AF6CD4119C9800B0D0499E3333542C@red.nane.netapp.com>
From: "Noveck, Dave" <Dave.Noveck@netapp.com>
Subject: RE: Creds and open files
Date: Wed, 6 Jun 2001 10:58:41 -0700 

Mike Eisler wrote: 
> "Noveck, Dave" wrote:
>
> > As it stands, if the opens have the same access
> > specified (or more generally if the second is a
> > subset of the first), then the client does no
>                                         ^^^^^^^
> > OPEN operation on the server (I guess he does
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> > have to do an ACCESS) since we have a one open-file
> > for each file-lockowner combination.  (we have
> 
> Are you talking about the case where the first OPEN
> results in delegation of subsequent OPENs to the client?

No.

> If you mean the non-delegated case,
> then I don't see how this can work, nor do I see 
> where in the spec it says so (but it is a vast spec, so
> I may have missed it).

It may not be there.  It is one of those things that
I've just assumed.  As regards how this could work,
I think you are misunderstanding the case I'm 
presenting.  I am talking about the second OPEN
being done by the *same* process as the first
OPEN.

> The nfs_lockowner4 is composed of the client-id
> and the "owner", which "may be a thread id, process id,
> or other unique value." In UNIX, it
> will be a process id unique to the client.

Yes.

> It seems to me that when a second process opens
> the same file, even if it is the same principal,
> the client needs to send the OPEN. Otherwise, a
> unique stateid/sequence id space cannot be
> constructed, which is necessary to implement
> mandatory locking.

Right, but I'm talking about all the opens being
done by the same process, even though the principal
may be different.

> > talked about possibly doing something else but
> > that is what we have now).  So it seems like the
> > IO-as-same-principal rule (whether in SHOULD or
> > MUST form) would force us to switch this to a
> > model in which you can have one open-file for
> > each file-lockowner-principal combination.
> 
> I've always assumed this was the case.

The spec now says:

   When an OPEN is done and the specified lockowner already 
   has the resulting filehandle open, the result is to "OR" 
   together the new share and deny status together with the 
   existing status.  In this case, only a single CLOSE need 
   be done, even though multiple OPEN's were completed.

So if your assumption is correct (and I understand you 
correctly) we should change, "the specified lockowner 
already has the resulting filehandle open" to "the 
specified lockowner already has the resulting filehandle 
open and that open was done by the same principal
as that doing the current OPEN operation".
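If that change were adopted, the client-side bookkeeping would look
roughly like this.  A minimal Python sketch with invented names
(client_open, OpenFile; the share-bit constants follow the spec's
OPEN4_SHARE_ACCESS values): opens coalesce, and the share/deny bits
OR together, only when both the lockowner and the principal match.

```python
# Hypothetical client-side open-file table; names are illustrative,
# not from the spec.

OPEN4_SHARE_ACCESS_READ = 0x1
OPEN4_SHARE_ACCESS_WRITE = 0x2

class OpenFile:
    def __init__(self, access, deny):
        self.access = access
        self.deny = deny

open_files = {}  # (filehandle, lockowner, principal) -> OpenFile

def client_open(fh, lockowner, principal, access, deny):
    """Return the open-file for this OPEN, coalescing only when the
    same lockowner *and* the same principal already have fh open."""
    key = (fh, lockowner, principal)
    of = open_files.get(key)
    if of is not None:
        # Existing open by the same (lockowner, principal): OR
        # together the new share and deny status; no new open-file.
        of.access |= access
        of.deny |= deny
        return of
    # Different principal (or first open): a fresh OPEN to the server.
    of = OpenFile(access, deny)
    open_files[key] = of
    return of
```

With the principal in the key, the change-identity/reopen case gets
its own open-file and its own stateid sequence rather than being
silently merged into the first open.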
 
> While we are opening cans of worms,
> it also means that when a UNIX process forks,
> the underlying filesystem will need to issue a new
> OPEN before it attempts a READ or WRITE, because
> a new process id is a new nfs_lockowner4.

I'm pretty sure this issue has been discussed, although
I don't know that the right conclusion was reached.
As I remember, it was decided that this was not needed
because there was in fact no new open.  Unix treats
fork as not creating new open files but additional
references to existing open files (like dup()).  So
I think it was felt that it was adequate to treat the
original opener as the lockowner for the purposes of
locking requests on that open file. 
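That fork-as-dup view can be modeled in a few lines.  A Python sketch
with illustrative names (Process, OpenFileObj, proc_fork are all
hypothetical): fork copies descriptor-table entries that reference the
same open-file object, and lock requests through the inherited
descriptor keep using the original opener's lockowner.

```python
# Illustrative model of fork-as-dup: descriptors are references to a
# shared open-file object; no new OPEN is sent for the child.

class OpenFileObj:
    def __init__(self, lockowner):
        self.lockowner = lockowner  # original opener, kept for locking
        self.refcount = 1

class Process:
    def __init__(self, pid):
        self.pid = pid
        self.fds = {}

def proc_open(proc, fd, fh):
    # A real OPEN: the lockowner is (clientid, pid) of the opener.
    proc.fds[fd] = OpenFileObj(lockowner=("client1", proc.pid))

def proc_fork(parent, child_pid):
    child = Process(child_pid)
    for fd, of in parent.fds.items():
        of.refcount += 1      # like dup(): share, don't reopen
        child.fds[fd] = of
    return child

parent = Process(pid=100)
proc_open(parent, 3, "fh1")
child = proc_fork(parent, 101)
# The child's descriptor refers to the same open-file object, so any
# lock request through it is issued under the parent's lockowner.
```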

I'll try to find the discussion in the archives.

> As
> a related aside, in UNIX file locks are not inherited
> by child processes. In any case, there is an issue
> with how the OPEN is done in the forked
> process case. There needs to be a way to OPEN the
> file by file handle, and the current specification
> doesn't allow it, except in reboot scenarios.

Thinking about this a little more, I'm doubting 
whether the current approach is adequate.  What if
the two processes sharing an open file try to lock
each other out?  Anyone with current knowledge of
UNIX file locking implementations care to comment?

The trouble with treating this as an open of a
handle is scheduling the proper CLOSEs.  Unix
will treat a close() by each process as merely
decrementing a reference count and not until the
count goes to *zero* is it really a close.  How
can you contrive to send the proper CLOSE operation
without maintaining separate reference counts
within the open file object for each process that
might have the file open?  That is really ugly.
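For concreteness, the bookkeeping being objected to would look
something like this sketch (hypothetical names, not a proposal): the
shared open-file object keeps a per-process reference count, and only
when the last reference from the last process drains does the client's
close() turn into a CLOSE on the wire.

```python
# Sketch of the "really ugly" alternative: per-process reference
# counts inside the open-file object, so the client can tell when the
# last close() by the last process should become a server CLOSE.

class SharedOpenFile:
    def __init__(self):
        self.refs = {}  # pid -> reference count within that process

    def ref(self, pid):
        self.refs[pid] = self.refs.get(pid, 0) + 1

    def unref(self, pid):
        """Return True when a CLOSE should go to the server."""
        self.refs[pid] -= 1
        if self.refs[pid] == 0:
            del self.refs[pid]
        return not self.refs  # last reference anywhere is gone

of = SharedOpenFile()
of.ref(100)                 # parent opens
of.ref(100)                 # parent dup()s
of.ref(101)                 # child inherits across fork
assert not of.unref(100)    # parent close(): count drops, no CLOSE
assert not of.unref(101)    # child close(): still one ref left
assert of.unref(100)        # final close(): now send CLOSE
```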

> > Another version of this is where the two opens
> > differ in access.  open-for-read/change-identity/
> > open-for-rdwrt.  Right now the client is supposed
> > to do an OPEN which the server is supposed to
> > implement an an upgrade. The credentials for the
> > open presumably the new ones  But then what
> > credentials would you use for the IO?  Of course,
> > if we went to a one-stateid-per-file-lockowner-
> > principal-triple, then you would get separate
> > stateid sequences for each and everything could
> > work fine with each IO using the proper stateid
> > for its principal.

> Interesting. From the UNIX client perspective,
> I see two scenarios:
>
> 	1. a process changes its access to the
> 		file via fcntl(). In Solaris,
>		one can even change the share
> 		reservation deny mode with fcntl().

That requires upgrade and downgrade by handle
or stateid.

I hadn't heard of this Solaris requirement.
I think this is a case of "Speak pretty soon
or forever ...".

>		The process also changes its
>		principal identity. In this
>		case, the credentials used when
>		the file descriptor was created
>		should be used for subsequent
>		READs and WRITEs.

Unix is nicely behaved in that from the server's 
point of view nothing really happens.  I'm just
worried that there are some systems that do it
differently.

> 	2. a process changes its principal
> 		identity, while still holding
> 		open a file opened under a
> 		previous identity. Now the same
> 		process opens the same file
> 		with the new identity. 

This is the case I'm worried about. 

> It seems to me that life would be
> much easier if section 8.10,
> Open Upgrade and Downgrade, were
> revised such that the implicit coalescing
> of OPENs did not happen. Instead, there could
> be an explicit OPEN_UPGRADE to handle scenario 1
> (with or without the principal identity change),
> and to handle scenario 2, a fresh OPEN by pathname
> be issued which does not result in coalescing of second
> OPEN with the first OPEN. But there was probably
> a technical reason that I've forgotten why
> the coalescing is in place.

The reason for implicit upgrade on open was that the
caller may not know that he has the file open when he
issues the request, because of hard links.

I know.  Hard links are a Very Bad Idea, but the damage
is done.
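The hard-link problem is easy to demonstrate locally.  This short
script (POSIX assumed) opens one file under two names, and only by
comparing inode numbers after the fact can it tell they are the same
file, which is exactly the information an NFS client lacks until the
server returns the filehandle.

```python
# Two names, one file: the caller opening "b" has no way to know it
# already has the same file open under the name "a".
import os
import tempfile

d = tempfile.mkdtemp()
a = os.path.join(d, "a")
b = os.path.join(d, "b")
with open(a, "w") as f:
    f.write("x")
os.link(a, b)                  # hard link: a second name for the file

fd1 = os.open(a, os.O_RDONLY)
fd2 = os.open(b, os.O_RDONLY)  # looks like a different file by name
# Only the returned identity (here inode; for NFS, the filehandle)
# reveals that both opens hit the same file.
same = os.fstat(fd1).st_ino == os.fstat(fd2).st_ino
os.close(fd1)
os.close(fd2)
```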



This archive was generated by hypermail 2.1.2 : 03/04/05-01:48:49 AM Z CST