Writing Filters is Hard Work: Undocumented DFS & RDR Interactions


The Distributed File System (DFS) is a Microsoft technology that allows an enterprise to configure a single server with a root share that contains multiple links. To clients, the links appear as regular folders under the root share; however, they can point to shares either on the same server or on any other computer in the organization.

For example, you may access a share from your client machine via the path \\DfsServer\DfsRoot\Documents. This could then end up being a link to an entirely different server, say \\FooServer\Share. Thus, ignoring the other features and options that DFS provides, DFS is effectively a name aliasing technology that allows an administrator to redirect client requests from one machine to another.

Frustratingly, over the years we've regularly run into nagging issues with DFS's interaction with the file system filter drivers that we have written. To the casual observer, a file system filter driver writer would seem to have little or no reason to care about DFS. Aside from the potential aliasing issues that this architecture introduces, which exist in other scenarios anyway, this seems like the sort of thing that should be handled transparently to the client. Unfortunately, due to some architectural and design decisions made in the implementation of XP's DFS client support, dealing with DFS can quickly become a hornet's nest at the bottom of a rat hole. In addition, the Filter Manager mini-filter abstraction attempts to hide some of the implementation details and in the process complicates things even more.

Before we begin, we should note that substantial changes have been made to this space in Vista and thus the discussions in this article only apply to O/S releases prior to Vista unless noted otherwise.

The Players: DFS Client, MUP, and Network Redirectors

In order to understand the complexities of the problems we've experienced, we first need a feel for the way things work without any filters involved. During the boot process, special drivers called network redirectors (typically either monolithic network file system drivers or mini-redirector drivers) register with the Multiple UNC Provider (MUP) driver by calling FsRtlRegisterUncProvider (or FsRtlRegisterUncProviderEx in Vista and later), providing the name of their redirector device object as part of that registration process. For example, in the case of the SMB mini-redirector driver, the name provided to MUP is \Device\LanmanRedirector. Other popular redirector device object names are \Device\WebDavRedirector and \Device\NetWareRedirector, for WebDAV and Novell NetWare support, respectively.

Upon receipt of this request, MUP uses the registration to extract and store the name of the redirector device as well as a pointer to the redirector device object. Later, when a user attempts to open a name in the form \\FooServer\Share, MUP calls each of the redirector drivers in turn to determine whether this path represents a server that the redirector supports. If a redirector indicates that the server is indeed one of its own, the name resolution stops and MUP reparses the open (using the standard STATUS_REPARSE method for Windows) to the appropriate redirector device object. This restarts the create operation targeted at the appropriate redirector device, and MUP, its job done at this point, is no longer in the picture. A simplified depiction of this configuration is shown in Figure 1 (note that the calling of the redirectors for the name resolution step is not depicted).

Figure 1 - The Lay of the Land, Sans DFS

The introduction of DFS complicates this picture, as shown in Figure 2. With DFS, things begin identically to the non-DFS case, with the open of \\DfsServer\DfsRoot first arriving at MUP's dispatch entry point for create operations, MupCreate. MUP then probes the server to determine if it is a DFS server (the details of this probe are outside the scope of this article) and receives a response indicating that \\DfsServer does in fact support DFS. At this point, MUP reparses this open of the DFS root back to its own driver by replacing the name in the file object with \Device\WinDfs\Root\DfsServer\DfsRoot and returning STATUS_REPARSE. \Device\WinDfs\Root happens to be another device object that the MUP driver creates to handle opens targeted at DFS servers.

Figure 2 - And Now, with DFS

As a result of this reparse, processing of the open restarts back at the MupCreate function. However, this time the create is targeted at the DFS device object and proceeds down a different code path in MUP that handles DFS-related activity. At the start of processing, MUP finds the appropriate redirector to use to communicate with this DFS server and prepares to send this create request to the DFS server via the redirector. Because DFS runs over SMB, the redirector device chosen will normally be \Device\LanmanRedirector. If this operation completes successfully, MUP completes the create IRP back to the original caller.

Note that the end result here differs from the non-DFS case. In the non-DFS case, the create operation was reparsed to the appropriate redirector. In the DFS case, the create is reparsed back to MUP, passed to the redirector, and then completed. As a result, in the non-DFS case the resulting file object points to a redirector device object, whereas in the DFS case it points to a MUP device object.

This Is Where It Starts To Get Messy...

If you had a hard time following the above, it's about to get a bit worse. As it turns out, LanmanRedirector must know that the create operation is targeted at a DFS server. Unfortunately, the IRP_MJ_CREATE IRP structure is already entirely packed and there's no room for custom parameters. To accommodate the additional information, the DFS developers had to find a creative way to pass it along.

To achieve this, the DFS and redirector designers decided to use some fields of the file object passed along with the create operation. Under normal circumstances, the FsContext and FsContext2 fields of the file object are NULL when the I/O Manager sends the create operation to the file system. Before completing the create IRP, the file system is responsible for setting the FsContext field to a structure that is unique to the stream and the FsContext2 field to a structure that is unique to this open instance of the stream.

Based on the knowledge that nothing is usually put in these fields before the file system is called, MUP stores a magic value of 0xFF444653 (0xFF followed by the ASCII characters 'DFS') in FsContext2 and a pointer to a data structure in FsContext before passing the request to the redirector. Searching for more details about the magic value turns up practically nothing, though you can find the following definitions in the LanmanRedirector sources that used to be provided with the WDK:

#define DFS_OPEN_CONTEXT        0xFF444653

typedef struct _DFS_NAME_CONTEXT_ {
    UNICODE_STRING  UNCFileName;
    LONG            NameContextType;
    ULONG           Flags;
} DFS_NAME_CONTEXT, *PDFS_NAME_CONTEXT;

Further details on their usage appear throughout the source of the sample.

As we'll soon learn, this approach creates issues for file system filters that need to filter the SMB redirector. However, before we point out the complexities this creates for filters, let's continue our investigation of how this all works without filters involved.

What About Accessing a Link?

So far, we've only seen access to a standalone UNC share and to the root of a DFS server. Accessing a link on a DFS server complicates this situation even further and brings to light another feature of the MUP DFS implementation.

When a user attempts to open \\DfsServer\DfsRoot\DfsLink, everything proceeds as it did previously. The create first arrives at MUP, and MUP determines that this is a DFS server, so it reparses the create back to itself at its DFS root device object. MUP then finds the appropriate redirector device for \\DfsServer\DfsRoot, which will again be \Device\LanmanRedirector.

MUP then attempts to open \\DfsServer\DfsRoot\DfsLink on the DFS server via the redirector. Because this is not an actual share on the server but a link to another server, the DFS server returns a special status, STATUS_PATH_NOT_COVERED, to the client. MUP responds to this special status by sending what is called a "DFS referral" request via the redirector to the server for the failing path. The response to this referral contains the actual location of the DFS link, for example \\FooServer\Share.

MUP is now at the end of its name resolution process and knows the real path to be used for this create operation. However, MUP is also now back to square one in processing the create request. In other words, MUP must process the open of \\FooServer\Share as if this were what the user initially opened, because MUP does not yet know which redirector to use to communicate with this server/share combination.

Therefore, at this point MUP sends the create IRP back to itself at MupCreate. Processing then proceeds exactly as in the non-DFS case, with MUP finding the appropriate redirector for the open, prepending the redirector device name to the path, and returning STATUS_REPARSE. In this case, the resulting name might end up as \Device\LanmanRedirector\FooServer\Share.

You might expect that at this point MUP would return STATUS_REPARSE back to the I/O Manager and that the resulting file object would point to \Device\LanmanRedirector, just as in the non-DFS case that we first looked at. However, that is not at all what happens. Instead, MUP internally handles the reparse processing and forwards the request to the appropriate redirector device object. Upon successful completion, MUP then (finally) completes the user request with success, resulting in a file object that points to the MUP DFS device object.

Why This Creates Additional Complexity for Filters

Why does filtering DFS turn out to be such a big pain for filter writers?

The first issue you'll hit when dealing with DFS is load ordering. When the redirector drivers register with MUP via FsRtlRegisterUncProvider, MUP opens the target device name and caches the returned device object. Thus, if your filter is not instantiated on the redirector device at the time of this open, your filter will be bypassed for the communication between DFS and the redirector. Mini-filters luck out a bit here: you just need to make sure that Filter Manager is loaded and attached early, which ensures that the Filter Manager device object is in the call chain; you can then attach your filter instance later. The usual load ordering rules can be used to make sure that fltmgr.sys is loaded early enough in the boot process. Prior to Vista, in order to guarantee that Filter Manager will actually attach early in the boot process, make sure that the Filter Manager AttachWhenLoaded registry value is set to 1.

With your filter properly inserted between MUP and the redirector, you have a new set of issues to handle. Remember that DFS and the redirector have a secret handshake stuffed into a couple of fields of the file object. For this design to work in all cases, those fields must be propagated to whatever file object is sent to the redirector. This is a monster problem for a filter that wants to perform its own open of the file or directory with IoCreateFileSpecifyDeviceObjectHint or FltCreateFile. These APIs generate their own file objects with no opportunity for the caller to modify the resulting file object before it is sent to the target device. Thus, a filter performing these types of operations breaks the chain between DFS and the redirector, leading to unexpected results from the create.

This is also a case where writing your filter as a mini-filter will create surprising complications. For reasons unknown to us, before calling the mini-filters in the create path, Filter Manager will set the FsContext field of the file object to NULL, even if it was not NULL on entry to the Filter Manager filter device. Once all of the mini-filters are called, the value is restored before calling the underlying FSD with the request. Thus, even if you could figure out a way to get the fields set appropriately in the file object, the values are hidden from your filter during pre-create processing.

And Don't Ever Try to Return STATUS_REPARSE...

As you noticed above, MUP relies heavily on STATUS_REPARSE to perform its work. We also snuck something by you in a previous section when we said:

You might expect that at this point MUP would return STATUS_REPARSE back to the I/O Manager and that the resulting file object would point to \Device\LanmanRedirector, just as in the non-DFS case. However, that is not at all what happens. Instead, MUP internally handles the reparse processing and forwards the request to the appropriate redirector device object.

What we omitted from this explanation is how MUP decides to handle this particular instance of STATUS_REPARSE itself. Believe it or not, what MUP actually does is dig into the device object to which the create IRP was sent, find the driver object, and capture the driver object name. This driver object name is then compared to the hardcoded value \FileSystem\Mup. If the names match, MUP knows that it just reparsed the create back to itself and can handle it internally. However, if the names do not match, MUP allows the reparse to travel back to the I/O Manager.

This means that if you reparse a DFS open to something like a shadow stack (a technique we have used in some of our layered file system work), DFS will entirely step out of the way of the create processing. The result of this is that DFS create operations no longer pass through the DFS client code, which can again lead to unexpected results.

Note that this also prevents one from attaching a filter to \Device\Mup, which you might want to do in an attempt to avoid sitting between the DFS code and the redirector. Because this specially-handled create IRP will be sent to your filter device object instead of directly to the MUP device object, the driver name check will fail due to the fact that the driver name in the target device will be your filter driver?s name. The end result will be a broken DFS client that can no longer communicate with any DFS servers.

Not All Stories Have a Happy Ending

Unfortunately, we continue fighting with DFS to this day and have open bugs for configurations that do not work and, quite possibly, might never be made to work. Complicating things for us is that MUP and DFS have been redesigned in Windows Vista and later such that filters no longer sit between MUP and the redirector. Because the design has moved forward, we have no chance of getting any changes made to legacy platforms. Until every last one of our clients has upgraded to Vista or later, we're going to be stuck trying to hammer filter drivers into an architecture that clearly wasn't designed to interact with other components in the system.
