One of the amazing ideas we often use in developing computer software is to add "levels of indirection" to the processing. While we all know this doesn't improve performance, it often allows us to make changes without interfering with other software components that are using our resources.
Virtual Memory is a rather old idea, being first described in an academic paper in 1959. Since that time it has been rediscovered numerous times by various operating systems groups as they tried to find ways to avoid issues related to the amount of physical memory present in the system.
To do this, Windows divides up all the physical memory in the system into a series of pages (on the x86 architecture these are normally 4KB, but driver writers should use the manifest constant PAGE_SIZE). It also divides up the virtual address space into a series of comparably sized pages. Finally, Windows and the underlying hardware platform agree upon a means to tell the hardware how to translate the address of a virtual page into a corresponding physical page.
Of course, since there is generally far less physical memory than there is virtual memory, part of this mechanism defines what the hardware should do if a virtual page is accessed but there is no translation defined to a physical page. In Windows terminology, this is defined to be a page fault.
When a page fault occurs, the hardware cannot do anything else with the instruction that caused the page fault and thus it must transfer control to an operating system routine (this is the page fault handler). The page fault handler must then decide how to handle the page fault. It can do one of two things:
It can decide the virtual address is simply not valid. In this case, Windows will report this error back by indicating an exception has occurred (typically STATUS_ACCESS_VIOLATION).
It can decide the virtual address is valid. In this case, Windows will find an available physical page, place the correct data in that page, update the virtual-to-physical page translation mechanism and then tell the hardware to retry the operation. When the hardware retries the operation it will find the page translation and continue operations as if nothing had actually happened.
Once you have a virtual-to-physical page translation there's always the temptation to add features to it. So for all the Windows platforms each page can support specific types of access:
User mode access. This access indicates if code running in user mode (that is, the CPU operating mode with the least privilege, CPL 3 on the x86) can access the page. Code in kernel mode (that is, the CPU operating mode with the most privilege, CPL 0 on the x86) can always access the page.
Write access. This access indicates if code accessing the page is allowed to modify the contents of the virtual page. This applies to all running code.
If either of these two access restrictions is violated, the hardware transfers control to Windows so that Windows can handle the event properly. While Windows might throw this error back by raising a STATUS_ACCESS_VIOLATION, it might also modify the page tables to resolve the problem.
The final "trick" here is that by telling the hardware to use different page tables at different times, we can substitute one set of virtual-to-physical translations for a different set of translations. Thus, when we switch from one process to a different process, we change the page tables. This means the new process has a different "virtual address space".
To summarize then, a page fault is nothing more than the computer hardware reporting to Windows that it either is not allowed to access the virtual page as requested by the running code (because of the access restrictions) or it cannot translate the virtual page to a physical page. In either case, it is Windows' responsibility to "do the right thing" and allow the system to continue running.