So far we have disentangled the programmer's view of memory from the system's view using a mapping mechanism. Each sees a different organization. This makes it easier for the OS to shuffle users around and simplifies memory sharing between users.
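For instance (taking 4 KB pages purely as an illustration, since this module does not fix a page size): virtual address 0x12345 splits into page number 0x12 and offset 0x345; the hardware looks up page 0x12 in the process's page table, finds the frame that currently holds it, and forms the physical address as that frame's base plus 0x345. The program never sees frame numbers at all, which is what leaves the OS free to move pages around or share them between processes.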
However, until now a user process had to be completely loaded into memory before it could run. This is wasteful, since a process needs only a small fraction of its total memory at any one time (locality). Virtual memory permits a process to run with only part of its virtual address space loaded into physical memory.
The idea is to produce the illusion of a memory with the size of the disk and the speed of main memory.
Data can be in registers (very fast), caches (fast), main memory (not so fast), or disk (slow). Keep the things that you use frequently as close to you (and as fast to access) as possible. Roughly speaking, registers and caches respond in nanoseconds, main memory in something like a hundred nanoseconds, and disk in milliseconds, so a trip to disk costs several orders of magnitude more than any of the others.
The reason this works is that most programs spend most of their time in only a small piece of the code; Knuth's estimate is that a program spends 90% of its time in 10% of its code. This is the principle of locality again.
If not all of a process is loaded when it is running, what happens when it references a byte that is only in the backing store? Hardware and software cooperate to make things work anyway: the reference traps to the operating system (a page fault), the OS brings the missing page into memory, and the process is then continued.
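Here is a minimal user-space sketch of demand paging, just to make that cooperation concrete. Everything in it is invented for the illustration (the page and frame counts, the access trace, the crude FIFO replacement policy); a real OS does this in the kernel in response to a hardware trap, not in a library routine.

/* A toy, user-space simulation of demand paging (illustration only). */
#include <stdio.h>
#include <string.h>

#define NPAGES   8            /* pages in the virtual address space      */
#define NFRAMES  4            /* frames of simulated physical memory     */
#define PAGESIZE 16           /* bytes per page                          */

static char disk[NPAGES][PAGESIZE];    /* the backing store              */
static char frames[NFRAMES][PAGESIZE]; /* "physical memory"              */
static int  page_table[NPAGES];        /* page -> frame, -1 = not loaded */
static int  next_frame = 0, faults = 0;

/* Translate a virtual address; if the page is absent, "fault" it in. */
static char read_byte(int vaddr)
{
    int page = vaddr / PAGESIZE, offset = vaddr % PAGESIZE;
    if (page_table[page] == -1) {               /* page fault             */
        faults++;
        int victim = next_frame++ % NFRAMES;    /* crude FIFO replacement */
        for (int p = 0; p < NPAGES; p++)        /* evict the old owner    */
            if (page_table[p] == victim)
                page_table[p] = -1;
        memcpy(frames[victim], disk[page], PAGESIZE);  /* read from disk  */
        page_table[page] = victim;              /* mark the page present  */
    }
    return frames[page_table[page]][offset];    /* ordinary access        */
}

int main(void)
{
    memset(page_table, -1, sizeof page_table);  /* nothing loaded yet     */
    for (int p = 0; p < NPAGES; p++)            /* fill the backing store */
        memset(disk[p], 'a' + p, PAGESIZE);

    /* A program with good locality touches only a few distinct pages. */
    int trace[] = { 3, 4, 7, 3, 4, 3, 20, 3, 4, 100 };
    for (int i = 0; i < 10; i++)
        printf("vaddr %3d -> '%c'\n", trace[i], read_byte(trace[i]));
    printf("%d page faults in 10 accesses\n", faults);
    return 0;
}

On this trace only three of the ten accesses fault, because the fault cost is paid just once per page actually touched; that observation is what the effective access time calculation below quantifies.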
Continuing the process is very tricky, since it may have been interrupted in the middle of an instruction. We do not want the user process to be aware that the page fault even happened. Consider, for example, an instruction that loads from the address held in a register into that very same register:
ld [%r2], %r2
The fault must be taken before the destination register is overwritten; otherwise the original address in %r2 is lost and the instruction cannot simply be re-executed.
We can estimate the cost of page faults by performing an effective access time calculation. The basic idea is that sometimes you access a location quickly (there is no page fault) and sometimes slowly (you have to wait for a page to be brought into memory). We use the cost of each kind of access, weighted by the fraction of accesses of that kind, to compute the average time to access a word.
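For example (the numbers here are illustrative, not taken from this module): suppose an ordinary memory access takes 200 ns, servicing a page fault takes 8 ms, and a fraction p of all accesses fault. Then

effective access time = (1 - p) * 200 ns + p * 8,000,000 ns
                      = (200 + 7,999,800 * p) ns

Even a tiny fault rate is expensive: with p = 0.001 (one fault per thousand accesses) the effective access time is about 8.2 microseconds, roughly forty times slower than a machine that never faults.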