Fancier algorithm: give pages a second (third? fourth?) chance. Store (in software) a counter for each page frame, and
increment the counter if the use bit is zero. Only throw the page out if the counter passes a certain limit value. Limit = 0 corresponds to the previous case. What
happens when the limit is small? Large?
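The counter scheme above can be sketched in a few lines. This is an illustrative sketch, not an actual implementation; the Frame fields and the LIMIT value are assumptions for demonstration.

```python
LIMIT = 2  # limit = 0 degenerates to the plain clock algorithm

class Frame:
    def __init__(self, page):
        self.page = page
        self.use_bit = False   # set by hardware on each reference
        self.counter = 0       # software "extra chances" counter

def choose_victim(frames, hand):
    """Sweep the clock hand until some frame's counter passes LIMIT."""
    while True:
        f = frames[hand]
        if f.use_bit:
            f.use_bit = False      # referenced recently: fresh start
            f.counter = 0
        else:
            f.counter += 1
            if f.counter > LIMIT:  # page has used up its chances
                return hand
        hand = (hand + 1) % len(frames)
```

A small limit behaves almost like the plain clock algorithm; a large limit approximates LRU more closely but makes the hand sweep many times before evicting anything.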
Some systems also use a "dirty" bit to give
preference to dirty pages. This is because it is more expensive to throw out dirty pages: clean ones need not be written to disk.
What does it mean if the clock hand is sweeping very
slowly?
What does it mean if the clock hand is sweeping very
fast?
If all pages from all processes are lumped together
by the replacement algorithm, then it is said to be a global replacement algorithm. Under this scheme, each process competes with all of the other
processes for page frames. A per-process replacement algorithm allocates page frames to individual processes: a page fault in one process can only replace one
of that process's frames. This reduces interference from other processes. A per-job replacement algorithm has a similar effect (e.g. if you run vi it may cause
your shell to lose pages, but will not affect other users). In per-process and per-job allocation, the allocations may change, but only slowly.
Thrashing: consider what happens when memory gets
overcommitted.
- Suppose there are many users, and that between them their
processes are making frequent references to 50 pages, but memory has 40 pages.
- Each time one page is brought in, another page, whose contents
will soon be referenced, is thrown out.
- Compute average memory access time.
- The system will spend all of its time reading and writing pages.
It will be working very hard but not getting anything done.
- Thrashing was a severe problem in early demand paging systems.
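The "compute average memory access time" step can be worked with illustrative numbers (the timings and fault rate below are assumptions for the example, not from the notes):

```python
# Assume a 100 ns memory access, a 10 ms page-fault service time, and that
# 10 of the 50 hot pages are always missing, so about 1 in 5 references faults.
t_mem = 100e-9        # seconds per ordinary memory access
t_fault = 10e-3       # seconds to service a page fault (disk I/O)
p = 10 / 50           # probability a reference faults

avg = (1 - p) * t_mem + p * (t_fault + t_mem)
print(avg)            # about 2 ms per reference: ~20,000x slower than memory
```

Even a modest fault rate makes the effective access time track the disk, not memory, which is why a thrashing system appears busy while accomplishing almost nothing.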
Thrashing occurs because the system does not know
when it has taken on more work than it can handle. LRU mechanisms order pages in terms of last access, but do not give absolute numbers indicating pages that
must not be thrown out.
What can be done?
- If a single process is too large for memory, there is nothing the
OS can do. That process will simply thrash.
- If the problem arises because of the sum of several processes:
- Figure out how much memory each
process needs.
- Change scheduling priorities to run processes in
groups whose memory needs can be satisfied.
Working sets
Working Sets are a solution proposed by Peter
Denning. An informal definition is "the collection of pages that a process isworking with, and which must thus be resident if the process is to avoid
thrashing." The idea is to use the recent needs of a process to predict itsfuture needs.
- Choose tau, the working set parameter. At any given time, all
pages referenced by a process in its last tau seconds of execution are considered to comprise its working set.
- A process will never be executed unless its working set is
resident in main memory. Pages outside the working set may be discarded at any time.
Working sets are not enough by themselves to make
sure memory does not get overcommitted. We must also introduce the idea of a balance set:
- If the sum of the working sets of all runnable processes is
greater than the size of memory, then refuse to run some of the processes (for a while).
- Divide runnable processes up into two groups: active and inactive.
When a process is made active, its working set is loaded; when it is made inactive, its working set is allowed to migrate back to disk. The collection of
active processes is called the balance set.
- Some algorithm must be provided for moving processes into and out
of the balance set. What happens if the balance set changes too frequently?
As working sets change, corresponding changes will
have to be made in the balance set.
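One plausible admission policy is to greedily activate processes while their working sets still fit in memory. This is only a sketch; the notes do not specify an algorithm, and a real scheduler would also weigh priorities and fairness, not just size.

```python
def balance_set(ws_sizes, mem_frames):
    """Greedy sketch: admit smallest working sets first while they fit.

    ws_sizes: {pid: working-set size in frames} (illustrative format).
    Returns the set of pids admitted to the balance set.
    """
    active, free = set(), mem_frames
    for pid, need in sorted(ws_sizes.items(), key=lambda kv: kv[1]):
        if need <= free:
            active.add(pid)
            free -= need
    return active

print(sorted(balance_set({'vi': 15, 'cc': 30, 'sh': 5}, mem_frames=40)))
```

Here the compiler's 30-frame working set is refused because only 20 frames remain after vi and the shell are admitted; it waits until the balance set changes.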
Problem with the working set: must constantly be
updating working set information.
- One of the initial plans was to store some sort of a capacitor
with each memory page. The capacitor would be charged on each reference, then would discharge slowly if the page was not referenced. Tau would be determined
by the size of the capacitor. This was not actually implemented. One problem is that we want separate working sets for each process, so the capacitor should
only be allowed to discharge when a particular process executes. What if a page is shared?
- Actual solution: take advantage of use bits
- OS maintains an idle time value for each page: the amount of CPU
time received by the process since the last access to the page.
- Every once in a
while, scan all pages of a process. For each page with its use bit on, clear the page's idle time. For each page with its use bit off, add the process's CPU time (since the last scan) to its idle time. Turn all
use bits off during the scan.
- Scans happen on order of every few
seconds (in Unix, tau is on the order of a minute or more).
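The scan described above can be sketched as follows (the field names are invented for illustration; real systems keep this state in per-page kernel structures):

```python
def scan(pages, cpu_since_last_scan):
    """One use-bit scan over a process's pages.

    pages: list of dicts with 'use_bit' and 'idle' keys (illustrative).
    """
    for p in pages:
        if p['use_bit']:
            p['idle'] = 0                      # referenced since last scan
        else:
            p['idle'] += cpu_since_last_scan   # still untouched: keep aging
        p['use_bit'] = False                   # rearm for the next interval

pages = [{'use_bit': True, 'idle': 7}, {'use_bit': False, 'idle': 2}]
scan(pages, cpu_since_last_scan=3)
print([p['idle'] for p in pages])   # [0, 5]
```

A page whose idle time exceeds tau is, by definition, outside the working set and may be discarded.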
Other questions about working sets and memory
management in general:
- What should tau be?
- What if it is too large?
- What if it is too
small?
- What algorithms should be used to determine which processes are in
the balance set?
- How do we compute working sets if pages are shared?
- How much memory is needed in order to keep the CPU busy? Note that
under working set methods the CPU may occasionally sit idle even though there are runnable processes.