Memory Management
Storage allocation
Information stored in memory is used in many
different ways. Some possible classifications are:
- Role in Programming Language:
  - Instructions (specify the operations to be performed and the operands to use in the operations).
  - Variables (the information that changes as the program runs: locals, owns, globals, parameters, dynamic storage).
  - Constants (information that is used as operands, but that never changes: pi for example).
- Changeability:
  - Read-only: (code, constants).
  - Read&write: (variables).
    Why is identifying non-changing memory useful or important?
- Initialized:
  - Code, constants, some variables: yes.
  - Most variables: no.
- Addresses vs. Data: Why is this distinction useful or important?
- Binding time:
  - Static: arrangement determined once and for all, before the program starts running. May happen at compile-time, link-time, or load-time.
  - Dynamic: arrangement cannot be determined until runtime, and may change.
Note that the classifications overlap: variables may
be static or dynamic, code may be read-only or read&write, etc.
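As a concrete illustration, here is a minimal C sketch (the names job_count, scratch, and area are made up for this example) showing how the classifications above apply to the pieces of an ordinary program:

    #include <stdlib.h>

    const double PI = 3.14159265358979;  /* constant: read-only, initialized, bound statically   */
    int job_count = 0;                   /* global variable: read & write, initialized, static   */
    int scratch[1024];                   /* global variable: no initializer in the source        */

    double area(double r)                /* the instructions of area() are read-only code        */
    {
        double a = PI * r * r;           /* local variable: bound dynamically, lives on the stack */
        return a;
    }

    int main(void)
    {
        /* dynamic storage: its size and address are not known until runtime */
        double *table = malloc(100 * sizeof *table);
        if (table == NULL)
            return 1;
        table[0] = area(2.0);
        free(table);
        return 0;
    }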
The compiler, linker, operating system, and run-time
library all must cooperate to manage this information and perform allocation.
When a process is running, what does its memory look
like? It is divided up into areas that the OS treats similarly, called segments. In Unix, each process has several segments:
- Code (called "text" in Unix terminology)
- Initialized data
- Uninitialized data
- User's dynamically linked libraries (shared objects (.so) or
dynamically linked libraries (.dll))
- Shared libraries (system dynamically linked libraries)
- Mapped files
- Stack(s)
In some systems, a process can have many different kinds of
segments.
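One way to get a feel for this layout is to print the addresses of objects that should land in different segments. The sketch below assumes a typical Unix layout; the exact addresses, and even the relative ordering of the segments, vary from system to system (and casting a function pointer to void * for printing is a common but not strictly portable idiom):

    #include <stdio.h>
    #include <stdlib.h>

    int initialized_global = 42;   /* initialized data segment           */
    int uninitialized_global;      /* uninitialized data ("bss") segment */

    int main(void)                 /* code ("text") segment              */
    {
        int local = 0;                        /* stack                   */
        int *heap = malloc(sizeof *heap);     /* dynamically allocated   */

        printf("code:               %p\n", (void *) main);
        printf("initialized data:   %p\n", (void *) &initialized_global);
        printf("uninitialized data: %p\n", (void *) &uninitialized_global);
        printf("heap:               %p\n", (void *) heap);
        printf("stack:              %p\n", (void *) &local);

        free(heap);
        return 0;
    }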
One of the steps in creating a process is to load its
information into main memory, creating the necessary segments. The information comes from a file that gives the size and contents of each segment (e.g. a.out in
Unix). This file is called an object file. See man 5 a.out for the format of Unix object files.
Division of responsibility among the various parts
of the system:
- Compiler: generates one object file for each source code file
containing information for that file. Information is incomplete, since each source file generally uses some things defined in other source files.
- Linker: combines all of the object files for one program into a
single object file, which is complete and self-sufficient.
- Operating system: loads object files into memory, allows several
different processes to share memory at once, provides facilities for processes to get more memory after they have started running.
- Run-time library: provides dynamic allocation routines, such as
calloc and free in C.
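For instance, a C program obtains and releases dynamic storage through these library routines; a minimal sketch:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        size_t n = 16;

        /* calloc returns a zero-filled block big enough for n ints,
           or NULL if the request cannot be satisfied */
        int *counts = calloc(n, sizeof *counts);
        if (counts == NULL) {
            fprintf(stderr, "out of memory\n");
            return 1;
        }

        counts[0] = 7;                            /* use it like an ordinary array */
        printf("%d %d\n", counts[0], counts[1]);  /* prints "7 0"                  */

        free(counts);                             /* give the block back           */
        return 0;
    }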
Dynamic memory allocation
Why isn't static allocation sufficient for
everything? Unpredictability: it is impossible to predict ahead of time how much memory, or in what form, will be needed:
- Recursive procedures. Even regular procedures are hard to predict
(data dependencies).
- OS does not know how many jobs there will be or which programs
will be run.
- Complex data structures, e.g. linker symbol table. If all storage
must be reserved in advance (statically), then it will be used inefficiently (enough will be reserved to handle the worst possible case).
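A sketch of the symbol-table situation, using made-up names (struct table, table_add): the table grows as entries arrive, so the program only ever holds about as much memory as the data it has actually seen, instead of a statically reserved worst case:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* A table whose final size is unknown until runtime. */
    struct table {
        char   **entries;
        size_t   used;
        size_t   capacity;
    };

    /* Append a copy of name, doubling the array whenever it fills up. */
    static int table_add(struct table *t, const char *name)
    {
        if (t->used == t->capacity) {
            size_t new_cap = t->capacity ? 2 * t->capacity : 8;
            char **grown = realloc(t->entries, new_cap * sizeof *grown);
            if (grown == NULL)
                return -1;
            t->entries = grown;
            t->capacity = new_cap;
        }
        char *copy = malloc(strlen(name) + 1);
        if (copy == NULL)
            return -1;
        strcpy(copy, name);
        t->entries[t->used++] = copy;
        return 0;
    }

    int main(void)
    {
        struct table symbols = { NULL, 0, 0 };
        const char *names[] = { "main", "printf", "errno" };

        for (size_t i = 0; i < sizeof names / sizeof names[0]; i++)
            if (table_add(&symbols, names[i]) != 0)
                return 1;

        printf("stored %zu symbols\n", symbols.used);

        for (size_t i = 0; i < symbols.used; i++)
            free(symbols.entries[i]);
        free(symbols.entries);
        return 0;
    }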