In the early days of computing, the instruction memories of mainframes were incredibly small by today's standards: hundreds or thousands of bytes. This small capacity put a premium on making every instruction count and every stored data value useful. Fortunately, just as processors have become millions of times faster, program memories have grown by a similar factor. Still, there are general practices to keep in mind when using program memory, and smaller platforms such as microcontrollers remain limited to program and data memory measured in kilobytes. This module explains the kinds of memory, some common memory organizations, and the basics of conserving memory.
So far in this module we have referred to the program memory of a computer (the RAM of a PC) as a single resource, but most memory architectures divide it into sections. The basic principle is that a smaller section of memory is easier to access and manage than one large, undifferentiated memory. Clever restrictions on how each section may be used also let the system designer improve performance, and strict divisions between sections are essential for compilers to lay out a program's memory correctly.
Instruction memory is a region of memory reserved for the actual assembly code of the program. This memory may carry restrictions on how it can be written or accessed, because the code of a program is not expected to change frequently. Because the size of the instruction memory is known when the program is compiled (at compile time), this section can be set apart by hardware, by software, or by a combination of the two.
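Because everything in this region is fixed at compile time, small platforms often exploit it to hold constant data as well as code. The sketch below is illustrative rather than universal: on many embedded toolchains a table declared static const is linked into the same read-only flash region as the instructions, so it consumes no RAM, but the exact placement depends on the compiler and linker script.

#include <stdint.h>

/* On many embedded toolchains, "static const" data is placed in the
   same read-only region that holds the instructions, so this table
   costs no RAM. Check the linker map to confirm the placement. */
static const uint8_t sine_table[256] = {
    128, 131, 134, 137, /* ... remaining samples omitted ... */
};

uint8_t sine_lookup(uint8_t phase)
{
    return sine_table[phase];
}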
Data memory is a region where the temporary variables, arrays, and other information used by a program can be stored without resorting to the hard disk or other long-term storage. This is the section of memory that allocations are drawn from when a program needs more room for its data structures.
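As a small C sketch of the distinction, the array below is reserved in data memory before the program ever runs, while the buffer is requested from the allocator only once the program knows how much it needs; the function names here are ours, but malloc is the standard C allocation call.

#include <stdlib.h>

/* Reserved in the data section at compile time: the linker sets
   aside these 64 integers before the program starts. */
static int samples[64];

int first_sample(void)
{
    return samples[0]; /* reads from statically reserved data memory */
}

/* Requested at run time, when the size is not known until the
   program is already executing. */
int *make_buffer(size_t n)
{
    return malloc(n * sizeof(int)); /* NULL if memory is exhausted */
}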
Heap memory is an internal memory pool from which tasks dynamically allocate memory as needed. A closely related region, the stack, is used when functions must be put on hold and their data saved: as functions call other functions, the new (callee) function's data must be loaded into the CPU, and the previous (caller) function's data is pushed onto the stack. The deeper the function calls nest, the larger the stack portion of memory needs to be.
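A short C example makes this cost visible. Each call to the function below pushes a new frame holding the caller's state, so recursing n levels deep requires roughly n frames of stack space at the same time.

/* Each nested call adds a frame (return address, saved registers,
   the local n) that must stay live until the callee returns. */
unsigned depth(unsigned n)
{
    if (n == 0)
        return 0;            /* deepest point: stack usage peaks here */
    return 1 + depth(n - 1); /* caller's frame remains on the stack */
}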
Often, the stack and the data memory compete directly for space while the program is running, because both the depth of the function calls and the size of the program's data can fluctuate with the situation. This is why it is important to return any memory a task allocates to the memory pool when the task is finished with it.
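In C this discipline is the familiar pairing of malloc with free. The function below is a hypothetical task that borrows a buffer and hands it back before finishing; a forgotten free leaks the block for the life of the program, which can be fatal on a device with only a few kilobytes of RAM.

#include <stdlib.h>
#include <string.h>

void process_message(const char *msg)
{
    size_t len = strlen(msg) + 1;
    char *copy = malloc(len);      /* borrow a block from the pool */
    if (copy == NULL)
        return;                    /* pool exhausted */
    memcpy(copy, msg, len);
    /* ... work on the private copy ... */
    free(copy);                    /* return the block to the pool */
}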
The organization of memory varies among compilers and programming languages. In most cases, the goal of a memory management system is to make the limited resource of memory appear infinite, or at least more abundant than it really is, and to free the application programmer from worrying about where memory will come from. In the oldest days of mainframes, when each byte of memory was precious, a programmer might account for every address by hand to ensure there was enough room for the instructions, the stack, and the data. As programming languages and compilers matured, algorithms were developed to handle this bookkeeping so that the computer could manage its own memory.
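To give a feel for what such an algorithm does, here is a minimal sketch of a "bump" allocator, the simplest bookkeeping a memory manager can perform. The pool size and names are ours, and a real allocator would add alignment and the ability to free and recycle individual blocks.

#include <stddef.h>

#define POOL_SIZE 1024

static unsigned char pool[POOL_SIZE]; /* the fixed memory resource */
static size_t next_free = 0;          /* bookkeeping: first unused byte */

void *pool_alloc(size_t n)
{
    if (n > POOL_SIZE - next_free)
        return NULL;                  /* pool exhausted */
    void *block = &pool[next_free];
    next_free += n;                   /* "bump" past the handed-out slice */
    return block;
}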