The following code segment traverses a pointer chain:
while ((p = *(char **) p) != NULL);
How will this code interact with the cache if all the references fall within a small portion of memory? How will it interact with the cache if the references are stretched across many megabytes?
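One way to set this experiment up in C (the buffer size and the 64-byte node spacing below are illustrative assumptions, not part of the exercise) is to thread a chain of pointers through a single buffer and then chase it:

#include <stdlib.h>

#define STRIDE 64                      /* assumed node spacing: one cache line on many systems */

int main(void)
{
    size_t span = 64u * 1024 * 1024;   /* chain footprint in bytes; vary from KB up to many MB */
    char *buf = malloc(span);
    char *p;
    size_t i;

    /* Thread the chain through the buffer: each node stores the address
       of the next node, and the last node stores NULL. */
    for (i = 0; i + STRIDE < span; i += STRIDE)
        *(char **)(buf + i) = buf + i + STRIDE;
    *(char **)(buf + i) = NULL;

    /* The traversal from the exercise; wrap it in a timer to measure. */
    p = buf;
    while ((p = *(char **) p) != NULL)
        ;

    free(buf);
    return 0;
}

Shrinking span keeps every reference in cache; growing it forces misses. Note that a hardware prefetcher may hide the misses for this sequential layout; linking the nodes in random order defeats it.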
How would the code in the previous exercise behave on a multibanked memory system that has no cache?
A long time ago, people regularly wrote self-modifying code — programs that wrote into instruction memory and changed their own behavior. What would be the implications of self-modifying code on a machine with a Harvard memory architecture?
Assume a memory architecture with an L1 cache speed of 10 ns, L2 speed of 30 ns, and memory speed of 200 ns. Compare the average memory system performance when (1) 80% of references hit in L1, 10% hit in L2, and 10% go to memory; and (2) 85% hit in L1 and 15% go to memory.
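As a check on the arithmetic, the sketch below computes the hit-rate-weighted average access time. It assumes each reference is serviced entirely at the level where it hits; if your course treats miss penalties as additive, adjust the model accordingly.

#include <stdio.h>

int main(void)
{
    /* Latencies in nanoseconds, from the exercise. */
    double l1 = 10.0, l2 = 30.0, mem = 200.0;

    /* Case 1: 80% L1, 10% L2, 10% memory. */
    double t1 = 0.80 * l1 + 0.10 * l2 + 0.10 * mem;

    /* Case 2: 85% L1, 15% memory. */
    double t2 = 0.85 * l1 + 0.15 * mem;

    printf("case 1: %.1f ns   case 2: %.1f ns\n", t1, t2);
    return 0;
}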
On a computer system, run loops that process arrays of varying length, from 16 to 16 million elements:
ARRAY(I) = ARRAY(I) + 3
How does the number of additions per second change as the array length changes? Experiment with REAL*4, REAL*8, INTEGER*4, and INTEGER*8.
Which has a more significant impact on performance: larger array elements, or the choice between integer and floating-point arithmetic? Try this on a range of different computers.
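A possible C harness for this experiment follows. The repetition counts and the clock()-based timer are choices of convenience, and the TYPE macro stands in for the FORTRAN declarations above.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define TYPE double        /* try float, double, int, long for REAL*4, REAL*8, INTEGER*4, INTEGER*8 */

int main(void)
{
    for (size_t n = 16; n <= 16u * 1024 * 1024; n *= 4) {
        TYPE *a = calloc(n, sizeof(TYPE));
        size_t reps = (64u * 1024 * 1024) / n;     /* keep total additions roughly constant */
        if (reps == 0)
            reps = 1;

        /* Compile at a moderate optimization level; an aggressive compiler
           may notice the result is unused and simplify the loop away. */
        clock_t t0 = clock();
        for (size_t r = 0; r < reps; r++)
            for (size_t i = 0; i < n; i++)
                a[i] = a[i] + 3;                   /* ARRAY(I) = ARRAY(I) + 3 */
        double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;

        if (secs > 0)
            printf("n = %9zu  additions/sec = %.3g\n", n, (double)n * reps / secs);
        free(a);
    }
    return 0;
}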
Create a 1024×1024 two-dimensional array. Loop through the array with rows as the inner loop and then again with columns as the inner loop. Perform a simple operation on each element. Do the loops perform differently? Why? Experiment with different dimensions for the array and observe the performance impact.
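In C, which stores rows contiguously (the opposite of FORTRAN's column-major order), the two loop orders look like this sketch; the 1024×1024 size and the simple add come from the exercise, the timer is an assumption:

#include <stdio.h>
#include <time.h>

#define N 1024
static double a[N][N];

int main(void)
{
    clock_t t0;

    /* Unit-stride order: the last subscript varies fastest,
       matching C's row-major storage. */
    t0 = clock();
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            a[i][j] += 3.0;
    printf("row-wise:    %.3f s\n", (double)(clock() - t0) / CLOCKS_PER_SEC);

    /* Strided order: the first subscript varies fastest, so consecutive
       references are N elements apart in memory. */
    t0 = clock();
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            a[i][j] += 3.0;
    printf("column-wise: %.3f s\n", (double)(clock() - t0) / CLOCKS_PER_SEC);

    return 0;
}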
Write a program that repeatedly executes timed loops of different sizes to determine the cache size for your system.
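A minimal sketch of the usual approach, with the size range, stride, and reference count as assumptions: sweep working sets of increasing size and watch for jumps in the time per reference as each cache level overflows.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    const size_t total = 16u * 1024 * 1024;        /* references per working-set size */

    /* Working sets from 1 KB to 64 MB, doubling each time. */
    for (size_t bytes = 1024; bytes <= 64u * 1024 * 1024; bytes *= 2) {
        size_t n = bytes / sizeof(long);
        volatile long *a = malloc(bytes);          /* volatile keeps the loop from being removed */
        for (size_t i = 0; i < n; i++)
            a[i] = 0;

        size_t per_pass = n / 8;                   /* one reference per 64-byte line, assuming 8-byte longs */
        size_t passes = total / per_pass;
        if (passes == 0)
            passes = 1;

        clock_t t0 = clock();
        for (size_t k = 0; k < passes; k++)
            for (size_t i = 0; i < n; i += 8)
                a[i] += 1;
        double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;

        printf("%8zu KB  %7.2f ns/ref\n", bytes / 1024,
               1e9 * secs / ((double)passes * per_pass));
        free((void *)a);
    }
    return 0;
}

Plot the time per reference against the working-set size; the plateaus correspond to cache levels, and the transitions between them mark the approximate cache sizes.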