Once the intermediate language is broken into basic blocks, there are a number of optimizations that can be performed on the code in these blocks. Some optimizations are very simple and affect a few tuples within a basic block. Other optimizations move code from one basic block to another without altering the program results. For example, it is often valuable to move a computation from the body of a loop to the code immediately preceding the loop.
In this section, we are going to list classical optimizations by name and tell you what they are for. We're not suggesting that you make these changes yourself; most compilers since the mid-1980s automatically perform these optimizations at all but their lowest optimization level. As we said at the start of the chapter, if you understand what the compiler can (and can't) do, you will become a better programmer because you will be able to play to the compiler's strengths.
To start, let's look at a technique for untangling calculations. Take a look at the following segment of code: notice the two computations involving X.
X = Y
Z = 1.0 + X
As written, the second statement requires the results of the first before it can proceed; you need X to calculate Z. Unnecessary dependencies could translate into a delay at runtime.
This code is an example of a flow dependence. I describe dependencies in detail in [link]. With a little bit of rearrangement we can make the second statement independent of the first, by propagating a copy of Y. The new calculation for Z uses the value of Y directly:
X = Y
Z = 1.0 + Y
Notice that we left the first statement, X = Y, intact. You may ask, “Why keep it?” The problem is that we can't tell whether the value of X is needed elsewhere. That is something for another analysis to decide. If it turns out that no other statement needs the new value of X, the assignment is eliminated later by dead code removal.
A clever compiler can find constants throughout your program. Some of these are “obvious” constants like those defined in parameter statements. Others are less obvious, such as local variables that are never redefined. When you combine them in a calculation, you get a constant expression. The little program below has two constants, I and K:
PROGRAM MAIN
INTEGER I,K
PARAMETER (I = 100)
K = 200
J = I + K
END
Because I and K are constant individually, the combination I+K is constant, which means that J is a constant too. The compiler reduces constant expressions like I+K into constants with a technique called constant folding.
How does constant folding work? The compiler examines every path along which a given variable could be defined en route to a particular basic block. If all of those paths lead back to the same value, the variable is a constant at that point, and every reference to it can be replaced with that constant. This replacement has a ripple-through effect: if the compiler finds itself looking at an expression that is made up solely of constants, it can evaluate the expression at compile time and replace it with a constant. After several iterations, the compiler will have located most of the expressions that are candidates for constant folding.
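To make the ripple-through effect concrete, here is a sketch of what the little program above might reduce to after constant folding. I is a parameter and K is assigned exactly once before its use, so I+K folds to 300; a later dead code pass might also remove the now-unneeded assignment to K. The exact result depends on the compiler; this is an illustration, not the literal output of any particular one:
PROGRAM MAIN
INTEGER I,K
PARAMETER (I = 100)
K = 200
J = 300
END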