One problem with recursive procedures is that as the number of recursive calls increases, the amount of memory needed can also increase. Each time a procedure is invoked, memory must be allocated for it, such as for storing its local variables.
There is, however, a special class of recursive procedures, known as tail-recursive procedures, that avoids this problem.
A tail-recursive procedure is a procedure that invokes itself as its last step. Tail-recursive procedures do not have any operations to perform after recursing and therefore do not need extra memory for any pending operations.
In contrast, recursive procedures that are not tail-recursive do have pending operations after their recursive calls. Therefore they do require extra memory to "remember" what operations are left to perform.
Tail-recursive procedures are also known as iterative procedures. During each loop, or iteration, we keep track of our result as we progress.
We can keep track of our running result in an iterative process by passing it as an extra argument during each iteration. For example:
(define (factorial n result)
  (if (= n 0)
      result
      (factorial (- n 1) (* result n))))
For the initial call, we would use 1 for result. From an interface perspective, asking people to pass in this extra argument themselves is troublesome and error-prone, so we typically encapsulate the iterative procedure itself inside another procedure for convenience:
(define (factorial n)
  (define (fact-helper i result)
    (if (= i 0)
        result
        (fact-helper (- i 1) (* result i))))
  (fact-helper n 1))
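As a quick check of our own (this example call is not from the original notes):

(factorial 5)   ; => 120, via (fact-helper 5 1), (fact-helper 4 5), ..., (fact-helper 0 120)

Because the recursive call to fact-helper is the very last thing the procedure does, the interpreter never has to remember any pending work, so the whole computation can run in constant space.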
Recursive and iterative processes typically fold in different directions. The clearest way I can explain this is probably with an example. Consider a recursive version of factorial:
(define (factorial n)
  (if (= n 0)
      1
      (* n (factorial (- n 1)))))
Suppose we call (factorial 5). The multiplications are deferred until the recursion reaches its base case, at which point we have built up the expression:
(* 5 (* 4 (* 3 (* 2 (* 1 1)))))
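Step by step, the recursive process looks something like this (my own reconstruction of the substitution-model trace, following the definition above):

(factorial 5)
(* 5 (factorial 4))
(* 5 (* 4 (factorial 3)))
(* 5 (* 4 (* 3 (factorial 2))))
(* 5 (* 4 (* 3 (* 2 (factorial 1)))))
(* 5 (* 4 (* 3 (* 2 (* 1 (factorial 0))))))
(* 5 (* 4 (* 3 (* 2 (* 1 1)))))
(* 5 (* 4 (* 3 (* 2 1))))
(* 5 (* 4 (* 3 2)))
(* 5 (* 4 6))
(* 5 24)
120

Notice how the chain of pending multiplications grows until the base case is reached; that chain is exactly the extra memory the recursive process needs.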
Now, contrast that with the steps the iterative version performs:
(fact-helper 5 1)
(fact-helper 4 (* 1 5))    ;; 5
(fact-helper 3 (* 5 4))    ;; (* 5 4)
(fact-helper 2 (* 20 3))   ;; (* (* 5 4) 3)
(fact-helper 1 (* 60 2))   ;; (* (* (* 5 4) 3) 2)
(fact-helper 0 (* 120 1))  ;; (* (* (* (* 5 4) 3) 2) 1)
On the right side are the expanded expressions for result. As we can see, although the iterative version and the recursive version share the same basic structure, they group the multiplications in opposite directions. In the recursive process, the first operation encountered, (* 5 ...), is the last one actually performed: nothing is computed until the recursion reaches its base case, and the multiplications nest to the right. In the iterative process, the operations are performed at each stage, computing the result as we go, so the multiplications nest to the left.
What's the big deal? In this case, it doesn't matter, because multiplication is associative and the end result will be the same in either case. In general, however, our operations may not be associative, so you should keep this in mind.
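To make this concrete, here is a small sketch of my own (not from the original notes) that applies the same two process shapes to subtraction, which is not associative:

;; Recursive process: folds to the right,
;; e.g. (sub-down 5) computes (- 5 (- 4 (- 3 (- 2 (- 1 0))))).
(define (sub-down n)
  (if (= n 0)
      0
      (- n (sub-down (- n 1)))))

;; Iterative process: folds to the left,
;; e.g. (sub-down-iter 5) computes (- (- (- (- (- 0 5) 4) 3) 2) 1).
(define (sub-down-iter n)
  (define (helper i result)
    (if (= i 0)
        result
        (helper (- i 1) (- result i))))
  (helper n 0))

Here (sub-down 5) evaluates to 3, while (sub-down-iter 5) evaluates to -15: with a non-associative operation, the direction of folding changes the answer.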
Order of growth is a rough description of how a program's resource requirements increase as the size of its input increases. We measure the time complexity of a program by approximating the number of steps necessary to compute the result. We measure space complexity by approximating the amount of memory necessary.
In this course, we will be using Theta notation to measure order of growth. For some procedure, consider graphing its running time against the size of its input. As the input size gets large, what mathematical function does the graph approximate?
More technically speaking, we say a procedure's running time R(n) is Θ(f(n)) if there are positive constants k1 and k2 such that k1·f(n) ≤ R(n) ≤ k2·f(n) for all sufficiently large n; in other words, for large inputs the running time is sandwiched between two constant multiples of f(n).
In most other CS courses, you will use big-O notation instead. Big-O promises only an upper bound on the growth, whereas Theta bounds the growth from both above and below.
Whatever notation we use, we are performing an approximation. We are interested in the dominant behavior of a program, so we can generally ignore constant multiples and lower-order terms. For example, a procedure that takes 3n² + 10n + 5 steps on an input of size n has order of growth Θ(n²).
A rough test to measure order of growth is to ask: if we double the size of the input, how does it affect the time/space requirements to compute the result? Do they double? Do they quadruple? Do they increase by some constant amount?
We must be careful to specify, however, exactly what we mean by the "size" of the input. How do we measure the size of the input? What does "doubling" the size of the input mean? If we are dealing with numbers, are we merely multiplying them by a factor of 2? Or are we doubling the number of bits necessary? Saying that an algorithm has linear growth is meaningless unless we know what we are measuring it with respect to.
We also must specify what a "step" is. For example, consider the problem of adding two numbers together. From one perspective, addition is one "step" and is performed in constant time. From another perspective, at the machine level, a "step" may be defined to be adding two bits together. Thus, at the machine level, adding two numbers takes a number of steps proportional to the number of bits involved, not a constant number of steps.
In most cases, these definitions should be fairly obvious from the context. We will not, for example, need to take into consideration individual bitwise operations performed at the machine level. In most cases, exactly how we define our steps is not important either. Let's look at some more examples:
To add together two matrices, we must add together the corresponding individual elements.
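Here is a minimal Scheme sketch of the idea (my own illustration, not from the course code; matrix-add is a made-up name, and a matrix is represented as a list of rows):

(define (matrix-add a b)
  ;; Add two matrices of the same shape, each represented as a list of rows.
  ;; Every element of the result is the sum of the two corresponding elements,
  ;; so adding two n-by-n matrices performs n*n element additions.
  (map (lambda (row-a row-b) (map + row-a row-b))
       a b))

;; (matrix-add '((1 2) (3 4)) '((5 6) (7 8)))  =>  ((6 8) (10 12))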
To add together two n×n matrices requires n² additions. If we doubled the dimensions of the matrices, each would contain (2n)² = 4n² elements, so the number of additions would quadruple; matrix addition therefore has order of growth Θ(n²) in the dimension n.
Another example of a