« Back to James's CS61A page. CS61A: Spring 2000

Week 10 Discussion/Lab Notes


Administrivia


Discussion Outline


(Concurrency is my least favorite topic of this course. In addition to reading the lecture notes, I suggest reading Benson Limketkai's and Wilson Cheung's concurrency notes.)


Dining Philosophers

There are numerous Java applets on the web that graphically demonstrate Dijkstra's famous dining philosophers problem.

A common attempt to solve the dining philosophers problem is to merely have the philosophers take turns eating in an orderly fashion; for example, number the philosophers in order, let the odd-numbered ones eat for some time, and then let the even-numbered ones eat...

This is an unsatisfactory solution because it is inefficient. What if some of the philosophers aren't hungry when it's their turn to eat? You can't force them! And what if some of the philosophers are hungry when it's not their turn?
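A better-known fix (sketched here, not from the lecture notes) is to break the symmetry with resource ordering: number the chopsticks and have every philosopher pick up the lower-numbered of their two chopsticks first. Assuming a make-mutex like the one in SICP section 3.4.2 and five philosophers, the idea looks roughly like this:

   (define chopsticks (list (make-mutex) (make-mutex) (make-mutex)
                            (make-mutex) (make-mutex)))

   (define (chopstick n) (list-ref chopsticks n))

   (define (eat i)  ;; philosopher i, numbered 0 through 4
     (let* ((left i)
            (right (modulo (+ i 1) 5))
            (first (min left right))    ;; always grab the lower-numbered
            (second (max left right)))  ;; chopstick first
       ((chopstick first) 'acquire)
       ((chopstick second) 'acquire)
       ;; ... eat ...
       ((chopstick second) 'release)
       ((chopstick first) 'release)))

Because no circular chain of "holds one, waits for the other" can form when everyone acquires in the same global order, this avoids deadlock without forcing anyone to eat on a schedule.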


Serializers

Remember that every time we call make-serializer, we create a new, unique serializer! Each serializer is independent of other serializers. Processes protected with different serializers may still be interleaved. For example:

   (define s1 (make-serializer))
   (define s2 (make-serializer))

   (define x 0)
   (define (incrx) (set! x (+ x 1)))

   (parallel-execute (s1 incrx)  ;; WRONG
                     (s2 incrx)) ;; s1 and s2 do not protect against each
                                 ;; other and are useless here!
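To actually protect the two calls against each other, both must be wrapped by the *same* serializer:

   (define s1 (make-serializer))

   (define x 0)
   (define (incrx) (set! x (+ x 1)))

   (parallel-execute (s1 incrx)  ;; RIGHT: both procedures go through the
                     (s1 incrx)) ;; same serializer, so the two increments
                                 ;; cannot interleave

Now x is guaranteed to end up as 2, because the second increment cannot begin until the first has finished.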

Can we create serializers with just software? As exercise 3.46 demonstrates, without an atomic test-and-set, two processes running in parallel can both acquire the same mutex at once! Thus, although software-only serialization can reduce the probability of concurrency problems occurring, it cannot eliminate them completely.

Therefore, in order to implement serializers correctly, we must rely on specialized hardware support. Specifically, we need the hardware to provide an atomic instruction (i.e., an indivisible instruction that cannot be interrupted partway through) for acquiring mutexes.
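For reference, here is a mutex sketch along the lines of SICP section 3.4.2. Everything hinges on test-and-set! testing the cell and setting it in one indivisible step; the plain Scheme version below is NOT actually atomic unless the hardware (or the interpreter) guarantees it, which is exactly the point:

   (define (clear! cell) (set-car! cell false))

   ;; Must be ATOMIC: test the cell and set it in one indivisible step.
   ;; In real systems this is a hardware instruction, not plain Scheme.
   (define (test-and-set! cell)
     (if (car cell)
         true
         (begin (set-car! cell true)
                false)))

   (define (make-mutex)
     (let ((cell (list false)))
       (define (the-mutex m)
         (cond ((eq? m 'acquire)
                (if (test-and-set! cell)
                    (the-mutex 'acquire)))  ;; busy: retry until free
               ((eq? m 'release) (clear! cell))))
       the-mutex))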


Implementation: Why All the (lambda () ...)'s?

Here's a typical example of using parallel-execute:

   (define x 0)
   (define y 0)

   (parallel-execute (lambda () (set! x (+ x 1)) )
                     (lambda () (set! y (+ y 1)) ))

Some people wonder why we need the (lambda () ...) business. First, let's think about the evaluation rules for lambda:

We know that lambda is a special form: it evaluates neither its first subexpression (the list of parameters) nor the remaining subexpressions (the body of the procedure). The body is not evaluated until we explicitly invoke the resulting procedure.

Thus, lambda has the property of not evaluating its arguments until we explicitly want it to. This is a useful technique for delaying the evaluation of expressions! We'll see much more of this next week.
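A tiny example makes the delay visible:

   (define p (lambda () (/ 1 0)))  ;; no error here: the body is not
                                   ;; evaluated when the lambda is created
   (p)                             ;; only NOW does the division -- and
                                   ;; the divide-by-zero error -- happen

If Scheme evaluated the body eagerly, the define itself would blow up.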

It should be clear why parallel-execute requires delayed expressions. Consider:

   (define x 0)
   (define y 0)

   (parallel-execute (set! x (+ x 1))   ;; WRONG
                     (set! y (+ y 1)))  ;; WRONG

Is parallel-execute a special form? No, it is an ordinary procedure, so when we type in the above expression, the set! expressions are evaluated first: x and y are incremented before parallel-execute is even invoked, and parallel-execute receives the values 1 and 1 rather than procedures it can run. This defeats the whole point of using parallel-execute!

What about serializers? For example:

   (define s1 (make-serializer))

   (define x 0)

   (parallel-execute (s1 (lambda () (set! x (+ x 1))) )
                     (s1 (lambda () (set! x (+ x 1))) ))

Is s1 a special form? No, it is an ordinary procedure, and just as with parallel-execute, serializers must take procedures as arguments in order to delay evaluation.
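Seeing how make-serializer can be implemented (again following SICP section 3.4.2, assuming the make-mutex described earlier) makes this concrete: a serializer takes a procedure p and returns a new procedure that acquires the mutex, runs p, and releases the mutex. It needs p as an unevaluated procedure so that the acquire can happen *before* p's body runs:

   (define (make-serializer)
     (let ((mutex (make-mutex)))
       (lambda (p)
         (define (serialized-p . args)
           (mutex 'acquire)
           (let ((val (apply p args)))
             (mutex 'release)
             val))
         serialized-p)))

If we handed the serializer an already-evaluated expression instead of a procedure, there would be nothing left for it to protect.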


« Week 9 Week 11 »

Last Modified: Tuesday, 30-Dec-2014 11:58:34 PST