Writing Style Guide

This document is a rough guide to writing up solutions for the algorithms class. It was prepared by Chris Calabro.

Length

Write-ups should not be excessively long. Get to the point fast. A typical problem will require a write-up of 1-2 pages, but this is just an estimate, not a hard rule. If you can argue your point in 1 paragraph, that's even better. The hard rules are
  1. Did I say everything I needed to say? (answer the question asked and back up all claims)
  2. Did I say only what I needed to say? (discard unused claims and babble)
Of course if you have some amazing insight into the problem that is not asked for, like an optimality or tightness proof, feel free to share it. You will not lose points as long as the part that is asked for is efficient.

Writing Style

The key idea of the algorithm should be conveyed clearly. Following this should come algorithmic details, such as which data structures are involved. (For example, "To remove the least weighted edge in O(log n) time, we maintain a minheap.") Pseudocode should be provided where appropriate. The idea is that an algorithm should be conveyed several times, at increasing levels of detail.
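
As an illustration, the minheap detail above might be backed by a snippet like the following. (This is only a sketch in Python using the standard heapq module; the edge list is made up for illustration.)
  import heapq

  # (weight, edge) pairs; heapq orders tuples by their first component
  edges = [(4, ('a', 'b')), (1, ('b', 'c')), (3, ('a', 'c'))]
  heapq.heapify(edges)                 # build the minheap in O(n) time
  weight, edge = heapq.heappop(edges)  # remove the least weighted edge in O(log n) time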

Proofs must always be provided, at a level of detail sufficient to convince a peer. This may involve equations, induction, combinatorial reasoning, etc., and should be written in prose. Almost any proof in the textbook serves as a good model.

State any extra assumptions that you need. For example, "The graph G is represented as an array of adjacency lists, and we assume that the adjacency lists are sorted."
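
As a concrete (and invented) illustration of such an assumption, in Python the representation might look like this sketch:
  # G maps each vertex to its adjacency list, and each list is kept sorted
  # (this sortedness is exactly the extra assumption being stated)
  G = {
      'a': ['b', 'c'],
      'b': ['a', 'c'],
      'c': ['a', 'b'],
  }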

In this class we will be primarily interested in 3 properties of algorithms:

  1. correctness (meets specifications)
  2. complexity (efficiency in time, space, or other resources)
  3. optimality with respect to a certain class of algorithms

Correctness

Correctness of an algorithm is often (but not always) proved in 2 parts: first we show that it yields the correct output upon termination, and then we show that it terminates. This division often makes the proof easier. For example, consider the following pseudocode.
INPUT: a satisfiable conjunctive normal form formula F in the variables x1,...,xn
OUTPUT: an assignment to x=(x1,...,xn) such that F(x)=1

  if F(0)==1, return 0
  choose prime p>=2^n
  g=0
  do
    g++
    found=1
    for each prime q|p-1
      if g^((p-1)/q) mod p == 1
        found=0
        break
  until found
  x = 1
  while F(x mod 2^n)==0
    x = g * x mod p
  return x mod 2^n
It is clear that the output is correct if the algorithm terminates, but it is less clear that the algorithm terminates, so the majority of the effort in proving correctness will be in proving the latter. In this example, the first loop terminates because the multiplicative group mod p has a generator, and the final loop terminates because x runs through g^0, g^1, ..., g^(p-2) mod p, a strange permutation of all of the numbers in [1,p-1], and so will hit a solution if there is one.
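
For concreteness, here is one way the pseudocode might be rendered as runnable Python. (This is only a sketch: the helper names are invented, F is assumed to take the integer encoding of an assignment and return 0 or 1, and the trial-division routines are meant for small illustrative inputs only.)
  def is_prime(m):
      # trial-division primality test
      return m >= 2 and all(m % d for d in range(2, int(m ** 0.5) + 1))

  def prime_factors(m):
      # distinct prime factors of m, by trial division
      factors, d = set(), 2
      while d * d <= m:
          while m % d == 0:
              factors.add(d)
              m //= d
          d += 1
      if m > 1:
          factors.add(m)
      return factors

  def find_satisfying_assignment(F, n):
      # returns x in [0, 2^n) with F(x) == 1, assuming F is satisfiable
      if F(0) == 1:
          return 0
      p = 2 ** n                  # choose a prime p >= 2^n
      while not is_prime(p):
          p += 1
      g = 0
      while True:                 # find a generator g of the multiplicative group mod p
          g += 1
          if all(pow(g, (p - 1) // q, p) != 1 for q in prime_factors(p - 1)):
              break
      x = 1
      while F(x % 2 ** n) == 0:   # x walks through g^0, g^1, ..., g^(p-2) mod p
          x = g * x % p
      return x % 2 ** n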

By contrast, in the Ford-Fulkerson method for network flow, it is clear that the algorithm terminates but it is less clear that it is partially correct. Indeed, for most algorithms in this class, termination will be clear.

Complexity

The most typical measures of resource consumption for an algorithm are time and space, though sometimes we care about others, such as the number of key comparisons for a sorting algorithm. We usually care about worst-case resource consumption and occasionally expected resource consumption. (Expected values are taken over the random choices that the algorithm makes.) Usually, we do not assume any particular distribution of the input. We might as well assume that some adversary has access to the code of our algorithm and gives an input designed to thwart our algorithm.

Big-Oh class is usually sufficient, but occasionally we will want an exact quantity. All relevant parameters should be included in the expression. For example, Theta(m+n) time is required to add an m-bit number to an n-bit number. To see the lower bound, note that any algorithm must examine all of the bits. To see the upper bound, use the grade-school addition method (say, on a RAM model). If we know that m<=n, then we may simplify this to Theta(n).
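
To make the upper bound concrete, grade-school addition might be sketched as follows. (Python; the representation, with bits stored least significant first, and the function name are chosen only for illustration.)
  def add_bits(a, b):
      # grade-school addition of two bit lists, least significant bit first;
      # the single loop runs max(m, n) times, which is Theta(m + n)
      result, carry = [], 0
      for i in range(max(len(a), len(b))):
          bit_a = a[i] if i < len(a) else 0
          bit_b = b[i] if i < len(b) else 0
          total = bit_a + bit_b + carry
          result.append(total % 2)
          carry = total // 2
      if carry:
          result.append(carry)
      return result

  # example: 6 + 3 = 9, i.e. [0,1,1] + [1,1] gives [1,0,0,1]
  assert add_bits([0, 1, 1], [1, 1]) == [1, 0, 0, 1]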

Optimality

We will be concerned with property 3 (optimality) only occasionally; such results are hard to come by.

Giving credit

Give credit to any other students who contributed key ideas. Credit should also be given when using ideas from research papers, especially obscure ones. For example, "by a slight modification to the Agrawal-Kayal-Saxena algorithm..." You do not need to give credit to the professor, the TA, or the textbook (though possibly to the author of an algorithm appearing in the textbook!). Innocent citation mistakes will not be brutally penalized.

Minor Infractions

The following is a non-exhaustive list of mistakes that may cause the loss of 0 to 1 points.
  • Confusing O, Omega, o, omega, Theta. The easy way to remember is that these are like <=, >=, <, >, =, respectively. For example, "the algorithm uses O(n) time," or "any algorithm must use Omega(n) time."
  • Saying that an algorithm definitely terminates when it actually just terminates with probability 1. For example,
      do
        x=random 0 or 1
      until x=0
    
    The problem is that there is a nonempty, albeit measure 0, subset of the sample space where the algorithm does not terminate.
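
    For reference, the loop above in runnable form might look like this sketch (Python; the function name is invented):
      import random

      def flip_until_zero():
          # terminates with probability 1, but the all-ones sequence of flips,
          # though of probability 0, would keep it running forever
          flips = 0
          while True:
              flips += 1
              if random.randint(0, 1) == 0:
                  return flips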

Major Infractions

These may cause greater loss of credit.
  • Forgetting to prove a major claim
  • Proof incorrect
  • Proof too confusing to understand. Poor notation may cause this. For example, suppose you need to distinguish between the value of a variable x at the beginning of an algorithm and its value at step i. Use disambiguating notation in the proof, such as x_i.
  • Algorithm too suboptimal. Almost any problem can be solved inefficiently by brute force. Strive for better.
  • Too much slack in the analysis. For example, do not say that the algorithm takes time O(n^100) if it is more obviously O(n^2).
  • Algorithm unnecessarily complex

How to write a good proof

The following suggestions need not be followed rigorously, but may help you to improve clarity. Notice how good proof writing practice mirrors good programming practice.
  1. Develop the necessary notation and be consistent with it. Notation developed within a proof is usually considered local to the proof. If you need notation to have a more global scope, declare it more globally. (If G is a graph, let A(G) be its adjacency matrix.)
  2. State what you want to prove clearly and precisely. (G is connected iff the largest and second largest eigenvalues of A(G) are unequal)
  3. Write the word "proof."
  4. Prove it.
    1. If you are going to use a proof by contradiction, state the assumption which is supposed to cause the contradiction and clearly indicate that it is not to be believed outright, but that it will cause a contradiction later. For example, "To show that the number of primes is infinite, suppose indirectly that p_1 < ... < p_n are the only primes. (notice the use of the word "indirectly") Then p_1*...*p_n+1 is greater than 1 and so divisible by some prime, i.e., by one of them, say p_i. But then 1 is divisible by p_i, a contradiction. So the number of primes must be infinite."
    2. If you use induction and the induction assumption is not obvious, state it. Sometimes, to prove a statement A by induction, we need to prove a stronger statement B or write the claim in a clever way to allow the induction to work. This often occurs when proving loop invariants. For example, "We prove by induction on the size n of the larger of the 2 graphs that the algorithm uses only n comparisons."
    3. When using induction, prove the base cases. If these are truly trivial, it is permissible to state this, e.g. "The n=0,1,2 cases are all trivial." If a case requires checking nothing, for example if some assertion is made about all of the elements of the empty set, then it is permissible to state that it holds "vacuously". But sometimes base cases are hard and require careful justification. Recognize these times.

      Also be careful of what constitutes a base case. For example, if, exactly when n>=1, we replace n by floor(n/2) and invoke the induction hypothesis, then 0 is the base case. But if, exactly when n>=100, we replace n by floor(n/2) and invoke the induction hypothesis, then 50 through 99 are the base cases. (and possibly 0 through 49 if we care about these)

    4. List the important calculations, but truly trivial calculations can be omitted. For example, "x>=1 => x+1>=2 => (x+1)/2>=1 => -(x+1)/2<=-1" is verbose; "x>=1 => -(x+1)/2<=-1" is better.
    5. Approximate equality is fine for building intuition and can even be stated in a proof, but exact inequalities should be used to back it up. For example, 1+x is approximately e^x for small x, but it is useful to know that 1+x<=e^x for all x, and Taylor's theorem can be used to bound the gap if need be.
    6. For a very tricky proof, give the high-level intuition first.
    7. If an idea surfaces within a proof that could be used later and is interesting in its own right, consider making it a lemma. This is like code modularity and reuse.
  5. Indicate that the proof is over, e.g. by a square or QED.
  6. Possibly state the significance of what was just shown if it is not obvious. (If all we cared about was whether G were connected, we would use a simpler technique, such as Dijkstra's algorithm; but the above is useful for analysis.)

Last modified 9/27/04