http://en.wikipedia.org/wiki/Mathematical_induction
Mathematical induction is a form of direct proof, and it is done in two steps. The first step, known as the base case, is to prove the given statement for the first natural number. The second step, known as the inductive step, is to prove that the given statement for any one natural number implies the given statement for the next natural number.
Mathematical induction in this extended sense is closely related to recursion. Mathematical induction, in some form, is the foundation of all correctness proofs for computer programs.
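A standard illustration of the two steps (not from the article itself): to prove 1 + 2 + ⋯ + n = n(n + 1)/2, the base case checks n = 1, where both sides equal 1; the inductive step assumes the formula for some n and adds n + 1 to both sides, giving 1 + 2 + ⋯ + n + (n + 1) = n(n + 1)/2 + (n + 1) = (n + 1)(n + 2)/2, which is exactly the formula with n + 1 in place of n.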
http://www.cs.sfu.ca/~bbart/225/induction1-1.html
The technical term for this is the inductive step: using the verified smaller structures to verify the larger structures. Each of the smaller structures is called an inductive hypothesis.
In a simple induction, the inductive step shows that the nth case implies the (n + 1)st case.
Loop Invariants
http://sydney.edu.au/engineering/it/~jchan3/soft1001/jme/recursion/recursion.html
http://www.johndcook.com/blog/2012/08/14/understanding-recursion-ii/
Recursion is about solving a problem in terms of smaller versions of itself. The process ends because the problems get smaller until the process reaches some problem small enough to be solved directly.
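A minimal Python sketch of that shape (my example, not code from the post): the recursive call works on a strictly smaller input, and the recursion stops at a case small enough to solve directly.

    def factorial(n):
        # Base case: small enough to be solved directly.
        if n == 0:
            return 1
        # Smaller version of the same problem.
        return n * factorial(n - 1)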
A further variant, called complete induction (or strong induction, or course-of-values induction), says that in the inductive step we may assume not only that the statement holds for n = m but also that it is true for all n less than or equal to m.
For simple induction, you only have to verify one basis case: n = 1. For strong induction, it may be several cases: whatever your inductive step doesn't cover. The process of verifying these cases is known as the basis step. The combination of the inductive step and basis step is a proof by induction.
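A standard example of why the stronger hypothesis matters (not from the course notes): every integer n ≥ 2 is a product of primes. The basis case is n = 2, which is itself prime. In the inductive step, either n is prime (done) or n = a⋅b with 2 ≤ a, b < n; the strong inductive hypothesis applies to both a and b, so each is a product of primes, and therefore so is n. Simple induction would not suffice here, because a and b are usually much smaller than n − 1.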
Part 2: How Induction Relates to Computer Science
Induction is about mathematical structures which are self-similar.
Induction and Recursion
Recursion is an algorithmic technique where you solve a problem by using the solutions to smaller instances of the same problem.
This sort of recursive reasoning, where you break down the large case into smaller cases, is known as the top-down approach.
Induction and Iteration
The other way to look at induction is by starting with case 1, the basis case. Then, by using the inductive step, case 1 implies case 2, case 2 implies case 3, and so on, until you reach the case you want.
This sort of recursive reasoning, where you use the smaller cases to build up to the large case, is known as the bottom-up approach.
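For instance, here is the sum 1 + 2 + ⋯ + n written both ways in Python (an illustrative sketch, not code from the sources): the first function works top-down by recursion, the second bottom-up by iteration.

    def total_top_down(n):
        # Top-down: reduce case n to the smaller case n - 1.
        if n == 0:
            return 0  # basis case
        return total_top_down(n - 1) + n

    def total_bottom_up(n):
        # Bottom-up: start at the basis case and build up to case n.
        total = 0
        for k in range(1, n + 1):  # case k is obtained from case k - 1
            total += k
        return total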
Insertion Sort is an incremental sort. Each loop begins with a sorted A[0..i-1], and the element A[i] is joined to it such that the result is a sorted A[0..i]. In other words: at the beginning of each loop, A[0..i-1] is a sorted permutation of the first i elements of A[], and at the end of each loop, A[0..i] is a sorted permutation of the first i+1 elements of A[]. This is a loop invariant about the progress of the algorithm.
Once you have declared a loop invariant, your next goal is to prove it by induction. Why? Because the byproduct of your proof will be a proof of correctness of the algorithm.
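As a sketch of how the invariant attaches to the code (Python, my version; the text's A[0..i-1] corresponds to the slice A[0:i] here):

    def insertion_sort(A):
        for i in range(1, len(A)):
            # Invariant (start of loop): A[0:i] is a sorted permutation
            # of the first i elements originally in A.
            key = A[i]
            j = i - 1
            # Shift elements of the sorted prefix that are greater than
            # key one position to the right.
            while j >= 0 and A[j] > key:
                A[j + 1] = A[j]
                j -= 1
            A[j + 1] = key
            # Invariant (end of loop): A[0:i+1] is a sorted permutation
            # of the first i + 1 elements originally in A.
        return A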
Invariants and Recursive Algorithms
Recursive algorithms often solve problems by the Divide & Conquer paradigm, which involves three stages: divide, conquer, and combine (see the sketch after this list).
- Divide the problem into 2 or more smaller subproblems, whose solutions are strongly related to the solution of the original problem.
- Conquer by calling the recursive algorithm on each subproblem.
- Combine together the subproblem solutions to form the solution to the original problem.
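Here is one way the three stages can look for Merge Sort in Python (a sketch following the usual description of the algorithm, not code taken from the course page):

    def merge_sort(A):
        # Base case: a list of length 0 or 1 is already sorted.
        if len(A) <= 1:
            return A
        # Divide: split into two subproblems of about half the size.
        mid = len(A) // 2
        left, right = A[:mid], A[mid:]
        # Conquer: solve each subproblem with a recursive call.
        left, right = merge_sort(left), merge_sort(right)
        # Combine: merge the two sorted halves into one sorted result.
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
        merged.extend(left[i:])
        merged.extend(right[j:])
        return merged

The divide and combine work is linear in the number of elements, which is where the ak term in the analysis below comes from.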
Running Time Analysis of Merge Sort
Inductive Step
Here T(n) denotes the running time of Merge Sort on n elements, which satisfies the recurrence T(n) ≤ T(⌈n/2⌉) + T(⌊n/2⌋) + an for some constant a > 0. Assume T(n) ≤ c⋅n log n holds for every n < k, and prove T(k) ≤ c⋅k log k.
Case 1: k is even
T(k) ≤ T(⌈k/2⌉) + T(⌊k/2⌋) + ak
     = T(k/2) + T(k/2) + ak
     ≤ c⋅(k/2) log(k/2) + c⋅(k/2) log(k/2) + ak      (Inductive Hypothesis)
     = c⋅k log(k/2) + ak
     = c⋅k (log k − log 2) + ak
     = c⋅k log k − ck log 2 + ak
     ≤ c⋅k log k      (when c ≥ a / log 2)
Case 2: k is odd
T(k) ≤ T(⌈k/2⌉) + T(⌊k/2⌋) + ak
     = T((k + 1)/2) + T((k − 1)/2) + ak
     ≤ c⋅((k + 1)/2) log((k + 1)/2) + c⋅((k − 1)/2) log((k − 1)/2) + ak      (Inductive Hypothesis)
     = (c/2)⋅[(k + 1) log(k + 1) + (k − 1) log(k − 1)] − ck log 2 + ak
If you're still paying attention, you'll notice the symmetry with Case 1. Except for the first term, the two expressions are identical, and so if only you could show
(k + 1) log(k + 1) + (k − 1) log(k − 1) ≤ 2k log k,
you would be home. Unfortunately, this inequality is false, and you can prove it is false by using calculus, or perhaps even mathematical induction. But it turns out that the left hand side is not that much bigger than the right hand side, which means that you can utilize a portion of the − ck log 2 term to reduce its value by just enough to satisfy the inequality.
Back to the action:
T(k) ≤ (c/2)⋅[(k + 1) log(k + 1) + (k − 1) log(k − 1)] − ck log 2 + ak
     ≤ (c/2)⋅[(k + 1) log(k + 1) + (k − 1) log(k + 1)] − ck log 2 + ak
     = c⋅k log(k + 1) − ck log 2 + ak
     = c⋅k log(k + 1) − ck log(4/3) − ck log(3/2) + ak
     = c⋅k [log(k + 1) − log(4/3)] − ck log(3/2) + ak
     = c⋅k log(3(k + 1)/4) − ck log(3/2) + ak
     ≤ c⋅k log k − ck log(3/2) + ak      (when 3(k + 1)/4 ≤ k, i.e., k ≥ 3)
     ≤ c⋅k log k      (when c ≥ a / log(3/2))
So, no matter whether k is even or odd, T(k) ≤ c⋅k log k, whenever k ≥ 3 and c ≥ max(a / log 2, a / log(3/2)) = a / log(3/2).
[You're almost there. There's one last detail to take care of.]
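As a rough numeric sanity check of the bound (illustrative assumptions only: a = 1, a base case of T(1) = 1, and logarithms base 2), the recurrence can be tabulated and compared against c⋅k log k for a generous c:

    from functools import lru_cache
    from math import log2

    # Illustrative recurrence: T(1) = 1 and, for k >= 2,
    # T(k) = T(ceil(k/2)) + T(floor(k/2)) + k   (i.e., a = 1).
    @lru_cache(maxsize=None)
    def T(k):
        if k == 1:
            return 1
        return T((k + 1) // 2) + T(k // 2) + k

    c = 2.0  # comfortably above a / log(3/2) under these assumptions
    for k in range(2, 5001):
        assert T(k) <= c * k * log2(k), k
    print("bound holds for 2 <= k <= 5000 with c =", c)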
http://www.johndcook.com/blog/2012/08/14/understanding-recursion-ii/
Recursion is the process of solving a problem in terms of smaller versions of the same problem. Since the problem gets smaller each time, the process eventually terminates in a problem (the “base case”) that can be solved directly. Be sure of three things (see the sketch after this list):
- The problem gets smaller each time.
- You include a solution for the base case.
- Each case is handled correctly.
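As one closing sketch (mine, not from the blog post), a recursive binary search with the three checks marked in the comments:

    def binary_search(A, target, lo=0, hi=None):
        # Search a sorted list A for target in the half-open range [lo, hi).
        if hi is None:
            hi = len(A)
        # (2) Base case solved directly: an empty range cannot contain target.
        if lo >= hi:
            return -1
        mid = (lo + hi) // 2
        # (3) Each case handled correctly: found it, look left, or look right.
        if A[mid] == target:
            return mid
        if A[mid] > target:
            return binary_search(A, target, lo, mid)      # (1) strictly smaller range
        return binary_search(A, target, mid + 1, hi)      # (1) strictly smaller range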