1
00:00:09,368 --> 00:00:21,367
Another category of functions is recursive programs, which are programs that are defined using references to themselves.
2
00:00:21,367 --> 00:00:29,367
Here is an example of a recursive method, the factorial function. The factorial of some number is defined as the
3
00:00:29,367 --> 00:00:38,368
product of all whole numbers from that number down to 1. It is a function that grows extremely fast,
4
00:00:38,368 --> 00:00:45,368
and as you can see in the code, whenever the input parameter n is not equal to 1, the function returns n
5
00:00:45,368 --> 00:00:55,368
times itself, but called with an input that is 1 smaller. If n does equal 1, the function returns 1.
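The recursive definition just described can be sketched as follows (a minimal Python sketch; the course's own code is not shown in these subtitles, so the function and parameter names are my own):

```python
# A minimal sketch of the recursive factorial method described above.
def factorial(n: int) -> int:
    if n == 1:                       # base case: factorial of 1 is 1 by definition
        return 1
    return n * factorial(n - 1)      # recursive case: n times factorial(n - 1)
```

Each call does a constant amount of work and then recurses with an input that is 1 smaller.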
6
00:00:55,368 --> 00:01:04,367
Let's try to illustrate it. Here is factorial of N, which is N times factorial of N-1.
7
00:01:04,367 --> 00:01:15,367
Factorial of N-1 in turn gets N-1 as the input, so that is N-1 times factorial of N-2.
8
00:01:15,367 --> 00:01:29,367
Factorial of N-2 is N-2 times factorial of N-3, and so on, until we have subtracted so much from N that we are
9
00:01:29,367 --> 00:01:38,367
down at, say, 2 times factorial of 1, and factorial of 1 returns 1 by definition.
10
00:01:38,367 --> 00:01:44,367
The execution then goes all the way back up the recursion chain with all the return values.
11
00:01:44,367 --> 00:01:51,367
All in all, we have seen N recursion steps from N all the way down to 1.
12
00:01:51,367 --> 00:01:56,367
Each recursion step required some constant number of instructions.
13
00:01:56,367 --> 00:02:02,367
We have seen that the exact constant number is not important when making this kind of asymptotic complexity
14
00:02:02,367 --> 00:02:14,367
analysis, which is what all this Big O, Big Theta, and Big Omega is all about, so let's just write c as any constant.
15
00:02:14,367 --> 00:02:23,367
In total, we get a complexity of c times N instructions, and we can use the term N and the slightly
16
00:02:23,367 --> 00:02:30,367
larger constant c + 1 to express an upper bound of the complexity.
17
00:02:30,367 --> 00:02:41,367
Similarly, we can use c - 1 to express a lower bound. And since we now both have a lower and an upper bound,
18
00:02:41,367 --> 00:02:50,367
we can denote the complexity with Big Theta of N. Let's try a slightly more complicated example.
19
00:02:50,367 --> 00:02:57,367
You may know the Fibonacci numbers, a sequence of numbers in which any element is
20
00:02:57,367 --> 00:03:05,367
defined as the sum of the two previous elements. Consider 1 and 1 as the first two elements.
21
00:03:05,367 --> 00:03:15,367
The next element must be 2, because 1 + 1 is 2. The next element is 3, because the 2 previous elements,
22
00:03:15,367 --> 00:03:28,367
1 and 2, added together give 3, and so on. The code shown returns the Fibonacci number at position N.
23
00:03:28,367 --> 00:03:34,367
If the input is 0 or 1, it returns 1. Those are the first two positions.
24
00:03:34,367 --> 00:03:42,367
Otherwise, it returns the sum of the previous Fibonacci number, and the number before that.
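The definition just described can be sketched like this (my own Python sketch of what the transcript describes; the names are assumptions, not the course's code):

```python
# A sketch of the recursive Fibonacci method described above:
# positions 0 and 1 both return 1; every later position returns the
# sum of the previous Fibonacci number and the number before that.
def fib(n: int) -> int:
    if n <= 1:                       # the first two positions return 1
        return 1
    return fib(n - 1) + fib(n - 2)   # sum of the two previous numbers
```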
25
00:03:42,367 --> 00:03:49,367
Like with the factorial function in the previous example, we can try to unfold the Fibonacci function called
26
00:03:49,367 --> 00:04:04,367
with the input value of 5. That call results in 2 new calls, Fibonacci of 4 and Fibonacci of 3, respectively.
27
00:04:04,367 --> 00:04:12,367
Fibonacci of 4 results in 2 new calls, namely Fibonacci of 3 and Fibonacci of 2.
28
00:04:12,367 --> 00:04:21,367
Also, the Fibonacci of 3 from before similarly results in 2 new calls, namely Fibonacci of 2 and Fibonacci of 1.
29
00:04:21,367 --> 00:04:30,367
Notice that at Fibonacci of 1, that part of the recursion tree stops as the function just returns 1.
30
00:04:30,367 --> 00:04:39,367
Fibonacci of 3 results in 2 new calls, namely Fibonacci of 2 and Fibonacci of 1, like we have seen just before.
31
00:04:39,367 --> 00:04:48,367
Fibonacci of 2 results in 2 new calls, Fibonacci of 1 and Fibonacci of 0, both of which return 1
32
00:04:48,367 --> 00:04:58,367
instead of continuing the recursion. The 2 final recursion steps happen at the Fibonacci of 2 here, and here.
33
00:04:58,367 --> 00:05:06,367
That seemed relatively complex, almost as if the execution exploded in recursion steps.
34
00:05:06,367 --> 00:05:14,367
But how complex is it actually? Consider the recursion chain along the shown arrow.
35
00:05:14,367 --> 00:05:23,367
This is the longest of all the recursion chains in the tree. Along that chain we have N recursion steps
36
00:05:23,367 --> 00:05:32,367
for the input parameter N. If we should provide a quick upper bound and worst-case for the complexity,
37
00:05:32,367 --> 00:05:44,367
we could notice that for each step along this longest chain, the total number of nodes in the tree at most doubles.
38
00:05:44,367 --> 00:05:54,367
So, there are N levels in the tree and at each level, the width of the tree at most doubles.
39
00:05:54,367 --> 00:06:02,367
If the body of the function consumes some constant number of instructions, the total complexity can be
40
00:06:02,367 --> 00:06:10,367
described as that constant number times the total number of function calls across the whole tree.
41
00:06:10,367 --> 00:06:18,367
And since the number of function calls doubles at each level, it can be expressed as 2 times 2 times 2 and
42
00:06:18,367 --> 00:06:30,367
so on, or c times 2 to the power of N - 1, to be precise, because we do not enter the recursion for values
43
00:06:30,367 --> 00:06:36,367
of N smaller than or equal to 1. If we should come up with some upper bound for that expression,
44
00:06:36,367 --> 00:06:43,367
we could replace the constant c with, for example, its double value, c times 2.
45
00:06:43,367 --> 00:06:51,367
So, an upper bound can be written like this. And as you can see, we just got a factor of 2 more,
46
00:06:51,367 --> 00:07:00,367
so instead of writing 2 to the power of N - 1, we can write 2 to the power of N, which is simpler.
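The doubling argument can be written out as a short derivation (my notation, matching the transcript's constant c):

```latex
% at most N levels, each at most doubling the number of calls:
c \cdot 2^{N-1} \;\le\; (2c) \cdot 2^{N-1} \;=\; c \cdot 2^{N}
% hence the worst-case complexity is O(2^N)
```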
47
00:07:00,367 --> 00:07:08,367
In other words, the worst-case complexity of our recursive Fibonacci function is the exponential function,
48
00:07:08,367 --> 00:07:16,367
Big O of 2 to the power of N. That results in a really poor performance.
49
00:07:16,367 --> 00:07:25,367
And I therefore challenge you, try to implement the function yourself, and execute it with 2 inputs, 8 and 80, respectively.
50
00:07:25,367 --> 00:07:32,367
If you accept the challenge, I recommend that you change the return type of the function from an int to a long.
51
00:07:32,367 --> 00:07:38,000
I'm glad you're still here, because that means that you didn't wait to continue with the course until the
52
00:07:38,000 --> 00:07:44,500
recursive Fibonacci function returned. The thing is that even though Fibonacci of 8 might return within
53
00:07:44,500 --> 00:07:54,367
the blink of an eye, Fibonacci of 80 will explode in complexity to a level where you shouldn't expect it to return.
54
00:07:54,367 --> 00:08:01,367
Here you see another approach to the Fibonacci function. I will not dive into details, but as you can see,
55
00:08:01,367 --> 00:08:08,367
it follows a plain old iterative strategy. Let's describe the complexity of that implementation.
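The iterative implementation on screen is not reproduced in the subtitles; a Python sketch with the structure the transcript describes (two initial assignments, a loop, a final return) could look like this:

```python
# A sketch of an iterative Fibonacci implementation with the shape
# described in the transcript: constant setup work (c1), a loop whose
# body does constant work (c2) per iteration, and a constant return.
def fib_iterative(n: int) -> int:
    previous, current = 1, 1         # the two initial assignments (part of c1)
    for _ in range(n - 1):           # runs on the order of N times
        previous, current = current, previous + current  # constant work (c2)
    return current                   # the last return statement (part of c1)
```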
56
00:08:08,367 --> 00:08:15,367
The two initial declarations and assignments, as well as the last return statement, are independent of
57
00:08:15,367 --> 00:08:25,367
the input N and can thus be considered constant in complexity. Let's use c1 to describe these.
58
00:08:25,367 --> 00:08:35,368
Next, the for loop has N iterations, and in each iteration, some things happen that are also independent of
59
00:08:35,368 --> 00:08:41,368
the input N. It's just a bunch of assignments and a single declaration; therefore, we can describe the
60
00:08:41,368 --> 00:08:48,368
complexity of the loop internals with another constant, say, c2.
61
00:08:48,368 --> 00:09:00,368
So, c1 + N times c2 is the complexity of this method, where c1 and c2 are constant numbers.
62
00:09:00,368 --> 00:09:11,368
In order to come up with an upper bound for that complexity, I could use c2 and increase it; for example, to c2 + 1.
63
00:09:11,368 --> 00:09:23,368
Then, N times (c2 + 1) would be a steeper line than c1 + N times c2, and thus an upper bound sooner or
64
00:09:23,368 --> 00:09:33,368
later, and therefore the worst-case complexity of this implementation is O of N, Big O of N.
65
00:09:33,368 --> 00:09:40,368
This is a straight line, and a much, much better performance guarantee than the recursive implementation from
66
00:09:40,368 --> 00:09:50,368
before, which was Big O of 2 to the power of N. This implementation is also Big Omega of N, by the way,
67
00:09:50,368 --> 00:09:59,368
since we can use, for example, c2 - 1 to express a lower bound or best-case for the complexity.
68
00:09:59,368 --> 00:10:10,368
And thus, the complexity is also Big Theta of N. As a last example of recursive complexity calculation,
69
00:10:10,368 --> 00:10:17,368
consider this somewhat complex function. It is a recursive implementation of the binary search that we have
70
00:10:17,368 --> 00:10:25,368
looked at earlier. We will not go into details in the code, but use it as an example of how to recognize
71
00:10:25,368 --> 00:10:34,368
what can be considered as constant elements of the code and how to quickly get an idea of the worst-case performance.
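The walkthrough that follows can be matched against a sketch like this (a hedged Python version of recursive binary search; the course's actual code and names are not shown in the subtitles):

```python
# A sketch of recursive binary search: two boundaries delimit the
# interesting search area, and each recursion step halves that area.
def binary_search(haystack, needle, low, high):
    if low > high:                    # empty search area: needle not present
        return -1
    mid = (low + high) // 2           # midpoint between the two boundaries
    if haystack[mid] == needle:
        return mid                    # found the needle
    if haystack[mid] < needle:        # everything to the left is irrelevant
        return binary_search(haystack, needle, mid + 1, high)
    return binary_search(haystack, needle, low, mid - 1)  # ignore the right part
```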
72
00:10:34,368 --> 00:10:44,368
Again, we consider a haystack that is ordered, and this time the needle that we look for is the element 25.
73
00:10:44,368 --> 00:10:52,368
The code uses two pointers representing a lower and upper boundary of the haystack.
74
00:10:52,368 --> 00:10:58,368
The two boundaries define the interesting area to search in. The first thing that happens is that the
75
00:10:58,368 --> 00:11:07,368
midpoint between the two boundaries is calculated. In this case, that midpoint points to the value 17.
76
00:11:07,368 --> 00:11:17,368
Seventeen is smaller than 25, so everything to the left is irrelevant and can be ignored in further search.
77
00:11:17,368 --> 00:11:21,368
We therefore move the lower boundary to the right of the midpoint.
78
00:11:21,368 --> 00:11:28,368
Now we calculate the midpoint again, and this time it points to the value 26.
79
00:11:28,368 --> 00:11:35,368
Twenty-six is larger than 25, so everything to the right is now irrelevant, and we can now move the upper
80
00:11:35,368 --> 00:11:46,368
boundary to the left of the midpoint. A new midpoint is calculated, 21 is smaller than 25, and thus we move
81
00:11:46,368 --> 00:11:53,368
the lower boundary to the right of the midpoint. Notice that the lower and upper boundaries now point to
82
00:11:53,368 --> 00:12:03,368
the same element. The midpoint is therefore also the same element, and there we find the needle, namely 25.
83
00:12:03,368 --> 00:12:11,368
Let's quickly calculate the complexity and assume that the haystack has N elements.
84
00:12:11,368 --> 00:12:19,368
The worst-case is as follows. Each time we perform the recursion, either the lower or upper boundary is
85
00:12:19,368 --> 00:12:26,368
replaced by the midpoint, effectively cutting the remaining search area in half.
86
00:12:26,368 --> 00:12:37,368
As we have seen, for N elements, log N halvings are necessary, and therefore we need at most log N recursion steps.
87
00:12:37,368 --> 00:12:45,368
Consider the highlighted part of the code here. Nothing of that part is related to the input N.
88
00:12:45,368 --> 00:12:53,368
Well, some of the code may return at some point, and we will not always run all of the highlighted code.
89
00:12:53,368 --> 00:13:01,368
The point is, however, that even if we did run all the highlighted code, it would still be at most, say,
90
00:13:01,368 --> 00:13:11,368
100 or 200 instructions. In other words, a constant number. Let's use c to denote this constant number.
91
00:13:11,368 --> 00:13:18,368
This gives c times log N instructions in total, and since we can easily find an upper bound for this
92
00:13:18,368 --> 00:13:25,500
expression, for example, by multiplying with c + 1 instead of c, we can now say that the worst-case
93
00:13:25,500 --> 00:13:35,368
complexity is Big O of log N. So, a good strategy when dealing with complexity analysis of recursive
94
00:13:35,368 --> 00:13:42,500
functions is to first unfold the recursion and see how many recursion steps we worst-case have to
95
00:13:42,500 --> 00:13:51,368
deal with, and then count the number of instructions in each step.
96
00:13:51,368 --> 00:13:57,000
In our examples, we were lucky that only a constant number of instructions was needed.