1
00:00:01,050 --> 00:00:03,770
Hello and welcome back to the course on artificial intelligence.
2
00:00:03,810 --> 00:00:08,280
And today we're talking about Markov decision processes, or MDPs.
3
00:00:08,760 --> 00:00:11,120
Let's have a look at what we've got today.
4
00:00:11,430 --> 00:00:14,060
So last time we stopped on the concept of a map.
5
00:00:14,070 --> 00:00:19,980
So because we've calculated the values based on the Bellman equation, we can derive this map for our agent
6
00:00:20,010 --> 00:00:21,060
on this maze.
7
00:00:21,240 --> 00:00:27,570
And basically what that means is wherever the agent starts, let's say it starts over there,
8
00:00:27,570 --> 00:00:33,270
it knows exactly which steps to take in order to get to the finish line, so it just goes up, up, right,
9
00:00:33,270 --> 00:00:35,040
right, and done.
10
00:00:35,070 --> 00:00:37,540
And so the question here is: is that it?
11
00:00:37,590 --> 00:00:39,780
Is it really that simple?
12
00:00:39,780 --> 00:00:44,690
Is reinforcement learning really that, for lack of a better word, boring?
13
00:00:44,790 --> 00:00:46,420
It's like, yeah.
14
00:00:46,440 --> 00:00:50,830
Once you have the map, that's it; all you have to do is just follow it, and you've done it.
15
00:00:51,090 --> 00:00:55,460
Well the reality is that it's not actually that simple.
16
00:00:55,500 --> 00:01:01,020
And that's a good thing because it makes this course more interesting for us and we can actually solve
17
00:01:01,020 --> 00:01:02,610
much more complex problems.
18
00:01:02,610 --> 00:01:05,460
So this is where Markov decision processes come in.
19
00:01:05,490 --> 00:01:07,770
But first we're going to talk about two things.
20
00:01:07,760 --> 00:01:11,450
We're going to look at deterministic search versus non-deterministic search.
21
00:01:11,700 --> 00:01:14,750
So let's talk about the concept of deterministic search.
22
00:01:14,820 --> 00:01:21,570
This is our agent in the maze, and deterministic search means that if the agent decides to go up, then
23
00:01:21,570 --> 00:01:26,980
what will happen is, with 100 percent probability, it will go up.
24
00:01:27,030 --> 00:01:28,700
That's exactly what will happen.
25
00:01:28,700 --> 00:01:29,740
There are no other options.
26
00:01:29,740 --> 00:01:33,690
Once it says go up, or clicks the up arrow, it'll go up.
27
00:01:33,690 --> 00:01:35,070
There are no other options.
28
00:01:35,250 --> 00:01:41,950
Now, on the other hand, non-deterministic search is when our agent says it wants to go up,
29
00:01:42,130 --> 00:01:44,430
and there are actually a couple of options.
30
00:01:44,460 --> 00:01:48,820
For example, there could be three options, and we're going to look at an example where there are three options,
31
00:01:48,830 --> 00:01:53,400
but it doesn't have to be limited to three; it could be different depending on the
32
00:01:53,400 --> 00:01:59,640
problem, the randomness could be different, but in our case there are three options, with an 80 percent
33
00:01:59,640 --> 00:02:01,640
chance he does go up.
34
00:02:01,860 --> 00:02:07,500
But then with a 10 percent chance when he wants to go up he'll actually go to the left just because.
35
00:02:07,500 --> 00:02:11,080
Because that's how the environment works that's the world that he lives in.
36
00:02:11,430 --> 00:02:14,840
And with another 10 percent chance he'll actually go right.
37
00:02:14,880 --> 00:02:17,770
And in this case he'll fall into the firepit.
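To make that concrete, here is a minimal Python sketch of how such a stochastic environment step could be simulated. The 80/10/10 split matches the example above, but the SLIP table and the step function are illustrative assumptions, not code from the course.

```python
import random

# Intended action -> list of (actual move, probability), matching the
# 80/10/10 example above: usually you go where you intended, but
# sometimes the environment pushes you sideways.
SLIP = {
    "up":    [("up", 0.8), ("left", 0.1), ("right", 0.1)],
    "down":  [("down", 0.8), ("left", 0.1), ("right", 0.1)],
    "left":  [("left", 0.8), ("up", 0.1), ("down", 0.1)],
    "right": [("right", 0.8), ("up", 0.1), ("down", 0.1)],
}

def step(intended):
    """Sample the move that actually happens for an intended action."""
    moves, probs = zip(*SLIP[intended])
    return random.choices(moves, weights=probs)[0]

# Deterministic search would instead be: actual move == intended move, always.
print(step("up"))  # usually 'up', occasionally 'left' or 'right'
```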
38
00:02:17,850 --> 00:02:20,730
So that is how it all works.
39
00:02:20,760 --> 00:02:26,760
That's an example of a non-deterministic search, a stochastic process, and what the point of this
40
00:02:26,760 --> 00:02:35,370
is, is to make a more realistic model of what could actually happen in the real world, in a real-world type
41
00:02:35,370 --> 00:02:40,560
of problem, because very rarely do you get situations like this, where you do something and it happens
42
00:02:40,560 --> 00:02:41,390
exactly that way.
43
00:02:41,520 --> 00:02:46,560
And even if you think about it in terms of games let's say you've got an agent playing Pac-Man.
44
00:02:46,740 --> 00:02:51,270
Well, it's not always the case that if he's standing in a square and he goes up,
45
00:02:51,360 --> 00:02:54,260
he will get the same exact result every time.
46
00:02:54,460 --> 00:02:59,820
Well, he will indeed go up, but maybe in one case he won't get eaten by a ghost, and in another case
47
00:02:59,820 --> 00:03:01,570
he will get eaten by a ghost.
48
00:03:01,590 --> 00:03:05,970
So as you can see there's some randomness to it because it depends on how the ghosts are moving and
49
00:03:05,970 --> 00:03:07,350
they don't always move the same way.
50
00:03:07,350 --> 00:03:09,370
They don't always start in the same locations.
51
00:03:09,510 --> 00:03:16,140
So it's very logical, it's very fair, that there is some randomness, that there's something that is not under
52
00:03:16,140 --> 00:03:21,810
the control of the agent, and this is just a way for us to represent that, in order for us to learn
53
00:03:21,810 --> 00:03:27,240
how we can deal with it and how that affects the Bellman equation, how it affects the whole reinforcement
54
00:03:27,240 --> 00:03:29,010
learning process.
55
00:03:29,070 --> 00:03:33,780
But at the same time, the randomness is of course not limited to: if you go up, there's a 10 percent chance
56
00:03:33,780 --> 00:03:38,400
you'll go right or a 10 percent chance you'll go left; or if you go down, a 10 percent chance you go right or left;
57
00:03:38,400 --> 00:03:42,840
or if you go right, a 10 percent chance of up or down. And it's not limited to where you're going to end
58
00:03:42,840 --> 00:03:45,550
up; sometimes you might have a problem that is exactly this.
59
00:03:45,570 --> 00:03:47,390
Sometimes the possibilities might be different.
60
00:03:47,430 --> 00:03:52,990
Sometimes the randomness might boil down to something else; it might boil down to something like that example:
61
00:03:52,980 --> 00:03:58,890
Pac-Man ghosts eating you or not eating you. Or it might boil down to something different.
62
00:03:58,890 --> 00:04:05,550
For instance, if the agent is playing Doom, and then there's something like
63
00:04:05,700 --> 00:04:11,040
a monster which is going to shoot him in one case, and in other cases not; there's a probability
64
00:04:11,060 --> 00:04:14,380
that we'll get shot and a probability that we won't get shot.
65
00:04:14,550 --> 00:04:19,710
And so, something that is out of the control of the agent, something it cannot predict,
66
00:04:19,710 --> 00:04:25,740
that's what we are modeling here in non-deterministic search. And this is where we have directly approached
67
00:04:25,950 --> 00:04:32,780
two new concepts: a Markov process and a Markov decision process,
68
00:04:32,790 --> 00:04:34,130
so let's have a look at these.
69
00:04:34,150 --> 00:04:39,080
And you know how much I don't like to put definitions and lots of text on the slide.
70
00:04:39,090 --> 00:04:42,280
But in this case it is necessary for us to go through that.
71
00:04:42,280 --> 00:04:46,220
So let's have a look: a stochastic process has the Markov property
72
00:04:46,240 --> 00:04:51,750
if the conditional probability distribution of future states of the process, conditional on both past
73
00:04:51,750 --> 00:04:58,200
and present states, depends only upon the present state, not on the sequence of events that preceded it.
74
00:04:58,230 --> 00:05:00,410
A process with this property is called Markovian.
75
00:05:01,040 --> 00:05:06,470
Very complex definition, and it kind of, not exactly contradicts itself, but
76
00:05:06,470 --> 00:05:11,110
feels like it contradicts itself. So here it says: conditional on both past and present states, it depends on them, you'd
77
00:05:11,110 --> 00:05:11,450
think.
78
00:05:11,480 --> 00:05:14,450
But at the same time it only depends upon the present state.
79
00:05:14,510 --> 00:05:17,510
So don't get too bogged down in that.
80
00:05:17,670 --> 00:05:23,050
I'll break it down in simple terms. So the Markov property is when your future states,
81
00:05:23,060 --> 00:05:25,310
so not just your choice, but the whole thing,
82
00:05:25,310 --> 00:05:31,640
your choice and the environment, the results of the action you take in
83
00:05:31,640 --> 00:05:33,900
that environment, will only depend on where you are now.
84
00:05:33,920 --> 00:05:35,770
It will not depend on how you got there.
85
00:05:36,110 --> 00:05:36,560
And that's it.
86
00:05:36,560 --> 00:05:40,630
So that's the Markov property, and a process which has this property is called a Markov process.
87
00:05:40,880 --> 00:05:47,570
So to put it into an example: if your agent is here, and if he decides to go up, he might
88
00:05:47,570 --> 00:05:48,030
go up.
89
00:05:48,040 --> 00:05:52,940
In our case, in our non-deterministic search example, he actually might go left or right.
90
00:05:53,000 --> 00:05:53,680
All right.
91
00:05:53,690 --> 00:05:58,940
That's because we have that stochasticity inside our environment, we have that randomness inside
92
00:05:58,940 --> 00:05:59,710
our environment.
93
00:05:59,810 --> 00:06:01,820
So any one of these things might happen.
94
00:06:01,820 --> 00:06:07,250
But the key here is that this is a Markov process, because we don't care how he got here.
95
00:06:07,250 --> 00:06:10,700
He could have come from the top and ended up here, he could have come from the left and ended up here, he
96
00:06:10,700 --> 00:06:12,370
could have come from the bottom and ended up here.
97
00:06:12,380 --> 00:06:16,640
He could have, like, moved around here 100,000 times and then got here.
98
00:06:16,700 --> 00:06:22,490
It does not matter what happened before; what matters is only which state he is in now.
99
00:06:22,520 --> 00:06:31,160
And so the probabilities of going left or right or up, they will always be the same if he's in this
100
00:06:31,160 --> 00:06:32,250
state now.
101
00:06:32,690 --> 00:06:37,530
And so that's basically just saying it doesn't matter what happened before we're here now.
102
00:06:37,640 --> 00:06:39,150
This is the state you're in.
103
00:06:39,200 --> 00:06:42,320
And don't forget that state doesn't just mean where he's standing.
104
00:06:42,320 --> 00:06:48,140
The state is the state of the whole of the agent in the environment: are there, like, monsters
105
00:06:48,140 --> 00:06:53,030
on the right, or monsters on the left, or, you know, is the ghost coming from the top or bottom, whatever
106
00:06:53,090 --> 00:06:54,530
state you're in now.
107
00:06:54,560 --> 00:06:58,460
It doesn't matter how you got there, doesn't matter how it all came to be that you're there in that
108
00:06:58,460 --> 00:06:58,790
state.
109
00:06:58,790 --> 00:07:02,990
Now what will happen in the future is only determined by the state you're in now.
110
00:07:02,990 --> 00:07:07,440
plus the actions you will take, plus of course the randomness that is overlaid on top of that.
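Written out in symbols (notation assumed here, not taken from the slides), the Markov property says the distribution over the next state depends only on the current state and action:

$$P(S_{t+1} = s' \mid S_t, A_t, S_{t-1}, A_{t-1}, \dots, S_0) = P(S_{t+1} = s' \mid S_t, A_t)$$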
111
00:07:07,460 --> 00:07:14,280
So that's a Markov process. And a Markov decision process, or an MDP,
112
00:07:14,390 --> 00:07:20,390
provides a mathematical framework for modeling decision making in situations where outcomes are partly
113
00:07:20,420 --> 00:07:23,430
random and partly under the control of a decision maker.
114
00:07:23,570 --> 00:07:29,120
So it's important to understand that Markov decision processes are different, a whole different
115
00:07:29,150 --> 00:07:32,210
concept: Markov process versus Markov decision process.
116
00:07:32,340 --> 00:07:34,810
They are, like, a mathematical framework.
117
00:07:34,970 --> 00:07:39,080
But at the same time, I thought it was important for us to understand what a Markov process is, because
118
00:07:39,170 --> 00:07:45,140
I think it still helps in understanding a Markov decision process. And so a Markov decision process
119
00:07:45,200 --> 00:07:46,130
is this.
120
00:07:46,230 --> 00:07:50,950
This is exactly what we've been discussing up till now: the agent lives in this environment, where
121
00:07:51,290 --> 00:07:56,570
he has control; previously it had full control of what's going on, but now it has a little bit
122
00:07:56,570 --> 00:07:57,530
less control.
123
00:07:57,590 --> 00:08:00,270
It can decide to go up, but it actually knows:
124
00:08:00,290 --> 00:08:05,570
OK, so if I go up, there's an 80 percent chance I'll go up, a 10 percent chance I'll go left, and a 10 percent chance I'll
125
00:08:05,560 --> 00:08:06,170
go right.
126
00:08:06,170 --> 00:08:08,930
So not everything is fully under its control.
127
00:08:08,930 --> 00:08:13,280
There is some randomness in this environment, and that's exactly what a Markov decision process is for: a
128
00:08:13,280 --> 00:08:18,830
Markov decision process is the framework that the agent will use in order to understand what to do in
129
00:08:18,830 --> 00:08:19,400
this environment.
130
00:08:19,400 --> 00:08:22,400
So we've got an environment with some stochasticity, some randomness.
131
00:08:22,550 --> 00:08:27,000
And now the agent has to choose: for instance, should it go up, down, left or right?
132
00:08:27,370 --> 00:08:28,530
He has to make that decision.
133
00:08:28,520 --> 00:08:29,820
He doesn't know what to do.
134
00:08:30,140 --> 00:08:36,200
And in order to make that decision, it's going to apply a framework; it's going to be using a Markov decision
135
00:08:36,200 --> 00:08:40,960
process in order to make that decision: what's going to happen, where it's going to go.
136
00:08:40,970 --> 00:08:47,600
And so basically this environment that poses this problem, it is referred to as a Markov decision process;
137
00:08:47,600 --> 00:08:52,820
so it's the framework the agent is using, and at the same time the environment is referred to that way: the agent
138
00:08:52,820 --> 00:08:55,810
is operating in a Markov decision process environment.
139
00:08:56,280 --> 00:09:01,190
And so basically here we have two concepts: we've got the Markov process, which is the way this environment
140
00:09:01,190 --> 00:09:03,740
is designed, the way the world works:
141
00:09:03,770 --> 00:09:07,020
What happens from where you are now doesn't depend on the past.
142
00:09:07,130 --> 00:09:11,240
And at the same time, we've got the Markov decision process, which is the framework that the agent is going to
143
00:09:11,240 --> 00:09:13,630
be using in order to solve this environment.
144
00:09:13,970 --> 00:09:18,830
And the good news is that the Markov decision process, or that framework that we're talking about, is
145
00:09:18,830 --> 00:09:24,730
actually just an add-on to our Bellman equation; it is the Bellman equation, but just a bit more sophisticated.
146
00:09:24,740 --> 00:09:26,960
So let's have a look at that.
147
00:09:27,050 --> 00:09:28,910
This is our Bellman equation so far.
148
00:09:29,030 --> 00:09:31,030
It's the maximum of all possible actions.
149
00:09:31,040 --> 00:09:35,150
So the value of being in a state is the maximum over all possible actions that you can take from that
150
00:09:35,150 --> 00:09:35,990
state.
151
00:09:36,260 --> 00:09:41,930
The maximum is taken of the reward that you would get by taking that action in that state, plus a discount
152
00:09:41,930 --> 00:09:45,410
factor times the value of the next state, which is s prime.
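In symbols (with state $s$, action $a$, reward $R$, discount factor $\gamma$, and next state $s'$), the deterministic Bellman equation described here is:

$$V(s) = \max_a \big( R(s, a) + \gamma\, V(s') \big)$$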
153
00:09:45,410 --> 00:09:47,390
So that's what we've had so far.
154
00:09:47,400 --> 00:09:52,550
Now, because we have some randomness in our whole process, this part will change, because we don't
155
00:09:52,550 --> 00:09:57,620
actually know which state we'll end up in, and we don't know what s prime will be: if we're going
156
00:09:57,630 --> 00:10:03,680
up, will it be up, or will it be left, or will it be right? So we actually have to replace this with the expected value
157
00:10:03,680 --> 00:10:04,960
of the next state.
158
00:10:04,970 --> 00:10:08,810
So here we're going to replace this; there are three possible states we can end up in.
159
00:10:08,810 --> 00:10:15,480
And so we're going to replace that with some values: this state has a value of s one prime,
160
00:10:15,520 --> 00:10:18,190
that one has a value of s two prime,
161
00:10:18,470 --> 00:10:22,490
and this state has a value of s three prime.
162
00:10:22,640 --> 00:10:28,790
So now we're going to multiply the state that we actually are intending to go into by 80 percent, because
163
00:10:28,790 --> 00:10:33,770
that's our probability of getting to that state, plus the probability of getting to this state, which is 10
164
00:10:33,770 --> 00:10:39,800
percent, plus the probability of getting into this state. So this is just our expected value: from statistics, if we
165
00:10:39,800 --> 00:10:46,880
take the expected value of the state that we'll get into, it's kind of like the average,
166
00:10:47,060 --> 00:10:52,040
what's the average of what we'll get, and then we replace that over here.
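As a small worked version of that weighted average, with made-up placeholder values $V(s'_1) = 1$, $V(s'_2) = 0.9$, $V(s'_3) = -1$:

$$\mathbb{E}[V(s')] = 0.8 \cdot 1 + 0.1 \cdot 0.9 + 0.1 \cdot (-1) = 0.79$$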
167
00:10:52,040 --> 00:10:56,210
Then we get this equation, and it looks intimidating at first just because it's big, but if you look at
168
00:10:56,210 --> 00:10:59,930
it carefully, you'll see the same things: there's a max here and a max here.
169
00:10:59,960 --> 00:11:06,340
Then you've got R of s and a, R of s and a; you've got gamma, you've got gamma.
170
00:11:06,410 --> 00:11:08,600
And then finally, here you've got V.
171
00:11:08,630 --> 00:11:13,640
So before, when it was a deterministic search, you knew exactly which state you'd get into.
172
00:11:13,640 --> 00:11:16,120
Now you don't know which state you'll get into, so instead of taking V,
173
00:11:16,120 --> 00:11:23,300
you're taking the expected value of the state you'll get into, of the future state, or, in simpler
174
00:11:23,300 --> 00:11:25,920
terms, you're just taking the average of what you will be getting into.
175
00:11:26,060 --> 00:11:32,450
So, you know, if it was, like, a 30-ish percent chance for each, it would be like adding these up and dividing by three,
176
00:11:32,590 --> 00:11:32,900
basically.
177
00:11:32,900 --> 00:11:37,130
But in this case it's not exactly a plain average.
178
00:11:37,130 --> 00:11:40,410
It's a weighted average, because of the probabilities here.
179
00:11:40,430 --> 00:11:45,980
So here you've got the probability, when you're in this state and you take this action, of getting into
180
00:11:46,040 --> 00:11:50,630
state s prime, times the value of s prime, and you sum across all these s primes that you could possibly
181
00:11:50,630 --> 00:11:51,830
get into.
182
00:11:51,830 --> 00:11:54,690
So exactly what we had here: three of them, one, two, three.
183
00:11:54,890 --> 00:11:57,330
Multiply these and add them up.
184
00:11:57,330 --> 00:11:58,040
Same here.
185
00:11:58,040 --> 00:11:58,820
One two three.
186
00:11:58,820 --> 00:12:01,660
Multiply them by the probabilities and add them up.
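Putting it together, with the expectation written as a sum over possible next states $s'$ weighted by the transition probabilities $P(s' \mid s, a)$:

$$V(s) = \max_a \Big( R(s, a) + \gamma \sum_{s'} P(s' \mid s, a)\, V(s') \Big)$$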
187
00:12:02,090 --> 00:12:05,180
And that is your new Bellman equation.
188
00:12:05,180 --> 00:12:06,440
Congratulations.
189
00:12:06,470 --> 00:12:08,990
This is what we're going to be working with going forward.
190
00:12:09,140 --> 00:12:15,590
And that is the framework that is used in Markov decision processes; that is the framework that solves
191
00:12:15,590 --> 00:12:16,490
this,
192
00:12:16,620 --> 00:12:22,670
that agents use to solve this whole stochastic, non-deterministic search problem where there are random
193
00:12:22,670 --> 00:12:25,460
events that are happening that they cannot control.
194
00:12:25,460 --> 00:12:26,920
So it's much more complex.
195
00:12:26,930 --> 00:12:30,150
But as you can see, because we've built up slowly to it,
196
00:12:30,290 --> 00:12:33,120
now we already know about this, we know about this,
197
00:12:33,130 --> 00:12:35,090
there's no worry about this.
198
00:12:35,090 --> 00:12:36,160
We know about this.
199
00:12:36,170 --> 00:12:36,710
We know what they are.
200
00:12:36,710 --> 00:12:42,500
So all we did is we just introduced this part over here, because there are probabilities involved:
201
00:12:42,920 --> 00:12:49,000
the actions, or the consequences of your actions, are non-deterministic; they are based on probabilities.
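For readers who want to see this new equation run, here is a sketch of value iteration on a tiny invented world; the states, transition table, rewards, and discount factor are all illustrative assumptions, not the course's maze.

```python
# Value iteration for the stochastic Bellman equation:
#   V(s) = max_a [ R(s,a) + gamma * sum_s' P(s'|s,a) * V(s') ]
GAMMA = 0.9
STATES = ["A", "B", "GOAL"]
ACTIONS = ["left", "right"]

# P[(s, a)] -> list of (next_state, probability); 80/20 "slip" for illustration.
P = {
    ("A", "left"):  [("A", 1.0)],
    ("A", "right"): [("B", 0.8), ("A", 0.2)],
    ("B", "left"):  [("A", 0.8), ("B", 0.2)],
    ("B", "right"): [("GOAL", 0.8), ("B", 0.2)],
}
# R[(s, a)] -> reward for taking action a in state s.
R = {("A", "left"): 0.0, ("A", "right"): 0.0,
     ("B", "left"): 0.0, ("B", "right"): 1.0}

V = {s: 0.0 for s in STATES}
for _ in range(100):  # sweep until the values settle
    for s in ["A", "B"]:  # GOAL is terminal, its value stays 0
        V[s] = max(
            R[(s, a)] + GAMMA * sum(p * V[s2] for s2, p in P[(s, a)])
            for a in ACTIONS
        )

print(V)  # converges to roughly V["A"] = 1.07, V["B"] = 1.22
```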
202
00:12:49,220 --> 00:12:50,600
And so there we go.
203
00:12:50,600 --> 00:12:58,280
That's how a Markov decision process works, and the underlying equation behind it.
204
00:12:58,330 --> 00:13:04,630
Once again, it is something that more closely resembles real-world problems, real-world scenarios,
205
00:13:04,670 --> 00:13:08,690
or even game scenarios because not everything is straightforward.
206
00:13:08,690 --> 00:13:15,880
There is some randomness involved, and taking an action in a certain state will not
207
00:13:15,870 --> 00:13:18,810
always lead to the same outcome.
208
00:13:18,890 --> 00:13:23,150
And so this is what we're going to be dealing with going forward and that's going to make things way
209
00:13:23,150 --> 00:13:24,310
more interesting.
210
00:13:24,380 --> 00:13:29,290
So hopefully you're excited for that and excited to see what's going to come next.
211
00:13:29,690 --> 00:13:35,870
And in the meantime I found a really cool paper for you to have a look at this time.
212
00:13:35,870 --> 00:13:37,460
It's a very applied paper.
213
00:13:37,460 --> 00:13:40,150
So this one is actually really interesting to read through.
214
00:13:40,160 --> 00:13:46,810
It's called "A Survey of Applications of Markov Decision Processes", and it was written by White
215
00:13:46,820 --> 00:13:47,970
in 1993.
216
00:13:47,990 --> 00:13:56,000
There's the link, and it'll show you examples of where Markov decision processes actually are used to model
217
00:13:56,000 --> 00:13:59,560
real-life scenarios. I was very excited by this.
218
00:13:59,560 --> 00:14:03,880
I was impressed by some examples of population harvesting for instance.
219
00:14:03,880 --> 00:14:09,290
So let's say you have some fish, and you know what the population of fish is; you need to decide how many
220
00:14:09,290 --> 00:14:12,910
fish we can fish out this year.
221
00:14:13,250 --> 00:14:14,330
So that's your current state.
222
00:14:14,330 --> 00:14:17,220
That's the action that you're taking: how many can we fish out this year?
223
00:14:17,230 --> 00:14:20,420
So what are the possible outcomes of that?
224
00:14:20,540 --> 00:14:22,100
How many fish will we have next year?
225
00:14:22,160 --> 00:14:25,210
How many fish will we have the year after and the year after and so on.
226
00:14:25,250 --> 00:14:30,230
And it's not deterministic, because it's not like if you take it down to 90 percent of the population,
227
00:14:30,230 --> 00:14:34,640
the next year you will have, you know, back to 100 percent; it's not exactly deterministic.
228
00:14:34,640 --> 00:14:39,590
There are certain random factors involved which are out of our control and therefore we have to understand
229
00:14:39,760 --> 00:14:43,640
what's going to happen; we have to model what's going to happen, and that's where Markov decision
230
00:14:43,860 --> 00:14:46,060
processes come in. Agriculture:
231
00:14:46,070 --> 00:14:50,250
there's an example, something like harvesting crops: how much of the crops do we harvest, how much
232
00:14:50,280 --> 00:14:51,440
do we not harvest?
233
00:14:51,470 --> 00:14:58,190
Another one which I looked at: finance and investment. Like, an insurance company needs to decide how much
234
00:14:58,190 --> 00:15:04,990
of its funds it will invest in any, I think, day or year or some period of time, and there are certain
235
00:15:05,020 --> 00:15:06,490
factors out of its control.
236
00:15:06,490 --> 00:15:11,260
For instance, you know, the market movements; it doesn't know what can happen, so it needs to actually model
237
00:15:11,260 --> 00:15:12,070
that somehow.
238
00:15:12,110 --> 00:15:14,350
A Markov decision process is used for that.
239
00:15:14,350 --> 00:15:16,890
So here you can see lots and lots of examples.
240
00:15:16,900 --> 00:15:20,340
And this is the number of examples given I think for each one.
241
00:15:20,650 --> 00:15:28,030
And so, you know, even sports: examples for sports, and epidemics, and motor insurance claims, inspections,
242
00:15:28,090 --> 00:15:31,030
and maintenance and repairs it's also very interesting.
243
00:15:31,030 --> 00:15:31,900
Have a look at that.
244
00:15:31,930 --> 00:15:40,390
Just to give you an understanding of: hey, this is not just all made-up stuff, hypothetical, The Matrix
245
00:15:40,390 --> 00:15:41,130
type of thing.
246
00:15:41,140 --> 00:15:45,580
These are actually real-world scenarios, so it'll give you a better understanding. And this is what we
247
00:15:45,580 --> 00:15:50,410
talked about in the promotional video for this course, or the description of the course: that we're
248
00:15:50,410 --> 00:15:55,900
going to inspire you and your intuition to give you ideas for how to use AI in real life.
249
00:15:55,900 --> 00:15:57,820
This is your opportunity.
250
00:15:57,820 --> 00:15:59,790
Look at this paper to understand.
251
00:15:59,900 --> 00:16:02,890
OK, so we're going to be dealing with Markov decision processes going forward.
252
00:16:02,890 --> 00:16:07,210
That's really cool; what do they look like in real life? And this possibly could trigger some ideas for
253
00:16:07,210 --> 00:16:13,300
you, how you could apply this in the future to make the world a better place, and we'd be super happy about
254
00:16:13,300 --> 00:16:13,650
that.
255
00:16:13,690 --> 00:16:18,560
We'd be happy if you could use what you learn in this course to make the world a better place.
256
00:16:18,730 --> 00:16:20,050
How fantastic would that be.
257
00:16:20,380 --> 00:16:23,170
So on that note I hope you enjoyed today's tutorial.
258
00:16:23,170 --> 00:16:24,540
I look forward to seeing you next time.
259
00:16:24,610 --> 00:16:26,420
And until then enjoy AI.