1
00:00:00,330 --> 00:00:06,640
Hello and welcome to the final episode of this module, the episode where we get to see the final result.
2
00:00:06,720 --> 00:00:11,790
Not only are we going to see the training of our DCGAN, but we're also going to see what it's capable
3
00:00:11,790 --> 00:00:13,980
of in terms of art.
4
00:00:14,160 --> 00:00:22,020
We're going to see the generated images of our DCGAN in this folder, the results folder here, which so far is
5
00:00:22,020 --> 00:00:22,520
empty.
6
00:00:22,620 --> 00:00:29,520
As you can see, so far it is empty; it's going to be populated with all the fake images of our DCGAN.
7
00:00:29,520 --> 00:00:31,770
We're going to check if it looks like something.
8
00:00:31,890 --> 00:00:38,370
So if you're ready we'll start by printing some interesting information that we would like to see during
9
00:00:38,370 --> 00:00:39,200
the training.
10
00:00:39,450 --> 00:00:44,300
And that is, for example, the epoch: how many epochs out of 25.
11
00:00:44,340 --> 00:00:48,870
And also the step; it would be nice to see the step reached, because there are actually a lot of steps
12
00:00:48,870 --> 00:00:56,100
in the dataloader, many batches, so let's print all of this. And mostly, what we also need to do is to save
13
00:00:56,520 --> 00:00:59,300
the images in this results folder.
14
00:00:59,610 --> 00:01:03,570
And then after all this, it will be good to start the show.
15
00:01:03,720 --> 00:01:07,750
So let's do this, let's quickly do those prints.
16
00:01:08,040 --> 00:01:10,400
So I'm going to put this in quotes.
17
00:01:10,620 --> 00:01:19,680
Well, first the epochs in square brackets, so I'm going to add a %d out of %d, because you know, this
18
00:01:19,680 --> 00:01:21,980
first %d will correspond to the epoch.
19
00:01:22,230 --> 00:01:24,750
The second %d will correspond to 25.
20
00:01:24,750 --> 00:01:30,430
So you know, we'll see the epoch reached out of 25.
21
00:01:30,450 --> 00:01:34,270
Then we're going to do something similar for the steps.
22
00:01:34,290 --> 00:01:37,300
So I'm adding another pair of square brackets.
23
00:01:37,340 --> 00:01:41,500
%d out of %d.
24
00:01:41,510 --> 00:01:49,020
So this time the %d here will correspond to the step i, and this %d will correspond to the number
25
00:01:49,020 --> 00:01:53,290
of elements in the dataloader, that is len(dataloader).
26
00:01:53,490 --> 00:01:53,940
All right.
27
00:01:53,940 --> 00:02:02,190
And then after this, that is for each step of each epoch, we will print the loss of the discriminator
28
00:02:02,340 --> 00:02:05,070
which will be here, and since it's a float,
29
00:02:05,070 --> 00:02:11,970
we're going to add a %.4f to have a float with four decimals, then we're going to add
30
00:02:11,970 --> 00:02:17,080
the same for the loss of the generator.
31
00:02:17,340 --> 00:02:23,170
So I just need to replace the Loss_D here by Loss_G, there we go, almost over.
32
00:02:23,220 --> 00:02:30,330
Now we need to get out of the quotes, add a % and then, in some parentheses, we put the names of
33
00:02:30,330 --> 00:02:36,920
the variables that correspond to these values here, the %d's and the %.4f's.
34
00:02:36,920 --> 00:02:37,680
All right.
35
00:02:37,680 --> 00:02:43,060
So the first value corresponding to this first %d is epoch.
36
00:02:43,080 --> 00:02:50,900
So here we have to input first epoch, then the second %d here corresponds to 25.
37
00:02:51,090 --> 00:02:57,530
So we input 25, then the third %d here corresponds to the step,
38
00:02:57,610 --> 00:02:58,890
the step i, exactly.
39
00:02:58,890 --> 00:03:07,460
Then the fourth %d here corresponds to the number of mini-batches, that is len(dataloader).
40
00:03:07,530 --> 00:03:12,030
So here I'm adding len(dataloader).
41
00:03:12,330 --> 00:03:19,560
And then finally we have to input the variable names for our two losses, corresponding to this %.4f
42
00:03:19,740 --> 00:03:21,320
and this %.4f.
43
00:03:21,330 --> 00:03:27,200
So the first one is the loss of the discriminator, and the variable for that is...
44
00:03:27,300 --> 00:03:33,510
So I'm adding a comma, and the variable for that is of course errD.
45
00:03:33,930 --> 00:03:41,250
But then to get the value itself we need to take the data attribute and then in some brackets we add
46
00:03:41,370 --> 00:03:42,030
zero.
47
00:03:42,150 --> 00:03:47,880
That will get us exactly what we want that is the value of the error of the discriminator.
48
00:03:47,880 --> 00:03:49,870
Perfect almost done.
49
00:03:49,890 --> 00:03:52,650
We need to do the same for the generator.
50
00:03:52,830 --> 00:03:58,480
So comma, same thing, and we replace our errD by errG.
51
00:03:58,670 --> 00:03:59,240
Perfect.
52
00:03:59,280 --> 00:04:01,980
I think our print is ready.
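Putting the pieces above together, the print line looks roughly like this (a sketch assuming the names epoch, i, dataloader, errD and errG used earlier in this module; on recent PyTorch versions you would read the scalar with errD.item() rather than errD.data[0]):

print('[%d/%d][%d/%d] Loss_D: %.4f Loss_G: %.4f' % (epoch, 25, i, len(dataloader), errD.data[0], errG.data[0]))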
53
00:04:02,280 --> 00:04:02,910
Great.
54
00:04:02,920 --> 00:04:10,950
Now as you can see I would like to save the real images and the generated images of the mini batch every
55
00:04:10,950 --> 00:04:12,490
100 steps.
56
00:04:12,510 --> 00:04:19,760
So what we're going to do now is to make an if condition that will save the images every 100 steps.
57
00:04:19,860 --> 00:04:29,290
And the trick to do that is to make an if condition and then i % 100 == 0.
58
00:04:29,440 --> 00:04:33,000
That's the remainder of the division of i by 100.
59
00:04:33,000 --> 00:04:38,840
So if the remainder of the division of i by 100 is zero, that means that i is divisible by 100.
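As a quick sketch of the modulo trick just described (i being the batch index of the enumerate loop over the dataloader used earlier):

if i % 100 == 0:
    # i is a multiple of 100, so this branch runs at steps 0, 100, 200, ...
    ...  # save the real and fake images here (shown below)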
60
00:04:38,880 --> 00:04:44,430
And so this way we get this step every 100 steps. All right, so every 100 steps,
61
00:04:44,430 --> 00:04:45,540
what do we do?
62
00:04:45,900 --> 00:04:48,620
Well first we're going to save the real images.
63
00:04:48,900 --> 00:04:56,390
So to do this I'm going to take vutils, which is the shortcut name we gave to torchvision.utils,
64
00:04:56,940 --> 00:04:59,900
which allows us to save some images with torchvision.
65
00:05:00,190 --> 00:05:03,180
An advanced library of torch for computer vision.
66
00:05:03,460 --> 00:05:10,980
So vutils, and then we're going to use the save_image function to save the different images,
67
00:05:10,990 --> 00:05:15,140
that is the real images on which our model was trained.
68
00:05:15,310 --> 00:05:16,690
And then the fake images.
69
00:05:16,900 --> 00:05:22,810
So we're going to start with the real images and therefore what we have to put here is our batch of
70
00:05:22,840 --> 00:05:25,830
real images that we called real.
71
00:05:25,840 --> 00:05:31,870
That's the first argument and then for the second argument we need to specify in quotes the name of
72
00:05:31,870 --> 00:05:36,300
the path leading to the location where we want to save the real images.
73
00:05:36,580 --> 00:05:45,340
And this path is going to be %s, which is a string that will refer to the root, then /results,
74
00:05:45,400 --> 00:05:51,260
because the results folder here is a subfolder of our repo, the root folder.
75
00:05:51,400 --> 00:05:57,440
So %s, then slash, and then after that we need to give the name of the PNG file that will contain
76
00:05:57,530 --> 00:05:58,620
these images.
77
00:05:58,780 --> 00:06:05,070
And we're going to call them real underscore samples dot png, that is real_samples.png.
78
00:06:05,080 --> 00:06:13,180
All right, and then a percent sign, and in some double quotes we need to specify the string attached to
79
00:06:13,210 --> 00:06:20,440
this %s, and that is ./results, because the name of the folder where we want to save the
80
00:06:20,440 --> 00:06:27,520
images is our results folder, and this dot here corresponds to the root, that is this working directory
81
00:06:27,520 --> 00:06:28,330
folder.
82
00:06:28,340 --> 00:06:29,120
All right.
83
00:06:29,350 --> 00:06:33,500
And then we can add a third argument which is just to normalize.
84
00:06:33,670 --> 00:06:39,600
And this argument is normalize, and we have to set it equal to True.
85
00:06:40,090 --> 00:06:40,840
Perfect.
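Inside that if block, the call saving the real batch might look roughly like this (a sketch assuming torchvision.utils was imported as vutils and that the real mini-batch is stored in the variable real, as earlier in this module):

    # save the current mini-batch of real images into the results folder
    vutils.save_image(real, '%s/real_samples.png' % "./results", normalize = True)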
86
00:06:40,870 --> 00:06:48,430
So we saved our batch of real images contained in real here and now we're going to do the same for the
87
00:06:48,430 --> 00:06:49,550
fake images.
88
00:06:49,600 --> 00:06:59,080
So we're going to get these fake images by calling our netG network again, to which we feed the noise
89
00:06:59,170 --> 00:07:00,330
random vector.
90
00:07:00,550 --> 00:07:03,400
So we're just doing it again to get the fake images.
91
00:07:03,400 --> 00:07:10,720
And now we can save them, so I'm just going to copy this line and paste it right below.
92
00:07:10,790 --> 00:07:14,300
And now I just need to replace a few things first.
93
00:07:14,330 --> 00:07:22,170
This time we're not saving the real images but fake.data, that's what contains the fake images.
94
00:07:22,170 --> 00:07:23,620
Then here we keep the %s
95
00:07:23,630 --> 00:07:31,010
as for the name of the path leading to the folder where we save our images, but then we're not going
96
00:07:31,010 --> 00:07:32,000
to call them this time
97
00:07:32,030 --> 00:07:40,130
real_samples, but fake_samples_epoch_.
98
00:07:40,220 --> 00:07:45,200
And here I'm going to put the number of the epoch when the fake images are saved.
99
00:07:45,200 --> 00:07:51,540
And to do this I'm going to specify here an integer with three digits, so
100
00:07:51,570 --> 00:07:52,450
%03d.
101
00:07:52,610 --> 00:07:53,810
And then .png.
102
00:07:54,050 --> 00:07:54,410
All right.
103
00:07:54,410 --> 00:08:04,040
And then since I added a new variable here with the %03d, then besides the path of the results
104
00:08:04,100 --> 00:08:10,950
in double quotes, I need to add the variable corresponding to this %03d, and according to you, what is it?
105
00:08:11,120 --> 00:08:13,280
Well that's of course the epoch.
106
00:08:13,610 --> 00:08:19,660
So that's how we will get the fake images, and we'll know from which epoch they are coming.
107
00:08:19,660 --> 00:08:20,160
All right.
108
00:08:20,260 --> 00:08:23,850
We will know in which epoch they were produced by the generator.
109
00:08:23,900 --> 00:08:24,380
Perfect.
110
00:08:24,410 --> 00:08:26,900
And then I'm keeping normalize equal to True.
111
00:08:27,190 --> 00:08:27,830
All right.
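And, still inside the same if block, the fake images are regenerated and saved with the epoch number injected by %03d into the file name, roughly like this (a sketch assuming the generator netG and the input noise vector defined earlier in the module):

    # regenerate a batch of fake images and save it, tagged with the epoch number
    fake = netG(noise)
    vutils.save_image(fake.data, '%s/fake_samples_epoch_%03d.png' % ("./results", epoch), normalize = True)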
112
00:08:27,920 --> 00:08:30,480
And now we're ready to watch the final result.
113
00:08:30,480 --> 00:08:35,070
And so if you're ready now it's time for the show.
114
00:08:35,130 --> 00:08:35,930
And there we go.
115
00:08:35,930 --> 00:08:37,300
I just executed it.
116
00:08:37,340 --> 00:08:42,130
I selected all the code and pressed command or control plus enter.
117
00:08:42,170 --> 00:08:43,380
And there we go.
118
00:08:43,460 --> 00:08:45,640
We have started the training.
119
00:08:45,880 --> 00:08:52,200
So as you can see this is the first epoch and the first steps: 0, 1, 2, 3, 4.
120
00:08:52,430 --> 00:08:57,950
And for each of the steps in each epoch we get indeed the loss of the discriminator and the loss of the
121
00:08:57,950 --> 00:08:59,140
generator.
122
00:08:59,150 --> 00:09:04,950
So now it's going to take a while it's actually going to take several hours on my computer.
123
00:09:05,060 --> 00:09:09,200
I'm going to let my computer do all this work for me.
124
00:09:09,200 --> 00:09:10,880
We're not going to watch the whole training.
125
00:09:11,150 --> 00:09:17,650
And at the end of the training I'll see you back and we will see the fake images generated by our DCGAN
126
00:09:17,960 --> 00:09:19,830
and we'll see if it looks like something.
127
00:09:20,010 --> 00:09:24,530
So let's let this run and I'll see you at the end of the training.
128
00:09:24,530 --> 00:09:27,130
All right the training is over.
129
00:09:27,170 --> 00:09:29,240
It actually took more time than I thought.
130
00:09:29,240 --> 00:09:33,230
When I woke up after a long night of sleep well it was still not over.
131
00:09:33,230 --> 00:09:37,000
So I guess it took more than eight hours on my computer.
132
00:09:37,010 --> 00:09:39,910
Indeed, I don't sleep only three hours per day anymore.
133
00:09:39,920 --> 00:09:46,520
I noticed it was bad for life expectancy and I still want to be able to make some courses for your children
134
00:09:46,610 --> 00:09:47,730
and grandchildren.
135
00:09:47,870 --> 00:09:51,860
But as long as we have the final results that's all good.
136
00:09:51,860 --> 00:09:57,200
So I'm going to show them to you now and we'll see if we can call our deep convolutional GAN
137
00:09:57,260 --> 00:09:58,890
A great artist.
138
00:09:58,910 --> 00:09:59,210
All right.
139
00:09:59,210 --> 00:10:07,460
So before I show you the fake samples I would like to show you the real samples, just to see on which
140
00:10:07,580 --> 00:10:13,550
real images our computer vision model learned to generate some fake images.
141
00:10:13,550 --> 00:10:14,980
So these are the images.
142
00:10:15,140 --> 00:10:15,810
All good.
143
00:10:15,830 --> 00:10:20,450
Now let's see what it was capable of creating.
144
00:10:20,450 --> 00:10:22,700
All right so let's start with the first samples.
145
00:10:22,880 --> 00:10:24,820
Nothing special here.
146
00:10:24,860 --> 00:10:27,150
We cannot call it art at all.
147
00:10:27,320 --> 00:10:28,840
But then what about the second one.
148
00:10:28,970 --> 00:10:35,210
The second one is already better, it looks more like something, but it still looks like
149
00:10:35,210 --> 00:10:42,020
some kind of smoke except for this one maybe that looks like a mountain but I think we'll get better
150
00:10:42,020 --> 00:10:42,720
than that.
151
00:10:43,550 --> 00:10:45,020
Then fake samples
152
00:10:45,150 --> 00:10:46,080
number two.
153
00:10:46,080 --> 00:10:54,890
So the numbers here correspond to the epochs, going from epoch 0, 1, 2, 3 up to epoch 24,
154
00:10:55,050 --> 00:10:56,230
25 epochs in total.
155
00:10:56,370 --> 00:10:58,910
And this one this one is actually pretty good.
156
00:10:59,190 --> 00:11:01,800
We start to see something here.
157
00:11:01,830 --> 00:11:07,230
We still need a bit of imagination to figure out what's inside the image here.
158
00:11:07,230 --> 00:11:12,580
I seem for example to see some kind of a duck on the sea or the ocean.
159
00:11:12,600 --> 00:11:14,160
I don't know if you see the same thing.
160
00:11:14,280 --> 00:11:20,270
Well maybe my imagination is playing some tricks but I can start to see something here.
161
00:11:20,280 --> 00:11:23,130
Let's look at the third fake samples.
162
00:11:23,300 --> 00:11:28,570
So the fake images of the third epoch. All right, definitely better here.
163
00:11:28,590 --> 00:11:29,900
Here we can see a human.
164
00:11:29,940 --> 00:11:31,410
I guess it's a human here.
165
00:11:31,440 --> 00:11:33,000
I think I see a squirrel.
166
00:11:33,030 --> 00:11:34,280
I don't know if you agree.
167
00:11:34,530 --> 00:11:34,950
OK.
168
00:11:34,950 --> 00:11:36,020
Much better.
169
00:11:36,240 --> 00:11:40,230
Let's look at the other ones much better here as well.
170
00:11:40,230 --> 00:11:40,840
All right.
171
00:11:40,860 --> 00:11:43,420
Let's look at number six.
172
00:11:43,530 --> 00:11:44,880
Much better here.
173
00:11:45,030 --> 00:11:46,760
We can see it better and better each time.
174
00:11:46,770 --> 00:11:52,620
I don't know if you agree but to me the fake images really start to look like some real images even
175
00:11:52,620 --> 00:11:54,930
if it's not perfectly sharp.
176
00:11:55,200 --> 00:11:56,590
It's still a little bit blurry.
177
00:11:56,610 --> 00:11:59,880
But still we can see some objects here.
178
00:11:59,880 --> 00:12:00,170
All right.
179
00:12:00,180 --> 00:12:03,740
Let's look at number 10 for example to see if it's already much better.
180
00:12:04,080 --> 00:12:06,120
Yes it looks pretty good.
181
00:12:06,330 --> 00:12:09,050
Let's have a look at number 15.
182
00:12:10,840 --> 00:12:16,480
Still very good, so perhaps we didn't need that many epochs; perhaps you could try to do the training
183
00:12:16,480 --> 00:12:18,970
with five or up to 10 epochs.
184
00:12:19,170 --> 00:12:23,940
But definitely after a certain number of epochs we see some great results.
185
00:12:24,130 --> 00:12:27,720
And here it's even more visible than before even more clear.
186
00:12:27,910 --> 00:12:29,260
And these are pretty cool images.
187
00:12:29,260 --> 00:12:31,010
We can see some nice colors now.
188
00:12:31,010 --> 00:12:32,270
So I would
189
00:12:32,270 --> 00:12:38,950
dare calling this model an artist, maybe not Picasso but definitely a better artist than me.
190
00:12:38,950 --> 00:12:40,010
So that's pretty cool.
191
00:12:40,060 --> 00:12:46,960
And that's actually the end of this module. Congratulations for having implemented the deep convolutional
192
00:12:46,960 --> 00:12:47,600
GANs.
193
00:12:47,680 --> 00:12:49,660
That was definitely not easy stuff.
194
00:12:49,680 --> 00:12:54,150
You coded almost 150 lines of code so well done.
195
00:12:54,280 --> 00:12:55,170
Awesome job.
196
00:12:55,210 --> 00:13:01,550
You not only implemented the deep convolutional GANs, but also you smashed the three modules of this
197
00:13:01,550 --> 00:13:02,180
course.
198
00:13:02,240 --> 00:13:06,300
Well keep in mind you implemented some cutting edge models.
199
00:13:06,430 --> 00:13:11,700
I remind you that SSD is the state of the art model in object detection.
200
00:13:11,710 --> 00:13:14,650
It beats the Faster R-CNN and YOLO.
201
00:13:14,650 --> 00:13:21,940
So at the time I'm speaking and I hope this course will last but at the time I'm speaking you did implement
202
00:13:22,120 --> 00:13:25,780
a state of the art model in computer vision and deep learning.
203
00:13:25,780 --> 00:13:28,850
So really really really you can be proud of yourself.
204
00:13:29,020 --> 00:13:35,560
Keep up that great work and great passion for the coming courses and some more modules.
205
00:13:35,590 --> 00:13:38,170
This adventure is definitely not over.
206
00:13:38,170 --> 00:13:39,720
We will learn so much more.
207
00:13:39,750 --> 00:13:45,540
We are dedicated instructors really happy to share our knowledge so there will be more.
208
00:13:45,670 --> 00:13:47,750
And until then enjoy machine learning.
209
00:13:47,770 --> 00:13:49,320
Enjoy the journey, enjoy AI,
210
00:13:49,390 --> 00:13:51,640
and mostly enjoy computer vision.