1
00:00:00,420 --> 00:00:05,720
Hello and welcome to the second big part of this implementation: training the brains.
2
00:00:05,880 --> 00:00:11,520
So we have just smashed part one, creating the brains, and now we're going to tackle part two, training
3
00:00:11,610 --> 00:00:12,480
the brains.
4
00:00:12,510 --> 00:00:16,580
So this training implementation will be broken down into two big steps.
5
00:00:16,590 --> 00:00:22,230
The first step will be to update the weights of the neural network of the discriminator, and then the second
6
00:00:22,380 --> 00:00:26,370
step will be to update the weights of the neural network of the generator.
7
00:00:26,460 --> 00:00:30,460
And then, if we want to break down this process into some more steps,
8
00:00:30,570 --> 00:00:33,520
well, that's how it goes. In the first big step,
9
00:00:33,540 --> 00:00:39,840
updating the weights of the discriminator, we will want to train the discriminator to understand
10
00:00:40,110 --> 00:00:41,590
what's real and what's fake.
11
00:00:41,790 --> 00:00:48,030
So we will first train it by giving it a real image, and we'll set the target to one, because one means
12
00:00:48,210 --> 00:00:54,570
that the image is accepted. Then we'll do another training step, giving it this time a fake image and
13
00:00:54,570 --> 00:00:59,170
setting the target to zero, because zero means that the image is not accepted.
14
00:00:59,400 --> 00:01:04,120
So this first big step can be broken down into two steps.
15
00:01:04,170 --> 00:01:09,960
First, training the discriminator with a real image, and then training the discriminator with a fake image,
16
00:01:10,380 --> 00:01:14,590
and this fake image will, of course, be a fake image created by the generator.
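For reference, here is a minimal sketch of what this first big step looks like in PyTorch, assuming the netD and netG objects from part one, plus the criterion and optimizerD objects we will create below; real_images and noise are illustrative names for a real mini-batch and a batch of random input vectors:

```python
import torch

# Sketch of step 1: updating the discriminator's weights.
netD.zero_grad()

# Train with a real image batch: target = 1 (image accepted).
output = netD(real_images)                        # prediction in (0, 1)
errD_real = criterion(output, torch.ones_like(output))

# Train with a fake image batch from the generator: target = 0 (image rejected).
fake_images = netG(noise)
output = netD(fake_images.detach())               # detach: don't backpropagate into G here
errD_fake = criterion(output, torch.zeros_like(output))

# Backpropagate the total error and update only the discriminator's weights.
(errD_real + errD_fake).backward()
optimizerD.step()
```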
17
00:01:14,970 --> 00:01:22,270
And then, in the second big step of the training, which is about the weights of the neural network of the generator,
18
00:01:22,440 --> 00:01:28,710
well, we'll take the fake image again, which we'll get in one step of the loop. Then we'll feed this fake
19
00:01:28,710 --> 00:01:35,130
image into the discriminator to get the output, which will be a value between 0 and 1: the discriminating
20
00:01:35,130 --> 00:01:35,750
value.
21
00:01:35,970 --> 00:01:38,770
But then we'll set a new target to 1.
22
00:01:38,820 --> 00:01:44,520
The target will always be equal to 1, and we'll compute the loss between the output of the discriminator,
23
00:01:44,640 --> 00:01:46,250
the value between 0 and 1,
24
00:01:46,290 --> 00:01:49,890
and this target always equal to 1. Then, be careful,
25
00:01:49,890 --> 00:01:55,920
that's the key thing to understand: we will backpropagate this error not back inside the discriminator,
26
00:01:56,100 --> 00:01:58,320
but back inside the generator.
27
00:01:58,320 --> 00:02:03,450
That's the key thing to understand: the error is the error between the prediction of the discriminator
28
00:02:03,720 --> 00:02:09,750
and the target equal to one, but we will backpropagate this error back inside the neural network of the
29
00:02:09,750 --> 00:02:10,580
generator.
30
00:02:10,890 --> 00:02:16,500
And then, once we do that, we'll apply stochastic gradient descent to update the weights of the neural network
31
00:02:16,740 --> 00:02:17,820
of the generator.
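And here is a minimal sketch of this second big step, under the same assumed names and continuing from the previous sketch:

```python
import torch

# Sketch of step 2: updating the generator's weights.
netG.zero_grad()

# Feed the fake images into the discriminator, but set the target to 1.
output = netD(fake_images)                  # no detach: gradients must flow back into G
errG = criterion(output, torch.ones_like(output))

# Backpropagate the error into the generator (not into the discriminator's
# update) and take one optimization step on G's weights only.
errG.backward()
optimizerG.step()
```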
32
00:02:18,120 --> 00:02:18,700
All right.
33
00:02:18,750 --> 00:02:25,040
So we have given a teaser of what's going to happen in the second phase of training the brains.
34
00:02:25,140 --> 00:02:29,110
So let's tackle the second phase and let's start right now.
35
00:02:29,460 --> 00:02:29,810
All right.
36
00:02:29,820 --> 00:02:36,030
So we have to start by getting a criterion that will measure the error between a prediction, which will be a
37
00:02:36,030 --> 00:02:41,640
value between 0 and 1, because that will be the prediction of the discriminator, the discriminating number
38
00:02:41,640 --> 00:02:49,910
between 0 and 1, and a ground truth that will only be 0 or 1: 0 means fake and 1 means real.
39
00:02:49,950 --> 00:02:50,860
So let's do this.
40
00:02:50,940 --> 00:02:52,340
That's our first step.
41
00:02:52,350 --> 00:03:02,500
We create a criterion object, which will be an object of the BCELoss class.
42
00:03:02,520 --> 00:03:09,390
So if we go to the PyTorch documentation, we can see that the BCE loss is actually given by this formula.
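For reference, the formula from the PyTorch documentation, for a prediction $x_n$ and a target $y_n$, is:

$$\ell_n = -\left[\, y_n \log x_n + (1 - y_n)\log(1 - x_n) \,\right]$$

and by default the loss is averaged over the mini-batch.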
43
00:03:09,630 --> 00:03:17,010
And so this is a special kind of loss, perfect to train these adversarial networks, and BCE means binary
44
00:03:17,010 --> 00:03:18,120
cross-entropy.
45
00:03:18,360 --> 00:03:21,350
So that's the loss we'll be using for our DCGAN.
46
00:03:21,360 --> 00:03:24,630
But we see that it is also used for autoencoders.
47
00:03:24,810 --> 00:03:31,140
But the most important thing is that the targets should be numbers between 0 and 1, and that will be the
48
00:03:31,140 --> 00:03:32,580
case for DCGANs.
49
00:03:32,610 --> 00:03:38,550
Because the targets will either be zero, when we train the discriminator to recognize a fake image,
50
00:03:38,970 --> 00:03:39,900
or one,
51
00:03:39,900 --> 00:03:43,420
when we train the discriminator to recognize a real image.
52
00:03:43,480 --> 00:03:46,940
So basically nn.BCELoss, and then we just need some parentheses.
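In code, this line would simply look like this (a minimal sketch, assuming the import torch.nn as nn from part one):

```python
import torch.nn as nn

# BCE (binary cross-entropy) loss: compares the discriminator's prediction,
# a value between 0 and 1, to a ground truth that is 0 (fake) or 1 (real).
criterion = nn.BCELoss()
```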
53
00:03:46,950 --> 00:03:48,510
All right so perfect.
54
00:03:48,510 --> 00:03:55,060
We have our criterion, and now we need two optimizers: one optimizer for the generator and one optimizer
55
00:03:55,060 --> 00:03:56,650
for the discriminator.
56
00:03:56,670 --> 00:04:01,590
So we'll start with the optimizer of the discriminator.
57
00:04:01,860 --> 00:04:04,190
And this time we're not going to get it from
58
00:04:04,260 --> 00:04:10,420
the nn module. But if we have a look again at the libraries that we imported,
59
00:04:10,500 --> 00:04:13,440
well, as you can see here, we imported torch.optim.
60
00:04:13,560 --> 00:04:16,850
And we gave it the shortcut name optim to simplify.
61
00:04:16,980 --> 00:04:21,700
And that's from where we will get our optimizer objects.
62
00:04:21,720 --> 00:04:31,380
So here we go: optimizerD equals optim dot, and we'll get the Adam optimizer, which is a highly advanced
63
00:04:31,440 --> 00:04:34,090
optimizer for stochastic gradient descent.
64
00:04:34,250 --> 00:04:34,970
Okay.
65
00:04:35,220 --> 00:04:40,770
And now, this time, we have to put in some arguments. The first argument we need to input is the parameters
66
00:04:40,860 --> 00:04:47,310
of our neural network, the one of the discriminator, and to get them we're going to take our neural network
67
00:04:47,400 --> 00:04:50,990
object of the discriminator, which we called netD.
68
00:04:51,090 --> 00:04:52,900
So netD.
69
00:04:53,190 --> 00:05:01,320
And then a dot, and then we get the parameters this way, with some parentheses. Then we specify a learning
70
00:05:01,320 --> 00:05:10,830
rate, and the argument for that is lr, and we'll choose the value 0.0002. And then, as the third argument,
71
00:05:10,860 --> 00:05:16,710
we need to specify the betas parameter, and we're going to input them this way: the name of the argument
72
00:05:16,770 --> 00:05:18,150
is betas.
73
00:05:18,330 --> 00:05:22,310
And it's actually a tuple of two values that we have to choose.
74
00:05:22,500 --> 00:05:29,310
And if we go to the PyTorch documentation again, well, we see that for this Adam optimizer, these
75
00:05:29,320 --> 00:05:35,340
betas are actually coefficients used for computing running averages of the gradient and its square.
76
00:05:35,370 --> 00:05:42,930
So we're just going to pick two values now, again coming from experimentation, and we'll choose 0.5 and
77
00:05:43,440 --> 00:05:45,300
0.999.
78
00:05:45,510 --> 00:05:46,180
Great.
79
00:05:46,230 --> 00:05:47,160
Perfect.
80
00:05:47,160 --> 00:05:50,590
We have the optimizer of the discriminator.
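Putting this line together, a minimal sketch (assuming the netD object from part one and the import torch.optim as optim shortcut):

```python
import torch.optim as optim

# Adam optimizer for the discriminator's weights;
# lr and betas come from experimentation, as discussed above.
optimizerD = optim.Adam(netD.parameters(), lr=0.0002, betas=(0.5, 0.999))
```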
81
00:05:50,700 --> 00:05:54,430
And now let's get the optimizer of the generator.
82
00:05:54,630 --> 00:05:59,670
So, since we're going to have the same arguments, the same learning rate and the same betas,
83
00:05:59,820 --> 00:06:03,270
we simply need to change the name of the optimizer.
84
00:06:03,420 --> 00:06:10,690
And of course we're going to call the optimizer of the generator optimizerG, which will also be an Adam
85
00:06:10,770 --> 00:06:16,380
optimizer with the same arguments, except of course for the parameters of the neural network.
86
00:06:16,380 --> 00:06:21,630
So here we just need to replace netD by netG, and perfect.
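Which gives us, under the same assumptions:

```python
# Adam optimizer for the generator: same hyperparameters, but it updates
# the parameters of netG instead of netD.
optimizerG = optim.Adam(netG.parameters(), lr=0.0002, betas=(0.5, 0.999))
```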
87
00:06:21,630 --> 00:06:29,040
Now we have our criterion, based on the BCE loss, the optimizer of the discriminator and the optimizer
88
00:06:29,040 --> 00:06:30,230
of the generator.
89
00:06:30,510 --> 00:06:36,840
And that's good news, because basically we're ready to start the big loop of the training, that is, the
90
00:06:36,840 --> 00:06:39,120
loop over the different epochs.
91
00:06:39,270 --> 00:06:49,410
So we're going to have 25 epochs, and therefore this loop is simply going to be: for epoch in range(25),
92
00:06:50,080 --> 00:06:50,720
and enter.
93
00:06:50,900 --> 00:06:55,340
And this way the epochs will go from zero to 24.
94
00:06:55,470 --> 00:06:57,070
We'll have 25 epochs.
95
00:06:57,150 --> 00:07:02,580
You're welcome to try more epochs if you want to try to get some even better images.
96
00:07:02,580 --> 00:07:07,540
But I can tell you that already with 25 epochs we'll get some great images.
97
00:07:07,860 --> 00:07:14,970
Also, what's important to understand is that in each of these 25 epochs we'll go through all the images
98
00:07:15,060 --> 00:07:19,620
of the dataset; we'll go through all the images of the dataset twenty-five times.
99
00:07:19,950 --> 00:07:26,130
All right, so now we're directly going to start with a new for loop which, if you've listened to the last
100
00:07:26,220 --> 00:07:32,150
sentence I've just said, is going to be the loop where we'll go through all the images of the dataset.
101
00:07:32,220 --> 00:07:39,780
Therefore, the second for loop is going to be: for i, which will just be the index of the loop, and then
102
00:07:40,320 --> 00:07:44,020
data, which will contain a mini-batch of images.
103
00:07:44,130 --> 00:07:48,450
We will go through all the images of the dataset, mini-batch by mini-batch.
104
00:07:48,540 --> 00:07:49,720
That is what it means.
105
00:07:49,890 --> 00:07:52,890
And data will be one of these mini-batches.
106
00:07:53,100 --> 00:07:59,580
So we're breaking the whole dataset of all the images into mini-batches, and data will be
107
00:07:59,760 --> 00:08:03,020
each of these different mini-batches.
108
00:08:03,090 --> 00:08:08,970
And so now, to get these different mini-batches, we're going to use our dataloader, and we're going to use
109
00:08:08,970 --> 00:08:15,760
the enumerate function to get these separate mini-batches of the dataset.
110
00:08:16,170 --> 00:08:23,430
And so inside enumerate we have to put our dataloader, which will get us the mini-batches, but then
111
00:08:23,430 --> 00:08:29,100
we also have to specify where the index of the loop is going to start from, that is, where i is going
112
00:08:29,100 --> 00:08:29,980
to start from.
113
00:08:30,150 --> 00:08:31,650
And i will start from zero.
114
00:08:31,740 --> 00:08:35,100
So I'm just adding 0 here and there we go.
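So the skeleton of the training loop looks like this, assuming the dataloader object created earlier in the course:

```python
# Outer loop: 25 passes (epochs) over the whole dataset.
for epoch in range(25):
    # Inner loop: one mini-batch of images per iteration; the index i starts at 0.
    for i, data in enumerate(dataloader, 0):
        # Step 1: update the weights of the discriminator (next tutorial).
        # Step 2: update the weights of the generator (the tutorial after that).
        pass
```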
115
00:08:35,130 --> 00:08:38,880
We are ready to start the two big steps of the training.
116
00:08:38,880 --> 00:08:41,920
So we'll do these two big steps in two separate tutorials.
117
00:08:41,940 --> 00:08:46,560
The next one will be about updating the weights of the neural network of the discriminator.
118
00:08:46,560 --> 00:08:51,300
Then the tutorial after that will be about the second big step, about the update of the weights of
119
00:08:51,300 --> 00:08:58,110
the neural network of the generator. And then we'll have the final exciting tutorial, to print the losses,
120
00:08:58,260 --> 00:09:02,970
save the real images, save the fake images, and watch the final results.
121
00:09:02,970 --> 00:09:04,390
So be ready for all this.
122
00:09:04,410 --> 00:09:06,310
And until then, enjoy Computer Vision.