1
00:00:00,300 --> 00:00:02,690
Hello and welcome to this new tutorial.
2
00:00:02,730 --> 00:00:09,280
So here we are, ready for the big first step of the training, that is, updating the weights of the neural
3
00:00:09,300 --> 00:00:11,310
network of the discriminator.
4
00:00:11,310 --> 00:00:15,300
We're going to tackle this big first step in three sub-steps.
5
00:00:15,330 --> 00:00:20,220
The first sub-step is to train the discriminator with a real image of the dataset.
6
00:00:20,220 --> 00:00:26,010
The second sub-step is to train the discriminator with, this time, a fake image generated by the generator.
7
00:00:26,220 --> 00:00:32,880
And finally we'll back-propagate the total error, which will be the sum of the errors of these two previous
8
00:00:32,880 --> 00:00:33,630
trainings.
9
00:00:33,840 --> 00:00:35,660
So now I have a question for you.
10
00:00:35,850 --> 00:00:43,170
Why do we have to train the discriminator with both a real image of the dataset and a fake
11
00:00:43,170 --> 00:00:45,480
image generated by the generator?
12
00:00:45,750 --> 00:00:53,850
And the answer is simply that we want to train the discriminator to see and understand what's
13
00:00:53,850 --> 00:01:00,180
real and what's fake, and therefore, to make the discriminator understand that well, we need to give it
14
00:01:00,450 --> 00:01:06,720
two different ground truths: we need to give it the ground truth of what's real and the ground truth
15
00:01:06,840 --> 00:01:08,200
of what's fake.
16
00:01:08,400 --> 00:01:13,230
So the ground truth of what's real is of course the real image and the ground truth of what's fake is
17
00:01:13,230 --> 00:01:16,520
of course the fake image generated by the generator.
18
00:01:16,530 --> 00:01:24,330
So if you understand that well, it will be very easy; these two sub-steps will appear very natural to you.
19
00:01:24,480 --> 00:01:28,540
And so now, if you're ready, let's do these three sub-steps.
20
00:01:28,740 --> 00:01:33,990
But before we start with this first sub-step, which is training the discriminator with a real image of the data
21
00:01:33,990 --> 00:01:36,460
set to train it to understand what's real.
22
00:01:36,630 --> 00:01:43,650
well, we need to initialize the gradients of the discriminator with respect to the weights to zero. And
23
00:01:43,650 --> 00:01:50,850
to do this it's very simple: we take the neural network of the discriminator, which we called netD, then
24
00:01:50,850 --> 00:01:59,380
we add a dot and then we use the zero_grad function, and that will automatically initialize
25
00:01:59,380 --> 00:02:02,200
to zero the gradients with respect to the weights.
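As a rough sketch of this step (using a tiny stand-in network, since the tutorial's actual netD architecture is defined elsewhere), the gradient reset looks like this:

```python
import torch
import torch.nn as nn

# Hypothetical tiny stand-in for the tutorial's discriminator netD
# (the real one is a convolutional network).
netD = nn.Sequential(nn.Linear(4, 1), nn.Sigmoid())

# Run one dummy forward/backward pass so the parameters carry gradients.
netD(torch.randn(8, 4)).sum().backward()

# Reset the gradients of the discriminator's weights before starting
# the new training step, exactly what netD.zero_grad() does.
netD.zero_grad()
```

After `zero_grad()`, every parameter's `.grad` is either `None` or all zeros, so nothing from the previous iteration leaks into the next backward pass.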
26
00:02:02,210 --> 00:02:09,300
So that's done, and now we can move on to the first sub-step, that is, training the discriminator with a
27
00:02:09,300 --> 00:02:11,340
real image of the dataset.
28
00:02:11,400 --> 00:02:15,300
All right so the first thing we need to do is get the real images.
29
00:02:15,300 --> 00:02:17,250
Why do I say real images?
30
00:02:17,250 --> 00:02:23,220
That's because we're going to get, in fact, a mini-batch of real images, and that's because a neural network
31
00:02:23,370 --> 00:02:30,600
actually accepts as inputs some mini-batches of single inputs, like single images, and therefore we have
32
00:02:30,600 --> 00:02:36,690
to work with mini-batches. But that's perfect, because we already iterated through the mini-batches that
33
00:02:36,690 --> 00:02:38,630
we got with the dataloader.
34
00:02:38,700 --> 00:02:40,530
So we already have these mini batches.
35
00:02:40,560 --> 00:02:47,720
And therefore, as you will see right now, it will be very easy to get this mini-batch of real images.
36
00:02:48,180 --> 00:02:54,690
So this input mini-batch, I'm going to call it real, and it's actually going to be the first element of
37
00:02:54,990 --> 00:02:56,740
our mini-batch, data.
38
00:02:56,820 --> 00:03:02,080
Right now we're dealing with a specific mini-batch, which is data, and data is composed of two elements.
39
00:03:02,190 --> 00:03:07,260
The first element is the real images themselves and the second element is the labels.
40
00:03:07,260 --> 00:03:09,430
But we don't really care about the labels right now.
41
00:03:09,480 --> 00:03:15,710
So I'm just getting the first element, and the technique to get the first element is to add
42
00:03:15,700 --> 00:03:22,920
a comma here and then an underscore, to specify that we actually don't care about the second element, and then
43
00:03:23,400 --> 00:03:25,250
equals data.
44
00:03:25,260 --> 00:03:26,610
All right so perfect.
45
00:03:26,610 --> 00:03:34,260
We have our input, but it is not yet an accepted input of a neural network in PyTorch. PyTorch neural
46
00:03:34,260 --> 00:03:42,060
networks only accept their inputs as torch Variables, and I remind you that a torch Variable is a highly advanced
47
00:03:42,060 --> 00:03:46,350
variable that contains both a tensor and a gradient.
48
00:03:46,350 --> 00:03:50,720
Right now we have the tensor, because real is actually a tensor of images.
49
00:03:50,850 --> 00:03:57,480
But we need to wrap it in a torch Variable to associate it with a gradient, and to do this we're going
50
00:03:57,480 --> 00:04:02,850
to introduce a new variable that we're going to call input, because this will be the input of the neural
51
00:04:02,850 --> 00:04:10,020
network, and this input is going to be an object of the Variable class, which will take as argument
52
00:04:10,250 --> 00:04:18,330
our real input images in their mini-batch. And now our input images are not only in a mini-batch but also
53
00:04:18,390 --> 00:04:19,750
in a torch Variable.
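These two moves can be sketched as follows (the shapes are illustrative assumptions; the real mini-batch comes from the dataloader):

```python
import torch
from torch.autograd import Variable

# Hypothetical mini-batch as yielded by the dataloader: a pair
# (images, labels); here 64 RGB images of size 64x64.
data = (torch.randn(64, 3, 64, 64), torch.zeros(64, dtype=torch.long))

# Keep the images, discard the labels with the underscore.
real, _ = data

# Wrap the image tensor in a torch Variable, as the tutorial does.
# (In modern PyTorch this is a no-op: tensors and Variables are merged.)
input = Variable(real)
```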
54
00:04:19,770 --> 00:04:25,140
So now we're allowed to feed the neural network with this input.
55
00:04:25,200 --> 00:04:27,630
But before that we will get the target.
56
00:04:27,870 --> 00:04:33,130
And now the line of code I'm going to type is very, very important at this stage.
57
00:04:33,340 --> 00:04:36,520
You're going to try to guess what the target is going to be.
58
00:04:36,540 --> 00:04:42,070
We're going to have two targets for the two different trainings, this training and the following training.
59
00:04:42,180 --> 00:04:45,730
And so try to guess what exactly these targets are going to be.
60
00:04:45,840 --> 00:04:49,260
So I'm going to introduce here a new target.
61
00:04:49,290 --> 00:04:52,480
So now, according to you, what is this target going to be?
62
00:04:52,830 --> 00:04:59,830
Well, since we are training the discriminator with a real image of the dataset to train it to understand
63
00:05:00,070 --> 00:05:03,720
and see what's real, what is a real image?
64
00:05:03,890 --> 00:05:10,790
Well, for each of the real images of the mini-batch, we need to set the target to one. Why one?
65
00:05:11,000 --> 00:05:15,230
That's because, remember, zero corresponds to rejection.
66
00:05:15,380 --> 00:05:20,720
The image is rejected by the discriminator and one corresponds to acceptance.
67
00:05:20,750 --> 00:05:26,570
The image is accepted by the discriminator and therefore we need to set the target to one because we
68
00:05:26,570 --> 00:05:33,080
need to specify to the discriminator that the ground truth is actually one; the ground truth is: the image
69
00:05:33,170 --> 00:05:33,830
is real.
70
00:05:33,890 --> 00:05:35,630
So the image gets a 1.
71
00:05:35,930 --> 00:05:41,390
And that's why right now we're going to create a torch tensor that is going to have the size of the
72
00:05:41,390 --> 00:05:42,090
mini-batch.
73
00:05:42,260 --> 00:05:48,790
And that is going to be composed of only ones: we will have a one for each of the input images of the
74
00:05:48,790 --> 00:05:49,710
mini-batch.
75
00:05:49,730 --> 00:05:50,270
So there we go.
76
00:05:50,270 --> 00:05:51,200
Let's do this.
77
00:05:51,200 --> 00:05:58,510
We need to take the torch library, and then very simply we have a great function that is called ones.
78
00:05:58,730 --> 00:06:01,880
And that will create this tensor of only ones.
79
00:06:01,880 --> 00:06:08,030
And as you might guess, what we need to input in this ones function is actually the size of the tensor, that
80
00:06:08,030 --> 00:06:15,930
is, how many ones we want. And we want as many ones as there are real images in the real input mini
81
00:06:15,980 --> 00:06:16,600
batch.
82
00:06:16,850 --> 00:06:20,140
So how can we get this size of the input batch?
83
00:06:20,360 --> 00:06:27,590
Well, we just need to take our input mini-batch and then add a dot and then size with some parentheses.
84
00:06:28,040 --> 00:06:33,740
input.size() contains the size of the mini-batch, that is, it contains the number of real images of the
85
00:06:33,740 --> 00:06:38,560
input mini-batch, and therefore also the number of ones the target should contain.
86
00:06:38,810 --> 00:06:41,160
But as you notice I said contains.
87
00:06:41,240 --> 00:06:49,230
And to get the actual number, we need to take the first index of this input.size() element, which is zero.
88
00:06:49,400 --> 00:06:54,810
So input.size() of index 0 will return the size of the mini-batch.
89
00:06:55,190 --> 00:06:55,840
Perfect.
90
00:06:55,890 --> 00:06:59,270
And now, question: are we allowed to move on to the next step?
91
00:06:59,270 --> 00:07:00,280
No we're not.
92
00:07:00,290 --> 00:07:04,650
The reason is we have to wrap the target in a torch Variable.
93
00:07:04,660 --> 00:07:09,920
Indeed, we're going to compute some gradients of the target as well, and therefore we need to attach this
94
00:07:09,920 --> 00:07:13,940
target tensor to a gradient inside a torch Variable.
95
00:07:14,000 --> 00:07:22,400
So I'm going to take my Variable class again and I'm going to put everything inside some parentheses,
96
00:07:22,730 --> 00:07:29,130
so that target becomes an object of the Variable class, taking as argument this torch tensor of ones.
97
00:07:29,940 --> 00:07:30,570
right.
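Put together, the target construction just described might look like this (batch size 64 is an assumption carried over from the tutorial):

```python
import torch
from torch.autograd import Variable

# Hypothetical mini-batch of real images already wrapped in a Variable.
input = Variable(torch.randn(64, 3, 64, 64))

# input.size()[0] is the number of images in the mini-batch, so this
# builds one `1` per real image: "this image should be accepted".
target = Variable(torch.ones(input.size()[0]))
```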
98
00:07:30,600 --> 00:07:33,210
And now we have the inputs and the target.
99
00:07:33,260 --> 00:07:34,530
So we know what to do next.
100
00:07:34,580 --> 00:07:36,490
We need to get the outputs.
101
00:07:36,710 --> 00:07:40,310
So let's do this, let's get the output. Well, getting the output is
102
00:07:40,340 --> 00:07:42,610
actually pretty fun and very simple.
103
00:07:42,650 --> 00:07:49,880
We'll first introduce a new variable for the output, output, and we are going to call our neural network
104
00:07:50,410 --> 00:07:57,830
of the discriminator, so netD, and we are going to feed this neural network with, of course,
105
00:07:58,340 --> 00:08:02,480
the input, which is a torch Variable of a mini-batch of real images.
106
00:08:02,480 --> 00:08:07,260
And so inside here I just need to input, well, input.
107
00:08:07,310 --> 00:08:08,280
All right.
108
00:08:08,520 --> 00:08:16,850
So this calls the forward method of the module, which forward-propagates the real input images of the mini-batch inside
109
00:08:16,940 --> 00:08:22,640
the neural network of the discriminator, to get for each of these real input images the prediction of
110
00:08:22,640 --> 00:08:26,170
the discriminator, whether they should be accepted or not.
111
00:08:26,300 --> 00:08:32,330
So I remind you that for each of these real images the output, that is, the prediction, is a number between 0
112
00:08:32,330 --> 00:08:38,360
and 1 and a number close to zero means that the discriminator will reject the image and a number close
113
00:08:38,360 --> 00:08:42,130
to 1 means that the discriminator will accept the image.
114
00:08:42,170 --> 00:08:46,160
So that's the discriminating number and it's a number between 0 and 1.
115
00:08:46,160 --> 00:08:47,200
All right perfect.
116
00:08:47,210 --> 00:08:51,540
And now that we have the target full of ones and the output,
117
00:08:51,590 --> 00:08:53,110
Well guess what we're going to get.
118
00:08:53,270 --> 00:08:55,460
Well of course we're going to get the error.
119
00:08:55,640 --> 00:09:02,570
The first error coming from this first training of the discriminator with the real-image ground truth.
120
00:09:03,050 --> 00:09:07,990
And so this specific first error, we're going to call it errD.
121
00:09:08,180 --> 00:09:13,040
Because we're also going to have an error for the generator, but much later, that is, in the second big
122
00:09:13,040 --> 00:09:14,440
step of the training.
123
00:09:14,440 --> 00:09:15,590
errD.
124
00:09:15,620 --> 00:09:20,020
And since this error corresponds to the real ground truth,
125
00:09:20,210 --> 00:09:24,440
well, I'm going to add here an underscore and real.
126
00:09:24,440 --> 00:09:25,320
All right.
127
00:09:25,400 --> 00:09:28,780
errD_real. And now, to get this error,
128
00:09:28,820 --> 00:09:32,520
well, what should I take? I should take my criterion.
129
00:09:32,630 --> 00:09:36,110
That's the object that will compute the loss error for us.
130
00:09:36,110 --> 00:09:39,820
It will compute the loss between the output and the target.
131
00:09:39,890 --> 00:09:40,940
So that's perfect.
132
00:09:41,000 --> 00:09:44,490
And as you might guess, inside this criterion we need two inputs.
133
00:09:44,570 --> 00:09:48,700
First the output and second the target.
134
00:09:48,830 --> 00:09:49,670
And there we go.
135
00:09:49,700 --> 00:09:56,030
We have our first loss of the discriminator, the one corresponding to the training of the discriminator
136
00:09:56,300 --> 00:10:02,950
with the real images, to train it to understand, to recognize, what's real: real images. Perfect.
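The whole real-image pass can be sketched end to end. The discriminator below is a hypothetical stand-in (a single linear layer plus sigmoid instead of the tutorial's CNN), and the criterion is assumed to be BCELoss, as is usual for this setup:

```python
import torch
import torch.nn as nn
from torch.autograd import Variable

# Hypothetical stand-ins: a tiny discriminator and the BCE criterion
# (the tutorial's netD is a CNN and its criterion is defined earlier).
netD = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 1), nn.Sigmoid())
criterion = nn.BCELoss()

input = Variable(torch.randn(4, 3, 64, 64))  # mini-batch of "real" images
target = Variable(torch.ones(4))             # ground truth: all accepted

# Forward-propagate: one prediction in (0, 1) per image.
output = netD(input).view(-1)

# Loss of the discriminator on the real mini-batch.
errD_real = criterion(output, target)
```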
137
00:10:02,950 --> 00:10:08,790
And now we're ready to move on to the second sub-step of the training of the discriminator.
138
00:10:08,850 --> 00:10:13,890
That is the training with, this time, a fake image generated by the generator.
139
00:10:13,920 --> 00:10:19,590
So this time we're training the discriminator to see and understand what's fake.
140
00:10:19,590 --> 00:10:21,960
That is to recognize fake images.
141
00:10:22,170 --> 00:10:24,680
So we're going to do the same process as we did here.
142
00:10:24,680 --> 00:10:31,020
We're going to get first the input, then the target, then the output, then this will generate a loss, which
143
00:10:31,020 --> 00:10:37,170
we'll call errD_fake, and we'll be done with this second sub-step.
144
00:10:37,260 --> 00:10:44,030
And then finally we'll get the total error as the sum of these two errors, errD_real and errD_fake.
145
00:10:44,250 --> 00:10:48,370
And then we'll do the big back propagation of this total error.
146
00:10:48,480 --> 00:10:51,470
Back inside the neural network of the discriminator.
147
00:10:51,480 --> 00:10:56,940
So let's do this, let's tackle this second training with the ground truth of the fake images, and let's
148
00:10:56,940 --> 00:10:59,670
start right now by getting the inputs.
149
00:10:59,670 --> 00:11:02,370
So it's actually not that direct. According to you,
150
00:11:02,460 --> 00:11:09,070
how are we going to get the input, which this time should be a mini-batch of fake images?
151
00:11:09,300 --> 00:11:15,510
Well if you remember what we did here when defining the architecture of the generator.
152
00:11:15,840 --> 00:11:23,970
Well remember that the first inverted convolution takes as input a random vector of size 100.
153
00:11:24,240 --> 00:11:29,410
And that, I remind you, is because the generator is like an inverted CNN.
154
00:11:29,490 --> 00:11:34,890
And since a CNN takes as input some images and returns a flattened vector of one dimension,
155
00:11:35,010 --> 00:11:40,860
well, this inverted CNN of the generator will do exactly the opposite: it will take as input a vector
156
00:11:41,040 --> 00:11:47,220
of one dimension and will return the images, or rather some fake images. And since we specified here that
157
00:11:47,220 --> 00:11:49,800
the input vector is of size 100.
158
00:11:49,980 --> 00:11:55,450
Well right now we are exactly going to create a vector of size 100.
159
00:11:55,620 --> 00:12:01,950
This will be a random vector and that will represent some noise and then we'll feed the neural network
160
00:12:01,950 --> 00:12:07,680
of the generator with this random vector and it will return some fake images.
161
00:12:07,770 --> 00:12:13,710
Of course, at the beginning it will return some images that look like nothing, but over the epochs we will
162
00:12:13,710 --> 00:12:19,920
update the weights so that the images look like something, that is, look like some real images.
163
00:12:20,160 --> 00:12:22,620
But before doing that, and that's actually what we'll be doing in
164
00:12:22,620 --> 00:12:28,440
the second big step, we need to train the discriminator to recognize what's fake.
165
00:12:28,710 --> 00:12:33,120
So let's do this let's make this random vector of size 100.
166
00:12:33,270 --> 00:12:39,420
And as we just said, we're going to call it noise; it actually represents some noise, being a random input
167
00:12:39,420 --> 00:12:40,050
vector.
168
00:12:40,170 --> 00:12:46,920
So, to create with PyTorch a vector of random values of a specific size, well, it's actually very simple:
169
00:12:46,930 --> 00:12:48,570
We have a function for this.
170
00:12:48,610 --> 00:12:55,050
This function we get from the torch library, of course, and the name of this function is
171
00:12:55,650 --> 00:12:56,220
randn.
172
00:12:56,310 --> 00:13:01,190
And now, inside this randn function, we need to input several arguments.
173
00:13:01,290 --> 00:13:06,830
The first one is going to be the batch size, which, I remind you, is 64.
174
00:13:06,870 --> 00:13:09,600
So that's the first argument we need to input here.
175
00:13:09,720 --> 00:13:15,870
And therefore I'm going to copy this batch size that we defined before.
176
00:13:15,870 --> 00:13:16,610
There we go.
177
00:13:16,620 --> 00:13:19,630
Copy and paste.
178
00:13:19,680 --> 00:13:25,320
So we need to input this first argument of the batch size because we're not only going to create one
179
00:13:25,500 --> 00:13:32,250
random vector of size 100; we're going to create, of course, a mini-batch of random vectors of size 100.
180
00:13:32,520 --> 00:13:36,630
So that then we can get a mini-batch of fake images.
181
00:13:36,840 --> 00:13:43,470
So the first argument corresponds to the size of the batch, and then the second argument will be, well, the number
182
00:13:43,470 --> 00:13:50,190
of elements we want in this vector, and that is 100, because we specified in the architecture of the generator
183
00:13:50,520 --> 00:13:57,840
that the input vector should be of size 100. And then we're going to add two arguments, which are going
184
00:13:57,840 --> 00:14:06,300
to be one and one, and that is just to give these random vectors some fake dimensions that will correspond
185
00:14:06,300 --> 00:14:12,150
to a feature map. That is, in fact, instead of having 100 values in the vector,
186
00:14:12,210 --> 00:14:19,260
it's like we will have 100 feature maps of size one by one, meaning each of the 100 feature maps will
187
00:14:19,260 --> 00:14:21,690
be a matrix of size one by one.
188
00:14:21,840 --> 00:14:22,360
Great.
189
00:14:22,380 --> 00:14:23,960
So now we get our noise.
190
00:14:24,030 --> 00:14:26,240
So, are we ready to move on to the next step?
191
00:14:26,340 --> 00:14:28,510
Well no, no we're not.
192
00:14:28,530 --> 00:14:36,150
Because, always for the same reason, we have to wrap this mini-batch of random vectors inside a Variable.
193
00:14:36,150 --> 00:14:36,960
Why is that?
194
00:14:36,960 --> 00:14:42,750
That's because this noise here is going to be the input of the neural network of the generator, and
195
00:14:42,750 --> 00:14:46,750
neural networks in PyTorch only accept torch Variables.
196
00:14:46,810 --> 00:14:48,090
So let's do that quickly.
197
00:14:48,150 --> 00:14:57,390
Let's get our Variable class and put this torch tensor of random vectors inside the Variable, so that
198
00:14:57,390 --> 00:15:03,090
now noise becomes an object of this Variable class containing this tensor.
199
00:15:03,330 --> 00:15:03,760
Right.
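The noise construction just described, assuming the batch size of 64 and the noise dimension of 100 from the tutorial, is simply:

```python
import torch
from torch.autograd import Variable

batch_size = 64  # as in the tutorial

# One random noise vector per fake image to generate, shaped as
# 100 feature maps of size 1x1, and wrapped in a Variable.
noise = Variable(torch.randn(batch_size, 100, 1, 1))
```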
200
00:15:03,780 --> 00:15:06,840
And now we are allowed to move on to the next step.
201
00:15:06,840 --> 00:15:08,780
So now, according to you, what is the next step?
202
00:15:08,880 --> 00:15:12,190
Well obviously the next step is to get what we want.
203
00:15:12,210 --> 00:15:14,090
That is, this new ground truth
204
00:15:14,120 --> 00:15:19,440
we're looking for, which is the fake images. And the fake images, we can now get them because we have the
205
00:15:19,440 --> 00:15:22,830
right input for the neural network of the generator.
206
00:15:22,830 --> 00:15:25,970
So let's get them; we're going to call them fake.
207
00:15:26,160 --> 00:15:31,690
But keep in mind that this will represent a mini-batch of fake images; fake is the name of the mini-batch,
208
00:15:32,080 --> 00:15:33,760
so fake equals.
209
00:15:33,960 --> 00:15:40,260
Then very simply again we're going to take the neural network of the generator to which we're going
210
00:15:40,260 --> 00:15:40,980
to feed.
211
00:15:41,280 --> 00:15:48,090
well, this noise mini-batch containing the random vectors, and therefore we'll get a mini-batch of the
212
00:15:48,090 --> 00:15:51,380
same size containing some fake images.
213
00:15:51,540 --> 00:15:52,100
Awesome.
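This generator call can be sketched with a hypothetical one-layer stand-in for netG (the tutorial's real generator stacks several transposed convolutions to reach 64x64 images; a single layer is enough to show the shape transformation):

```python
import torch
import torch.nn as nn
from torch.autograd import Variable

# Hypothetical one-layer stand-in for the tutorial's generator netG:
# one transposed convolution mapping each (100, 1, 1) noise vector
# to a small 3-channel image.
netG = nn.Sequential(nn.ConvTranspose2d(100, 3, 4, 1, 0), nn.Tanh())

noise = Variable(torch.randn(64, 100, 1, 1))

# A mini-batch of fake images, one per noise vector.
fake = netG(noise)
```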
214
00:15:52,170 --> 00:15:58,510
So now it is important to understand that this is the new input compared to this one.
215
00:15:58,530 --> 00:16:01,930
This was the input containing the ground truth of the real images.
216
00:16:02,070 --> 00:16:08,250
And this is the new input containing the ground truth of the fake images, and therefore very quickly we'll
217
00:16:08,250 --> 00:16:09,980
get the new output.
218
00:16:10,020 --> 00:16:17,130
That is the output we'll get after feeding our discriminator with this new input of fake images.
219
00:16:17,280 --> 00:16:20,420
But before we get this output we need to get the target.
220
00:16:20,610 --> 00:16:27,120
And now it's crucial to understand that this time the target is going to be a new kind of target.
221
00:16:27,150 --> 00:16:28,690
And so, what is it going to be?
222
00:16:28,890 --> 00:16:34,350
Well this time since we are training the discriminator with some fake images we want to train it to
223
00:16:34,350 --> 00:16:40,560
recognize the fake images we want to train it to recognize what's fake and therefore the target should
224
00:16:40,560 --> 00:16:47,220
be the rejection of the images, and the rejection of the images corresponds to 0. 0 means that the image
225
00:16:47,280 --> 00:16:53,990
is rejected by the discriminator, and therefore the target should be, this time, a tensor full of zeros.
226
00:16:54,180 --> 00:17:00,960
And so, what I'm going to do: I'm going to take this again, because it's the same; we're going to wrap it
227
00:17:00,960 --> 00:17:07,030
in a Variable, but instead of using the ones function to get a tensor full of ones,
228
00:17:07,200 --> 00:17:14,750
well, I'm going to replace it by, guess what, zeros. We have these very practical functions, intuitive to
229
00:17:14,750 --> 00:17:20,790
remember, in PyTorch; this time we have the zeros function that will return a tensor full of zeros,
230
00:17:21,030 --> 00:17:26,240
and the number of zeros will be the size of the batch, input.size() of index zero.
231
00:17:26,520 --> 00:17:27,180
Perfect.
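The fake-image target mirrors the real-image one, only with zeros (the batch size and input shape below are assumptions carried over from before):

```python
import torch
from torch.autograd import Variable

# Hypothetical mini-batch (only its first dimension, the batch size, matters).
input = torch.randn(64, 3, 64, 64)

# One `0` per fake image: "this image should be rejected".
target = Variable(torch.zeros(input.size()[0]))
```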
232
00:17:27,210 --> 00:17:28,830
We have our new target.
233
00:17:28,830 --> 00:17:31,130
And so now let's get the output.
234
00:17:31,170 --> 00:17:36,840
So I'm going to introduce the variable for this output, which I'm going to call output again, and to
235
00:17:36,840 --> 00:17:38,970
get my new output.
236
00:17:39,150 --> 00:17:45,740
Well, I'm going to take the neural network of the discriminator, because we're still training the discriminator,
237
00:17:46,200 --> 00:17:54,180
and I'm going to feed this neural network with, of course, the fake images. And then, for each of the
238
00:17:54,180 --> 00:18:00,750
fake images of the fake mini-batch, well, we'll get a prediction, which is a discriminating number between
239
00:18:00,750 --> 00:18:01,680
0 and 1.
240
00:18:01,830 --> 00:18:07,170
And again if it is close to zero the discriminator will reject the image and if it is close to one it
241
00:18:07,170 --> 00:18:08,820
will accept it.
242
00:18:08,880 --> 00:18:09,950
All right perfect.
243
00:18:09,990 --> 00:18:13,190
But we can do something actually better here.
244
00:18:13,230 --> 00:18:15,210
We can save some memory.
245
00:18:15,210 --> 00:18:22,220
Remember that fake is a torch Variable, because the output of a PyTorch neural network is also a torch
246
00:18:22,240 --> 00:18:27,680
Variable, and therefore it contains not only the tensor of the fake
247
00:18:27,690 --> 00:18:30,600
images but also the gradient.
248
00:18:30,810 --> 00:18:36,330
But actually we're not going to use this gradient when back-propagating the error inside the neural
249
00:18:36,330 --> 00:18:39,060
network or when applying stochastic gradient descent.
250
00:18:39,060 --> 00:18:46,200
So what we can do now is actually detach the gradient of this fake torch Variable; that will save some
251
00:18:46,200 --> 00:18:48,850
memory and that will speed up the computations.
252
00:18:48,870 --> 00:18:53,820
And trust me we want to do this because the training is going to take quite a while.
253
00:18:53,830 --> 00:18:59,780
So we want to save as much memory as possible and get the fastest computations possible.
254
00:18:59,810 --> 00:19:05,600
So we're going to detach the gradient of this torch Variable, and to do this we add a dot here and
255
00:19:05,600 --> 00:19:13,250
then detach and then some parentheses. We absolutely don't care about the gradient of the output with respect
256
00:19:13,250 --> 00:19:14,850
to the weights of the generator.
257
00:19:14,850 --> 00:19:19,060
It will not be part of the considerations in stochastic gradient descent.
258
00:19:19,070 --> 00:19:23,610
All right, so now we have the target and we have the output.
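The detach step can be sketched as follows; the generator output and the discriminator are hypothetical stand-ins, but the key line, calling the discriminator on `fake.detach()`, is exactly the move described above:

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins: a generator output and a tiny discriminator.
fake = torch.randn(8, 3, 64, 64, requires_grad=True)  # pretend netG output
netD = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 1), nn.Sigmoid())

# detach() cuts the computational graph between generator and discriminator,
# so backward passes through `output` will not reach the generator's weights.
output = netD(fake.detach()).view(-1)
```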
259
00:19:23,720 --> 00:19:29,750
So guess what we're ready to have: we are ready to have the new error corresponding to the training of
260
00:19:29,750 --> 00:19:32,120
the discriminator with the fake images.
261
00:19:32,300 --> 00:19:33,920
So let's do this.
262
00:19:33,920 --> 00:19:40,080
And actually it's very simple we just need to copy this line because it's almost going to be the same.
263
00:19:40,090 --> 00:19:44,080
We will just need to change the name of this new error.
264
00:19:44,210 --> 00:19:48,340
So this new error corresponds to the training of the discriminator with the fake images.
265
00:19:48,350 --> 00:19:54,580
So instead of calling it errD_real, we will call it errD_fake.
266
00:19:54,800 --> 00:19:59,840
And there we go, and then that's the same, because we have the same variable names for the output and
267
00:19:59,840 --> 00:20:00,620
the target.
268
00:20:00,830 --> 00:20:01,830
Wonderful.
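The fake-image error, and the total error whose back-propagation is the subject of the next step, can be sketched like this (the prediction values and the errD_real placeholder are made up for illustration):

```python
import torch
import torch.nn as nn

criterion = nn.BCELoss()

# Hypothetical predictions of the discriminator on a fake mini-batch,
# against the all-zeros target ("reject every fake image").
output = torch.sigmoid(torch.randn(8))
target = torch.zeros(8)

errD_fake = criterion(output, target)

# Placeholder standing in for the real-image error computed earlier;
# the total error to back-propagate is the sum of the two.
errD_real = torch.tensor(0.7)
errD = errD_real + errD_fake
```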
269
00:20:01,850 --> 00:20:08,940
And so now congratulations you are done with the two trainings that we had to do with the discriminator.
270
00:20:09,080 --> 00:20:15,230
We trained the discriminator to recognize real images and fake images.
271
00:20:15,230 --> 00:20:21,770
So two sub-steps done, one more to go, and we'll do the last one, back-propagating the total error
272
00:20:21,890 --> 00:20:26,000
back into the neural network of the discriminator, in the next tutorial.
273
00:20:26,000 --> 00:20:27,860
Until then enjoy computer vision.