062 The Neuron (English transcript)

Hello, and welcome back to the course on deep learning. Today we're talking about the neuron, which is the basic building block of artificial neural networks. So let's get started.

Previously we saw an image which looked like this. These are actual, real-life neurons that have been smeared onto a glass, colored a little bit, and observed through a microscope. This is what they look like: as you can see, quite an interesting structure, with a body and then lots of different tails, kinds of branches, coming out of it. This is very interesting, but the question is: how can we recreate that in a machine? And we really do need to recreate it in a machine, since the whole purpose of deep learning is to mimic how the human brain works, in the hope that by doing so we'll create something amazing: an amazing infrastructure for machines to be able to learn. And why do we hope for that? Well, because the human brain just happens to be one of the most powerful learning mechanisms on the planet, and we hope that if we recreate it, we'll have something just as awesome.

So our challenge right now, our very first step in creating artificial neural networks, is to recreate a neuron. How do we do it? Well, first of all, let's have a closer look at what a neuron actually is. This image was first created by the Spanish neuroscientist Santiago Ramón y Cajal in 1899. What he did was dye neurons in actual brain tissue, look at them under a microscope, and draw what he saw. And this is what he saw: two large neurons at the top, each with all these branches coming off their upper parts, and each with a thread, a very long one, coming out towards the bottom. So that's what he saw. Now technology has advanced quite a lot, we've seen neurons much closer and in more detail, and we can draw what one looks like diagrammatically. So let's have a look at that. Here's a neuron.
This is what it looks like, very similar to what Santiago Ramón y Cajal drew over here. What we can see is that it has a body, the main part of the neuron; some branches at the top, which are called dendrites; and an axon, which is that long tail of the neuron. So what are the dendrites for, and what is the axon for? The key point to understand here is that neurons by themselves are pretty much useless. It's like an ant: an ant on its own can't do much. Maybe five ants together can pick something up, but they still can't build an anthill or establish a colony that works together as one huge organism. But when you have lots and lots of ants, millions of them, they can build a whole colony, they can build an anthill. Same thing with neurons: by itself a neuron is not that strong, but when you have lots of neurons together, they work together to do magic.

And how do they work together? That's the question, and that's what the dendrites and axons are for. The dendrites are kind of like the receivers of the signal for the neuron, and the axon is the transmitter of the signal for the neuron. Here's an image of how it all works conceptually. At the top you've got a neuron, and you can see that its dendrites are connected to the axons of other neurons that are further away, above it. The signal from this neuron then travels down its own axon and passes on to the dendrites of the next neuron, and that's how they're connected. In that small image over there you can also see that the axon doesn't actually touch the dendrite. Quite a few machine learning scientists are adamant about this fact: it has been shown that there is no physical connection there. But what we're interested in is the connection between them, the whole concept of the signal being passed. That's called the synapse; you can see it over there in that little image, where the curly bracket marks the synapse. And that's the term we're going to use.
So instead of calling the lines, the connectors between our artificial neurons, axons or dendrites (because then the question arises: whose connection is it, this neuron's or that neuron's?), we're going to call them synapses. That basically settles the question: the synapse is simply where the signal is passed, it doesn't matter which neuron it belongs to, and it's just a representation of the signal passing. We'll see that just now.

So basically, that's how a neuron works. Now let's move on to how we're going to represent, and create, neurons in machines. We're moving away from neuroscience now and into technology. Here we go: here's our neuron, also sometimes called a node. The neuron gets some input signals and has an output signal (dendrites and axons, remember, but again, we're going to call the connections synapses). These input signals can come from other neurons as well. In this specific case you can see that this green neuron is getting signals from yellow neurons, and in this course we're going to try to stick to a certain color-coding regime, where yellow means the input layer: all of the neurons on the outer layer, at the very front, where the signals come in. And by "signal", well, it might be a bit of overkill to call these signals; they're just basically input values. You know how even in a simple linear regression you have input values and then a predicted value? Same thing here. You have the input values, and there they are, the yellow ones; and on the right, as you'll see just now, there will be a red one, the output value.

The other thing I wanted to point out here is that in this specific example we're looking at a neuron that's getting its signals from input-layer neurons. Those are also neurons, but they're input-layer neurons.
Sometimes you'll have neurons that get their signals from other hidden-layer neurons, from other green neurons, and the concept is exactly the same; in this case, for simplicity's sake, we're portraying the example in terms of the input layer. The way to think about the input layer, in the analogy of the human brain, is as your senses: whatever you can see, hear, feel, touch or smell. And of course there's a lot you can see, there's a lot of information coming in, but that's what your brain is limited to. It's a mind-blowing concept to think about: your brain pretty much lives in a box made of bone, locked away in the dark, and the only thing it's getting is electrical impulses coming from the organs you have, your ears, your nose, your eyes, your sense of touch, your sense of taste. So it's just getting signals; it lives in this dark box and it makes sense of the world through your senses. It's phenomenal.

So you have these inputs coming in. In terms of the human brain, those are your five senses; in terms of machine learning or deep learning, those are your input values, your independent variables, and we'll get to that in a second. So: the input values, their signal is passed through synapses to the neuron, and then the neuron has an output value that it passes further down the chain. In this specific case, in terms of color coding again, yellow means the input layer. We're simplifying everything here: we'll have just the input layer, then one hidden layer (the green one), and then the output layer right away, just so that we can get used to these colors for now.

So there we go, that's the basic structure. Now let's look in a bit more detail at the different elements we have. We've got the input layer, and what do we have here? Well, we have these inputs, which are in fact independent variables: independent variable 1, independent variable 2, and so on. The important thing to remember here is that these independent variables are all for one single observation.
Think of it as just one row in your database: one observation. You take all of the independent variables; maybe it's the person's age, the amount of money in their bank account, whether they drive or walk to work, what method of transportation they use. Those are all descriptors of one specific person, someone you're either training your model on or performing a prediction on.

The other thing you need to know about these variables is that you need to scale them. You can standardize them, which means making sure they have a mean of zero and a variance of one. Or sometimes (and Hadelin will point out these cases in more detail in the practical tutorials) you might not want to standardize them but normalize them instead, meaning that instead of making the mean zero and the variance one, you subtract the minimum value and divide by the range of your values (the maximum minus the minimum), and therefore you get values between 0 and 1. Depending on the scenario you might want to do one or the other, but basically you want all of these variables to be quite similar, in about the same range of values. And why is that? Well, all of these values are going to go into a neural network where, as we'll see just now, they'll be multiplied by weights and added up, and it's simply going to be easier for the neural network to process them if they're all in about the same range; in fact, that's what it needs to work properly.
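To make that concrete, here's a minimal sketch in Python of both options; the feature matrix and its values are made up purely for illustration.

```python
import numpy as np

# Hypothetical data: each row is one observation, each column one
# independent variable (say, age and account balance).
X = np.array([[40.0, 60000.0],
              [25.0, 12000.0],
              [58.0, 95000.0]])

# Standardization: each column ends up with mean 0 and variance 1.
X_std = (X - X.mean(axis=0)) / X.std(axis=0)

# Normalization (min-max scaling): each column ends up between 0 and 1.
X_norm = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

print(X_std)
print(X_norm)
```

Either way, both columns end up on a comparable scale before they're fed into the network.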
If you want to read more about standardization, normalization and other things you can do with your input variables, a good additional reading is the paper "Efficient BackProp" by Yann LeCun (1998); the link is over there. We're actually going to talk more about this phenomenal person in the space of deep learning in the part of the course on convolutional neural networks, and you'll see that this is definitely a person who knows what he's talking about; he's a close friend of Geoffrey Hinton, whom we've already met. In this paper you'll learn more about standardization and normalization, but you can also pick up lots of other tips and tricks, and it'll be a good source of additional reading as you go through this course. So check it out if you're interested.

There we go; that's what we do with the input variables. And over here we've got the output value. What can our output value be? Well, we've got a couple of options. It can be continuous (for instance, a price); it can be binary (for instance, a person will exit or will stay); or it can be a categorical variable. With a categorical variable, the important thing to remember is that your output won't be just one value: it'll be several output values, because they will be dummy variables representing your categories. That's just how it works, and it's important to remember, because that's how you'll get your categories out of the artificial neural network; there's a small sketch of this dummy-variable idea just below.

But let's go back to the simple case of one output value, and let me reiterate a point I've already made: on the left you've got a single observation, one row from your dataset, and on the right you have a single prediction, and it's for that same observation. So it's important to remember that whatever inputs you're putting in are for one row, and the output you get is for that same exact row; and if you're training your neural network, you're putting the inputs in for that one row and the output in for that one row too. If you want to simplify away the complexity, think of it as something like a simple regression, or a multivariate linear regression: you put in your values and you get an output. There's no question about it when we're talking about regression, because we're so used to it. Same thing here; it's nothing too complex. We're just putting in values and getting an output.
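Here's that sketch: a minimal illustration of dummy variables (one-hot encoding) for a categorical output. The category labels are made up for the example.

```python
import numpy as np

# Hypothetical categories for a categorical output variable.
categories = ["stays", "exits", "upgrades"]

def to_dummies(label):
    """Encode a category label as one dummy variable per category."""
    vec = np.zeros(len(categories))
    vec[categories.index(label)] = 1.0
    return vec

print(to_dummies("exits"))  # [0. 1. 0.]: three output values, one per category
```

So a network predicting one of three categories has three output values, not one.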
Just remember that every time, it's one row you're dealing with, so you don't get confused and start thinking you're putting different rows into your artificial neural network. These are all values from that one row: different characteristics, or attributes, relating to that one observation, every single time.

OK, the next thing we want to talk about here is the synapses. Here we've got the synapses, and they all get assigned weights. We're going to talk more about weights further down, but in short, weights are crucial to how artificial neural networks function, because weights are how neural networks learn. By adjusting the weights, the neural network decides in every single case which signal is important and which is not important to a certain neuron: which signal gets passed along, which doesn't, and with what strength, to what extent, signals get passed along. So weights are crucial; they are the things that get adjusted through the process of learning. When you're training an artificial neural network, you're basically adjusting all of the weights in all of the synapses across the whole network, and that's where gradient descent and backpropagation come into play; those are concepts we'll also discuss.

So those are the weights; that's all you need to know for now. And then we've got the neuron itself. Signals go into the neuron, and what happens inside? This is the interesting part; we're talking about the neuron today, so what happens inside it? A few things. The first step is that all of the values the neuron receives get added up: it takes the weighted sum of all of the input values. Very simple, very straightforward: multiply each input by its weight and add them all up. Then it applies an activation function. We'll talk more about activation functions further down, but it's basically a function that is assigned to this neuron, or to this whole layer, and it is applied to the weighted sum. From that, the neuron understands whether it needs to pass on a signal, and what that signal is: the function applied to the weighted sum. The sketch below shows those steps in code.
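A minimal sketch of one neuron's computation, assuming a sigmoid activation purely for illustration (the lecture hasn't picked a specific activation function yet, and the inputs and weights below are made up):

```python
import numpy as np

def sigmoid(z):
    # An assumed activation function; the next tutorial covers the options.
    return 1.0 / (1.0 + np.exp(-z))

def neuron(x, w, activation):
    """One artificial neuron: weighted sum of inputs, then activation."""
    z = np.dot(w, x)       # step 1: weighted sum, the sum of w_i * x_i
    return activation(z)   # step 2: apply the activation function to the sum

x = np.array([0.5, 0.1, 0.9])    # input values: one (scaled) observation
w = np.array([0.8, -0.4, 0.3])   # synapse weights, learned during training

y = neuron(x, w, sigmoid)        # step 3: this value is passed down the line
print(y)
```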
Basically, depending on the function, the neuron will either pass the signal on or it won't, and that's exactly what happens in step three: the neuron passes the signal on to the next neuron down the line. That's what we'll talk about in the next tutorial, because it's quite an important topic: we want to delve deeper into the activation function. But hopefully for now everything is pretty clear. You've got the input values, you've got the weights, you've got the synapses; you know what happens in the neuron (a weighted sum with an activation function applied to it); and then the result is passed down the line. That is simply repeated throughout the whole neural network, on and on, thousands or hundreds of thousands of times, depending on how many neurons and how many synapses your neural network has. So there we go. I hope you enjoyed today's tutorial, and I look forward to seeing you in the next one. Until then, enjoy deep learning.
