English subtitles for 001 Accuracy Measurement using Mean Average Precision

In the previous video, we did the training and tried to detect face masks using the trained weights. In this video, we will measure the accuracy of the trained weights using mean average precision. Mean average precision, or mAP, is a metric for evaluating an object detection model. But before calculating mAP, I will explain some of the arguments that can be used.

First there is the weights argument. This is the YOLOv7 weights file that will be used to calculate the mAP value. This is an example of its application.

Next is the batch size argument. This argument is the number of images processed at one time. This is an example of its application.

Next is the device argument. This argument is used to select which GPU to use by writing down its index. The default value of this argument is zero, which means it selects the first available GPU with CUDA support on the computer. If using the CPU, replace zero with cpu in this argument.

The data argument comes next. This argument is a data file that contains the number of classes, the dataset paths, and the class names. This is an example of its application.

Next is the img size argument. This argument is the size of the image to be processed. The following is an example of its application.

The conf argument comes next. This is the object confidence threshold. The following is an example of its application.

Next is the IoU argument. This is the IoU threshold. IoU is the ratio of the overlapping area between the predicted bounding box and the ground-truth bounding box to the area of their union. A detection result is considered correct if its IoU value is greater than or equal to the threshold. This is an example of its application.

Next is the name argument. This argument is the name of the folder that stores the mAP calculation results. This is an example of its application.

Next is the task argument. This argument specifies whether the calculation should be run on the train, validation, or test dataset. The following is an example of its application.

In this video, we will calculate mAP on the validation and test sets of the face mask dataset.
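To make the examples above concrete, here is a sketch of what the data file and a full test.py call might look like. The file names, run folder names, and class names below are illustrative assumptions for this face mask project, not values confirmed by the video, and 0.001 is the script's usual default confidence threshold.

A minimal data file (for example data/facemask.yaml) only needs the dataset paths, the number of classes, and the class names:

    train: ./dataset/train                    # folder with the training images
    val: ./dataset/val                        # folder with the validation images
    test: ./dataset/test                      # folder with the test images
    nc: 3                                     # number of classes
    names: ['mask', 'no mask', 'bad mask']    # class names (assumed)

Combining all of the arguments discussed above, an evaluation run on the validation split could then be launched with:

    python test.py --weights runs/train/yolov7-facemask/weights/best.pt --batch-size 2 --device 0 --data data/facemask.yaml --img-size 640 --conf-thres 0.001 --iou-thres 0.5 --name yolov7-facemask-val --task val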
The validation dataset is located in the folder listed below, and the test dataset is in the folder below. These are the images in the validation dataset; there are 80 images. These are the images in the test dataset; there are 81 images.

The first step to calculate mAP is to launch the Anaconda Prompt. Press the Windows button, then type Anaconda and click the Anaconda Prompt. Then activate the YOLOv7 GPU environment created earlier, using the activate command followed by the environment name, and press enter. Then navigate to the yolov7 folder.

First, we will calculate mAP on the validation dataset. Use the command python test.py --weights, and for the weights we will use the trained weights, in this example runs/train/yolov7-facemask/weights/best.pt. In batch size, we write 2. In device, we write 0. In data, write down the data file that was previously created in the training section. We use 640 pixels for the image size. In conf, we use 0.001, which is the default value for measuring accuracy. In IoU, we use 0.5. We write yolov7-facemask-val in the name argument. In the last argument, task, write val, because we will calculate mAP on the validation dataset. Press enter and wait until the calculation is finished.

Here are the results. The calculation displays precision, recall, and mAP. mAP@0.5 indicates that the mAP calculation employs an IoU threshold of 0.5: a detection result is considered correct if it has an IoU value of at least 0.5. In this example, the mAP value over all classes is 0.767. There are also mAP values for each class, which can be used to determine whether the training results are good for every class. In this example, the training results are good for the mask and no mask classes, but not good for the bad mask class.

Next, we will calculate mAP on the test dataset. Use the command python test.py --weights, and again we will use the trained weights. In batch size, we write 2. In device, we write 0. In data, write down the data file that was previously created in the training section. We use 640 pixels for the image size. In conf, we use 0.001. In IoU, we use 0.5. We write yolov7-facemask-test in the name argument. In the task argument, write test, because we will calculate mAP on the test dataset.
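Assembled on a single line, the test-set command dictated above differs from the validation command only in the name and task values (same assumed paths and file names as before):

    python test.py --weights runs/train/yolov7-facemask/weights/best.pt --batch-size 2 --device 0 --data data/facemask.yaml --img-size 640 --conf-thres 0.001 --iou-thres 0.5 --name yolov7-facemask-test --task test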
Press enter and wait until the mAP calculation is finished. The following is the result of the mAP calculation on the test dataset.

That is all of the explanation for measuring accuracy using mean average precision. Thank you, and see you then.
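As a closing note on the IoU criterion that mAP@0.5 relies on, the sketch below shows how the overlap ratio between a predicted box and a ground-truth box is computed and compared against the 0.5 threshold. The box coordinates are made up for illustration and are not taken from the video.

    def iou(box_a, box_b):
        # Boxes are (x1, y1, x2, y2) in pixels.
        ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
        ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)   # overlapping area
        area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
        area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
        union = area_a + area_b - inter                 # combined area
        return inter / union if union > 0 else 0.0

    # Made-up example: a predicted face mask box vs. its ground-truth box.
    pred = (48, 60, 148, 160)
    truth = (50, 50, 150, 150)
    print(round(iou(pred, truth), 3))   # 0.789
    print(iou(pred, truth) >= 0.5)      # True: counted as correct at an IoU threshold of 0.5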
