The Only AI Tools You Need (12-Minute Guide) — English (auto-generated) transcript

I use around 10 AI tools for 90% of my work, and each one excels in one specific area. But figuring out which tool works best for what task usually takes months of trial and error. So I'll share the one thing each tool does better than the alternatives, so you walk away with a clear mental model for when to use what. I've grouped these tools into four categories across a two-part series, because there's just too much to cover in one video. This video covers everyday and specialist AI, while part two covers the remaining two categories. Let's get started.

Kicking things off with everyday AI. These are your general-purpose chatbots: ChatGPT, Gemini, and Claude. And while they seem interchangeable, their quote-unquote moats, the specific things they do best, have actually become quite distinct. Starting with the OG, ChatGPT. While Gemini and Claude are arguably just as capable in raw power, ChatGPT still holds the crown in one area: it's the most obedient model.
In plain English, ChatGPT drops fewer balls when you hand it a complex checklist. Other models might be just as smart, but give them a lengthy set of instructions and they'll sometimes skip a step or decide they know better. If you want proof of this, just ask each model to optimize a rough prompt for itself. ChatGPT will generate a noticeably longer and more detailed prompt because it knows it can handle the complexity. And if you run that optimized ChatGPT prompt through both ChatGPT and Gemini, for example, you'll notice two things. First, ChatGPT thinks longer because it's actually checking every requirement, and it follows each instruction to the letter. Gemini, on the other hand, often takes shortcuts. Pro tip: I share the exact prompt optimizer in the Essential Power Prompts template linked below, but you can test this yourself with something as simple as "Optimize this prompt for ChatGPT [insert model number here]. Here's my rough prompt."
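As a rough sketch of what that self-optimization test looks like in practice — the wrapper wording and the model name below are my own assumptions, not the exact template from the linked download:

```python
# Hypothetical sketch of the "optimize this prompt for yourself" test.
# OPTIMIZER's wording is an assumption; only the overall pattern comes
# from the video: meta-instruction first, rough prompt appended after.

OPTIMIZER = (
    "Optimize this prompt for {model}. "
    "Keep every requirement, and return only the improved prompt.\n\n"
    "Here's my rough prompt:\n{rough}"
)

def build_optimizer_prompt(model: str, rough: str) -> str:
    """Wrap a rough prompt in the meta-prompt you paste into the chatbot."""
    return OPTIMIZER.format(model=model, rough=rough)

print(build_optimizer_prompt(
    "GPT-4o",  # assumed model name; substitute whichever model you use
    "Write a hiring rubric with 12 requirements.",
))
```

Paste the resulting text into each chatbot and compare how much of the original checklist survives in the optimized version.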
Diving into a real-world example, I gave both ChatGPT and Gemini the same complex prompt: a hiring rubric with a dozen requirements. ChatGPT delivered every single one. Gemini's output looked right at first glance, but when I checked it against my original list, it had quietly dropped a few rules. That's the key difference: ChatGPT doesn't decide which instructions matter. It just follows them. Here's a second, simpler example. Sometimes when you explicitly tell Gemini to search the web, it just doesn't, which is wild since Gemini and Google Search are both Google products, right? Whereas with ChatGPT, when you enable web search, it performs the web search every single time. I know this is a small example, but it's downstream from ChatGPT's core superpower: obedience means you can trust the behavior you ask for. So, as a rule of thumb, if a task has a lot of moving parts and getting one wrong breaks the whole thing, start with ChatGPT. Next up, Gemini. Where ChatGPT wins on obedience, Gemini wins on multimodality.
In plain English, Gemini can process a massive amount of mixed media — video, audio, images, and text — natively. Taking a look at this table, we see that only Gemini can handle all four types of media natively. It's able to quote-unquote listen to audio and quote-unquote watch videos, while ChatGPT and Claude use roundabout ways to access that information. What's more, Gemini's massive 1-million-token context window means it can handle large video recordings, hour-long audio recordings, and full slide decks all together, which would literally choke other models. If you watched my latest Gemini video, you'll remember the use case where I screen-recorded a messy walkthrough of myself completing a task, uploaded that video to Gemini, and asked Gemini to turn it into a ready-to-use SOP with perfect formatting, which is an example of Gemini ingesting video and turning it into text. Now, let's take that a step further. Imagine you just finished a weekly meeting.
You have a video recording of the call, a 20-slide deck, and a photo of a messy whiteboard session. You can upload all three and ask Gemini to summarize what was discussed, pull out the key decisions, and draft the follow-up email. Gemini is the only tool that can synthesize all three in one go. All that said, I have to point out that Gemini's raw reasoning capability sometimes feels slightly behind ChatGPT's. But when the task involves video, audio, or massive files, the trade-off is obviously worth it.

Speaking of matching the right tool to the task, today's sponsor, HubSpot, put together a free guide called The AI Productivity Stack that covers 50 tools organized by use case. Here's why I like it. While this video focuses on my personal favorites, your workflow probably needs something different. Maybe you're in marketing and need SEO-specific tools, or you manage a team and want to build automated workflows with reliable AI. This guide breaks down tools across business functions like research, design, and marketing.
And for each tool, it shows you the best use case, key features, pricing, and a step-by-step workflow. What I found most useful is the decision logic at the end of each section. So, for example, the research category tells you exactly when to use Perplexity versus Claude versus Humata based on what you're actually trying to do. It's a great way to quickly understand what each tool does. I'll leave a link to this free guide down below. Thank you, HubSpot, for sponsoring this video.

Rounding out the everyday AI category: Claude. Claude's superpower is producing higher-quality first drafts than the other models. In plain English, that means Claude's first attempt is usually closer to done. This superpower shows up in two areas. First, coding. Here's a fun fact: the latest version of Gemini beat the older version of Claude on every single benchmark score except the coding one, which is crazy. So obviously Anthropic has figured out something about coding the others haven't.
And in practice, developers widely agree that Claude writes functional code on the first try more consistently than the alternatives. Here's a real-world example. I needed to bulk-export conversations from a customer service platform, but their support team said only developers could do it. I described the problem, and Claude not only gave me step-by-step instructions but also wrote a script in Go that worked on the first try. I don't even know what Go is, nor can I write code. Another example: I asked all three models to turn a static image into an interactive chart, and Claude performed the best on the first try. So basically, anything that requires generating working code tends to favor Claude. Pro tip: when it comes to diagrams, you can ask Claude to generate Mermaid code, which you can then paste directly into tools like Excalidraw to get clean visuals in minutes. Area two: polishing copy. Beyond code, Claude produces written drafts that sound human and need fewer revisions.
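To make the Mermaid pro tip above concrete, here's a minimal sketch. The diagram content is illustrative — my own summary of this video's tool-picking logic, not actual Claude output — and the filename is made up:

```python
# What a Claude-generated Mermaid snippet might look like (illustrative
# content, not actual model output). graph TD declares a top-down flowchart;
# -->|label| draws a labeled edge.
mermaid = """\
graph TD
    A[Rough idea] --> B{Which tool?}
    B -->|Complex checklist| C[ChatGPT]
    B -->|Video / audio / huge files| D[Gemini]
    B -->|First draft: code or copy| E[Claude]
"""

# Save it, then paste the contents into Excalidraw (or any Mermaid renderer).
with open("tool_picker.mmd", "w") as f:
    f.write(mermaid)
print(mermaid)
```

Excalidraw's Mermaid import turns text like this into an editable diagram, which is why the tip works: you never have to draw the boxes yourself.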
When you need to tighten an argument or match a specific voice, Claude just gets it. Put simply, it's exceptionally good at style matching. Once you share examples of your existing work, it replicates your tone almost perfectly. When I was in corporate, I'd share previous documents so Claude could replicate that voice across presentations and performance reviews. And now, as a creator, I feed it my existing YouTube scripts to help refine new drafts. At this point, you might be wondering how I use all three everyday AI tools together. In a nutshell, ChatGPT or Gemini usually handles the beginning of my work: ideation, research, drafting the outline of a presentation. Claude then handles the last mile, turning that rough output into something I'm ready to present or publish. Quick note on Grok. A lot of people ask why I don't use it. It's actually very simple. Grok's superpower is its direct access to the Twitter/X firehose, right? So it's the best option for people who need to analyze breaking news events in real time.
I never needed that. And as a rule of thumb, we should never use tools just for the sake of using tools. We should only add them to our toolkit when they solve an actual problem we have. Here's a quick recap of the three models and when to use them. And if you're wondering whether you need all three, the short answer is no. Most people should stick with the paid version of ChatGPT and get really good at it. But if you can afford multiple subscriptions and your workflow can take advantage of their individual superpowers, mix and match as needed. Fun fact: according to this study on OpenRouter data, models from different labs, like ChatGPT and Gemini, expand the pie of AI use cases precisely because they excel at different things.

On to the second category: specialist AI. Before diving in, let's clear up a very common misconception. Tools like Perplexity are not foundational models. Here's a simple visual. OpenAI, a frontier AI lab, develops the GPT family of models.
They also created ChatGPT as the user-friendly app layer. Perplexity is different. It fine-tunes existing foundational models for speed and accuracy, optimized for search. Their own Sonar model, for example, is just a fine-tuned version of Meta's open-weight Llama model. So, on that note, Perplexity's superpower is finding accurate information fast. In plain English, the general-purpose chatbots are built for reasoning: you use them to help you think, brainstorm, or write a draft. Perplexity is built for fetching: you need a specific fact, and you need it now. Starting off with a simple real-life example, I used ChatGPT to plan a trip to Japan with my brother because that is a creative task. It requires weighing trade-offs and building a narrative, and for that kind of task, I'm happy to wait while the model thinks.
But when I need grab-and-go information, like whether a specific restaurant is foreigner-friendly because we don't speak Japanese, I'd want Perplexity to give me accurate and up-to-date information within seconds. Second example, going back to how I use the three everyday AI tools: let's say Gemini or ChatGPT helps me brainstorm and structure my newsletter, and Claude produces the final draft. Perplexity, in this case, is the search scalpel that verifies information, like whether Gemini's context window is 1 million or 2 million tokens. In case you're curious: consumers get 1 million, enterprises get 2 million. Pro tip: you can use Google-style search operators like site:reddit.com to narrow your results to a specific source. I have an entire video on the most useful Google search operators, so I'll link that down below. As a rule of thumb, think of Perplexity as a replacement for Google AI Mode (they're both for fetching information), not as a replacement for general-purpose chatbots.
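The search-operator tip above amounts to composing a query string. A tiny sketch — `scoped_query` and the example topic are hypothetical, but the `site:` operator itself is standard Google syntax that Perplexity also understands:

```python
# Hypothetical helper for composing a source-scoped search query.
# The site: operator restricts results to one domain.

def scoped_query(topic: str, site: str) -> str:
    """Append a site: operator to limit a query to one source."""
    return f"{topic} site:{site}"

q = scoped_query("Gemini context window 1 million or 2 million tokens", "reddit.com")
print(q)
# -> Gemini context window 1 million or 2 million tokens site:reddit.com
```

Paste the resulting string into Perplexity (or Google) as-is; the operator does the narrowing, not the tool's settings.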
Actually, let me know if you want an entire video breaking down the AI search apps — Perplexity, Google Search, Google AI Overviews, Google AI Mode — because they're all made for different things. Rounding out specialist AI: NotebookLM. NotebookLM's superpower is that it only answers from the sources you give it, meaning it won't make things up. Think of it like a walled garden: you upload your sources, and NotebookLM answers questions using only those documents. It can't really hallucinate because it has no outside knowledge to draw from. Going back to the visual about how Perplexity is optimized for search, NotebookLM uses a fine-tuned Google Gemini model that minimizes hallucinations. For instance, when I was at Google, before publishing marketing materials I would upload the final draft alongside the source documents and ask NotebookLM whether the draft made any claims that contradicted the sources, and it would catch tiny discrepancies other AI might have missed. I use a similar workflow today for my videos.
Before I start filming, I upload my script and all my research into NotebookLM and ask it to flag anything not directly supported by the source material. The obvious caveat here is that the output is only as good as the sources we give it. So if the sources are incorrect, NotebookLM is going to be confidently incorrect. As a rule of thumb: if accuracy matters more than creativity and you have source materials to check against, use NotebookLM. There are a few more specialist AI tools I use that didn't make this list because I don't use them every day. To quickly go through them: Gamma for presentations, ElevenLabs for voice cloning, Zapier and n8n for automation, and Excalidraw and Napkin AI for quick visuals. As a reminder, I'll cover the remaining two categories in part two, so keep an eye out for that. See you in the next video. In the meantime, have a great one.
