The CEO of Anthropic says that AI will eliminate half of entry-level jobs and push unemployment to 20%. But the newly released Apple research paints a different picture. "I really worry, particularly at the entry level, that the AI models are, you know, very much at the center of what an entry-level human worker would do." AI is already at the center of what a junior employee could do. And I can agree with this: if we're not concerned about quality, then I can agree that AI can be used for most junior-level positions.
But when I say this, I stand to gain nothing. He, however, is signaling to investors, to the stock market, and to governments that AI will replace workers. There has been an estimated $2 trillion invested into the AI space since OpenAI released ChatGPT in November of 2022. These investors have put so much money into this space that the payoff needs to be the replacement of human workers. It's not enough to just make us more efficient, because, well, we've already become more efficient with AI. That's why we continue to see this hype: if the hype dies down, the AI bubble bursts and so much money is lost. Significantly more money than in the dot-com bubble. The dot-com bubble saw an estimated $400 billion invested over a six-year period. AI infrastructure spending for 2025 alone is an estimated $320 billion. This is a completely different breed of bubble.
If efficiency is not what these investors are after, if it really is about replacement, then these AI CEOs have to come on here and continue hyping up the AI. And I think that's what he's doing here. He's basically signaling to the public and to other companies that his product, as of now, is good enough to replace a junior worker, and therefore you should use his product. He's framing it as if he's warning us, but it's really just promotion for his product, because fear sells. "And you know, these technology changes have happened before, but I think what is striking to me about this AI boom is that it's bigger and it's broader and it's moving faster than anything has before."
Again, I can agree that this is a lot broader. I often see people making the argument that this is similar to the invention of tractors and how that affected farmers, or to when Ford started automating. But AI is affecting various roles in almost all industries. And I recognize that people are losing their jobs to AI right now. Regardless of whether AI can perform at a junior-employee level, these companies have to push the narrative that it can. Other companies are buying into it and laying off their workforce. Take Microsoft as an example. They recently said that 30% of their code is now AI-generated, and they just recently laid off 6,000 employees, most of those employees notably being managers. Microsoft has to lay people off. They have to make these bold claims because they have to sell their own product.
Microsoft is selling Copilot subscriptions to corporations. And if Microsoft itself still has a ton of engineers, then how good is their Copilot? If Microsoft is not able to use its own AI Copilot to scale back the number of engineers it has, then how can they sell this to other enterprise-level companies? Why would those companies want to buy their product?
It's similar to the startup Lovable. They claim to be the last piece of software, and they claim to allow ordinary people to build their own software. They are based in Stockholm, and I am as well, and Stockholm has a small community of engineers. We talk. So I hear about all of the software engineers being hired at Lovable. If your AI platform is that good, why do you still need to hire software engineers?
"You know this better than anybody, or as well as all the names we know: Sam Altman and others working in AI, Elon Musk. Why are you raising the alarm? Because it's not necessarily, I would think, in your best interest, because a lot of the messages we hear from, you know, some AI CEOs is a little bit more calming, like, you know, these agents are going to be great in your life, a fantastic thing." It's interesting that the interviewer thinks he doesn't stand to gain anything from this interview.
The first obvious thing he stands to gain is potential market share. If a person subscribed to ChatGPT watches this and feels that Claude is a better model, or just that this CEO is more trustworthy, then maybe they switch from ChatGPT to Claude. The other thing I've been considering is how this positions him in public opinion. Because, as the interviewer said, Sam Altman and Elon Musk, when they're interviewed, focus mainly on the positives that we will see in society thanks to AI.
But here we have Dario sounding the alarm that there's going to be this hypothetical bloodbath and that governments need to respond. But if we think about it, Elon Musk has alienated himself due to politics. The sentiment online about Sam Altman is that, well, a lot of people say he's soulless, and when I watch him I just don't get a trustworthy feeling from him. And then you have people like Palantir's CEO, who, every time I listen to him, is just going off the rails about something.
I think this positions Dario as a white knight: he went against his own self-interest because no other CEO had the guts to do it, and for that reason we should be thankful, we should trust him. It feels like a bit of a savior complex with ulterior motives. I think he's examined what the other AI CEOs are doing, and he's choosing to do the complete opposite in a way that's still strategic for him and his company, but that positions him as an authority figure and someone the public can trust.
"Yeah, I mean, you know, I think the reason I'm raising the alarm is that I think others haven't as much, and, you know, I think someone needs to say it." You know, I am very skeptical of this entire interview, because the timing is all a bit suspicious. Anthropic announced Claude Opus 4 and Claude Sonnet 4 pretty much right before this interview.
We also recently saw the CEO of Klarna announce that they would no longer be AI-first and would be hiring humans again, because the AI approach led to lower quality. An IBM survey reveals this is a common occurrence for AI use in business, where just one in four projects delivers the return it promised and even fewer are scaled up.
We also saw the recent scandal with Builder.ai. If you're not familiar, it was a London-based startup that claimed to essentially build software using AI. Their AI, Natasha, would ask you questions and present you with a full-fledged software application. However, their AI turned out to actually be engineers in India. Apparently, the startup was employing more than 700 software engineers in India, which is honestly quite impressive. Those software engineers must have been using AI, because I just don't know how they were able to have that quick of a turnaround time without using some type of AI. So, honestly, I'm impressed.
Builder.ai raised over $450 million and received investment from Microsoft. They recently announced that they were filing for bankruptcy and closing down, citing historic challenges and past decisions that placed significant strain on its financial position.
Unfortunately for these AI CEOs, what we also saw recently was Apple releasing its research paper on large reasoning models. I will have the paper linked below. It is a long research paper, 30 pages, but I highly recommend you read it because it's very interesting. The title of the paper is "The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity."
I won't go through everything in the research paper, because, again, it is very long. But some key points for me are that when the AI is given a clear method to solve the problem, it doesn't use it. The AI can't follow the instructions properly to solve the problem even when it is given the solution algorithm. It's like giving someone a recipe and having them do the steps in a different order without even checking whether they're doing it properly.
Another surprising thing was that when the AI is presented with a more complex problem, it actually puts in less effort rather than more. To quote the paper: "Accuracy progressively declines as problem complexity increases until reaching complete collapse. We observe that reasoning models initially increase their thinking tokens proportionally with problem complexity. However, upon approaching a critical threshold, which closely corresponds to their accuracy collapse point, models counterintuitively begin to reduce their reasoning effort despite increasing problem difficulty."
I think this research paper is quite interesting because it sheds light on where we actually are with large language models being able to reason. If we only listened to these AI CEOs, we would think these models are doing a lot of thinking, a lot of reasoning. But this research paper shows that these models actually reason less the more complex a problem becomes. So between Claude 4 being released, Klarna hiring humans again, Builder.ai's AI turning out to be engineers in India, and Apple's damning research, I can only conclude that what the CEO is saying is all for free publicity.
"That's right. I'm aware of my position, that I'm building this technology while also expressing concerns about it. You know, the reason I'm doing both of those things is, one, I think the benefits are massive. And the second thing I would say is, look, there are, as you mentioned, six or seven companies in the US building this technology, right? If we stopped doing it tomorrow, the rest would continue." If I stopped doing it, well, then other people would still do it. This is like when investigative journalists interview dealers and ask them, "Don't you feel bad about what you're doing?" and the dealers are always like, "Yes, but if I didn't deal, someone else would."
If Dario truly believes that what he's building will be a bloodbath, as he says, then where are his morals? "If all of us somehow stopped doing it tomorrow, then China would just beat us. And I don't think China winning in this technology, you know, I don't think that helps anyone or makes the situation any better."
Wasn't that the same argument that the researchers who built the hydrogen bomb used? Like, if they stopped building, then Russia would win. "Everyone I've talked to has said, 'This technological change looks different. It looks faster. It looks harder to adapt to. It's broader.' The pace of progress keeps catching people off guard."
If we think about it in terms of what these AI CEOs have been hyping and trying to sell everyone on, then I would make the argument that it's underwhelming, that it's underperforming. It's not living up to the hype that they keep promising. And I would say that's exactly why we're now seeing this fear-mongering. It's like the next evolution of hype.
"What are practical steps people should take to be prepared? I mean ordinary citizens, me. What do you advise for ordinary citizens?" "I think it's very important, you know, to learn to use AI, to learn to understand where the technology is going. If you're not blindsided, you have a much better chance of adapting. There's some better world, you know, at least in the short term, at least for now. We should take it bit by bit: everyone learns to use AI better, and, you know, that speeds up the adaptation."
He's basically coming on here fear-mongering, offering no real solutions, painting a very bleak picture, and leaving people with this sense of AI doom. That's part of the reason why I wanted to make this video: when I read the comments, people are really upset by that interview and they don't know what to do. So I wanted to come on here and offer a different perspective, to be skeptical of what he is saying and of how he benefits from what he's saying.
So, when asked for a solution, he essentially says people should learn more about AI and become more familiar with the tools. Do you happen to have an AI tool that I can learn about? Do you happen to have an AI tool that I can use, that I can buy a subscription to? He's literally pitching people the solution to the problem that he's creating.
It's interesting that he says, "If you're not blindsided, you have a much better chance of adapting. There's some better world, at least in the short term, at least for now." He's really pushing this apocalyptic bloodbath, as he put it, this narrative of fear-mongering and scaring people, and the only solution he is offering is for people to use AI tools that he just so happens to sell.
And maybe he's right. Maybe I and many other engineers are wrong. Maybe artificial general intelligence is on the horizon. But as of right now, I don't see it. And again, I recognize people are being laid off. But my argument to that is that these companies have to lay them off. They have to buy into the hype because so much money has been put into this space. It's almost becoming like the banking industry, where it's too big to fail.
349
00:12:43,040 --> 00:12:44,800
puts him in a position of power, a
350
00:12:44,800 --> 00:12:46,639
position of authority, a trustworthy
351
00:12:46,639 --> 00:12:48,720
figure in this space that people can
352
00:12:48,720 --> 00:12:51,600
lean on and trust his guidance on. So
353
00:12:51,600 --> 00:12:53,920
again, this all just benefits him. It's
354
00:12:53,920 --> 00:12:56,720
just a one massive sales pitch that just
355
00:12:56,720 --> 00:12:59,360
so happens to be pitching fear rather
356
00:12:59,360 --> 00:13:02,079
than positivity. as the other AI CEOs
357
00:13:02,079 --> 00:13:04,000
have been doing. Before you buy into the
358
00:13:04,000 --> 00:13:06,720
AI hype or the AI doom, question the
359
00:13:06,720 --> 00:13:08,959
source, question the motive. Thanks for
360
00:13:08,959 --> 00:13:10,880
spending part of your day with me. I
361
00:13:10,880 --> 00:13:12,880
will see you in the next one. Have a
362
00:13:12,880 --> 00:13:15,920
great day.26279