1
00:00:01,735 --> 00:00:03,268
BEN: It's the DNA of the next tech revolution.
2
00:00:03,304 --> 00:00:06,872
This is such a huge data set
that there's no way that a human
3
00:00:06,907 --> 00:00:09,374
or even a team of humans
can look at all of it.
4
00:00:09,410 --> 00:00:12,144
The race for artificial intelligence is on.
5
00:00:12,179 --> 00:00:14,813
In quantum, you have this
thing called a qubit,
6
00:00:14,848 --> 00:00:16,682
which can be zero and
one at the same time,
7
00:00:16,717 --> 00:00:18,317
and that's where the
power comes from.
8
00:00:18,352 --> 00:00:20,986
AI might make our lives better...
9
00:00:21,021 --> 00:00:23,288
The life expectancy
for human civilization
10
00:00:23,324 --> 00:00:25,857
might then easily measure
in billions of years.
11
00:00:25,893 --> 00:00:27,893
...but could it destroy the human race?
12
00:00:27,928 --> 00:00:30,229
You ought to be really
concerned about the strong AI
13
00:00:30,264 --> 00:00:31,530
that has guns on it.
14
00:00:31,565 --> 00:00:41,573
♪
15
00:00:51,018 --> 00:00:53,151
As the host of Cyberwar, I've travelled the world to talk to
16
00:00:53,187 --> 00:00:56,088
brilliant hackers, scientists and programmers,
17
00:00:56,123 --> 00:00:58,257
and many of them have told me they're thinking about
18
00:00:58,292 --> 00:00:59,891
artificial intelligence.
19
00:01:01,228 --> 00:01:03,562
Today we're in the middle of an AI boom.
20
00:01:03,597 --> 00:01:05,330
Huge advancements in artificial intelligence
21
00:01:05,366 --> 00:01:08,867
have made things like Siri and self-driving cars possible.
22
00:01:08,902 --> 00:01:11,937
AI is in video games, security surveillance systems,
23
00:01:11,972 --> 00:01:15,107
smart home devices, and advanced weapons systems.
24
00:01:17,077 --> 00:01:19,611
But as researchers race to make the next breakthrough,
25
00:01:19,647 --> 00:01:22,014
a lot are warning that we haven't actually
26
00:01:22,049 --> 00:01:23,949
thought this through.
27
00:01:23,984 --> 00:01:26,852
And it's not necessarily the Terminator scenario
28
00:01:26,887 --> 00:01:29,621
they're afraid of, where an AI system becomes self-aware
29
00:01:29,657 --> 00:01:31,290
and decides to kill us all.
30
00:01:33,294 --> 00:01:35,694
The real threat could lie in the unintended consequences
31
00:01:35,729 --> 00:01:37,763
no one sees coming.
32
00:01:37,798 --> 00:01:40,866
I'm at Stanford University, where researchers are designing
33
00:01:40,901 --> 00:01:44,436
AI that comes in a very non-threatening package.
34
00:01:44,471 --> 00:01:48,106
Meet Jackrabbot, a cute little machine that's built to navigate
35
00:01:48,142 --> 00:01:50,776
and move through crowds of human buddies.
36
00:01:50,811 --> 00:01:52,744
This is a robot that's programmed to learn on its own,
37
00:01:52,780 --> 00:01:54,379
through example.
38
00:01:56,183 --> 00:01:58,250
This technique, called "deep learning",
39
00:01:58,285 --> 00:02:00,952
has been fuelling a lot of recent AI breakthroughs.
40
00:02:02,189 --> 00:02:05,123
Alexandre Alahi and Alexandre Robicquet are part of a team
41
00:02:05,159 --> 00:02:08,260
that built Jackrabbot, and they say their little creation
42
00:02:08,295 --> 00:02:10,595
is more advanced than a self-driving car.
43
00:02:12,366 --> 00:02:14,499
So the robot enters the scene,
analyzes it, and then
44
00:02:14,535 --> 00:02:17,336
navigates through it according
to all the data that we gather.
45
00:02:17,371 --> 00:02:20,505
So it's just watching people,
and looking at people,
46
00:02:20,541 --> 00:02:23,108
and what they're doing,
and judging, you know,
47
00:02:23,143 --> 00:02:25,077
the space and the time?
48
00:02:25,112 --> 00:02:27,145
Yeah, and also trying to
understand how they interact
49
00:02:27,181 --> 00:02:28,980
between each other.
50
00:02:29,016 --> 00:02:31,683
The main aspect would be
to understand well, how...
51
00:02:31,719 --> 00:02:33,752
what is a safe distance
that I keep with someone else,
52
00:02:33,787 --> 00:02:36,254
or how do I accelerate or
decelerate when I get close
53
00:02:36,290 --> 00:02:38,623
to someone and
react accordingly?
54
00:02:38,659 --> 00:02:41,493
So how do you
gather that information?
55
00:02:41,528 --> 00:02:43,528
You just like...
this just looks like
56
00:02:43,564 --> 00:02:46,398
some sort of
surveillance footage.
57
00:02:46,433 --> 00:02:51,937
So last summer, our team spent
like 2 months gathering daily
58
00:02:51,972 --> 00:02:56,641
at rush hour, top view data
from the Stanford crowd,
59
00:02:56,677 --> 00:02:59,211
so we can actually
understand how they avoid,
60
00:02:59,246 --> 00:03:02,414
how they behave and how
they navigate in such a crowd.
61
00:03:02,449 --> 00:03:04,349
Let's go, robot.
62
00:03:09,022 --> 00:03:10,389
Free!
63
00:03:11,859 --> 00:03:13,692
So wait, he's being
controlled by you?
64
00:03:13,727 --> 00:03:15,227
For right now, yeah.
65
00:03:15,262 --> 00:03:17,295
So does he have a
fully autonomous mode?
66
00:03:17,331 --> 00:03:18,530
Yeah, he could.
67
00:03:18,565 --> 00:03:20,198
And what does he do?
He just follows?
68
00:03:20,234 --> 00:03:22,501
No, we actually can give him--
like if you map the place
69
00:03:22,536 --> 00:03:24,703
and the area, he can go
from point A to point B,
70
00:03:24,738 --> 00:03:27,072
but we're just having a simple
collision avoidance right now.
71
00:03:27,107 --> 00:03:29,508
But like you train him to avoid
every single obstacle
72
00:03:29,543 --> 00:03:31,710
and go where you want him to go.
73
00:03:31,745 --> 00:03:32,911
Can we drive him
around a little bit?
74
00:03:32,946 --> 00:03:34,513
Yeah, absolutely.
75
00:03:34,548 --> 00:03:37,516
And those-- that spinning thing,
that's just like a 3D sensor?
76
00:03:37,551 --> 00:03:39,351
Right, right.
77
00:03:39,386 --> 00:03:42,354
So we're capturing
depth data, visual data,
78
00:03:42,389 --> 00:03:46,491
and we're combining all
these sensors to detect humans,
79
00:03:46,527 --> 00:03:49,728
understand its surroundings,
localize itself,
80
00:03:49,763 --> 00:03:52,030
and predict also
where people will go.
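A minimal sketch of the prediction problem the team is describing: given a pedestrian's recent positions, estimate the next one. Jackrabbot's real model is a deep network learned from the crowd data; this constant-velocity baseline only illustrates the shape of the task, and every name in it is hypothetical.

```python
# Toy pedestrian-trajectory prediction: a constant-velocity baseline.
# The real system learns these dynamics from data; this only shows
# the shape of the problem (past positions in, next position out).

def predict_next_position(track, steps_ahead=1):
    """track: list of (x, y) positions sampled at a fixed rate, oldest first."""
    (x0, y0), (x1, y1) = track[-2], track[-1]
    vx, vy = x1 - x0, y1 - y0              # velocity from the last two samples
    return (x1 + vx * steps_ahead, y1 + vy * steps_ahead)

# A pedestrian crossing the quad at a steady diagonal:
observed = [(0.0, 0.0), (0.5, 0.4), (1.0, 0.8), (1.5, 1.2)]
print(predict_next_position(observed))     # -> (2.0, 1.6)
```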
81
00:03:53,467 --> 00:03:54,900
Do people freak out
when they see it a lot?
82
00:03:54,935 --> 00:03:55,901
No, they like it a lot.
83
00:03:55,936 --> 00:03:57,736
They come hug it,
they talk to it.
84
00:03:57,771 --> 00:03:59,738
He doesn't... He
cannot talk yet.
85
00:03:59,773 --> 00:04:01,873
Is that what you hope the
rise of the machines is?
86
00:04:01,909 --> 00:04:03,542
It's a friendly
rise of the machines?
87
00:04:03,577 --> 00:04:04,676
For that one, yes.
88
00:04:04,711 --> 00:04:05,877
- Yeah.
- Absolutely.
89
00:04:07,548 --> 00:04:09,915
Jackrabbot's creators hope that as AI advances,
90
00:04:09,950 --> 00:04:12,250
machines like this will be built to carry luggage
91
00:04:12,286 --> 00:04:14,252
through airports, or help the blind navigate
92
00:04:14,288 --> 00:04:16,421
through pedestrian traffic.
93
00:04:16,457 --> 00:04:18,557
The Stanford team is working to get AI to the point
94
00:04:18,592 --> 00:04:20,225
where Jackrabbots can one day do
95
00:04:20,260 --> 00:04:22,928
what most people can just do by nature.
96
00:04:22,963 --> 00:04:25,931
But today's computers are already performing tasks
97
00:04:25,966 --> 00:04:28,200
no human is capable of performing,
98
00:04:28,235 --> 00:04:31,069
even if it's not quite AI.
99
00:04:31,104 --> 00:04:33,371
That technology continues to evolve,
100
00:04:33,407 --> 00:04:36,241
and the so-called supercomputer is at the cutting edge.
101
00:04:39,112 --> 00:04:42,047
Bryan Biegel works in NASA's Advanced Supercomputing
102
00:04:42,082 --> 00:04:44,783
Division, where they're working with machines that can actually
103
00:04:44,818 --> 00:04:46,318
help us see into the future...
104
00:04:46,353 --> 00:04:48,553
Let's go to the vislab, and I'll
show you some of the things
105
00:04:48,589 --> 00:04:50,922
that our users do
with our supercomputers.
106
00:04:50,958 --> 00:04:54,593
...using code so advanced it took 15 years to write.
107
00:04:56,163 --> 00:04:58,763
Yeah, so this is actually a
simulation of Earth's oceans.
108
00:04:58,799 --> 00:05:02,167
This is... It uses measured
data from NASA's satellites
109
00:05:02,202 --> 00:05:04,636
and puts it into a
really huge computer model
110
00:05:04,671 --> 00:05:08,273
on the supercomputers, and
predicts the ocean behaviour.
111
00:05:08,308 --> 00:05:12,277
So this single simulation
actually used up to 70,000
112
00:05:12,312 --> 00:05:16,014
processors, and generated
over 3 petabytes of data.
113
00:05:16,049 --> 00:05:18,750
That's like 10,000 times
what you would have
114
00:05:18,785 --> 00:05:20,285
on your entire laptop.
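The comparison is easy to check as rough arithmetic: 3 petabytes divided by the stated factor of 10,000 comes to about 300 gigabytes, which is indeed a plausible laptop drive.

```python
# Sanity-checking the on-screen claim: 3 PB is ~10,000x a laptop.
petabyte = 10**15                  # bytes
dataset = 3 * petabyte
laptop = dataset / 10_000          # the claimed ratio
print(laptop / 10**9)              # -> 300.0 GB, a plausible laptop drive
```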
115
00:05:20,320 --> 00:05:21,853
Let me show you
this in full scale.
116
00:05:21,889 --> 00:05:24,322
This is actually scaled down so
you can see it on the monitors,
117
00:05:24,358 --> 00:05:25,924
but if I go to the next one,
118
00:05:25,959 --> 00:05:29,494
you can see what the full-scale
version of this simulation is.
119
00:05:29,530 --> 00:05:33,298
For example, this is the
ice over the Antarctic Ocean,
120
00:05:33,333 --> 00:05:35,133
and you can see where it cracks,
121
00:05:35,168 --> 00:05:36,968
and it sends ripples
out into the ocean.
122
00:05:37,004 --> 00:05:38,803
And they're working on...
123
00:05:38,839 --> 00:05:41,139
they're still continuing to
advance the accuracy of this
124
00:05:41,174 --> 00:05:43,508
model so they can predict
decades into the future.
125
00:05:43,544 --> 00:05:46,945
Would you say this is
artificial intelligence?
126
00:05:46,980 --> 00:05:48,713
It seems like it's
getting there, right?
127
00:05:48,749 --> 00:05:52,350
It's so complex, but we have
programmed in this case
128
00:05:52,386 --> 00:05:54,352
every single line of this code.
129
00:05:54,388 --> 00:05:56,788
It's a little different
than artificial intelligence,
130
00:05:56,823 --> 00:05:58,890
where it's more of a--
even more of a black box,
131
00:05:58,926 --> 00:06:01,359
where we don't know exactly
how it's gonna work
132
00:06:01,395 --> 00:06:03,962
and how that artificial
brain is gonna evolve.
133
00:06:03,997 --> 00:06:07,299
Why would NASA be interested
in artificial intelligence?
134
00:06:07,334 --> 00:06:09,534
Well, there are a few reasons.
135
00:06:09,570 --> 00:06:12,537
Of course NASA's trying to send
artificial probes further and
136
00:06:12,573 --> 00:06:16,041
further out into the solar
system, and eventually beyond.
137
00:06:16,076 --> 00:06:20,078
We can't program those
for all of the things that
138
00:06:20,113 --> 00:06:23,982
they're gonna encounter, so we need
them to be able to do their job
139
00:06:24,017 --> 00:06:26,718
without us constantly telling
them everything to do.
140
00:06:26,753 --> 00:06:28,887
Even in interpreting
data like this,
141
00:06:28,922 --> 00:06:31,923
this is such a huge data set
that there's no way that
142
00:06:31,959 --> 00:06:34,593
a human or even a team of
humans can look at all of it.
143
00:06:34,628 --> 00:06:38,997
Computing is gonna continue to
expand dramatically over even
144
00:06:39,032 --> 00:06:42,734
the next 10 years, and so we'll
be able to do an even better job
145
00:06:42,769 --> 00:06:45,236
of modeling the
history of the universe,
146
00:06:45,272 --> 00:06:47,806
the future of the universe,
the history and future
147
00:06:47,841 --> 00:06:51,209
of our planet, and go further
in exploring our universe.
148
00:06:56,950 --> 00:06:59,351
BEN: One path to artificial
intelligence could be creating
149
00:06:59,386 --> 00:07:02,721
systems that take cues from the way our own brains work,
150
00:07:02,756 --> 00:07:05,023
using a process called "deep learning".
151
00:07:07,060 --> 00:07:09,194
I'm here at Berkeley to meet one of the leading minds
152
00:07:09,229 --> 00:07:11,029
in the development of AI.
153
00:07:11,064 --> 00:07:13,064
Stuart Russell has been at this for more than 20 years.
154
00:07:14,901 --> 00:07:17,769
He co-wrote THE definitive textbook on AI,
155
00:07:17,804 --> 00:07:21,072
and he understands both its promise and its dangers.
156
00:07:22,676 --> 00:07:26,144
So the brain has an
enormous network of neurons.
157
00:07:26,179 --> 00:07:28,813
So a neuron is a cell
that has these long,
158
00:07:28,849 --> 00:07:31,282
thin connections
to other neurons,
159
00:07:31,318 --> 00:07:34,619
so it kind of looks like a big
tangle of electrical spaghetti.
160
00:07:34,655 --> 00:07:39,090
And we have tens of
billions of neurons,
161
00:07:39,126 --> 00:07:43,995
and deep learning networks are
much, much, much simpler.
162
00:07:44,031 --> 00:07:48,466
What's similar about them is
that both the brain and these
163
00:07:48,502 --> 00:07:52,103
deep learning networks can learn
how to perform a given function,
164
00:07:52,139 --> 00:07:55,073
and they learn by being
given lots of examples.
165
00:07:55,108 --> 00:07:57,242
But deep learning
networks, for example,
166
00:07:57,277 --> 00:08:00,111
if you want it to learn to
recognize cats and dogs
167
00:08:00,147 --> 00:08:04,115
and candles and... staplers
and things like that,
168
00:08:04,151 --> 00:08:07,118
you can show it millions of
photographs with these things,
169
00:08:07,154 --> 00:08:08,853
labeled with what they are.
170
00:08:08,889 --> 00:08:13,425
So on tasks like recognizing
a wide range of categories,
171
00:08:13,460 --> 00:08:15,927
the competitions they run
now have a thousand different
172
00:08:15,962 --> 00:08:17,429
categories of objects.
173
00:08:17,464 --> 00:08:20,865
And in the last five years,
we've gone from systems that
174
00:08:20,901 --> 00:08:24,602
might get 5% accuracy on
that thousand category task
175
00:08:24,638 --> 00:08:27,439
to systems that are
getting 98% accuracy.
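To make "learn by being given lots of examples" concrete, here is a deliberately tiny sketch: a single softmax layer fit by gradient descent to labeled toy data. Real deep networks stack many such layers and train on millions of labeled photographs; everything here (the data, the sizes, the learning rate) is invented for illustration.

```python
# Minimal "learning from labeled examples": one softmax layer
# trained by gradient descent on toy two-class data. A deep network
# stacks many such layers; the learning principle is the same.
import numpy as np

rng = np.random.default_rng(0)

# Labeled examples: two clusters of 2-D points standing in for
# photos of two categories ("cats" vs "dogs").
X = np.vstack([rng.normal(-1, 0.5, (100, 2)),    # class 0
               rng.normal(+1, 0.5, (100, 2))])   # class 1
y = np.array([0] * 100 + [1] * 100)

W = np.zeros((2, 2))   # weights: 2 input features -> 2 classes
b = np.zeros(2)

for _ in range(500):
    logits = X @ W + b
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)            # softmax probabilities
    grad = p.copy()
    grad[np.arange(len(y)), y] -= 1              # dLoss/dlogits (cross-entropy)
    W -= 0.1 * X.T @ grad / len(y)               # gradient descent step
    b -= 0.1 * grad.mean(axis=0)

accuracy = ((X @ W + b).argmax(axis=1) == y).mean()
print(f"training accuracy: {accuracy:.0%}")      # ~100% on this easy toy set
```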
176
00:08:27,474 --> 00:08:28,940
So you've been at
this for a while.
177
00:08:28,975 --> 00:08:31,276
I mean, how does it make you
feel, that the progress now
178
00:08:31,311 --> 00:08:34,612
is becoming exponential?
179
00:08:34,648 --> 00:08:36,548
The exponential word is
a dangerous one, 'cause...
180
00:08:36,583 --> 00:08:38,016
(Laughing)
181
00:08:38,051 --> 00:08:40,952
'cause it tends to suggest
that things will continue to
182
00:08:40,987 --> 00:08:43,455
accelerate without end, but
it may turn out that it will
183
00:08:43,490 --> 00:08:50,061
plateau, and that for other
tasks we need new breakthroughs.
184
00:08:50,097 --> 00:08:51,863
Quantum computing?
185
00:08:51,898 --> 00:08:54,632
Quantum computing possibly,
186
00:08:54,668 --> 00:08:58,369
but that's kind of a
cheat in some sense.
187
00:08:58,405 --> 00:09:01,740
That says that rather
than really understand
188
00:09:01,775 --> 00:09:04,743
how the brain manages
to do these amazing tasks
189
00:09:04,778 --> 00:09:08,012
with what is really
not that much hardware,
190
00:09:08,048 --> 00:09:11,015
and so I almost hope
that quantum computing...
191
00:09:11,051 --> 00:09:12,550
Doesn't happen.
192
00:09:12,586 --> 00:09:15,153
Doesn't happen, or happens a
long time in the future where--
193
00:09:15,188 --> 00:09:19,190
and you know, give us a chance
to keep working on finding
194
00:09:19,226 --> 00:09:22,527
the secrets of intelligence,
the things that make us smart,
195
00:09:22,562 --> 00:09:24,829
and gain real understanding.
196
00:09:24,865 --> 00:09:27,432
'Cause just brute forcing it
isn't really understanding
197
00:09:27,467 --> 00:09:29,033
what's going on.
198
00:09:30,804 --> 00:09:32,437
Quantum computing may be the next giant leap
199
00:09:32,472 --> 00:09:35,607
in building the artificial intelligence of the future.
200
00:09:35,642 --> 00:09:38,209
It's a technology so powerful that we really could use it
201
00:09:38,245 --> 00:09:40,879
to develop AI without having to understand
202
00:09:40,914 --> 00:09:43,581
how human learning works.
203
00:09:43,617 --> 00:09:45,850
And though Stuart Russell hopes that quantum computing
204
00:09:45,886 --> 00:09:47,752
won't happen for a long time,
205
00:09:47,788 --> 00:09:51,890
Rupak Biswas can't wait - and he isn't.
206
00:09:51,925 --> 00:09:55,293
He runs the Quantum Artificial Intelligence Lab at NASA's
207
00:09:55,328 --> 00:09:59,063
Ames Research Center, where scientists are working on
208
00:09:59,099 --> 00:10:03,635
a powerful experimental computer called the D-Wave.
209
00:10:03,670 --> 00:10:06,237
It's built on the principles of quantum mechanics,
210
00:10:06,273 --> 00:10:09,440
which I was hoping Rupak could explain to me.
211
00:10:09,476 --> 00:10:11,376
If someone tells you that
they'll explain to you
212
00:10:11,411 --> 00:10:13,778
what quantum mechanics is,
you should run away from them
213
00:10:13,814 --> 00:10:15,880
because no one really
understands this field.
214
00:10:15,916 --> 00:10:17,482
- (Laughing)
- It's a very complex field.
215
00:10:17,517 --> 00:10:21,286
Rupak agreed to show me NASA's quantum computer.
216
00:10:21,321 --> 00:10:24,556
This is the D-Wave
quantum annealing system.
217
00:10:24,591 --> 00:10:28,493
So what you'll see here
is basically a black box,
218
00:10:28,528 --> 00:10:31,629
and that is where the
D-Wave processor is.
219
00:10:31,665 --> 00:10:35,333
This is the Star Trek computer
that's more powerful--
220
00:10:35,368 --> 00:10:37,802
millions of times more powerful
than anything Microsoft's got.
221
00:10:37,838 --> 00:10:40,672
Well sure, yeah, for certain
classes of problems, yes.
222
00:10:40,707 --> 00:10:42,640
This is something that
is really more powerful,
223
00:10:42,676 --> 00:10:44,742
and we expect to get a lot
of research done on this.
224
00:10:46,379 --> 00:10:48,613
NASA is hoping to use the quantum computer to develop
225
00:10:48,648 --> 00:10:51,916
quantum algorithms, and algorithms are a key component
226
00:10:51,952 --> 00:10:54,085
of the code running AI.
227
00:10:54,120 --> 00:10:57,355
In layman's terms, an algorithm is a kind of recipe
228
00:10:57,390 --> 00:10:59,424
for solving a problem.
229
00:10:59,459 --> 00:11:02,260
Basically, it's the set of step-by-step instructions
230
00:11:02,295 --> 00:11:05,697
given to a computer to help it accomplish a task.
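As a concrete, deliberately mundane instance of such a recipe, here is binary search: a fixed sequence of steps a computer follows to find a value in a sorted list.

```python
# An algorithm as a step-by-step recipe: binary search.

def binary_search(items, target):
    """Return the index of target in the sorted list items, or -1."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2           # step 1: look at the middle item
        if items[mid] == target:       # step 2: stop if it matches
            return mid
        if items[mid] < target:        # step 3: otherwise discard the
            lo = mid + 1               #         half that can't contain it
        else:
            hi = mid - 1
    return -1                          # target is not in the list

print(binary_search([2, 5, 8, 13, 21, 34], 13))   # -> 3
```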
231
00:11:05,732 --> 00:11:09,834
The question here though is that
the algorithm could be different
232
00:11:09,870 --> 00:11:12,170
on a supercomputer than
on a quantum computer,
233
00:11:12,205 --> 00:11:13,805
even though you are trying
to solve the same problem.
234
00:11:13,840 --> 00:11:15,139
So then what is the difference?
235
00:11:15,175 --> 00:11:17,141
Because, you know, to a
lot of people when you say
236
00:11:17,177 --> 00:11:19,010
supercomputer and
quantum computer,
237
00:11:19,045 --> 00:11:20,511
aren't they all
supercomputers?
238
00:11:20,547 --> 00:11:21,613
What's the difference?
239
00:11:21,648 --> 00:11:23,715
Right, so supercomputers
are basically--
240
00:11:23,750 --> 00:11:27,785
it's based on this transistor,
and the transistor is...
241
00:11:27,821 --> 00:11:32,223
in layman's terms, is
basically a very small switch.
242
00:11:32,259 --> 00:11:34,125
So it's either zero or one.
243
00:11:34,160 --> 00:11:37,028
So a bit - and in a traditional
computer, it's called a bit -
244
00:11:37,063 --> 00:11:39,964
it's either zero or one, whereas
in quantum you have this thing
245
00:11:40,000 --> 00:11:43,134
called a qubit, which can be
zero and one at the same time.
246
00:11:43,169 --> 00:11:46,504
And that's essentially the
difference between a computer or
247
00:11:46,539 --> 00:11:49,974
a supercomputer or what we would
consider classical computing
248
00:11:50,010 --> 00:11:53,344
versus quantum computing, where
the quantum computer allows you
249
00:11:53,380 --> 00:11:55,647
to be in two states
at the same time.
250
00:11:55,682 --> 00:11:57,415
And that's where
the power comes from.
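What "zero and one at the same time" means can be sketched in a few lines: a qubit is a pair of complex amplitudes, and the probability of reading 0 or 1 is the squared magnitude of each. (This simulates a gate-model qubit; the D-Wave is a quantum annealer, a different architecture, so treat this purely as a conceptual illustration.)

```python
# One simulated qubit: a vector of two complex amplitudes.
# A classical bit is exactly 0 or 1; a qubit can be a weighted
# combination of both states at once.
import numpy as np

zero = np.array([1, 0], dtype=complex)        # the |0> state

# The Hadamard gate turns |0> into an equal superposition of 0 and 1.
H = np.array([[1,  1],
              [1, -1]], dtype=complex) / np.sqrt(2)

state = H @ zero
probs = np.abs(state) ** 2                    # measurement probabilities
print(probs)                                  # -> [0.5 0.5]
```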
251
00:11:57,450 --> 00:12:00,051
A quantum computer could help solve problems
252
00:12:00,086 --> 00:12:02,420
that no technology can solve today:
253
00:12:02,455 --> 00:12:05,890
finding cures for diseases or designing space colonies.
254
00:12:05,926 --> 00:12:09,027
But what if we choose to use that awesome computing power
255
00:12:09,062 --> 00:12:11,529
for destructive purposes?
256
00:12:11,564 --> 00:12:13,564
You know, if you think
about nuclear energy,
257
00:12:13,600 --> 00:12:16,901
you can use nuclear energy to
solve the world's energy problems,
258
00:12:16,937 --> 00:12:18,903
but you can also use
it for bad things.
259
00:12:18,939 --> 00:12:20,772
You know, so all
of these things,
260
00:12:20,807 --> 00:12:23,408
if they are used improperly
and get in the wrong hands,
261
00:12:23,443 --> 00:12:24,542
it could lead to trouble.
262
00:12:32,218 --> 00:12:35,153
BEN: I'm in Oxford to meet the founding engineer of Skype.
263
00:12:35,188 --> 00:12:37,522
Since leaving the company, Jaan Tallinn has become
264
00:12:37,557 --> 00:12:39,290
one of the most prominent voices
265
00:12:39,326 --> 00:12:41,693
in the field of artificial intelligence.
266
00:12:41,728 --> 00:12:44,495
He sees great advantages in developing AI,
267
00:12:44,531 --> 00:12:47,498
but he also warns of the risks of making the technology more
268
00:12:47,534 --> 00:12:52,236
powerful without understanding its potential harms.
269
00:12:52,272 --> 00:12:56,874
Once you have systems that are
basically smarter than humans
270
00:12:56,910 --> 00:13:01,145
when it comes to developing
further intelligent systems,
271
00:13:01,181 --> 00:13:03,181
then you have intelligent
systems developing
272
00:13:03,216 --> 00:13:05,249
intelligent systems that
in turn go on to develop
273
00:13:05,285 --> 00:13:07,385
even more intelligent systems.
274
00:13:07,420 --> 00:13:09,053
And you have this
intelligence explosion.
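The feedback loop Tallinn describes can be caricatured numerically: if each generation of systems designs its successor, and better designers produce proportionally better successors, the growth is super-exponential. The update rule below is invented purely for illustration, not a claim about real AI.

```python
# Cartoon of an "intelligence explosion": each generation designs
# the next, and design skill compounds on itself. The 0.5 factor
# and the update rule are made up for illustration only.

capability = 1.0
for generation in range(8):
    capability *= 1 + 0.5 * capability
    print(f"generation {generation}: capability {capability:.1f}")
```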
275
00:13:10,523 --> 00:13:12,256
He thinks the next step in AI
276
00:13:12,292 --> 00:13:14,826
is the creation of so-called "general intelligence".
277
00:13:16,229 --> 00:13:19,697
What's the difference between
what we're producing now and
278
00:13:19,733 --> 00:13:22,266
"general intelligence"
of the future?
279
00:13:22,302 --> 00:13:24,435
If you think about a
chess-playing computer,
280
00:13:24,471 --> 00:13:28,506
it's just modelling the
chessboard in its memory
281
00:13:28,541 --> 00:13:32,243
and looking at scenarios
how this game can play out,
282
00:13:32,278 --> 00:13:34,912
and what are the actions that
it can do on the chessboard.
283
00:13:34,948 --> 00:13:38,850
So it actually would be good
from a chess-playing perspective
284
00:13:38,885 --> 00:13:41,219
to not only model
the chessboard,
285
00:13:41,254 --> 00:13:44,522
but also what's going on
in the brain of your opponent.
286
00:13:44,557 --> 00:13:46,791
So it's one thing for a computer to understand
287
00:13:46,826 --> 00:13:49,761
the game of chess, but if it can look up from the board
288
00:13:49,796 --> 00:13:53,297
and understand me, that's a whole new level of scary.
289
00:13:53,333 --> 00:13:56,234
But I'm not the only one that's worried.
290
00:13:56,269 --> 00:14:00,104
In 2015, a group of prominent thinkers signed an open letter,
291
00:14:00,140 --> 00:14:03,041
warning researchers of the risks of making AI more powerful
292
00:14:03,076 --> 00:14:05,109
without understanding its potential harms.
293
00:14:07,013 --> 00:14:09,380
The signatories include Stuart Russell,
294
00:14:09,416 --> 00:14:12,750
Jaan Tallinn, Stephen Hawking, and Elon Musk.
295
00:14:14,154 --> 00:14:16,554
Nick Bostrom also signed the letter.
296
00:14:16,589 --> 00:14:18,923
He's a Swedish philosopher who's concerned that
297
00:14:18,958 --> 00:14:21,459
a mega-powerful AI that is capable of fulfilling the goals
298
00:14:21,494 --> 00:14:25,063
we give it could cause our extinction.
299
00:14:25,098 --> 00:14:27,131
What if we don't understand the full consequences
300
00:14:27,167 --> 00:14:29,233
of what we're asking it to do?
301
00:14:29,269 --> 00:14:31,669
What if we leave a crucial detail out?
302
00:14:33,106 --> 00:14:36,474
Take the myth of King Midas.
303
00:14:36,509 --> 00:14:37,842
You know, he asked...
304
00:14:37,877 --> 00:14:40,144
everything he touches
should be turned into gold,
305
00:14:40,180 --> 00:14:42,346
which sounds like a great idea
because you'll be very wealthy
306
00:14:42,382 --> 00:14:45,450
if you can turn
coffee mugs into gold.
307
00:14:45,485 --> 00:14:47,752
Then he touches his
food, it turns into gold,
308
00:14:47,787 --> 00:14:50,455
he touches his daughter,
turns into a gold sculpture,
309
00:14:50,490 --> 00:14:52,457
so not such a cool idea.
310
00:14:52,492 --> 00:14:55,126
Turns out that it's actually
quite difficult to write down
311
00:14:55,161 --> 00:14:58,930
some objective function such
that it would actually be good
312
00:14:58,965 --> 00:15:02,600
if that objective function
were maximally realized.
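Bostrom's point can be made literal with a toy optimizer: it maximizes exactly the objective it was handed, and any cost we forgot to write down never enters the decision. All actions and numbers below are invented for the illustration.

```python
# King Midas as a misspecified objective: the optimizer maximizes
# the stated reward ("gold") and is blind to the omitted cost ("harm").

actions = {
    "touch a coffee mug":  {"gold": 1,   "harm": 0},
    "touch the palace":    {"gold": 100, "harm": 0},
    "touch your food":     {"gold": 5,   "harm": 50},
    "touch your daughter": {"gold": 200, "harm": 1000},
}

def objective(outcome):
    return outcome["gold"]        # "harm" was never written into the spec

best = max(actions, key=lambda a: objective(actions[a]))
print(best)   # -> "touch your daughter": the objective, maximally realized
```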
313
00:15:02,635 --> 00:15:06,337
It seemed to me that this could
be the most important thing
314
00:15:06,372 --> 00:15:08,506
in all of human history.
315
00:15:08,541 --> 00:15:12,343
What happens if AI succeeds
at its original ambition?
316
00:15:12,378 --> 00:15:14,378
Which has all along been
to achieve full general
317
00:15:14,414 --> 00:15:18,549
intelligence, where we have
potentially artificial agents
318
00:15:18,585 --> 00:15:22,887
that can strategize, can
deceive, that can form
319
00:15:22,922 --> 00:15:26,491
long-term goals and find creative
ways of achieving them.
320
00:15:26,526 --> 00:15:30,328
And at that point, that kind
of AI is not necessarily
321
00:15:30,363 --> 00:15:33,865
best thought of as merely a
tool, merely another gadget.
322
00:15:33,900 --> 00:15:36,067
At that point, we're really
talking about creating
323
00:15:36,102 --> 00:15:37,902
another intelligent life form.
324
00:15:37,937 --> 00:15:42,440
So could this potentially be -
AI - the last human invention?
325
00:15:42,475 --> 00:15:48,312
Yeah, so once you have general
intelligence at the human or
326
00:15:48,348 --> 00:15:51,682
superhuman level, then it's
not just that you have made
327
00:15:51,718 --> 00:15:54,552
a breakthrough in AI, but
you have made indirectly
328
00:15:54,587 --> 00:15:56,854
a breakthrough in every
other area as well.
329
00:15:56,890 --> 00:15:59,390
So the AI can do
research, science,
330
00:15:59,425 --> 00:16:01,659
development, all the
other things that humans do.
331
00:16:01,694 --> 00:16:04,529
So potentially what you have
is a kind of telescoping of
332
00:16:04,564 --> 00:16:07,698
the future, where all those
possible technologies that,
333
00:16:07,734 --> 00:16:10,701
you know, maybe we would have
developed given 40,000 years
334
00:16:10,737 --> 00:16:13,671
to work on it - you
know, space colonization,
335
00:16:13,706 --> 00:16:16,407
cures for aging, all
these other things.
336
00:16:16,442 --> 00:16:20,711
So in other words, we could
be this hyper-invincible,
337
00:16:20,747 --> 00:16:23,748
space-hopping species with AI,
338
00:16:23,783 --> 00:16:25,349
but we also could
go extinct by it?
339
00:16:25,385 --> 00:16:26,684
Yeah.
340
00:16:26,719 --> 00:16:31,122
I think... that machine
superintelligence is -
341
00:16:31,157 --> 00:16:34,559
depending on how optimistic
you feel on a given day -
342
00:16:34,594 --> 00:16:37,061
either the keyhole through which
343
00:16:37,096 --> 00:16:40,131
Earth-originating
intelligent life has to pass,
344
00:16:40,166 --> 00:16:41,465
and we could crash into the wall
345
00:16:41,501 --> 00:16:43,100
instead of actually
going through this.
346
00:16:43,136 --> 00:16:44,635
But if you make
it through there,
347
00:16:44,671 --> 00:16:47,672
then the life expectancy for
human civilization might then
348
00:16:47,707 --> 00:16:49,807
easily measure in
billions of years.
349
00:16:57,317 --> 00:16:59,116
BEN: The scientists and philosophers working on
350
00:16:59,152 --> 00:17:02,453
artificial intelligence are sure that AI can improve our lives,
351
00:17:02,488 --> 00:17:05,423
yet the same people worry that it could destroy us.
352
00:17:05,458 --> 00:17:06,791
But how would that happen?
353
00:17:06,826 --> 00:17:08,259
- Hi, I'm Ben.
- Hi!
354
00:17:08,294 --> 00:17:10,261
- Nice to meet you, Ben.
- Nice to meet you.
355
00:17:10,296 --> 00:17:13,464
What if the AI we build is actually designed to kill?
356
00:17:15,335 --> 00:17:18,269
Heather Roff has testified before the UN as an expert
357
00:17:18,304 --> 00:17:19,737
on autonomous weapons.
358
00:17:21,107 --> 00:17:24,742
I once sat next to a
grad student on a plane,
359
00:17:24,777 --> 00:17:27,144
and he told me that he
was an AI researcher.
360
00:17:27,180 --> 00:17:28,679
And so I was very
interested, saying,
361
00:17:28,715 --> 00:17:30,348
"What is it that you work on?"
362
00:17:30,383 --> 00:17:33,284
And he said, "Well, I
work on image recognition."
363
00:17:33,319 --> 00:17:35,152
And I was like, "Really?
Tell me more about that."
364
00:17:35,188 --> 00:17:38,256
And he said, "Well, I'm
looking at how we identify
365
00:17:38,291 --> 00:17:40,424
different birds,
and how we identify
366
00:17:40,460 --> 00:17:42,660
different types of
coral and starfish."
367
00:17:42,695 --> 00:17:44,128
And I said, "Who
funds your research?"
368
00:17:44,163 --> 00:17:46,597
And he said, "The
Office of Naval Research."
369
00:17:46,633 --> 00:17:49,367
And I said, "Do you really
think that your research
370
00:17:49,402 --> 00:17:51,502
is going to be confined
to birds and starfish?"
371
00:17:51,537 --> 00:17:53,537
And he didn't actually
have an answer for me.
372
00:17:53,573 --> 00:17:55,373
He didn't really think
through the next step.
373
00:17:55,408 --> 00:17:58,509
How bad would it be if
superintelligence is developed
374
00:17:58,544 --> 00:18:00,211
by the military?
375
00:18:00,246 --> 00:18:03,614
So I think if a
superintelligence emerges,
376
00:18:03,650 --> 00:18:07,385
we're all in trouble
for a variety of reasons.
377
00:18:07,420 --> 00:18:09,954
One of the things that we know
about military applications
378
00:18:09,989 --> 00:18:12,356
is that they're not for the
benefit of humanity, right?
379
00:18:12,392 --> 00:18:15,159
They're directed
towards offensive harming.
380
00:18:15,194 --> 00:18:17,695
We're not talking about
creating an AI that's gonna be
381
00:18:17,730 --> 00:18:21,032
trying to solve...
climate change
382
00:18:21,067 --> 00:18:23,567
or solve-- or create poetry.
383
00:18:23,603 --> 00:18:27,138
We're not worried about
those types of AI applications.
384
00:18:27,173 --> 00:18:29,907
We could be worried about those
ones becoming superintelligent,
385
00:18:29,942 --> 00:18:33,244
but you ought to be really
concerned about the strong AI
386
00:18:33,279 --> 00:18:34,812
that has guns on it.
387
00:18:34,847 --> 00:18:37,381
What does a superintelligent
weapon look like?
388
00:18:37,417 --> 00:18:40,251
A superintelligence could
be connected to everything.
389
00:18:40,286 --> 00:18:43,821
If it has a network, it has a
capability of being connected
390
00:18:43,856 --> 00:18:46,691
through Wi-Fi, or it
could figure out new ways of
391
00:18:46,726 --> 00:18:48,826
connecting itself if
you shut off that Wi-Fi.
392
00:18:48,861 --> 00:18:50,828
It could propagate itself
and its software
393
00:18:50,863 --> 00:18:52,997
on different servers
and different things
394
00:18:53,032 --> 00:18:54,899
so you could never
really truly get rid of it.
395
00:18:54,934 --> 00:18:58,035
It could hook itself into
missile defense systems
396
00:18:58,071 --> 00:19:00,905
and nuclear arsenals, and it
could do whatever it liked.
397
00:19:00,940 --> 00:19:03,040
I mean, that's the whole thing
about being a superintelligence,
398
00:19:03,076 --> 00:19:05,443
is you're everywhere.
399
00:19:05,478 --> 00:19:07,244
You're, you know...
400
00:19:07,280 --> 00:19:09,347
Think Skynet but scarier.
401
00:19:09,382 --> 00:19:11,749
(Chuckling)
402
00:19:11,784 --> 00:19:13,084
Wow, can't wait.
403
00:19:13,119 --> 00:19:14,518
(Laughing)
404
00:19:14,554 --> 00:19:16,620
Thanks to all those AI
researchers for getting that.
405
00:19:16,656 --> 00:19:19,890
Well, I mean, the
AI researchers...
406
00:19:19,926 --> 00:19:22,093
I think it's not
fair to blame them.
407
00:19:22,128 --> 00:19:25,463
I think they're trying to do
things that are good with AI.
408
00:19:25,498 --> 00:19:27,765
It's the moment someone takes
a scalpel from the surgeon
409
00:19:27,800 --> 00:19:29,200
and makes it a
knife for killing.
410
00:19:31,170 --> 00:19:33,070
But others think the greatest dangers will come from
411
00:19:33,106 --> 00:19:36,774
unintended consequences, as Jaan Tallinn explained to me.
412
00:19:38,945 --> 00:19:41,412
I don't think it's correct to
say that AI is a technology
413
00:19:41,447 --> 00:19:43,881
just like any other technology,
or tool like any other tool.
414
00:19:43,916 --> 00:19:46,550
No, it's a technology that
can potentially create
415
00:19:46,586 --> 00:19:48,552
new technologies itself.
416
00:19:48,588 --> 00:19:52,123
Now, there are a lot of
smart people, as we know,
417
00:19:52,158 --> 00:19:54,959
looking into this problem
and this issue, and...
418
00:19:54,994 --> 00:19:56,093
Not enough though.
419
00:19:56,129 --> 00:19:57,495
- Not enough?
- No.
420
00:19:57,530 --> 00:20:00,464
Like imagine that we are
building a spaceship that's able
421
00:20:00,500 --> 00:20:02,900
to carry all of humanity.
422
00:20:04,003 --> 00:20:07,171
And like the
boarding has already begun,
423
00:20:07,206 --> 00:20:09,740
and children are
already onboard.
424
00:20:09,776 --> 00:20:12,276
And then there's like a small
group of people who used to be
425
00:20:12,311 --> 00:20:13,911
completely ignored,
who are saying that,
426
00:20:13,946 --> 00:20:17,481
"Look, we're gonna need
some steering on this ship."
427
00:20:17,517 --> 00:20:20,317
And now people are going
like, "Oh, wait a minute.
428
00:20:20,353 --> 00:20:22,520
Yeah, steering
might become handy."
429
00:20:22,555 --> 00:20:23,788
(Laughing)
430
00:20:23,823 --> 00:20:25,322
But in this case,
steering is, what,
431
00:20:25,358 --> 00:20:27,992
programming the AI to make sure
that it doesn't kill us all?
432
00:20:28,027 --> 00:20:30,928
I think the more general
point is that whenever you're
433
00:20:30,963 --> 00:20:34,932
a technology developer, you
have the responsibility of...
434
00:20:36,869 --> 00:20:38,803
thinking through the
consequences of your actions.
435
00:20:38,838 --> 00:20:45,843
♪
436
00:20:45,878 --> 00:20:51,649
It might be an incredibly subtle
process which eventually ends up
437
00:20:51,684 --> 00:20:55,453
with the human race becoming
sort of enfeebled and dependent
438
00:20:55,488 --> 00:20:58,956
on machines in ways
that leave us vulnerable
439
00:20:58,991 --> 00:21:00,858
to any kind of unexpected event.
440
00:21:00,893 --> 00:21:03,627
Is it possible right now we're
almost tripping over ourselves,
441
00:21:03,663 --> 00:21:07,531
in that we're coming up
with discoveries about AI
442
00:21:07,567 --> 00:21:10,401
and we don't really realize
the full implications of
443
00:21:10,436 --> 00:21:13,170
what we'd just discovered,
and we just use it?
444
00:21:13,206 --> 00:21:15,172
You know, for centuries
or millennia or even--
445
00:21:15,208 --> 00:21:17,208
"It'll never happen,"
you hear that, you know.
446
00:21:17,243 --> 00:21:19,043
"We don't have to worry,
it's just impossible."
447
00:21:19,078 --> 00:21:23,080
The history of nuclear
physics, there was a...
448
00:21:23,115 --> 00:21:25,316
a speech by Rutherford.
449
00:21:25,351 --> 00:21:27,751
He's the guy who split the atom.
450
00:21:27,787 --> 00:21:31,689
In 1933, September 11th, he
said, "There is no possibility
451
00:21:31,724 --> 00:21:34,992
that we'll ever be able to
extract energy from atoms."
452
00:21:35,027 --> 00:21:39,763
Less than 24 hours
later, Szilard invented
453
00:21:39,799 --> 00:21:42,032
the neutron-based
nuclear chain reaction,
454
00:21:42,068 --> 00:21:45,169
and instantly realized what
it would mean in terms of
455
00:21:45,204 --> 00:21:47,905
the ability to create
nuclear explosions.
456
00:21:47,940 --> 00:21:49,740
So it went from
never to 24 hours.
457
00:21:51,911 --> 00:21:54,011
The scientist who split the atom didn't understand
458
00:21:54,046 --> 00:21:56,614
that his discovery would so quickly lead to the atom bomb.
459
00:21:59,051 --> 00:22:01,519
But the same discovery later gave us the energy that still
460
00:22:01,554 --> 00:22:04,388
provides electricity to millions around the world.
461
00:22:05,591 --> 00:22:07,791
When it comes to AI, it seems like we're about to split
462
00:22:07,827 --> 00:22:09,793
the proverbial atom again.
463
00:22:09,829 --> 00:22:12,897
The future of humanity could be at stake.
464
00:22:12,932 --> 00:22:16,734
And how we build AI now - what we design it to do -
465
00:22:16,769 --> 00:22:18,369
could make the difference.