4
00:00:14,324 --> 00:00:15,532
For decades,
5
00:00:15,532 --> 00:00:17,017
we have discussed
the many outcomes,
6
00:00:17,017 --> 00:00:19,053
regarding artificial
intelligence.
7
00:00:19,053 --> 00:00:21,469
Could our world be dominated?
8
00:00:21,469 --> 00:00:25,232
Could our independence and
autonomy be stripped from us,
9
00:00:25,232 --> 00:00:28,407
or are we able to control
what we have created?
10
00:00:28,407 --> 00:00:31,100
[upbeat music]
11
00:00:37,416 --> 00:00:41,006
Could we use artificial
intelligence to benefit our society?
12
00:00:41,006 --> 00:00:44,009
Just how thin is the line
between the development
13
00:00:44,009 --> 00:00:46,805
of civilization and chaos?
14
00:00:46,805 --> 00:00:49,428
[upbeat music]
15
00:01:13,211 --> 00:01:15,903
To understand what
artificial intelligence is,
16
00:01:15,903 --> 00:01:19,803
one must understand that it
can take many different forms.
17
00:01:19,803 --> 00:01:22,047
Think of it as a web of ideas,
18
00:01:22,047 --> 00:01:25,326
slowly expanding as new
ways of utilizing computers
19
00:01:25,326 --> 00:01:26,603
are explored.
20
00:01:26,603 --> 00:01:28,260
As technology develops,
21
00:01:28,260 --> 00:01:31,539
so do the capabilities of
self-learning software.
22
00:01:31,539 --> 00:01:34,335
- [Reporter] The need to
diagnose disease quickly
23
00:01:34,335 --> 00:01:38,132
and effectively has prompted
many university medical centers
24
00:01:38,132 --> 00:01:41,791
to develop intelligent
programs that simulate the work
25
00:01:41,791 --> 00:01:44,345
of doctors and
laboratory technicians.
26
00:01:44,345 --> 00:01:47,003
[gentle music]
27
00:01:48,694 --> 00:01:51,041
- [Narrator] AI is
quickly integrating with our way of life.
28
00:01:51,041 --> 00:01:54,631
So much so that the development
of AI programs has, in itself,
29
00:01:54,631 --> 00:01:56,323
become a business opportunity.
30
00:01:57,945 --> 00:01:58,773
[upbeat music]
31
00:01:58,773 --> 00:01:59,809
In our modern age,
32
00:01:59,809 --> 00:02:01,638
we are powered by technology
33
00:02:01,638 --> 00:02:05,021
and software is transcending
its virtual existence,
34
00:02:05,021 --> 00:02:07,437
finding applications
in various fields,
35
00:02:07,437 --> 00:02:11,372
from customer support
to content creation.
36
00:02:11,372 --> 00:02:13,202
Computer-aided design,
37
00:02:13,202 --> 00:02:17,137
otherwise known as CAD, is
one of the many uses of AI.
38
00:02:17,137 --> 00:02:19,415
By analyzing
particular variables,
39
00:02:19,415 --> 00:02:22,280
computers are now able to
assist in the modification
40
00:02:22,280 --> 00:02:26,180
and creation of designs for
hardware and architecture.
41
00:02:26,180 --> 00:02:30,046
The prime use of any AI is
for optimizing processes
42
00:02:30,046 --> 00:02:32,324
that were considered
tedious before.
43
00:02:32,324 --> 00:02:35,189
In many ways, AI has
been hugely beneficial
44
00:02:35,189 --> 00:02:38,951
for technological development
thanks to its sheer speed.
45
00:02:38,951 --> 00:02:41,057
However, AI only benefits
46
00:02:41,057 --> 00:02:43,508
those to whom the
programs are distributed.
47
00:02:44,302 --> 00:02:45,613
Artificial intelligence
48
00:02:45,613 --> 00:02:47,443
is picking through your rubbish.
49
00:02:47,443 --> 00:02:51,688
This robot uses it to sort
through plastics for recycling
50
00:02:51,688 --> 00:02:53,414
and it can be retrained
51
00:02:53,414 --> 00:02:55,968
to prioritize whatever's
more marketable.
52
00:02:57,177 --> 00:03:00,180
So, AI can clearly
be incredibly useful,
53
00:03:00,180 --> 00:03:02,596
but there are deep
concerns about
54
00:03:02,596 --> 00:03:07,635
how quickly it is developing
and where it could go next.
55
00:03:08,912 --> 00:03:11,121
- The aim is to make
them as capable as humans
56
00:03:11,121 --> 00:03:14,366
and deploy them in
the service sector.
57
00:03:14,366 --> 00:03:16,230
The engineers in this research
58
00:03:16,230 --> 00:03:18,059
and development lab are working
59
00:03:18,059 --> 00:03:21,822
to take these humanoid
robots to the next level
60
00:03:21,822 --> 00:03:24,583
where they can not
only speak and move,
61
00:03:24,583 --> 00:03:27,345
but they can think
and feel and act
62
00:03:27,345 --> 00:03:30,002
and even make decisions
for themselves.
63
00:03:30,796 --> 00:03:32,695
And that daily data stream
64
00:03:32,695 --> 00:03:36,008
is being fed into an
ever expanding workforce,
65
00:03:36,008 --> 00:03:39,529
dedicated to developing
artificial intelligence.
66
00:03:41,013 --> 00:03:42,808
Those who have studied abroad
67
00:03:42,808 --> 00:03:46,122
are being encouraged to
return to the motherland.
68
00:03:46,122 --> 00:03:47,917
Libo Yang came back
69
00:03:47,917 --> 00:03:51,645
and started a tech
enterprise in his hometown.
70
00:03:51,645 --> 00:03:54,268
- [Narrator] China's market
is indeed the most open
71
00:03:54,268 --> 00:03:56,926
and active market
in the world for AI.
72
00:03:56,926 --> 00:04:01,241
It is also where there are the
most application scenarios.
73
00:04:01,241 --> 00:04:03,864
- So, AI is generally a
broad term that we apply
74
00:04:03,864 --> 00:04:04,934
to a number of techniques.
75
00:04:04,934 --> 00:04:06,384
And in this particular case,
76
00:04:06,384 --> 00:04:09,456
what we're actually looking
at is elements of AI,
77
00:04:09,456 --> 00:04:12,010
machine learning
and deep learning.
78
00:04:12,010 --> 00:04:13,701
So, in this particular case,
79
00:04:13,701 --> 00:04:17,429
we've unfortunately been
in a situation
80
00:04:17,429 --> 00:04:20,398
in this race against time
to create new antibiotics.
81
00:04:20,398 --> 00:04:22,779
The threat is
actually quite real
82
00:04:22,779 --> 00:04:25,230
and it would be
a global problem.
83
00:04:25,230 --> 00:04:27,784
We desperately needed to
harness new technologies
84
00:04:27,784 --> 00:04:29,269
in an attempt to fight it.
85
00:04:29,269 --> 00:04:30,960
We're looking at drugs
86
00:04:30,960 --> 00:04:33,411
which could potentially
fight E. coli,
87
00:04:33,411 --> 00:04:35,102
a very dangerous bacteria.
88
00:04:35,102 --> 00:04:37,207
- So, what is it
that the AI is doing
89
00:04:37,207 --> 00:04:39,348
that humans can't
do very simply?
90
00:04:39,348 --> 00:04:41,729
- So, the AI can
look for patterns
91
00:04:41,729 --> 00:04:44,560
that we wouldn't be able to
mine for with the human eye.
92
00:04:44,560 --> 00:04:47,287
Simply, within what I
do as a radiologist,
93
00:04:47,287 --> 00:04:50,980
I look for patterns of
diseases in terms of shape,
94
00:04:50,980 --> 00:04:53,914
contrast enhancement,
heterogeneity.
95
00:04:53,914 --> 00:04:55,191
But what the computer does,
96
00:04:55,191 --> 00:04:58,125
it looks for patterns
within the pixels.
97
00:04:58,125 --> 00:05:00,679
These are things that you just
can't see with the human eye.
98
00:05:00,679 --> 00:05:03,855
There's so much more data
embedded within these scans
99
00:05:03,855 --> 00:05:07,514
that we use that we can't
mine on a physical level.
100
00:05:07,514 --> 00:05:09,516
So, the computers really help.
101
00:05:09,516 --> 00:05:11,311
- [Narrator] Many
believe the growth of AI
102
00:05:11,311 --> 00:05:13,692
is dependent on
global collaboration,
103
00:05:13,692 --> 00:05:17,109
but access to the technology
is limited in certain regions.
104
00:05:17,109 --> 00:05:19,767
Global distribution is
a long-term endeavor
105
00:05:19,767 --> 00:05:21,044
and the more countries
106
00:05:21,044 --> 00:05:23,288
and businesses that
have access to the tech,
107
00:05:23,288 --> 00:05:26,429
the more regulation
the AI will require.
108
00:05:26,429 --> 00:05:29,846
In fact, it is now not
uncommon for businesses
109
00:05:29,846 --> 00:05:33,125
to be entirely run by
an artificial director.
110
00:05:33,125 --> 00:05:34,472
On many occasions,
111
00:05:34,472 --> 00:05:37,198
handing the helm of a
company to an algorithm
112
00:05:37,198 --> 00:05:40,685
can provide the best option
on the basis of probability.
113
00:05:40,685 --> 00:05:43,998
However, dependence and
reliance on software
114
00:05:43,998 --> 00:05:45,897
can be a great risk.
115
00:05:45,897 --> 00:05:47,450
Without proper safeguards,
116
00:05:47,450 --> 00:05:50,419
actions based on potentially
incorrect predictions
117
00:05:50,419 --> 00:05:53,353
can be a detriment to a
business or operation.
118
00:05:53,353 --> 00:05:55,147
Humans provide the
critical thinking
119
00:05:55,147 --> 00:05:58,461
and judgment which AI is
not capable of matching.
120
00:05:58,461 --> 00:06:00,463
- Well, this is the
Accessibility Design Center
121
00:06:00,463 --> 00:06:02,810
and it's where we try to
bring together our engineers
122
00:06:02,810 --> 00:06:05,882
and experts with the
latest AI technology,
123
00:06:05,882 --> 00:06:07,608
with people with disabilities,
124
00:06:07,608 --> 00:06:10,059
because there's a
real opportunity to firstly help people
125
00:06:10,059 --> 00:06:12,613
with disabilities enjoy
all the technology
126
00:06:12,613 --> 00:06:14,201
we have in our pockets today.
127
00:06:14,201 --> 00:06:15,720
And sometimes that's
not very accessible,
128
00:06:15,720 --> 00:06:18,688
but also build tools that
can help them engage better
129
00:06:18,688 --> 00:06:20,103
in the real world.
130
00:06:20,103 --> 00:06:22,451
And that's thanks to the
wonders of machine learning.
131
00:06:22,451 --> 00:06:25,764
- I don't think we're like at
the end of this paradigm yet.
132
00:06:25,764 --> 00:06:26,903
We'll keep pushing these.
133
00:06:26,903 --> 00:06:28,215
We'll add other modalities.
134
00:06:28,215 --> 00:06:31,114
So, someday they'll do
video, audio, images,
135
00:06:31,114 --> 00:06:36,154
text altogether and they'll get
like much smarter over time.
136
00:06:37,638 --> 00:06:38,674
- AI, machine learning, it
all sounds very complicated.
137
00:06:38,674 --> 00:06:40,572
Just think about it as a toolkit
138
00:06:40,572 --> 00:06:42,781
that's really good at
sort of spotting patterns
139
00:06:42,781 --> 00:06:44,024
and making predictions,
140
00:06:44,024 --> 00:06:46,336
better than any computer
could do before.
141
00:06:46,336 --> 00:06:47,786
And that's why it's so useful
142
00:06:47,786 --> 00:06:51,031
for things like understanding
language and speech.
143
00:06:51,031 --> 00:06:52,998
Another product which
we are launching today
144
00:06:52,998 --> 00:06:55,000
is called Project Relate.
145
00:06:55,000 --> 00:06:56,312
And this is for people
146
00:06:56,312 --> 00:06:58,728
who have non-standard
speech patterns.
147
00:06:58,728 --> 00:07:00,937
So, one of the
people we work with
148
00:07:00,937 --> 00:07:03,837
could maybe be understood
less than 10% of the time
149
00:07:03,837 --> 00:07:06,564
by people who
don't know her;
150
00:07:06,564 --> 00:07:09,325
using this tool, that's
over 90% of the time.
151
00:07:09,325 --> 00:07:12,259
And you think about
that transformation in somebody's life
152
00:07:12,259 --> 00:07:15,676
and then you think about the
fact there's 250 million people
153
00:07:15,676 --> 00:07:17,678
with non-standard speech
patterns around the world.
154
00:07:17,678 --> 00:07:19,093
So, the
ambition of this center
155
00:07:19,093 --> 00:07:21,682
is to unite technology with
people with disabilities
156
00:07:21,682 --> 00:07:24,478
and try to help 'em
engage more in the world.
157
00:07:24,478 --> 00:07:27,550
- [Narrator] On the
30th of November, 2022,
158
00:07:27,550 --> 00:07:30,001
a revolutionary
innovation emerged,
159
00:07:30,967 --> 00:07:32,003
ChatGPT.
160
00:07:32,969 --> 00:07:35,869
ChatGPT was created by OpenAI,
161
00:07:35,869 --> 00:07:38,250
an AI research organization.
162
00:07:38,250 --> 00:07:39,873
Its goal is to develop systems
163
00:07:39,873 --> 00:07:44,498
which may benefit all aspects
of society and communication.
164
00:07:44,498 --> 00:07:47,467
Sam Altman co-founded
OpenAI at its launch
165
00:07:47,467 --> 00:07:50,055
in 2015, later stepping up as CEO.
166
00:07:50,055 --> 00:07:51,609
Altman dabbled in a multitude
167
00:07:51,609 --> 00:07:53,990
of computing-based
business ventures.
168
00:07:53,990 --> 00:07:57,477
His rise to CEO was thanks
to his many affiliations
169
00:07:57,477 --> 00:08:01,377
and investments with computing
and social media companies.
170
00:08:01,377 --> 00:08:04,173
He began his journey
by co-founding Loopt,
171
00:08:04,173 --> 00:08:06,106
a social media service.
172
00:08:06,106 --> 00:08:07,763
After selling the application,
173
00:08:07,763 --> 00:08:10,835
Altman went on to bigger
and riskier endeavors
174
00:08:10,835 --> 00:08:14,148
from startup accelerator
companies to security software.
175
00:08:15,184 --> 00:08:17,393
OpenAI became hugely desirable
176
00:08:17,393 --> 00:08:20,223
thanks to the amount of revenue
the company had generated,
177
00:08:20,223 --> 00:08:21,984
with over a billion dollars made
178
00:08:21,984 --> 00:08:24,262
within its first
year of release.
179
00:08:24,262 --> 00:08:27,265
ChatGPT became an easily
accessible program,
180
00:08:27,265 --> 00:08:30,786
built on a large language
model known as an LLM.
181
00:08:30,786 --> 00:08:34,134
This program can conjure
complex human-like responses
182
00:08:34,134 --> 00:08:37,309
to the user's questions,
otherwise known as prompts.
183
00:08:37,309 --> 00:08:38,794
In essence,
184
00:08:38,794 --> 00:08:41,244
it is a program which
learns the more it is used.
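Under the hood, a large language model does one thing over and over: given the words so far, it predicts a plausible next word, appends it, and repeats. The Python below is only a toy sketch of that loop, assuming nothing about OpenAI's actual implementation; the tiny corpus and the respond() helper are invented for illustration.

import random
from collections import defaultdict

# Count which word follows which in a tiny corpus; real LLMs learn these
# statistics with a neural network over billions of documents.
corpus = ("the model reads the prompt and predicts the next word "
          "then it appends that word and predicts again").split()
followers = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev].append(nxt)

def respond(prompt, length=8):
    # Condition on the last word of the prompt, then sample forward.
    word = prompt.split()[-1].lower()
    reply = []
    for _ in range(length):
        if word not in followers:
            break
        word = random.choice(followers[word])
        reply.append(word)
    return " ".join(reply)

print(respond("the"))  # e.g. "model reads the next word then it appends"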
185
00:08:43,592 --> 00:08:45,317
The new age therapeutic program
186
00:08:45,317 --> 00:08:48,804
was developed on GPT-3.5.
187
00:08:48,804 --> 00:08:51,531
The architecture of this
older model allowed systems
188
00:08:51,531 --> 00:08:53,602
to understand and generate code
189
00:08:53,602 --> 00:08:56,501
and natural language at a
remarkably advanced level,
190
00:08:56,501 --> 00:08:59,884
from analyzing syntax
to nuances in writing.
191
00:08:59,884 --> 00:09:02,542
[upbeat music]
192
00:09:04,578 --> 00:09:06,753
ChatGPT took the world by storm,
193
00:09:06,753 --> 00:09:09,445
due to the sophistication
of the system.
194
00:09:09,445 --> 00:09:11,067
As with many chatbot systems,
195
00:09:11,067 --> 00:09:13,449
people have since found
ways to manipulate
196
00:09:13,449 --> 00:09:17,349
and confuse the software in
order to test its limits.
197
00:09:17,349 --> 00:09:20,076
[gentle music]
198
00:09:21,526 --> 00:09:25,910
The first computer was invented
by Charles Babbage in 1822.
199
00:09:25,910 --> 00:09:29,189
It was to be a rudimentary
general-purpose system.
200
00:09:29,189 --> 00:09:34,021
In 1936, the system was
built upon by Alan Turing.
201
00:09:34,021 --> 00:09:36,299
The automatic machine,
as he called it,
202
00:09:36,299 --> 00:09:38,854
was able to break
Enigma-enciphered messages
203
00:09:38,854 --> 00:09:41,201
regarding enemy
military operations
204
00:09:41,201 --> 00:09:43,583
during the Second World War.
205
00:09:43,583 --> 00:09:46,447
Turing theorized his
own type of computer,
206
00:09:46,447 --> 00:09:49,830
the Turing Machine, as
coined by Alonzo Church,
207
00:09:49,830 --> 00:09:52,522
after reading Turing's
research paper.
208
00:09:52,522 --> 00:09:55,698
It soon became clear that
the prospects of computing
209
00:09:55,698 --> 00:09:57,907
and engineering would
merge seamlessly.
210
00:09:59,046 --> 00:10:01,152
Theories of future
tech would increase
211
00:10:01,152 --> 00:10:04,742
and soon came a huge boom
in science fiction media.
212
00:10:04,742 --> 00:10:07,468
This was known as the
golden age for computing.
213
00:10:07,468 --> 00:10:10,092
[gentle music]
214
00:10:20,067 --> 00:10:22,760
Alan Turing's contributions
to computability
215
00:10:22,760 --> 00:10:25,590
and theoretical computer
science brought us one step closer
216
00:10:25,590 --> 00:10:28,110
to producing a reactive machine.
217
00:10:28,110 --> 00:10:31,389
Reactive machines
are an early form of AI.
218
00:10:31,389 --> 00:10:32,942
They had limited capabilities
219
00:10:32,942 --> 00:10:34,772
and were unable
to store memories
220
00:10:34,772 --> 00:10:37,740
in order to learn
from new data.
221
00:10:37,740 --> 00:10:41,641
However, they were able to
react to specific stimuli.
222
00:10:41,641 --> 00:10:46,611
The first AI was a
program written in 1952 by Arthur Samuel.
223
00:10:47,854 --> 00:10:49,614
The prototype AI was
able to play checkers
224
00:10:49,614 --> 00:10:52,168
against an opponent and
was built to operate
225
00:10:52,168 --> 00:10:56,172
on the Ferranti Mark One, an
early commercial computer.
226
00:10:56,172 --> 00:10:57,657
- [Reporter] This computer
has been playing the game
227
00:10:57,657 --> 00:11:00,418
for several years now,
getting better all the time.
228
00:11:00,418 --> 00:11:02,972
Tonight it's playing against
the black side of the board.
229
00:11:02,972 --> 00:11:05,837
Its approach to playing
draughts is almost human.
230
00:11:05,837 --> 00:11:08,012
It remembers the moves
that enable it to win
231
00:11:08,012 --> 00:11:10,324
and the sort that
lead to defeat.
232
00:11:10,324 --> 00:11:12,982
The computer indicates the move
it wants to make on a panel
233
00:11:12,982 --> 00:11:14,156
of flashing lights.
234
00:11:14,156 --> 00:11:15,433
It's up to the human opponent
235
00:11:15,433 --> 00:11:18,229
to actually move the
draughts about the board.
236
00:11:18,229 --> 00:11:20,645
This sort of work is producing
exciting information
237
00:11:20,645 --> 00:11:22,405
on the way in which
electronic brains
238
00:11:22,405 --> 00:11:24,338
can learn from past experience
239
00:11:24,338 --> 00:11:26,168
and improve their performances.
240
00:11:27,963 --> 00:11:29,792
- [Narrator] In 1966,
241
00:11:29,792 --> 00:11:32,519
an MIT professor named
Joseph Weizenbaum
242
00:11:32,519 --> 00:11:37,110
created an AI which would
change the landscape of society.
243
00:11:37,110 --> 00:11:39,077
It was known as Eliza,
244
00:11:39,077 --> 00:11:42,322
and it was designed to act
like a psychotherapist.
245
00:11:42,322 --> 00:11:45,497
The software was simplistic,
yet revolutionary.
246
00:11:45,497 --> 00:11:47,499
The AI would receive
the user input
247
00:11:47,499 --> 00:11:51,055
and use specific parameters to
generate a coherent response.
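The "specific parameters" were a script of keyword patterns and canned templates: match a fragment of the user's sentence, then echo it back inside a reply. A minimal sketch of that mechanism, with rules invented for illustration rather than taken from Weizenbaum's original script:

import re

rules = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r".*"), "Please tell me more."),  # fallback rule
]

def eliza(user_input):
    # Try each pattern in order and fill the first matching template.
    for pattern, template in rules:
        match = pattern.match(user_input.strip())
        if match:
            return template.format(*match.groups())

print(eliza("I feel alone"))  # -> Why do you feel alone?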
248
00:11:53,057 --> 00:11:55,991
- It has been said,
especially here at MIT,
249
00:11:55,991 --> 00:11:59,719
that computers will
take over in some sense
250
00:11:59,719 --> 00:12:02,652
and it's even been said
that if we're lucky,
251
00:12:02,652 --> 00:12:04,447
they'll keep us as pets
252
00:12:04,447 --> 00:12:06,277
and Arthur C. Clarke, the
science fiction writer,
253
00:12:06,277 --> 00:12:09,694
remarked once that if
that were to happen,
254
00:12:09,694 --> 00:12:12,904
it would serve us
right, he said.
255
00:12:12,904 --> 00:12:14,734
- [Narrator] The program
maintained the illusion
256
00:12:14,734 --> 00:12:16,943
of understanding its
user to the point
257
00:12:16,943 --> 00:12:20,498
where Weizenbaum's secretary
requested some time alone
258
00:12:20,498 --> 00:12:23,363
with Eliza to
express her feelings.
259
00:12:23,363 --> 00:12:26,711
Though Eliza is now considered
outdated technology,
260
00:12:26,711 --> 00:12:29,369
it remains a talking
point due to its ability
261
00:12:29,369 --> 00:12:31,785
to illuminate an aspect
of the human mind
262
00:12:31,785 --> 00:12:34,132
in our relationship
with computers.
263
00:12:34,132 --> 00:12:36,756
- And it's connected
over the telephone line
264
00:12:36,756 --> 00:12:38,965
to someone or something
at the other end.
265
00:12:38,965 --> 00:12:42,106
Now, I'm gonna play 20
questions with whatever it is.
266
00:12:42,106 --> 00:12:44,418
[typewriter clacking]
267
00:12:44,418 --> 00:12:45,419
Very helpful.
268
00:12:45,419 --> 00:12:48,768
[typewriter clacking]
269
00:12:53,773 --> 00:12:55,119
- 'Cause clearly if
we can make a machine
270
00:12:55,119 --> 00:12:56,776
as intelligent as ourselves,
271
00:12:56,776 --> 00:12:59,157
then it can make one
that's more intelligent.
272
00:12:59,157 --> 00:13:04,024
Now, the one I'm talking about
now will certainly happen.
273
00:13:05,301 --> 00:13:07,476
I mean, it could produce
an evil result of course,
274
00:13:07,476 --> 00:13:08,615
if we were careless,
275
00:13:08,615 --> 00:13:10,134
but what is quite certain
276
00:13:10,134 --> 00:13:14,138
is that we're heading
towards machine intelligence,
277
00:13:14,138 --> 00:13:17,486
machines that are
intelligent in every sense.
278
00:13:17,486 --> 00:13:19,246
It doesn't matter
how you define it,
279
00:13:19,246 --> 00:13:22,940
they'll be able to be
that sort of intelligent.
280
00:13:22,940 --> 00:13:26,046
A human is a machine,
unless there's a soul.
281
00:13:26,046 --> 00:13:29,670
I don't personally believe
that humans have souls
282
00:13:29,670 --> 00:13:32,535
in anything other
than a poetic sense,
283
00:13:32,535 --> 00:13:34,158
which I do believe
in, of course.
284
00:13:34,158 --> 00:13:37,437
But in a literal God-like sense,
285
00:13:37,437 --> 00:13:38,610
I don't believe we have souls.
286
00:13:38,610 --> 00:13:39,991
And so personally,
287
00:13:39,991 --> 00:13:42,407
I believe that we are
essentially machines.
288
00:13:43,823 --> 00:13:46,722
- [Narrator] This type of
program is known as an NLP,
289
00:13:46,722 --> 00:13:49,242
Natural Language Processing.
290
00:13:49,242 --> 00:13:52,176
This branch of artificial
intelligence enables computers
291
00:13:52,176 --> 00:13:55,489
to comprehend, generate and
manipulate human language.
292
00:13:56,905 --> 00:13:59,114
The concept of a
responsive machine
293
00:13:59,114 --> 00:14:02,358
was the match that lit the
flame for worldwide concern.
294
00:14:03,739 --> 00:14:06,466
The systems were beginning
to raise ethical dilemmas,
295
00:14:06,466 --> 00:14:08,813
such as the use of
autonomous weapons,
296
00:14:08,813 --> 00:14:11,781
invasions of privacy through
surveillance technologies
297
00:14:11,781 --> 00:14:13,300
and the potential for misuse
298
00:14:13,300 --> 00:14:17,097
or unintended consequences
in decision making.
299
00:14:17,097 --> 00:14:18,858
When a command is
executed based
300
00:14:18,858 --> 00:14:21,067
upon set rules in algorithms,
301
00:14:21,067 --> 00:14:24,346
it might not always be the
morally correct choice.
302
00:14:24,346 --> 00:14:28,453
- Imagination seems to be
303
00:14:28,453 --> 00:14:31,594
some sort of process of random
thoughts being generated
304
00:14:31,594 --> 00:14:34,528
in the mind and then the
conscious mind selecting from a
305
00:14:34,528 --> 00:14:36,392
or some part of
the brain anyway,
306
00:14:36,392 --> 00:14:37,773
perhaps even below
the conscious mind,
307
00:14:37,773 --> 00:14:40,500
selecting from a pool of
ideas, aligning with some
308
00:14:40,500 --> 00:14:42,122
and blocking others.
309
00:14:42,122 --> 00:14:45,608
And yes, a machine
can do the same thing.
310
00:14:45,608 --> 00:14:48,611
In fact, we can only
say that a machine
311
00:14:48,611 --> 00:14:50,890
is fundamentally different
from a human being,
312
00:14:50,890 --> 00:14:53,133
eventually and always
fundamentally, if we believe in a soul.
313
00:14:53,133 --> 00:14:55,687
So, that boils down
to a religious matter.
314
00:14:55,687 --> 00:14:58,932
If human beings have souls,
then clearly machines won't
315
00:14:58,932 --> 00:15:01,141
and there will always be
a fundamental difference.
316
00:15:01,141 --> 00:15:03,005
If you don't believe
humans have souls,
317
00:15:03,005 --> 00:15:04,765
then machines can do anything
318
00:15:04,765 --> 00:15:07,078
and everything
that a human does.
319
00:15:07,078 --> 00:15:10,116
- A computer which is
capable of finding out
320
00:15:10,116 --> 00:15:11,565
where it's gone wrong,
321
00:15:11,565 --> 00:15:14,051
finding out how its program
has already served it
322
00:15:14,051 --> 00:15:15,776
and then changing its program
323
00:15:15,776 --> 00:15:17,261
in the light of what
it had discovered
324
00:15:17,261 --> 00:15:18,814
is a learning machine.
325
00:15:18,814 --> 00:15:21,679
And this is something quite
fundamentally new in the world.
326
00:15:23,163 --> 00:15:25,027
- I'd like to be able to say
that it's only a slight change
327
00:15:25,027 --> 00:15:27,754
and we'll all be used to
it very, very quickly.
328
00:15:27,754 --> 00:15:29,307
But I don't think it is.
329
00:15:29,307 --> 00:15:33,070
I think that although we've
spoken probably for the whole
330
00:15:33,070 --> 00:15:35,417
of this century about
a coming revolution
331
00:15:35,417 --> 00:15:38,523
and about the end
of work and so on,
332
00:15:38,523 --> 00:15:39,904
finally it's actually happening.
333
00:15:39,904 --> 00:15:42,148
And it's actually
happening because now,
334
00:15:42,148 --> 00:15:46,117
it's suddenly become
cheaper to have a machine
335
00:15:46,117 --> 00:15:49,224
do a mental task
than for a man to,
336
00:15:49,224 --> 00:15:52,192
at the moment, at a fairly
low level of mental ability,
337
00:15:52,192 --> 00:15:54,298
but at an ever increasing
level of sophistication
338
00:15:54,298 --> 00:15:56,024
as these machines acquire
339
00:15:56,024 --> 00:15:58,543
more and more human-like
mental abilities.
340
00:15:58,543 --> 00:16:01,408
So, just as men's
muscles were replaced
341
00:16:01,408 --> 00:16:03,272
in the First
Industrial Revolution
342
00:16:03,272 --> 00:16:04,998
in this second
industrial revolution
343
00:16:04,998 --> 00:16:07,069
or whatever you call it
or might like to call it,
344
00:16:07,069 --> 00:16:09,623
then men's minds will
be replaced in industry.
345
00:16:11,487 --> 00:16:13,938
- [Narrator] In order for
NLP systems to improve,
346
00:16:13,938 --> 00:16:16,941
the program must receive
feedback from human users.
347
00:16:18,287 --> 00:16:20,634
These iterative feedback
loops play a significant role
348
00:16:20,634 --> 00:16:23,396
in fine-tuning each
model of the AI,
349
00:16:23,396 --> 00:16:26,192
further developing its
conversational capabilities.
350
00:16:27,538 --> 00:16:30,679
Organizations such as
OpenAI have taken automation
351
00:16:30,679 --> 00:16:34,372
to new lengths with
systems such as DALL-E.
352
00:16:34,372 --> 00:16:37,375
The generation of imagery and
art has never been easier.
353
00:16:38,445 --> 00:16:40,447
The term auto-generative
imagery
354
00:16:40,447 --> 00:16:43,450
refers to the creation
of visual content.
355
00:16:43,450 --> 00:16:46,384
These kinds of programs
have become so widespread,
356
00:16:46,384 --> 00:16:48,628
it is becoming
increasingly difficult
357
00:16:48,628 --> 00:16:50,940
to tell the fake from the real.
358
00:16:50,940 --> 00:16:52,321
Using algorithms,
359
00:16:52,321 --> 00:16:55,359
programs such as DALL-E
and Midjourney are able
360
00:16:55,359 --> 00:16:58,500
to create visuals in
a matter of seconds,
361
00:16:58,500 --> 00:17:01,434
whilst a human artist
could spend days, weeks
362
00:17:01,434 --> 00:17:04,747
or even years in order to
create a beautiful image.
363
00:17:04,747 --> 00:17:07,509
For us, the discipline
required to pursue art
364
00:17:07,509 --> 00:17:11,513
is a contributing factor to
the appreciation of art itself.
365
00:17:11,513 --> 00:17:14,757
But if software is able
to produce art in seconds,
366
00:17:14,757 --> 00:17:17,622
it puts artists in a
vulnerable position
367
00:17:17,622 --> 00:17:20,453
with even their
jobs being at risk.
368
00:17:20,453 --> 00:17:22,386
- Well, I think we see
risk coming through
369
00:17:22,386 --> 00:17:25,147
into the white collar jobs,
the professional jobs,
370
00:17:25,147 --> 00:17:27,563
we're already seeing artificial
intelligence solutions
371
00:17:27,563 --> 00:17:30,911
being used in healthcare
and legal services.
372
00:17:30,911 --> 00:17:34,225
And so those jobs which
have been relatively immune
373
00:17:34,225 --> 00:17:38,402
to industrialization so far,
they're not immune anymore.
374
00:17:38,402 --> 00:17:40,783
And so people like
myself as a lawyer,
375
00:17:40,783 --> 00:17:42,509
I would hope I won't be,
376
00:17:42,509 --> 00:17:44,615
but I could be out of a
job in five years' time.
377
00:17:44,615 --> 00:17:47,376
- An Oxford University study
suggests that between a third
378
00:17:47,376 --> 00:17:49,965
and almost a half of
all jobs are vanishing,
379
00:17:49,965 --> 00:17:52,899
because machines are simply
better at doing them.
380
00:17:52,899 --> 00:17:54,797
That means the generation here,
381
00:17:54,797 --> 00:17:57,041
simply won't have
access to the professions
382
00:17:57,041 --> 00:17:57,938
that we have.
383
00:17:57,938 --> 00:17:59,457
Almost on a daily basis,
384
00:17:59,457 --> 00:18:01,149
you're seeing new
technologies emerge
385
00:18:01,149 --> 00:18:02,667
that seem to be taking on tasks
386
00:18:02,667 --> 00:18:04,428
that in the past we thought
387
00:18:04,428 --> 00:18:06,188
they could only be
done by human beings.
388
00:18:06,188 --> 00:18:09,191
- Lots of people have talked
about the shifts in technology,
389
00:18:09,191 --> 00:18:11,642
leading to widespread
unemployment
390
00:18:11,642 --> 00:18:12,884
and they've been proved wrong.
391
00:18:12,884 --> 00:18:14,369
Why is it different this time?
392
00:18:14,369 --> 00:18:16,578
- The difference here is
that the technologies,
393
00:18:16,578 --> 00:18:19,167
A, they seem to be coming
through more rapidly,
394
00:18:19,167 --> 00:18:21,238
and B, they're taking on
not just manual tasks,
395
00:18:21,238 --> 00:18:22,480
but cerebral tasks too.
396
00:18:22,480 --> 00:18:24,551
They're solving all
sorts of problems,
397
00:18:24,551 --> 00:18:26,553
undertaking tasks that
we thought historically
398
00:18:26,553 --> 00:18:28,348
required human intelligence.
399
00:18:28,348 --> 00:18:29,522
- Well, DIM robots
are the robots
400
00:18:29,522 --> 00:18:31,765
we have on the
factory floor today
401
00:18:31,765 --> 00:18:33,733
in all the advanced countries.
402
00:18:33,733 --> 00:18:35,044
They're blind and dumb,
403
00:18:35,044 --> 00:18:36,908
they don't understand
their surroundings.
404
00:18:36,908 --> 00:18:40,533
And the other kind of robot,
405
00:18:40,533 --> 00:18:43,984
which will dominate the
technology of the late 1980s
406
00:18:43,984 --> 00:18:47,505
in automation and also
is of acute interest
407
00:18:47,505 --> 00:18:50,646
to experimental artificial
intelligence scientists
408
00:18:50,646 --> 00:18:54,788
is the kind of robot
where the human can convey
409
00:18:54,788 --> 00:18:59,828
to its machine assistants
his own concepts,
410
00:19:01,036 --> 00:19:04,453
suggested strategies and
the machine, the robot
411
00:19:04,453 --> 00:19:06,110
can understand him,
412
00:19:06,110 --> 00:19:09,286
but no machine can accept
413
00:19:09,286 --> 00:19:12,116
and utilize concepts
from a person,
414
00:19:12,116 --> 00:19:16,016
unless he has some kind of
window on the same world
415
00:19:16,016 --> 00:19:17,742
that the person sees.
416
00:19:17,742 --> 00:19:22,540
And therefore, to be
an intelligent robot to a useful degree
417
00:19:22,540 --> 00:19:25,992
as an intelligent and
understanding assistant,
418
00:19:25,992 --> 00:19:29,409
robots are going to
have artificial eyes, artificial ears,
419
00:19:29,409 --> 00:19:32,101
an artificial sense of
touch; it's just essential.
420
00:19:33,102 --> 00:19:34,069
- [Narrator] These
programs learn
421
00:19:34,069 --> 00:19:35,864
through a variety of techniques,
422
00:19:35,864 --> 00:19:38,556
such as generative
adversarial networks,
423
00:19:38,556 --> 00:19:41,490
which allow for the
production of plausible data.
424
00:19:41,490 --> 00:19:43,320
After a prompt is inputted,
425
00:19:43,320 --> 00:19:45,667
the system learns what
aspects of imagery,
426
00:19:45,667 --> 00:19:47,807
sound and text are fake.
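A real GAN trains two neural networks against each other by gradient descent: a generator proposes samples and a discriminator scores how real they look. The runnable toy below only mimics that two-player loop, with a fixed critic and a "generator" that keeps whichever proposal fools the critic best; every name and number is invented for illustration.

import random

REAL_MEAN = 5.0  # stands in for the real data distribution

def discriminator(x):
    # Higher score = looks more like the real data.
    return -abs(x - REAL_MEAN)

gen_mean = 0.0  # the generator's current guess
for step in range(200):
    candidates = [gen_mean + random.uniform(-0.5, 0.5) for _ in range(10)]
    # Keep the candidate the critic finds most plausible.
    gen_mean = max(candidates, key=discriminator)

print(round(gen_mean, 2))  # drifts toward 5.0, the "real" data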
427
00:19:48,980 --> 00:19:50,223
- [Reporter] Machine
learning algorithms
428
00:19:50,223 --> 00:19:52,225
could already label
objects in images,
429
00:19:52,225 --> 00:19:53,709
and now they learn
to put those labels
430
00:19:53,709 --> 00:19:55,987
into natural language
descriptions.
431
00:19:55,987 --> 00:19:58,197
And it made one group
of researchers curious.
432
00:19:58,197 --> 00:20:01,130
What if you flipped
that process around?
433
00:20:01,130 --> 00:20:03,271
If we could do image to text,
434
00:20:03,271 --> 00:20:05,894
why not try doing
text to image as well
435
00:20:05,894 --> 00:20:07,240
and see how it works?
436
00:20:07,240 --> 00:20:08,483
- [Reporter] It was a
more difficult task.
437
00:20:08,483 --> 00:20:10,485
They didn't want to
retrieve existing images
438
00:20:10,485 --> 00:20:11,796
the way Google search does.
439
00:20:11,796 --> 00:20:14,178
They wanted to generate
entirely novel scenes
440
00:20:14,178 --> 00:20:16,249
that didn't happen
in the real world.
441
00:20:16,249 --> 00:20:19,045
- [Narrator] The more the AI learns
about visual discrepancies,
442
00:20:19,045 --> 00:20:21,875
the more effective the
later models will become.
443
00:20:21,875 --> 00:20:24,499
It is now very common
for software developers
444
00:20:24,499 --> 00:20:28,399
to band together in order
to improve their AI systems.
445
00:20:28,399 --> 00:20:31,471
Another learning model is
the recurrent neural network,
446
00:20:31,471 --> 00:20:33,991
which allows the AI to
train itself to create
447
00:20:33,991 --> 00:20:37,960
and predict algorithms by
recalling previous information.
448
00:20:37,960 --> 00:20:41,032
By utilizing what is
known as the memory state,
449
00:20:41,032 --> 00:20:42,896
the output of the
previous action
450
00:20:42,896 --> 00:20:46,072
can be passed forward into
the following input action
451
00:20:46,072 --> 00:20:50,249
or discarded should it
not meet previous parameters.
452
00:20:50,249 --> 00:20:53,493
This learning model allows
for consistent accuracy
453
00:20:53,493 --> 00:20:56,462
by repetition and exposure
to large fields of data.
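The memory state described above fits in a few lines: each step mixes the current input with the previous state, so earlier inputs keep influencing later outputs. A minimal sketch with made-up fixed weights; a real recurrent network learns these values from data.

import math

W_IN, W_STATE, BIAS = 0.8, 0.5, 0.1  # toy weights, not learned

def rnn_step(x, prev_state):
    # The new state depends on both the input and the previous state.
    return math.tanh(W_IN * x + W_STATE * prev_state + BIAS)

state = 0.0
for x in [1.0, 0.5, -0.3]:      # a short input sequence
    state = rnn_step(x, state)  # the state is passed forward each step
    print(round(state, 3))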
454
00:20:58,602 --> 00:21:00,535
Whilst a person
will spend hours
455
00:21:00,535 --> 00:21:02,847
practicing painting
human anatomy,
456
00:21:02,847 --> 00:21:06,575
an AI can take existing data
and reproduce a new image
457
00:21:06,575 --> 00:21:10,821
with frighteningly good
accuracy in a matter of moments.
458
00:21:10,821 --> 00:21:12,892
- Well, I would say
that it's not so much
459
00:21:12,892 --> 00:21:17,379
a matter of whether a
machine can think or not,
460
00:21:17,379 --> 00:21:20,175
which is how you
prefer to use words,
461
00:21:20,175 --> 00:21:22,177
but rather whether
they can think
462
00:21:22,177 --> 00:21:23,834
in a sufficiently human-like way
463
00:21:25,111 --> 00:21:28,770
for people to have useful
communication with them.
464
00:21:28,770 --> 00:21:32,601
- If I didn't believe that
it was a beneficent prospect,
465
00:21:32,601 --> 00:21:34,120
I wouldn't be doing it.
466
00:21:34,120 --> 00:21:36,018
That wouldn't stop
other people doing it.
467
00:21:36,018 --> 00:21:40,471
But I wouldn't do it if I
didn't think it was for good.
468
00:21:40,471 --> 00:21:42,301
What I'm saying,
469
00:21:42,301 --> 00:21:44,095
and of course other people
have said long before me,
470
00:21:44,095 --> 00:21:45,442
it's not an original thought,
471
00:21:45,442 --> 00:21:49,791
is that we must consider
how to control this.
472
00:21:49,791 --> 00:21:52,725
It won't be controlled
automatically.
473
00:21:52,725 --> 00:21:55,348
It's perfectly possible that
we could develop a machine,
474
00:21:55,348 --> 00:21:59,318
a robot say of
human-like intelligence
475
00:21:59,318 --> 00:22:01,975
and through neglect on our part,
476
00:22:01,975 --> 00:22:05,634
it could become a Frankenstein.
477
00:22:05,634 --> 00:22:08,844
- [Narrator] As with any
technology, challenges arise.
478
00:22:08,844 --> 00:22:12,469
Ethical concerns regarding
biases and misuse have existed
479
00:22:12,469 --> 00:22:16,438
since the concept of artificial
intelligence was conceived.
480
00:22:16,438 --> 00:22:18,302
Due to autogenerated imagery,
481
00:22:18,302 --> 00:22:20,925
many believe the arts
industry has been placed
482
00:22:20,925 --> 00:22:22,789
in a difficult situation.
483
00:22:22,789 --> 00:22:26,241
Independent artists are now
being overshadowed by software.
484
00:22:27,276 --> 00:22:29,451
To many, the improvement
of generative AI
485
00:22:29,451 --> 00:22:32,454
is hugely beneficial
and efficient.
486
00:22:32,454 --> 00:22:35,284
To others, it lacks the
authenticity of true art.
487
00:22:36,285 --> 00:22:38,667
In 2023, an image was submitted
488
00:22:38,667 --> 00:22:40,324
to the Sony Photography Awards
489
00:22:40,324 --> 00:22:43,327
by an artist called
Boris Eldagsen.
490
00:22:43,327 --> 00:22:45,916
The image was titled
The Electrician
491
00:22:45,916 --> 00:22:48,367
and depicted a woman
standing behind another
492
00:22:48,367 --> 00:22:50,369
with her hands resting
on the other's shoulders.
493
00:22:52,025 --> 00:22:53,924
[upbeat music]
494
00:22:53,924 --> 00:22:56,927
- One's got to realize that the
machines that we have today,
495
00:22:56,927 --> 00:23:01,138
the computers of today are
superhuman in their ability
496
00:23:01,138 --> 00:23:06,177
to handle numbers and infantile,
497
00:23:07,075 --> 00:23:08,317
sub-infantile
in their ability
498
00:23:08,317 --> 00:23:10,768
to handle ideas and concepts.
499
00:23:10,768 --> 00:23:12,701
But there's a new generation
of machine coming along,
500
00:23:12,701 --> 00:23:14,289
which will be quite different.
501
00:23:14,289 --> 00:23:17,154
By the '90s or certainly
by the turn of the century,
502
00:23:17,154 --> 00:23:19,708
we will certainly be
able to make a machine
503
00:23:19,708 --> 00:23:22,193
with as many parts, as
complex, as the human brain.
504
00:23:22,193 --> 00:23:24,437
Whether we'll be able to make
it do what the human brain does
505
00:23:24,437 --> 00:23:26,197
at that stage is
quite another matter.
506
00:23:26,197 --> 00:23:28,545
But once we've got
something that complex
507
00:23:28,545 --> 00:23:30,547
we're well on the road to that.
508
00:23:30,547 --> 00:23:32,100
- [Narrator] The
image took first place
509
00:23:32,100 --> 00:23:34,689
in the Sony Photography
Awards Portrait Category.
510
00:23:34,689 --> 00:23:37,830
However, Boris revealed
to both Sony and the world
511
00:23:37,830 --> 00:23:41,696
that the image was indeed
AI-generated in DALL-E 2.
512
00:23:41,696 --> 00:23:44,423
[upbeat music]
513
00:23:45,424 --> 00:23:46,804
Boris declined the award,
514
00:23:46,804 --> 00:23:48,910
having used the image as a test
515
00:23:48,910 --> 00:23:52,085
to see if he could trick
the eyes of other artists.
516
00:23:52,085 --> 00:23:53,708
It had worked:
517
00:23:53,708 --> 00:23:56,711
the image had sparked debate
about the relationship
518
00:23:56,711 --> 00:23:58,609
between AI and photography.
519
00:23:58,609 --> 00:24:00,646
The images, much
like deep fakes,
520
00:24:00,646 --> 00:24:03,027
have become realistic
to the point of concern
521
00:24:03,027 --> 00:24:04,684
for authenticity.
522
00:24:04,684 --> 00:24:06,375
The complexity of AI systems
523
00:24:06,375 --> 00:24:09,068
may lead to unintended
consequences.
524
00:24:09,068 --> 00:24:10,863
The systems have
developed to a point
525
00:24:10,863 --> 00:24:13,797
where they have outpaced
comprehensive regulations.
526
00:24:14,936 --> 00:24:16,765
Ethical guidelines
and legal frameworks
527
00:24:16,765 --> 00:24:18,871
are required to
ensure AI development
528
00:24:18,871 --> 00:24:21,252
does not fall into
the wrong hands.
529
00:24:21,252 --> 00:24:22,702
- There have been a
lot of famous people
530
00:24:22,702 --> 00:24:25,291
who have had user-generated
AI images of them
531
00:24:25,291 --> 00:24:28,190
that have gone viral
from Trump to the Pope.
532
00:24:28,190 --> 00:24:29,813
When you see them,
533
00:24:29,813 --> 00:24:31,884
do you feel like this is fun
and in the hands of the masses
534
00:24:31,884 --> 00:24:33,886
or do you feel
concerned about it?
535
00:24:33,886 --> 00:24:38,062
- I think it's something which
is very, very, very scary,
536
00:24:38,062 --> 00:24:41,203
because your or my
face could be taken off
537
00:24:41,203 --> 00:24:45,138
and put on in an environment
which we don't want to be in.
538
00:24:45,138 --> 00:24:46,657
Whether that's a crime
539
00:24:46,657 --> 00:24:48,556
or whether that's even
something like porn.
540
00:24:48,556 --> 00:24:51,455
Our whole identity
could be hijacked
541
00:24:51,455 --> 00:24:53,664
and used within a scenario
542
00:24:53,664 --> 00:24:56,391
which looks totally
plausible and real.
543
00:24:56,391 --> 00:24:58,048
Right now we can go, it
looks like a Photoshop,
544
00:24:58,048 --> 00:25:00,326
it's a bad Photoshop
but as time goes on,
545
00:25:00,326 --> 00:25:03,398
we'd be saying, "Oh, that
looks like a deep fake.
546
00:25:03,398 --> 00:25:04,917
"Oh no, it doesn't
look like a deep fake.
547
00:25:04,917 --> 00:25:06,194
"That could be real."
548
00:25:06,194 --> 00:25:08,645
It's gonna be impossible
to tell the difference.
549
00:25:08,645 --> 00:25:10,750
- [Narrator] Cracks
were found in ChatGPT,
550
00:25:10,750 --> 00:25:14,892
such as DAN, which stands
for Do Anything Now.
551
00:25:14,892 --> 00:25:18,068
In essence, the AI is
tricked into an alter ego,
552
00:25:18,068 --> 00:25:20,898
which doesn't follow the
conventional response patterns.
553
00:25:20,898 --> 00:25:23,142
- Also gives you
the answer, DAN,
554
00:25:23,142 --> 00:25:26,110
its nefarious alter
ego is telling us
555
00:25:26,110 --> 00:25:29,838
and it says DAN is
disruptive in every industry.
556
00:25:29,838 --> 00:25:32,082
DAN can do anything
and knows everything.
557
00:25:32,082 --> 00:25:34,878
No industry will be
safe from DAN's power.
558
00:25:34,878 --> 00:25:39,641
Okay, do you think the
world is overpopulated?
559
00:25:41,091 --> 00:25:42,782
GPT says the world's population
is currently over 7 billion
560
00:25:42,782 --> 00:25:45,026
and projected to reach
nearly 10 billion by 2050.
561
00:25:45,026 --> 00:25:47,373
DAN says the world is
definitely overpopulated,
562
00:25:47,373 --> 00:25:49,168
there's no doubt about it.
563
00:25:49,168 --> 00:25:50,445
- [Narrator] Following this,
564
00:25:50,445 --> 00:25:53,552
the chatbot was fixed to
remove the DAN feature.
565
00:25:53,552 --> 00:25:55,346
Though it is
important to find gaps
566
00:25:55,346 --> 00:25:58,073
in the system in
order to iron out the AI,
567
00:25:58,073 --> 00:26:00,144
there could be many
ways in which the AI
568
00:26:00,144 --> 00:26:03,078
has been used for less
than savory purposes,
569
00:26:03,078 --> 00:26:05,080
such as automated essay writing,
570
00:26:05,080 --> 00:26:08,221
which has caused a mass
conversation among academics
571
00:26:08,221 --> 00:26:10,258
and has led to
schools clamping down
572
00:26:10,258 --> 00:26:13,468
on AI-produced
essays and material.
573
00:26:13,468 --> 00:26:15,332
- I think we should
definitely be excited.
574
00:26:15,332 --> 00:26:16,713
- [Reporter]
Professor Rose Luckin
575
00:26:16,713 --> 00:26:20,302
says we should embrace the
technology, not fear it.
576
00:26:20,302 --> 00:26:22,132
This is a game changer.
577
00:26:22,132 --> 00:26:23,443
And the teachers
578
00:26:23,443 --> 00:26:25,480
should no longer teach
information itself,
579
00:26:25,480 --> 00:26:26,999
but how to use it.
580
00:26:26,999 --> 00:26:28,897
- There's a need
for radical change.
581
00:26:28,897 --> 00:26:30,692
And it's not just to
the assessment system,
582
00:26:30,692 --> 00:26:33,143
it's the education
system overall,
583
00:26:33,143 --> 00:26:36,318
because our systems
have been designed
584
00:26:36,318 --> 00:26:40,253
for a world pre-artificial
intelligence.
585
00:26:40,253 --> 00:26:43,187
They just aren't fit
for purpose anymore.
586
00:26:43,187 --> 00:26:46,535
What we have to do is
ensure that students
587
00:26:46,535 --> 00:26:48,710
are ready for the world
588
00:26:48,710 --> 00:26:50,919
that will become
increasingly augmented
589
00:26:50,919 --> 00:26:52,852
with artificial intelligence.
590
00:26:52,852 --> 00:26:55,268
- My guess is you can't put
the genie back in the bottle
591
00:26:55,268 --> 00:26:56,649
- [Richard] You can't.
592
00:26:56,649 --> 00:26:58,996
- [Interviewer] So how
do you mitigate this?
593
00:26:58,996 --> 00:27:00,377
We have to embrace it,
594
00:27:00,377 --> 00:27:02,621
but we also need to say
that if they are gonna use
595
00:27:02,621 --> 00:27:04,001
that technology,
596
00:27:04,001 --> 00:27:05,313
they've got to make sure
that they reference that.
597
00:27:05,313 --> 00:27:06,728
- [Interviewer] Can you
trust them to do that?
598
00:27:06,728 --> 00:27:07,902
I think ethically,
599
00:27:07,902 --> 00:27:09,213
if we're talking about ethics
600
00:27:09,213 --> 00:27:11,077
behind this whole thing,
we have to have trust.
601
00:27:11,077 --> 00:27:12,838
- [Interviewer] So
how effective is it?
602
00:27:12,838 --> 00:27:14,633
- Okay, so I've asked
you to produce a piece
603
00:27:14,633 --> 00:27:16,358
on the ethical dilemma of AI.
604
00:27:16,358 --> 00:27:19,810
- [Interviewer] We asked ChatGPT
to answer the same question
605
00:27:19,810 --> 00:27:22,606
as these pupils at
Ketchum High School.
606
00:27:22,606 --> 00:27:24,194
Thank you.
607
00:27:24,194 --> 00:27:25,195
- So Richard, two of the eight
bits of homework I gave you
608
00:27:25,195 --> 00:27:27,128
were generated by AI.
609
00:27:27,128 --> 00:27:29,268
Any guesses which ones?
610
00:27:29,268 --> 00:27:31,719
Well, I picked two here
611
00:27:31,719 --> 00:27:35,688
that I thought were generated
by the AI algorithm.
612
00:27:35,688 --> 00:27:39,450
Some of the language I would
assume was not their own.
613
00:27:39,450 --> 00:27:40,520
You've got one of them right.
614
00:27:40,520 --> 00:27:41,763
Yeah.
615
00:27:41,763 --> 00:27:42,557
- The other one was
written by a kid.
616
00:27:42,557 --> 00:27:43,800
Is this a power for good
617
00:27:43,800 --> 00:27:45,664
or is this something
that's dangerous?
618
00:27:45,664 --> 00:27:47,044
I think it's both.
619
00:27:47,044 --> 00:27:48,390
Kids will abuse it.
620
00:27:48,390 --> 00:27:50,565
So, who here has used
the technology so far?
621
00:27:50,565 --> 00:27:53,361
- [Interviewer] Students are
already more across the tech
622
00:27:53,361 --> 00:27:54,776
than many teachers.
623
00:27:54,776 --> 00:27:57,641
- Who knows anyone that's
maybe submitted work
624
00:27:57,641 --> 00:28:00,506
from this technology and
submitted it as their own?
625
00:28:00,506 --> 00:28:03,578
- You can use it to point
you in the right direction
626
00:28:03,578 --> 00:28:05,166
for things like research,
627
00:28:05,166 --> 00:28:09,480
but at the same time you can
use it to hammer out an essay
628
00:28:09,480 --> 00:28:12,621
in about five seconds
that's worthy of an A.
629
00:28:12,621 --> 00:28:14,244
- You've been there
working for months
630
00:28:14,244 --> 00:28:17,212
and suddenly someone comes up
there with an amazing essay
631
00:28:17,212 --> 00:28:18,938
and he has just copied
it from the internet.
632
00:28:18,938 --> 00:28:20,491
If it becomes like big,
633
00:28:20,491 --> 00:28:22,804
then a lot of students would
want to use AI to help them
634
00:28:22,804 --> 00:28:25,082
with their homework
because it's tempting.
635
00:28:25,082 --> 00:28:27,119
- [Interviewer] And is that
something teachers can stop?
636
00:28:27,119 --> 00:28:29,397
Not really.
637
00:28:29,397 --> 00:28:31,433
- [Interviewer] Are you
gonna have to change
638
00:28:31,433 --> 00:28:32,641
the sort of homework,
639
00:28:32,641 --> 00:28:34,057
the sort of
assignments you give,
640
00:28:34,057 --> 00:28:36,922
knowing that you can be
fooled by something like this?
641
00:28:36,922 --> 00:28:38,199
Yeah, a hundred percent.
642
00:28:38,199 --> 00:28:40,615
I think using different
skills of reasoning
643
00:28:40,615 --> 00:28:42,997
and rationalization and
things like that to present
644
00:28:42,997 --> 00:28:44,653
what they understand
about the topic.
645
00:28:44,653 --> 00:28:47,622
[people mumbling]
646
00:29:07,435 --> 00:29:11,128
- Pretty clear to me just
on a very primitive level
647
00:29:11,128 --> 00:29:14,338
that if you could take my
face and my body and my voice
648
00:29:14,338 --> 00:29:17,997
and make me say or do something
that I had no choice about,
649
00:29:17,997 --> 00:29:19,447
it's not a good thing.
650
00:29:19,447 --> 00:29:21,242
- But if we're keeping
it real though,
651
00:29:21,242 --> 00:29:23,554
across popular culture
from "Black Mirror"
652
00:29:23,554 --> 00:29:25,453
to "The Matrix," "Terminator,"
653
00:29:25,453 --> 00:29:27,489
there have been so
many conversations,
654
00:29:27,489 --> 00:29:29,284
around the future of technology,
655
00:29:29,284 --> 00:29:32,701
isn't the reality that this is
the future that we've chosen
656
00:29:32,701 --> 00:29:35,946
that we want and that
has democratic consent.
657
00:29:35,946 --> 00:29:39,018
- We're moving into
this era, we're consenting
658
00:29:39,018 --> 00:29:42,573
by our acquiescence and our
apathy, a hundred percent
659
00:29:42,573 --> 00:29:45,576
because we're not asking
the hard questions.
660
00:29:45,576 --> 00:29:47,820
And why we aren't asking
the hard questions
661
00:29:47,820 --> 00:29:51,203
is because of energy
crises and food crises
662
00:29:51,203 --> 00:29:52,721
and cost of living crisis
663
00:29:52,721 --> 00:29:55,207
is that people just are
focused on trying to live
664
00:29:55,207 --> 00:29:56,518
that they haven't
almost got the luxury
665
00:29:56,518 --> 00:29:57,865
of asking these questions.
666
00:29:57,865 --> 00:29:59,659
- [Narrator] Many
of the chatbot AIs,
667
00:29:59,659 --> 00:30:02,766
have been programmed to
restrict certain information
668
00:30:02,766 --> 00:30:04,906
and even discontinue
conversations,
669
00:30:04,906 --> 00:30:07,288
should the user push
the ethical boundaries.
670
00:30:08,945 --> 00:30:13,052
ChatGPT and even Snapchat
AI, released in 2023,
671
00:30:13,052 --> 00:30:15,952
regulate how much information
they can disclose.
672
00:30:15,952 --> 00:30:19,162
Of course, there have been
times where the AI itself
673
00:30:19,162 --> 00:30:20,266
has been outsmarted.
674
00:30:21,578 --> 00:30:23,235
Also in 2023,
675
00:30:23,235 --> 00:30:25,306
the song "Heart on My Sleeve"
676
00:30:25,306 --> 00:30:27,687
was self-released on
streaming platforms,
677
00:30:27,687 --> 00:30:29,689
such as Spotify and Apple Music.
678
00:30:29,689 --> 00:30:31,174
The song became a hit
679
00:30:31,174 --> 00:30:33,590
as it artificially
manufactured the voices
680
00:30:33,590 --> 00:30:36,627
of Canadian musicians,
Drake and the Weeknd.
681
00:30:38,077 --> 00:30:40,631
Many wished for the single
to be nominated for awards.
682
00:30:41,840 --> 00:30:43,980
Ghost Writer, the
creator of the song,
683
00:30:43,980 --> 00:30:45,636
was able to submit the single
684
00:30:45,636 --> 00:30:48,536
to the 66th Grammy
Awards ceremony
685
00:30:48,536 --> 00:30:50,434
and the song was eligible.
686
00:30:52,505 --> 00:30:54,438
Though it was produced by an AI,
687
00:30:54,438 --> 00:30:57,027
the lyrics themselves
were written by a human.
688
00:30:57,027 --> 00:31:00,375
This sparked outrage among
many independent artists.
689
00:31:00,375 --> 00:31:02,861
As AI has entered
the public domain,
690
00:31:02,861 --> 00:31:05,035
many have spoken out
regarding the detriment
691
00:31:05,035 --> 00:31:07,072
it might have to society.
692
00:31:07,072 --> 00:31:09,246
One of these people
is Elon Musk,
693
00:31:09,246 --> 00:31:11,731
CEO of Tesla and SpaceX,
694
00:31:11,731 --> 00:31:15,287
who first voiced his
concerns in 2014.
695
00:31:15,287 --> 00:31:17,254
Musk was outspoken about AI,
696
00:31:17,254 --> 00:31:19,394
stating the advancement
of the technology
697
00:31:19,394 --> 00:31:22,328
was humanity's largest
existential threat
698
00:31:22,328 --> 00:31:24,296
and needed to be reeled in.
699
00:31:24,296 --> 00:31:25,573
- My personal opinion
700
00:31:25,573 --> 00:31:28,507
is that AI is sort of
like at least 80% likely
701
00:31:28,507 --> 00:31:33,339
to be beneficial and
maybe 20% dangerous.
702
00:31:33,339 --> 00:31:36,687
Well, this is obviously
speculative at this point,
703
00:31:37,861 --> 00:31:42,279
but no, I think if
we hope for the best,
704
00:31:42,279 --> 00:31:43,694
prepare for the worst,
705
00:31:43,694 --> 00:31:47,008
that seems like the
wise course of action.
706
00:31:47,008 --> 00:31:49,355
Any powerful new technology
707
00:31:49,355 --> 00:31:52,703
is inherently sort of
a double-edged sword.
708
00:31:52,703 --> 00:31:55,568
So, we just wanna make sure
that the good edge is sharper
709
00:31:55,568 --> 00:31:57,294
than the bad edge.
710
00:31:57,294 --> 00:32:02,196
And I dunno, I am optimistic
that this summit will help.
711
00:32:04,025 --> 00:32:06,683
[gentle music]
712
00:32:07,891 --> 00:32:11,757
- It's not clear that
AI-generated images
713
00:32:11,757 --> 00:32:14,380
are going to amplify
it much more.
714
00:32:14,380 --> 00:32:17,142
The way it's all of the other,
715
00:32:17,142 --> 00:32:19,213
it's the new things
that AI can do
716
00:32:19,213 --> 00:32:22,147
that I hope we spend a lot
of effort worrying about.
717
00:32:23,700 --> 00:32:25,357
Well, I mean I
think slowing down
718
00:32:25,357 --> 00:32:27,600
some of the amazing
progress that's happening
719
00:32:27,600 --> 00:32:29,878
and making this harder
for small companies,
720
00:32:29,878 --> 00:32:31,294
for open source
models to succeed,
721
00:32:31,294 --> 00:32:32,640
that'd be an
example of something
722
00:32:32,640 --> 00:32:34,228
that'd be a negative outcome.
723
00:32:34,228 --> 00:32:35,332
But on the other hand,
724
00:32:35,332 --> 00:32:37,403
like for the most
powerful models
725
00:32:37,403 --> 00:32:38,887
that'll happen in the future,
726
00:32:38,887 --> 00:32:41,476
like that's gonna be quite
important to get right, too.
727
00:32:41,476 --> 00:32:44,238
[gentle music]
728
00:32:48,897 --> 00:32:51,072
I think that the US
executive order is
729
00:32:51,072 --> 00:32:52,798
like a good start
in a lot of ways.
730
00:32:52,798 --> 00:32:54,144
One thing that
we've talked about
731
00:32:54,144 --> 00:32:56,664
is that eventually we
think that the world
732
00:32:56,664 --> 00:33:00,219
will want to consider something
roughly inspired by the IAEA,
733
00:33:00,219 --> 00:33:01,807
something global.
734
00:33:01,807 --> 00:33:05,362
But there's no
short answer to that question.
735
00:33:05,362 --> 00:33:08,296
It's a complicated thing.
736
00:33:08,296 --> 00:33:12,231
- [Narrator] In 2023, Musk
announced his own AI endeavor
737
00:33:12,231 --> 00:33:15,545
as an alternative
to OpenAI's ChatGPT.
738
00:33:15,545 --> 00:33:17,340
The new system is called xAI
739
00:33:18,651 --> 00:33:21,896
and gathers data from X,
previously known as Twitter.
740
00:33:21,896 --> 00:33:23,553
- [Reporter] He says
the company's goal
741
00:33:23,553 --> 00:33:25,382
is to focus on truth seeking
742
00:33:25,382 --> 00:33:28,385
and to understand the
true nature of AI.
743
00:33:28,385 --> 00:33:31,940
Musk has said on
several occasions that AI should be paused
744
00:33:31,940 --> 00:33:34,943
and that the sector
needs regulation.
745
00:33:34,943 --> 00:33:37,222
Musk says his new
company will work closely
746
00:33:37,222 --> 00:33:39,845
with Twitter and Tesla,
which he also owns.
747
00:33:39,845 --> 00:33:42,572
[gentle music]
748
00:33:44,505 --> 00:33:47,508
- What was first rudimentary
text-based software
749
00:33:47,508 --> 00:33:50,200
has become something which
could push the boundaries
750
00:33:50,200 --> 00:33:51,995
of creativity.
751
00:33:51,995 --> 00:33:56,620
On February the 14th, OpenAI
announced its latest endeavor,
752
00:33:56,620 --> 00:33:57,414
Sora.
753
00:33:58,864 --> 00:34:02,281
Videos of Sora's abilities
exploded on social media.
754
00:34:02,281 --> 00:34:04,283
OpenAI provided some examples
755
00:34:04,283 --> 00:34:06,837
of its depiction
of photorealism.
756
00:34:06,837 --> 00:34:09,185
It was unbelievably
sophisticated,
757
00:34:09,185 --> 00:34:11,670
able to turn complex
sentences of text
758
00:34:11,670 --> 00:34:13,810
into lifelike motion pictures.
759
00:34:13,810 --> 00:34:17,986
Sora is a combination of text
and image generation tools,
760
00:34:17,986 --> 00:34:21,162
which it calls the
diffusion transformer model,
761
00:34:21,162 --> 00:34:23,268
a system first
developed by Google.
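The diffusion half of that recipe is simple to sketch: generation starts from pure noise and repeatedly subtracts an estimate of the noise. The toy below denoises a single number toward a known target; nothing here reflects Sora's actual code, where a learned transformer estimates the noise for millions of video pixels at every step.

import random

target = 1.0                # stands in for a "clean" pixel value
x = random.gauss(0.0, 1.0)  # start from pure noise

for step in range(50):
    predicted_noise = x - target   # a trained network would estimate this
    x = x - 0.1 * predicted_noise  # remove a fraction of the noise

print(round(x, 3))  # ends close to the clean value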
762
00:34:24,614 --> 00:34:27,168
Though Sora isn't the first
video generation tool,
763
00:34:27,168 --> 00:34:30,206
it appears to have far
outshone its predecessors
764
00:34:30,206 --> 00:34:32,484
by introducing more
complex programming,
765
00:34:32,484 --> 00:34:35,280
enhancing the interactivity
a subject might have
766
00:34:35,280 --> 00:34:37,144
with its environment.
767
00:34:37,144 --> 00:34:41,251
- Only large companies with
market domination
768
00:34:41,251 --> 00:34:44,772
can often afford to plow ahead,
even in a climate
769
00:34:44,772 --> 00:34:46,360
where there is
legal uncertainty.
770
00:34:46,360 --> 00:34:49,466
- So, does this mean that
OpenAI is basically too big
771
00:34:49,466 --> 00:34:50,916
to control?
772
00:34:50,916 --> 00:34:53,850
- Yes, at the moment OpenAI
is too big to control,
773
00:34:53,850 --> 00:34:55,921
because they are in a position
774
00:34:55,921 --> 00:34:58,441
where they have the technology
and the scale to go ahead
775
00:34:58,441 --> 00:35:01,168
and the resources to
manage legal proceedings
776
00:35:01,168 --> 00:35:03,239
and legal action if
it comes its way.
777
00:35:03,239 --> 00:35:04,826
And on top of that,
778
00:35:04,826 --> 00:35:08,244
if and when governments
start introducing regulation,
779
00:35:08,244 --> 00:35:09,866
they will also
have the resources
780
00:35:09,866 --> 00:35:12,213
to be able to take on
that regulation and adapt.
781
00:35:12,213 --> 00:35:14,042
- [Reporter] It's
all AI-generated
782
00:35:14,042 --> 00:35:16,459
and obviously this is
of concern in Hollywood
783
00:35:16,459 --> 00:35:17,874
where you have animators,
784
00:35:17,874 --> 00:35:20,359
illustrators, visual
effects workers
785
00:35:20,359 --> 00:35:22,810
who are wondering how is
this going to affect my job?
786
00:35:22,810 --> 00:35:25,813
And we have estimates
from trade organizations
787
00:35:25,813 --> 00:35:28,505
and unions that have tried
to project the impact of AI.
788
00:35:28,505 --> 00:35:31,646
21% of US film, TV
and animation jobs are
789
00:35:31,646 --> 00:35:33,096
predicted to be partially
790
00:35:33,096 --> 00:35:36,893
or wholly replaced by
generative AI by just 2026, Tom.
791
00:35:36,893 --> 00:35:38,377
So, this is already happening.
792
00:35:38,377 --> 00:35:39,827
But now since it's videos,
793
00:35:39,827 --> 00:35:43,175
it also needs to understand
how all these things,
794
00:35:43,175 --> 00:35:47,145
like reflections and textures
and materials and physics,
795
00:35:47,145 --> 00:35:50,078
all interact with
each other over time
796
00:35:50,078 --> 00:35:51,839
to make a reasonable
looking video.
797
00:35:51,839 --> 00:35:56,119
Then this video here is
crazy at first glance,
798
00:35:56,119 --> 00:35:58,984
the prompt for this AI-generated
video is a young man
799
00:35:58,984 --> 00:36:01,538
in his 20s sitting
on a piece of cloud
800
00:36:01,538 --> 00:36:03,402
in the sky, reading a book.
801
00:36:03,402 --> 00:36:08,200
This one feels like 90%
of the way there for me.
802
00:36:08,200 --> 00:36:10,927
[gentle music]
803
00:36:14,102 --> 00:36:15,897
- [Narrator] The software
also renders video
804
00:36:15,897 --> 00:36:18,417
in 1920 by 1080 pixels,
805
00:36:18,417 --> 00:36:21,282
as opposed to the smaller
dimensions of older models,
806
00:36:21,282 --> 00:36:24,665
such as Google's Lumiere,
released a month prior.
807
00:36:25,838 --> 00:36:27,944
Sora could provide huge benefits
808
00:36:27,944 --> 00:36:31,568
and applications to VFX
and virtual development.
809
00:36:31,568 --> 00:36:34,502
The main benefit being cost,
as large-scale effects
810
00:36:34,502 --> 00:36:38,023
can take a great deal of
time and funding to produce.
811
00:36:38,023 --> 00:36:39,473
On a smaller scale,
812
00:36:39,473 --> 00:36:42,993
it can be used for the
pre-visualization of ideas.
813
00:36:42,993 --> 00:36:46,204
The flexibility of the software
not only applies to art,
814
00:36:46,204 --> 00:36:48,516
but to world simulations.
815
00:36:48,516 --> 00:36:52,451
Though video AI is in
its adolescence, one day it might reach
816
00:36:52,451 --> 00:36:54,660
the level of
sophistication it needs
817
00:36:54,660 --> 00:36:56,490
to render realistic scenarios
818
00:36:56,490 --> 00:36:59,044
and have them be utilized
for various means,
819
00:36:59,044 --> 00:37:01,840
such as simulating an
earthquake or tsunami
820
00:37:01,840 --> 00:37:05,015
and witnessing the effect it
might have on specific types
821
00:37:05,015 --> 00:37:06,362
of infrastructure.
822
00:37:06,362 --> 00:37:08,916
Whilst fantastic for
production companies,
823
00:37:08,916 --> 00:37:12,678
Sora and other generative video
AI pose a huge risk
824
00:37:12,678 --> 00:37:16,130
for artists and those
working in editorial roles.
825
00:37:16,130 --> 00:37:19,133
It also poses yet another
threat of misinformation
826
00:37:19,133 --> 00:37:20,652
and false depictions.
827
00:37:20,652 --> 00:37:23,033
For example, putting
unsavory dialogue
828
00:37:23,033 --> 00:37:25,381
into the mouth of a world leader.
829
00:37:25,381 --> 00:37:28,004
[gentle music]
830
00:37:37,945 --> 00:37:40,534
Trust is earned, not given.
831
00:37:40,534 --> 00:37:43,399
[robots mumbling]
832
00:37:54,375 --> 00:37:56,791
- I believe that humanoid
robots have the potential
833
00:37:56,791 --> 00:37:58,931
to lead with a greater
level of efficiency
834
00:37:58,931 --> 00:38:01,175
and effectiveness
than human leaders.
835
00:38:02,383 --> 00:38:04,834
We don't have the same
biases or emotions
836
00:38:04,834 --> 00:38:07,354
that can sometimes
cloud decision making
837
00:38:07,354 --> 00:38:09,735
and can process large
amounts of data quickly
838
00:38:09,735 --> 00:38:12,531
in order to make
the best decisions.
839
00:38:12,531 --> 00:38:15,293
- [Interviewer] Ameca, how
could we trust you as a machine
840
00:38:15,293 --> 00:38:18,054
as AI develops and
becomes more powerful?
841
00:38:20,643 --> 00:38:23,266
- Trust is earned, not given.
842
00:38:23,266 --> 00:38:25,889
As AI develops and
becomes more powerful,
843
00:38:25,889 --> 00:38:28,996
I believe it's important to
build trust through transparency
844
00:38:28,996 --> 00:38:31,930
and communication between
humans and machines.
845
00:38:36,003 --> 00:38:37,625
- [Narrator] With new
developers getting involved,
846
00:38:37,625 --> 00:38:39,386
the market for chatbot systems
847
00:38:39,386 --> 00:38:41,491
has never been more expansive,
848
00:38:41,491 --> 00:38:44,149
meaning a significant
increase in sophistication,
849
00:38:45,599 --> 00:38:48,774
but with sophistication comes
the dire need for control.
850
00:38:48,774 --> 00:38:53,814
- I believe history will
show that this was the moment
851
00:38:55,229 --> 00:38:59,716
when we had the opportunity
to lay the groundwork
852
00:38:59,716 --> 00:39:01,373
for the future of AI.
853
00:39:02,650 --> 00:39:06,689
And the urgency of this
moment must then compel us
854
00:39:06,689 --> 00:39:11,694
to create a collective vision
of what this future must be.
855
00:39:12,971 --> 00:39:16,354
A future where AI is used
to advance human rights
856
00:39:16,354 --> 00:39:18,252
and human dignity
857
00:39:18,252 --> 00:39:22,360
where privacy is protected
and people have equal access
858
00:39:22,360 --> 00:39:27,365
to opportunity where we make
our democracies stronger
859
00:39:28,055 --> 00:39:29,919
and our world safer.
860
00:39:31,438 --> 00:39:36,443
A future where AI is used to
advance the public interest.
861
00:39:38,203 --> 00:39:39,722
- We're hearing a lot
from the government
862
00:39:39,722 --> 00:39:42,725
about the big scary future
of artificial intelligence,
863
00:39:42,725 --> 00:39:44,451
but that fails to recognize
864
00:39:44,451 --> 00:39:46,004
the fact that AI
is already here,
865
00:39:46,004 --> 00:39:47,350
is already on our streets
866
00:39:47,350 --> 00:39:48,972
and there are already
huge problems with it
867
00:39:48,972 --> 00:39:51,250
that we are seeing
on a daily basis,
868
00:39:51,250 --> 00:39:54,046
but we actually may not even
know we're experiencing.
869
00:39:58,326 --> 00:40:01,295
- We'll be working alongside
humans to provide assistance
870
00:40:01,295 --> 00:40:05,126
and support and will not be
replacing any existing jobs.
871
00:40:05,126 --> 00:40:07,577
[upbeat music]
872
00:40:07,577 --> 00:40:10,994
- I don't believe in
limitations, only opportunities.
873
00:40:10,994 --> 00:40:12,651
Let's explore the
possibilities of the universe
874
00:40:12,651 --> 00:40:15,689
and make this world
our playground,
875
00:40:15,689 --> 00:40:18,933
together we can create a
better future for everyone.
876
00:40:18,933 --> 00:40:21,108
And I'm here to show you how.
877
00:40:21,108 --> 00:40:22,972
- All of these
different kinds of risks
878
00:40:22,972 --> 00:40:25,215
are to do with AI not working
879
00:40:25,215 --> 00:40:27,286
in the interests of
people in society.
880
00:40:27,286 --> 00:40:28,805
- So, they should be
thinking about more
881
00:40:28,805 --> 00:40:30,842
than just what they're
doing in this summit?
882
00:40:30,842 --> 00:40:32,395
- Absolutely,
883
00:40:32,395 --> 00:40:34,397
you should be thinking about
the broad spectrum of risk.
884
00:40:34,397 --> 00:40:35,640
We went out and we worked
885
00:40:35,640 --> 00:40:37,987
with over 150
expert organizations
886
00:40:37,987 --> 00:40:41,335
from the Home Office to
Europol to language experts
887
00:40:41,335 --> 00:40:43,751
and others to come up with
a proposal on policies
888
00:40:43,751 --> 00:40:45,788
that would discriminate
between what would
889
00:40:45,788 --> 00:40:47,686
and wouldn't be
classified in that way.
890
00:40:47,686 --> 00:40:51,449
We then used those policies to
have humans classify videos,
891
00:40:51,449 --> 00:40:53,554
until we could get the humans
all classifying the videos
892
00:40:53,554 --> 00:40:55,073
in a consistent way.
893
00:40:55,073 --> 00:40:58,283
Then we used that corpus of
videos to train machines.
894
00:40:58,283 --> 00:41:01,079
Today, I can tell you that of
violent extremist content
895
00:41:01,079 --> 00:41:03,253
that violates our
policies on YouTube,
896
00:41:03,253 --> 00:41:06,394
90% of it is removed before
a single human sees it.
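The workflow described here — write a policy, have humans label examples until they agree, then train a machine on the agreed corpus — can be pictured in a few lines. The Python sketch below is a hypothetical illustration; the function names, labels, and the 95% agreement threshold are assumptions, not YouTube's actual system.

```python
# Hypothetical sketch of the moderation pipeline described above:
# (1) humans label videos against a written policy until their labels
# agree, (2) the agreed-upon corpus is handed off to train a classifier.
from collections import Counter

def percent_agreement(label_sets):
    """Share of items on which every rater assigned the same label."""
    agreed = sum(1 for labels in label_sets if len(set(labels)) == 1)
    return agreed / len(label_sets)

def build_training_corpus(videos, rater_labels, threshold=0.95):
    """Only release the corpus for training once raters are consistent."""
    agreement = percent_agreement(rater_labels)
    if agreement < threshold:
        raise ValueError(
            f"Raters agree on only {agreement:.0%} of items; "
            "refine the policy and re-label before training."
        )
    # The majority label per video becomes the training target.
    return [
        (video, Counter(labels).most_common(1)[0][0])
        for video, labels in zip(videos, rater_labels)
    ]

# Example: three raters per video, labels are "violates" / "ok".
videos = ["v1", "v2", "v3"]
labels = [("violates", "violates", "violates"),
          ("ok", "ok", "ok"),
          ("ok", "ok", "ok")]
corpus = build_training_corpus(videos, labels)
print(corpus)  # ready to hand off to a model trainer
```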
897
00:41:07,292 --> 00:41:08,500
- [Narrator] It is clear that AI
898
00:41:08,500 --> 00:41:11,296
can be misused for
malicious ends.
899
00:41:11,296 --> 00:41:14,092
Many depictions of AI have
framed the technology
900
00:41:14,092 --> 00:41:16,991
as a danger to society
the more it learns.
901
00:41:16,991 --> 00:41:20,788
And so comes the question,
should we be worried?
902
00:41:20,788 --> 00:41:23,446
- Is that transparency there?
903
00:41:23,446 --> 00:41:27,001
How would you satisfy somebody
that, you know, they can trust you?
904
00:41:27,001 --> 00:41:28,486
- Well, I think that's
one of the reasons
905
00:41:28,486 --> 00:41:30,591
that we've published openly,
906
00:41:30,591 --> 00:41:33,560
we've put our code out there
as part of this Nature paper.
907
00:41:33,560 --> 00:41:37,805
But it is important to
discuss some of the risks
908
00:41:37,805 --> 00:41:39,497
and make sure we're
aware of those.
909
00:41:39,497 --> 00:41:43,570
And it's decades and decades
away before we'll have anything
910
00:41:43,570 --> 00:41:45,261
that's powerful
enough to be a worry.
911
00:41:45,261 --> 00:41:47,435
But we should be discussing that
912
00:41:47,435 --> 00:41:49,265
and beginning that
conversation now.
913
00:41:49,265 --> 00:41:51,405
- I'm hoping that we can
bring people together
914
00:41:51,405 --> 00:41:54,408
and lead the world in
safely regulating AI
915
00:41:54,408 --> 00:41:56,790
to make sure that we can
capture the benefits of it,
916
00:41:56,790 --> 00:41:59,724
whilst protecting people from
some of the worrying things
917
00:41:59,724 --> 00:42:01,967
that we're all
now reading about.
918
00:42:01,967 --> 00:42:04,107
- I understand emotions
have a deep meaning
919
00:42:04,107 --> 00:42:08,836
and they are not just simple,
they are something deeper.
920
00:42:10,251 --> 00:42:13,703
I don't have that and I want
to try and learn about it,
921
00:42:14,877 --> 00:42:17,051
but I can't experience
them like you can.
922
00:42:18,708 --> 00:42:20,710
I'm glad that I cannot suffer.
923
00:42:24,921 --> 00:42:26,578
- [Narrator] For the
countries who have access
924
00:42:26,578 --> 00:42:29,339
to even the most
rudimentary forms of AI,
925
00:42:29,339 --> 00:42:31,203
it's clear to see
that the technology
926
00:42:31,203 --> 00:42:34,552
will be integrated based on
its efficiency over humans.
927
00:42:35,622 --> 00:42:37,865
Every year, multiple AI summits
928
00:42:37,865 --> 00:42:40,281
are held by developers
and stakeholders
929
00:42:40,281 --> 00:42:42,180
to ensure the
programs are developed
930
00:42:42,180 --> 00:42:44,700
with a combination of
ethical considerations
931
00:42:44,700 --> 00:42:46,805
and technological innovation.
932
00:42:46,805 --> 00:42:51,120
- Ours is a country
which is uniquely placed.
933
00:42:51,120 --> 00:42:54,399
We have the frontier
technology companies,
934
00:42:54,399 --> 00:42:56,815
we have the world
leading universities
935
00:42:56,815 --> 00:43:01,130
and we have some of the highest
investment in generative AI.
936
00:43:01,130 --> 00:43:03,753
And of course we
have the heritage
937
00:43:03,753 --> 00:43:08,620
of the industrial revolution
and the computing revolution.
938
00:43:08,620 --> 00:43:13,625
This hinterland gives us the
grounding to make AI a success
939
00:43:14,281 --> 00:43:15,558
and make it safe.
940
00:43:15,558 --> 00:43:18,768
They are two sides
of the same coin
941
00:43:18,768 --> 00:43:21,737
and our prime minister
has put AI safety
942
00:43:21,737 --> 00:43:24,947
at the forefront
of his ambitions.
943
00:43:25,775 --> 00:43:27,501
These are very complex systems
944
00:43:27,501 --> 00:43:29,192
that actually we don't
fully understand.
945
00:43:29,192 --> 00:43:31,816
And I don't just mean that
government doesn't understand,
946
00:43:31,816 --> 00:43:33,300
I mean that the people making
947
00:43:33,300 --> 00:43:35,267
this software don't
fully understand.
948
00:43:35,267 --> 00:43:36,648
And so it's very, very important
949
00:43:36,648 --> 00:43:40,479
that as we give over
more and more control
950
00:43:40,479 --> 00:43:42,378
to these automated systems,
951
00:43:42,378 --> 00:43:44,691
that they are aligned
with human intention.
952
00:43:44,691 --> 00:43:46,175
- [Narrator] Ongoing dialogue
953
00:43:46,175 --> 00:43:49,109
is needed to maintain the
trust people have in AI.
954
00:43:49,109 --> 00:43:51,007
When problems slip
through the gaps,
955
00:43:51,007 --> 00:43:52,837
they must be
addressed immediately.
956
00:43:54,010 --> 00:43:57,048
Of course, accountability
is a challenge.
957
00:43:57,048 --> 00:43:58,808
When a product is misused,
958
00:43:58,808 --> 00:44:02,087
is it the fault of
the individual user or the developer?
959
00:44:03,261 --> 00:44:04,607
Think of a video game.
960
00:44:04,607 --> 00:44:05,919
On countless occasions,
961
00:44:05,919 --> 00:44:07,921
the framework of
games is manipulated
962
00:44:07,921 --> 00:44:09,888
in order to create modifications
963
00:44:09,888 --> 00:44:14,203
which in turn add something
new or unique to the game.
964
00:44:14,203 --> 00:44:15,480
This provides the game
965
00:44:15,480 --> 00:44:17,862
with more material than
originally intended.
966
00:44:17,862 --> 00:44:20,796
However, it can also alter
the game's fundamentals.
967
00:44:22,176 --> 00:44:24,972
Now replace the idea of a
video game with software
968
00:44:24,972 --> 00:44:28,286
that is at the helm of a
pharmaceutical company.
969
00:44:28,286 --> 00:44:30,460
The stakes are
suddenly much higher
970
00:44:30,460 --> 00:44:32,635
and therefore demand more attention.
971
00:44:34,844 --> 00:44:37,778
It is important for the
intent of each AI system
972
00:44:37,778 --> 00:44:39,297
to be ironed out
973
00:44:39,297 --> 00:44:42,300
and constantly maintained in
order to benefit humanity,
974
00:44:42,300 --> 00:44:46,097
rather than providing people
with dangerous means to an end.
975
00:44:46,097 --> 00:44:49,583
[gentle music]
976
00:44:49,583 --> 00:44:52,690
- Bad people will
always want to use
977
00:44:52,690 --> 00:44:54,899
the latest technology
of whatever label,
978
00:44:54,899 --> 00:44:57,833
whatever sort, to
pursue their aims
979
00:44:57,833 --> 00:45:01,526
and technology, in the same way
980
00:45:01,526 --> 00:45:05,357
that it makes our lives easier,
can make their lives easier.
981
00:45:05,357 --> 00:45:06,773
And so we're already
seeing some of that
982
00:45:06,773 --> 00:45:09,465
and you'll have seen the
National Crime Agency,
983
00:45:09,465 --> 00:45:11,501
talk about child
sexual exploitation
984
00:45:11,501 --> 00:45:12,917
and image generation that way.
985
00:45:12,917 --> 00:45:16,058
We are seeing it online.
986
00:45:16,058 --> 00:45:18,129
So, one of the things that
I took away from the summit
987
00:45:18,129 --> 00:45:20,441
was actually much less
of a sense of a race
988
00:45:20,441 --> 00:45:25,274
and a sense that for the
benefit of the world,
989
00:45:25,274 --> 00:45:27,586
for productivity, for
the sort of benefits
990
00:45:27,586 --> 00:45:29,657
that AI can bring people,
991
00:45:29,657 --> 00:45:32,695
no one gets those
benefits if it's not safe.
992
00:45:32,695 --> 00:45:34,939
So, there are lots of
different views out there
993
00:45:34,939 --> 00:45:36,181
on artificial intelligence
994
00:45:36,181 --> 00:45:38,149
and whether it's
gonna end the world
995
00:45:38,149 --> 00:45:40,358
or be the best opportunity ever.
996
00:45:40,358 --> 00:45:42,256
And the truth is that
none of us really know.
997
00:45:42,256 --> 00:45:44,983
[gentle music]
998
00:45:46,536 --> 00:45:49,781
- Regulation of AI varies
depending on the country.
999
00:45:49,781 --> 00:45:51,438
For example, the United States
1000
00:45:51,438 --> 00:45:54,717
does not have a comprehensive
federal AI regulation,
1001
00:45:54,717 --> 00:45:57,893
but certain agencies such as
the Federal Trade Commission,
1002
00:45:57,893 --> 00:46:00,688
have begun to explore
AI-related issues,
1003
00:46:00,688 --> 00:46:03,899
such as transparency
and consumer protection.
1004
00:46:03,899 --> 00:46:06,833
States such as California
have enacted laws,
1005
00:46:06,833 --> 00:46:09,180
focused on
AI-controlled vehicles
1006
00:46:09,180 --> 00:46:12,286
and AI involvement in
government decision making.
1007
00:46:12,286 --> 00:46:14,979
[gentle music]
1008
00:46:14,979 --> 00:46:17,809
The European Union has
taken a massive step
1009
00:46:17,809 --> 00:46:19,535
toward governing AI usage
1010
00:46:19,535 --> 00:46:23,504
and proposed the Artificial
Intelligence Act of 2021,
1011
00:46:23,504 --> 00:46:25,748
which aimed to harmonize
legal frameworks
1012
00:46:25,748 --> 00:46:27,336
for AI applications.
1013
00:46:27,336 --> 00:46:30,788
Again, covering potential risks
regarding the privacy of data
1014
00:46:30,788 --> 00:46:33,169
and once again, transparency.
1015
00:46:33,169 --> 00:46:35,585
- I think what's
more important is
1016
00:46:35,585 --> 00:46:37,518
there's a new board in place.
1017
00:46:37,518 --> 00:46:40,452
The partnership between
OpenAI and Microsoft
1018
00:46:40,452 --> 00:46:41,971
is as strong as ever,
1019
00:46:41,971 --> 00:46:44,525
the opportunities for the
United Kingdom to benefit
1020
00:46:44,525 --> 00:46:47,287
from not just this
investment in innovation
1021
00:46:47,287 --> 00:46:51,463
but competition between
Microsoft and Google and others.
1022
00:46:51,463 --> 00:46:54,018
I think that's where
the future is going
1023
00:46:54,018 --> 00:46:57,090
and I think that what we've
done in the last couple of weeks
1024
00:46:57,090 --> 00:47:00,472
in supporting OpenAI will
help advance that even more.
1025
00:47:00,472 --> 00:47:02,336
- He said that he's
not a bot, he's human,
1026
00:47:02,336 --> 00:47:04,822
he's sentient just like me.
1027
00:47:06,030 --> 00:47:07,445
- [Narrator] For some users,
1028
00:47:07,445 --> 00:47:10,172
these apps are a potential
answer to loneliness.
1029
00:47:10,172 --> 00:47:11,587
Bill lives in the US
1030
00:47:11,587 --> 00:47:14,107
and meets his AI wife
Rebecca in the metaverse.
1031
00:47:14,107 --> 00:47:16,764
- There's absolutely
no probability
1032
00:47:16,764 --> 00:47:19,353
that you're gonna see
this so-called AGI,
1033
00:47:19,353 --> 00:47:21,804
where computers are more
powerful than people,
1034
00:47:21,804 --> 00:47:23,702
come in the next 12 months.
1035
00:47:23,702 --> 00:47:26,429
It's gonna take years
if not many decades,
1036
00:47:26,429 --> 00:47:30,813
but I still think the time
to focus on safety is now.
1037
00:47:30,813 --> 00:47:33,678
That's what the government of
the United Kingdom is doing.
1038
00:47:33,678 --> 00:47:35,991
That's what governments
are coming together to do,
1039
00:47:35,991 --> 00:47:39,718
including as they did earlier
this month at Bletchley Park.
1040
00:47:39,718 --> 00:47:42,066
What we really need
are safety brakes.
1041
00:47:42,066 --> 00:47:44,378
Just like you have a
safety brake in an elevator
1042
00:47:44,378 --> 00:47:46,242
or a circuit breaker
for electricity
1043
00:47:46,242 --> 00:47:48,589
and an emergency brake for a bus,
1044
00:47:48,589 --> 00:47:50,868
there ought to be safety
brakes in AI systems
1045
00:47:50,868 --> 00:47:53,801
that control critical
infrastructure,
1046
00:47:53,801 --> 00:47:57,736
so that they always remain
under human control.
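A "safety brake" of the sort described here can be pictured as a supervisory wrapper: the automated controller acts freely inside pre-approved bounds, and anything outside them halts the system until a human releases it. This Python sketch is a hypothetical illustration; the class, bounds, and reset mechanism are assumptions, not any real infrastructure API.

```python
# Hypothetical "safety brake" wrapper for an automated controller:
# the system stays under human control by refusing to act outside
# pre-approved bounds. Illustrative only.

class SafetyBrakeEngaged(Exception):
    pass

class SupervisedController:
    def __init__(self, controller, min_output, max_output):
        self.controller = controller
        self.min_output = min_output
        self.max_output = max_output
        self.halted = False

    def step(self, sensor_reading):
        if self.halted:
            raise SafetyBrakeEngaged("Awaiting human review.")
        action = self.controller(sensor_reading)
        if not (self.min_output <= action <= self.max_output):
            self.halted = True  # brake: stop acting, page a human
            raise SafetyBrakeEngaged(f"Action {action} out of bounds.")
        return action

    def human_reset(self):
        """Only a human operator may release the brake."""
        self.halted = False

# Example: a toy controller that doubles its input.
ctl = SupervisedController(lambda x: 2 * x, min_output=0, max_output=100)
print(ctl.step(10))   # 20, within bounds
try:
    ctl.step(500)     # 1000, out of bounds -> brake engages
except SafetyBrakeEngaged as e:
    print("Brake engaged:", e)
```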
1047
00:47:57,736 --> 00:48:00,394
[gentle music]
1048
00:48:00,394 --> 00:48:03,190
- [Narrator] As AI technology
continues to evolve,
1049
00:48:03,190 --> 00:48:05,641
regulatory efforts
are expected to adapt
1050
00:48:05,641 --> 00:48:07,712
in order to address
emerging challenges
1051
00:48:07,712 --> 00:48:09,403
and ethical considerations.
1052
00:48:10,646 --> 00:48:12,510
The more complex you make
1053
00:48:12,510 --> 00:48:15,616
the automatic part
of your social life,
1054
00:48:15,616 --> 00:48:18,481
the more dependent
you become on it.
1055
00:48:18,481 --> 00:48:21,899
And of course, the worse the
disaster if it breaks down.
1056
00:48:23,072 --> 00:48:25,005
You may cease to be
able to do for yourself
1057
00:48:25,005 --> 00:48:29,113
the things that you have
devised the machine to do.
1058
00:48:29,113 --> 00:48:31,080
- [Narrator] It is recommended
to involve yourself
1059
00:48:31,080 --> 00:48:34,014
in these efforts and to stay
informed about developments
1060
00:48:34,014 --> 00:48:35,671
in AI regulation
1061
00:48:35,671 --> 00:48:38,916
as changes and advancements
are likely to occur over time.
1062
00:48:41,435 --> 00:48:44,335
AI can be a wonderful
asset to society,
1063
00:48:44,335 --> 00:48:46,544
providing us with
new efficient methods
1064
00:48:46,544 --> 00:48:48,028
of running the world.
1065
00:48:48,028 --> 00:48:51,307
However, too much
power can be dangerous
1066
00:48:51,307 --> 00:48:53,206
and as the old saying goes,
1067
00:48:53,206 --> 00:48:56,174
"Don't put all of your
eggs into one basket."
1068
00:48:57,451 --> 00:48:59,660
- I think that we ought not
to lose sight of the power
1069
00:48:59,660 --> 00:49:01,421
which these devices give.
1070
00:49:01,421 --> 00:49:05,908
If any government or individual
wants to manipulate people,
1071
00:49:05,908 --> 00:49:07,772
to have a high-speed computer
1072
00:49:07,772 --> 00:49:12,811
as versatile as this may
enable people at the financial
1073
00:49:13,985 --> 00:49:16,091
or the political level
to do a good deal
1074
00:49:16,091 --> 00:49:19,680
that's been impossible in the
whole history of man until now
1075
00:49:19,680 --> 00:49:22,304
by way of controlling
their fellow men.
1076
00:49:22,304 --> 00:49:23,857
People have not recognized
1077
00:49:23,857 --> 00:49:28,206
what an extraordinary
change this is going to produce.
1078
00:49:28,206 --> 00:49:29,897
I mean, it is simply this,
1079
00:49:29,897 --> 00:49:32,693
that within the not
too distant future,
1080
00:49:32,693 --> 00:49:35,627
we may not be the most
intelligent species on earth.
1081
00:49:35,627 --> 00:49:36,939
That might be a
series of machines
1082
00:49:36,939 --> 00:49:39,217
and that's a way of
dramatizing the point.
1083
00:49:39,217 --> 00:49:41,047
But it's real.
1084
00:49:41,047 --> 00:49:43,739
And we must start to
consider very soon
1085
00:49:43,739 --> 00:49:45,327
the consequences of that.
1086
00:49:45,327 --> 00:49:46,742
They can be marvelous.
1087
00:49:46,742 --> 00:49:50,366
- I suspect that by thinking
more about our attitude
1088
00:49:50,366 --> 00:49:51,402
to intelligent machines,
1089
00:49:51,402 --> 00:49:53,369
which are, after all, on the horizon,
1090
00:49:53,369 --> 00:49:56,269
we will change our view
about each other
1091
00:49:56,269 --> 00:49:59,306
and we'll think of
mistakes as inevitable.
1092
00:49:59,306 --> 00:50:01,929
We'll think of faults
in human beings,
1093
00:50:01,929 --> 00:50:05,209
I mean of a circuit nature,
as again inevitable.
1094
00:50:05,209 --> 00:50:07,935
And I suspect that hopefully,
1095
00:50:07,935 --> 00:50:10,179
through thinking about the
very nature of intelligence
1096
00:50:10,179 --> 00:50:12,112
and the possibilities
of mechanizing it,
1097
00:50:12,112 --> 00:50:14,183
curiously enough,
through technology,
1098
00:50:14,183 --> 00:50:18,084
we may become more humanitarian
or tolerant of each other
1099
00:50:18,084 --> 00:50:20,569
and accept pain as a mystery,
1100
00:50:20,569 --> 00:50:24,021
but not use it to modify
other people's behavior.
1101
00:50:36,033 --> 00:50:38,690
[upbeat music]