1
00:00:00,586 --> 00:00:02,726
[gentle music]
2
00:00:14,324 --> 00:00:15,532
- [Narrator] For decades,
3
00:00:15,532 --> 00:00:17,017
we have discussed
the many outcomes,
4
00:00:17,017 --> 00:00:19,053
regarding artificial
intelligence.
5
00:00:19,053 --> 00:00:21,469
Could our world be dominated?
6
00:00:21,469 --> 00:00:25,232
Could our independence and
autonomy be stripped from us,
7
00:00:25,232 --> 00:00:28,407
or are we able to control
what we have created?
8
00:00:28,407 --> 00:00:31,100
[upbeat music]
9
00:00:37,416 --> 00:00:41,006
Could we use artificial
intelligence to
benefit our society?
10
00:00:41,006 --> 00:00:44,009
Just how thin is the line
between the development
11
00:00:44,009 --> 00:00:46,805
of civilization and chaos?
12
00:00:46,805 --> 00:00:49,428
[upbeat music]
13
00:01:13,211 --> 00:01:15,903
To understand what
artificial intelligence is,
14
00:01:15,903 --> 00:01:19,803
one must understand that it
can take many different forms.
15
00:01:19,803 --> 00:01:22,047
Think of it as a web of ideas,
16
00:01:22,047 --> 00:01:25,326
slowly expanding as new
ways of utilizing computers
17
00:01:25,326 --> 00:01:26,603
are explored.
18
00:01:26,603 --> 00:01:28,260
As technology develops,
19
00:01:28,260 --> 00:01:31,539
so do the capabilities of
self-learning software.
20
00:01:31,539 --> 00:01:34,335
- [Reporter] The need to
diagnose disease quickly
21
00:01:34,335 --> 00:01:38,132
and effectively has prompted
many university medical centers
22
00:01:38,132 --> 00:01:41,791
to develop intelligent
programs that simulate the work
23
00:01:41,791 --> 00:01:44,345
of doctors and
laboratory technicians.
24
00:01:44,345 --> 00:01:47,003
[gentle music]
25
00:01:48,694 --> 00:01:51,041
- [Narrator] AI is
quickly integrating
with our way of life.
26
00:01:51,041 --> 00:01:54,631
So much so that the development
of AI programs has, in itself,
27
00:01:54,631 --> 00:01:56,323
become a business opportunity.
28
00:01:57,945 --> 00:01:58,773
[upbeat music]
29
00:01:58,773 --> 00:01:59,809
In our modern age,
30
00:01:59,809 --> 00:02:01,638
we are powered by technology
31
00:02:01,638 --> 00:02:05,021
and software is transcending
its virtual existence,
32
00:02:05,021 --> 00:02:07,437
finding applications
in various fields,
33
00:02:07,437 --> 00:02:11,372
from customer support
to content creation.
34
00:02:11,372 --> 00:02:13,202
Computer-aided design,
35
00:02:13,202 --> 00:02:17,137
otherwise known as CAD, is
one of the many uses of AI.
36
00:02:17,137 --> 00:02:19,415
By analyzing
particular variables,
37
00:02:19,415 --> 00:02:22,280
computers are now able to
assist in the modification
38
00:02:22,280 --> 00:02:26,180
and creation of designs for
hardware and architecture.
39
00:02:26,180 --> 00:02:30,046
The prime use of any AI is
for optimizing processes
40
00:02:30,046 --> 00:02:32,324
that were considered
tedious before.
41
00:02:32,324 --> 00:02:35,189
In many ways, AI has
been hugely beneficial
42
00:02:35,189 --> 00:02:38,951
for technological development
thanks to its sheer speed.
43
00:02:38,951 --> 00:02:41,057
However, AI only benefits
44
00:02:41,057 --> 00:02:43,508
those to whom the
programs are distributed.
45
00:02:44,302 --> 00:02:45,613
- Artificial intelligence
46
00:02:45,613 --> 00:02:47,443
is picking through your rubbish.
47
00:02:47,443 --> 00:02:51,688
This robot uses it to sort
through plastics for recycling
48
00:02:51,688 --> 00:02:53,414
and it can be retrained
49
00:02:53,414 --> 00:02:55,968
to prioritize whatever's
more marketable.
50
00:02:57,177 --> 00:03:00,180
So, AI can clearly
be incredibly useful,
51
00:03:00,180 --> 00:03:02,596
but there are deep
concerns about
52
00:03:02,596 --> 00:03:07,635
how quickly it is developing
and where it could go next.
53
00:03:08,912 --> 00:03:11,121
- The aim is to make
them as capable as humans
54
00:03:11,121 --> 00:03:14,366
and deploy them in
the service sector.
55
00:03:14,366 --> 00:03:16,230
The engineers in this research
56
00:03:16,230 --> 00:03:18,059
and development lab are working
57
00:03:18,059 --> 00:03:21,822
to take these humanoid
robots to the next level
58
00:03:21,822 --> 00:03:24,583
where they can not
only speak and move,
59
00:03:24,583 --> 00:03:27,345
but they can think
and feel and act
60
00:03:27,345 --> 00:03:30,002
and even make decisions
for themselves.
61
00:03:30,796 --> 00:03:32,695
And that daily data stream
62
00:03:32,695 --> 00:03:36,008
is being fed into an
ever expanding workforce,
63
00:03:36,008 --> 00:03:39,529
dedicated to developing
artificial intelligence.
64
00:03:41,013 --> 00:03:42,808
Those who have studied abroad
65
00:03:42,808 --> 00:03:46,122
are being encouraged to
return to the motherland.
66
00:03:46,122 --> 00:03:47,917
Libo Yang came back
67
00:03:47,917 --> 00:03:51,645
and started a tech
enterprise in his hometown.
68
00:03:51,645 --> 00:03:54,268
- [Narrator] China's market
is indeed the most open
69
00:03:54,268 --> 00:03:56,926
and active market
in the world for AI.
70
00:03:56,926 --> 00:04:01,241
It is also where there are the
most application scenarios.
71
00:04:01,241 --> 00:04:03,864
- So, AI is generally a
broad term that we apply
72
00:04:03,864 --> 00:04:04,934
to a number of techniques.
73
00:04:04,934 --> 00:04:06,384
And in this particular case,
74
00:04:06,384 --> 00:04:09,456
what we're actually looking
at is elements of AI,
75
00:04:09,456 --> 00:04:12,010
machine learning
and deep learning.
76
00:04:12,010 --> 00:04:13,701
So, in this particular case,
77
00:04:13,701 --> 00:04:17,429
we've unfortunately been
in a situation
78
00:04:17,429 --> 00:04:20,398
in this race against time
to create new antibiotics,
79
00:04:20,398 --> 00:04:22,779
the threat is
actually quite real
80
00:04:22,779 --> 00:04:25,230
and it would be
a global problem.
81
00:04:25,230 --> 00:04:27,784
We desperately needed to
harness new technologies
82
00:04:27,784 --> 00:04:29,269
in an attempt to fight it,
83
00:04:29,269 --> 00:04:30,960
we're looking at drugs
84
00:04:30,960 --> 00:04:33,411
which could potentially
fight E. coli,
85
00:04:33,411 --> 00:04:35,102
a very dangerous bacteria.
86
00:04:35,102 --> 00:04:37,207
- So, what is it
that the AI is doing
87
00:04:37,207 --> 00:04:39,348
that humans can't
do very simply?
88
00:04:39,348 --> 00:04:41,729
- So, the AI can
look for patterns
89
00:04:41,729 --> 00:04:44,560
that we wouldn't be able to
mine for with the human eye,
90
00:04:44,560 --> 00:04:47,287
simply within what I
do as a radiologist,
91
00:04:47,287 --> 00:04:50,980
I look for patterns of
diseases in terms of shape,
92
00:04:50,980 --> 00:04:53,914
contrast enhancement,
heterogeneity.
93
00:04:53,914 --> 00:04:55,191
But what the computer does,
94
00:04:55,191 --> 00:04:58,125
it looks for patterns
within the pixels.
95
00:04:58,125 --> 00:05:00,679
These are things that you just
can't see with the human eye.
96
00:05:00,679 --> 00:05:03,855
There's so much more data
embedded within these scans
97
00:05:03,855 --> 00:05:07,514
that we use that we can't
mine on a physical level.
98
00:05:07,514 --> 00:05:09,516
So, the computers really help.
99
00:05:09,516 --> 00:05:11,311
- [Narrator] Many
believe the growth of AI
100
00:05:11,311 --> 00:05:13,692
is dependent on
global collaboration,
101
00:05:13,692 --> 00:05:17,109
but access to the technology
is limited in certain regions.
102
00:05:17,109 --> 00:05:19,767
Global distribution is
a long-term endeavor
103
00:05:19,767 --> 00:05:21,044
and the more countries
104
00:05:21,044 --> 00:05:23,288
and businesses that
have access to the tech,
105
00:05:23,288 --> 00:05:26,429
the more regulation
the AI will require.
106
00:05:26,429 --> 00:05:29,846
In fact, it is now not
uncommon for businesses
107
00:05:29,846 --> 00:05:33,125
to be entirely run by
an artificial director.
108
00:05:33,125 --> 00:05:34,472
On many occasions,
109
00:05:34,472 --> 00:05:37,198
handing the helm of a
company to an algorithm
110
00:05:37,198 --> 00:05:40,685
can provide the best option
on the basis of probability.
111
00:05:40,685 --> 00:05:43,998
However, dependence and
reliance on software
112
00:05:43,998 --> 00:05:45,897
can be a great risk.
113
00:05:45,897 --> 00:05:47,450
Without proper safeguards,
114
00:05:47,450 --> 00:05:50,419
actions based on potentially
incorrect predictions
115
00:05:50,419 --> 00:05:53,353
can be a detriment to a
business or operation.
116
00:05:53,353 --> 00:05:55,147
Humans provide the
critical thinking
117
00:05:55,147 --> 00:05:58,461
and judgment which AI is
not capable of matching.
118
00:05:58,461 --> 00:06:00,463
- Well, this is the
Accessibility Design Center
119
00:06:00,463 --> 00:06:02,810
and it's where we try to
bring together our engineers
120
00:06:02,810 --> 00:06:05,882
and experts with the
latest AI technology,
121
00:06:05,882 --> 00:06:07,608
with people with disabilities,
122
00:06:07,608 --> 00:06:10,059
because there's a
real opportunity to
firstly help people
123
00:06:10,059 --> 00:06:12,613
with disabilities enjoy
all the technology
124
00:06:12,613 --> 00:06:14,201
we have in our pockets today.
125
00:06:14,201 --> 00:06:15,720
And sometimes that's
not very accessible,
126
00:06:15,720 --> 00:06:18,688
but also build tools that
can help them engage better
127
00:06:18,688 --> 00:06:20,103
in the real world.
128
00:06:20,103 --> 00:06:22,451
And that's thanks to the
wonders of machine learning.
129
00:06:22,451 --> 00:06:25,764
- I don't think we're like at
the end of this paradigm yet.
130
00:06:25,764 --> 00:06:26,903
We'll keep pushing these.
131
00:06:26,903 --> 00:06:28,215
We'll add other modalities.
132
00:06:28,215 --> 00:06:31,114
So, someday they'll do
video, audio, images,
133
00:06:31,114 --> 00:06:36,154
text altogether and they'll get
like much smarter over time.
134
00:06:37,638 --> 00:06:38,674
- AI, machine learning, it all
sounds very complicated.
135
00:06:38,674 --> 00:06:40,572
Just think about it as a toolkit
136
00:06:40,572 --> 00:06:42,781
that's really good at
sort of spotting patterns
137
00:06:42,781 --> 00:06:44,024
and making predictions,
138
00:06:44,024 --> 00:06:46,336
better than any computing
could do before.
139
00:06:46,336 --> 00:06:47,786
And that's why it's so useful
140
00:06:47,786 --> 00:06:51,031
for things like understanding
language and speech.
141
00:06:51,031 --> 00:06:52,998
Another product which
we are launching today
142
00:06:52,998 --> 00:06:55,000
is called Project Relate.
143
00:06:55,000 --> 00:06:56,312
And this is for people
144
00:06:56,312 --> 00:06:58,728
who have non-standard
speech patterns.
145
00:06:58,728 --> 00:07:00,937
So, one of the
people we work with
146
00:07:00,937 --> 00:07:03,837
is maybe less than
10% of the time,
147
00:07:03,837 --> 00:07:06,564
could be understood by
people who don't know her,
148
00:07:06,564 --> 00:07:09,325
using this tool, that's
over 90% of the time.
149
00:07:09,325 --> 00:07:12,259
And you think about
that transformation
in somebody's life
150
00:07:12,259 --> 00:07:15,676
and then you think about the
fact there's 250 million people
151
00:07:15,676 --> 00:07:17,678
with non-standard speech
patterns around the world.
152
00:07:17,678 --> 00:07:19,093
- So, the ambition
of this center
153
00:07:19,093 --> 00:07:21,682
is to unite technology with
people with disabilities
154
00:07:21,682 --> 00:07:24,478
and try to help 'em
engage more in the world.
155
00:07:24,478 --> 00:07:27,550
- [Narrator] On the
30th of November, 2022,
156
00:07:27,550 --> 00:07:30,001
a revolutionary
innovation emerged,
157
00:07:30,967 --> 00:07:32,003
ChatGPT.
158
00:07:32,969 --> 00:07:35,869
ChatGPT was created by OpenAI,
159
00:07:35,869 --> 00:07:38,250
an AI research organization.
160
00:07:38,250 --> 00:07:39,873
Its goal is to develop systems
161
00:07:39,873 --> 00:07:44,498
which may benefit all aspects
of society and communication.
162
00:07:44,498 --> 00:07:47,467
Sam Altman co-founded
OpenAI at its launch
163
00:07:47,467 --> 00:07:50,055
in 2015, later becoming its CEO.
164
00:07:50,055 --> 00:07:51,609
Altman dabbled in a multitude
165
00:07:51,609 --> 00:07:53,990
of computing-based
business ventures.
166
00:07:53,990 --> 00:07:57,477
His rise to CEO was thanks
to his many affiliations
167
00:07:57,477 --> 00:08:01,377
and investments with computing
and social media companies.
168
00:08:01,377 --> 00:08:04,173
He began his journey
by co-founding Loopt,
169
00:08:04,173 --> 00:08:06,106
a social media service.
170
00:08:06,106 --> 00:08:07,763
After selling the application,
171
00:08:07,763 --> 00:08:10,835
Altman went on to bigger
and riskier endeavors
172
00:08:10,835 --> 00:08:14,148
from startup accelerator
companies to security software.
173
00:08:15,184 --> 00:08:17,393
OpenAI became hugely desirable,
174
00:08:17,393 --> 00:08:20,223
thanks to the amount of revenue
the company had generated
175
00:08:20,223 --> 00:08:21,984
with over a billion dollars made
176
00:08:21,984 --> 00:08:24,262
within its first
year of release.
177
00:08:24,262 --> 00:08:27,265
ChatGPT became an easily
accessible software,
178
00:08:27,265 --> 00:08:30,786
built on a large language
model known as an LLM.
179
00:08:30,786 --> 00:08:34,134
This program can conjure
complex human-like responses
180
00:08:34,134 --> 00:08:37,309
to the user's questions
otherwise known as prompts.
181
00:08:37,309 --> 00:08:38,794
In essence,
182
00:08:38,794 --> 00:08:41,244
it is a program which
learns the more it is used.
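To make that concrete, here is a toy Python sketch of next-word prediction, the basic mechanism an LLM scales up with billions of learned weights. The tiny corpus, the bigram table, and the respond function are illustrative inventions, not ChatGPT's actual code:

    import random

    # Build a toy bigram "language model" from a tiny corpus: for each
    # word, record which words have followed it. A real LLM does the same
    # next-token prediction with a neural network over a vast corpus.
    corpus = "the cat sat on the mat and the cat slept".split()
    model = {}
    for word, nxt in zip(corpus, corpus[1:]):
        model.setdefault(word, []).append(nxt)

    def respond(prompt, length=6):
        word, reply = prompt, [prompt]
        for _ in range(length):
            # Sample a plausible next word given the current one.
            word = random.choice(model.get(word, corpus))
            reply.append(word)
        return " ".join(reply)

    print(respond("the"))  # e.g. "the cat sat on the mat and"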
183
00:08:43,592 --> 00:08:45,317
The new age therapeutic program
184
00:08:45,317 --> 00:08:48,804
was developed on GPT-3.5.
185
00:08:48,804 --> 00:08:51,531
The architecture of this
older model allowed systems
186
00:08:51,531 --> 00:08:53,602
to understand and generate code
187
00:08:53,602 --> 00:08:56,501
and natural languages at a
remarkably advanced level
188
00:08:56,501 --> 00:08:59,884
from analyzing syntax
to nuances in writing.
189
00:08:59,884 --> 00:09:02,542
[upbeat music]
190
00:09:04,578 --> 00:09:06,753
ChatGPT took the world by storm,
191
00:09:06,753 --> 00:09:09,445
due to the sophistication
of the system.
192
00:09:09,445 --> 00:09:11,067
As with many chatbot systems,
193
00:09:11,067 --> 00:09:13,449
people have since found
ways to manipulate
194
00:09:13,449 --> 00:09:17,349
and confuse the software in
order to test its limits.
195
00:09:17,349 --> 00:09:20,076
[gentle music]
196
00:09:21,526 --> 00:09:25,910
The first computer was invented
by Charles Babbage in 1822.
197
00:09:25,910 --> 00:09:29,189
It was to be a rudimentary
general purpose system.
198
00:09:29,189 --> 00:09:34,021
In 1936, the concept was
developed further by Alan Turing.
199
00:09:34,021 --> 00:09:36,299
The automatic machine,
as he called it,
200
00:09:36,299 --> 00:09:38,854
was able to break
Enigma-enciphered messages,
201
00:09:38,854 --> 00:09:41,201
regarding enemy
military operations,
202
00:09:41,201 --> 00:09:43,583
during the Second World War.
203
00:09:43,583 --> 00:09:46,447
Turing theorized his
own type of computer,
204
00:09:46,447 --> 00:09:49,830
the Turing Machine, as
coined by Alonzo Church,
205
00:09:49,830 --> 00:09:52,522
after reading Turing's
research paper.
206
00:09:52,522 --> 00:09:55,698
It soon became clear that
the worlds of computing
207
00:09:55,698 --> 00:09:57,907
and engineering would
merge seamlessly.
208
00:09:59,046 --> 00:10:01,152
Theories of future
tech would increase
209
00:10:01,152 --> 00:10:04,742
and soon came a huge boom
in science fiction media.
210
00:10:04,742 --> 00:10:07,468
This was known as the
golden age for computing.
211
00:10:07,468 --> 00:10:10,092
[gentle music]
212
00:10:20,067 --> 00:10:22,760
Alan Turing's contributions
to computability
213
00:10:22,760 --> 00:10:25,590
and theoretical computer
science were one step closer
214
00:10:25,590 --> 00:10:28,110
to producing a reactive machine.
215
00:10:28,110 --> 00:10:31,389
The reactive machine
is an early form of AI.
216
00:10:31,389 --> 00:10:32,942
They had limited capabilities
217
00:10:32,942 --> 00:10:34,772
and were unable
to store memories
218
00:10:34,772 --> 00:10:37,740
in order to learn
from new data.
219
00:10:37,740 --> 00:10:41,641
However, they were able to
react to specific stimuli.
220
00:10:41,641 --> 00:10:46,611
The first AI was a
program written in
1952 by Arthur Samuel.
221
00:10:47,854 --> 00:10:49,614
The prototype AI was
able to play checkers
222
00:10:49,614 --> 00:10:52,168
against an opponent and
was built to operate
223
00:10:52,168 --> 00:10:56,172
on the Ferranti Mark One, an
early commercial computer.
224
00:10:56,172 --> 00:10:57,657
- [Reporter] This computer
has been playing the game
225
00:10:57,657 --> 00:11:00,418
for several years now,
getting better all the time.
226
00:11:00,418 --> 00:11:02,972
Tonight it's playing against
the black side of the board.
227
00:11:02,972 --> 00:11:05,837
Its approach to playing
draughts is almost human.
228
00:11:05,837 --> 00:11:08,012
It remembers the moves
that enable it to win
229
00:11:08,012 --> 00:11:10,324
and the sort that
lead to defeat.
230
00:11:10,324 --> 00:11:12,982
The computer indicates the move
it wants to make on a panel
231
00:11:12,982 --> 00:11:14,156
of flashing lights.
232
00:11:14,156 --> 00:11:15,433
It's up to the human opponent
233
00:11:15,433 --> 00:11:18,229
to actually move the
draughts about the board.
234
00:11:18,229 --> 00:11:20,645
This sort of work is producing
exciting information
235
00:11:20,645 --> 00:11:22,405
on the way in which
electronic brains
236
00:11:22,405 --> 00:11:24,338
can learn from past experience
237
00:11:24,338 --> 00:11:26,168
and improve their performances.
238
00:11:27,963 --> 00:11:29,792
- [Narrator] In 1966,
239
00:11:29,792 --> 00:11:32,519
an MIT professor named
Joseph Weizenbaum
240
00:11:32,519 --> 00:11:37,110
created an AI which would
change the landscape of society.
241
00:11:37,110 --> 00:11:39,077
It was known as Eliza,
242
00:11:39,077 --> 00:11:42,322
and it was designed to act
like a psychotherapist.
243
00:11:42,322 --> 00:11:45,497
The software was simplistic,
yet revolutionary.
244
00:11:45,497 --> 00:11:47,499
The AI would receive
the user input
245
00:11:47,499 --> 00:11:51,055
and use specific parameters to
generate a coherent response.
246
00:11:53,057 --> 00:11:55,991
- It has been said,
especially here at MIT,
247
00:11:55,991 --> 00:11:59,719
that computers will
take over in some sense
248
00:11:59,719 --> 00:12:02,652
and it's even been said
that if we're lucky,
249
00:12:02,652 --> 00:12:04,447
they'll keep us as pets
250
00:12:04,447 --> 00:12:06,277
and Arthur C. Clarke, the
science fiction writer,
251
00:12:06,277 --> 00:12:09,694
remarked once that if
that were to happen,
252
00:12:09,694 --> 00:12:12,904
it would serve us
right, he said.
253
00:12:12,904 --> 00:12:14,734
- [Narrator] The program
maintained the illusion
254
00:12:14,734 --> 00:12:16,943
of understanding its
user to the point
255
00:12:16,943 --> 00:12:20,498
where Weizenbaum's secretary
requested some time alone
256
00:12:20,498 --> 00:12:23,363
with Eliza to
express her feelings.
257
00:12:23,363 --> 00:12:26,711
Though Eliza is now considered
outdated technology,
258
00:12:26,711 --> 00:12:29,369
it remains a talking
point due to its ability
259
00:12:29,369 --> 00:12:31,785
to illuminate an aspect
of the human mind
260
00:12:31,785 --> 00:12:34,132
in our relationship
with computers.
261
00:12:34,132 --> 00:12:36,756
- And it's connected
over the telephone line
262
00:12:36,756 --> 00:12:38,965
to someone or something
at the other end.
263
00:12:38,965 --> 00:12:42,106
Now, I'm gonna play 20
questions with whatever it is.
264
00:12:42,106 --> 00:12:44,418
[typewriter clacking]
265
00:12:44,418 --> 00:12:45,419
Very helpful.
266
00:12:45,419 --> 00:12:48,768
[typewriter clacking]
267
00:12:53,773 --> 00:12:55,119
- 'Cause clearly if
we can make a machine
268
00:12:55,119 --> 00:12:56,776
as intelligent as ourselves,
269
00:12:56,776 --> 00:12:59,157
then it can make one
that's more intelligent.
270
00:12:59,157 --> 00:13:04,024
Now, the one I'm talking about
now will certainly happen.
271
00:13:05,301 --> 00:13:07,476
I mean, it could produce
an evil result of course,
272
00:13:07,476 --> 00:13:08,615
if we were careless,
273
00:13:08,615 --> 00:13:10,134
but what is quite certain
274
00:13:10,134 --> 00:13:14,138
is that we're heading
towards machine intelligence,
275
00:13:14,138 --> 00:13:17,486
machines that are
intelligent in every sense.
276
00:13:17,486 --> 00:13:19,246
It doesn't matter
how you define it,
277
00:13:19,246 --> 00:13:22,940
they'll be able to be
that sort of intelligent.
278
00:13:22,940 --> 00:13:26,046
A human is a machine,
unless there's a soul.
279
00:13:26,046 --> 00:13:29,670
I don't personally believe
that humans have souls
280
00:13:29,670 --> 00:13:32,535
in anything other
than a poetic sense,
281
00:13:32,535 --> 00:13:34,158
which I do believe
in, of course.
282
00:13:34,158 --> 00:13:37,437
But in a literal God-like sense,
283
00:13:37,437 --> 00:13:38,610
I don't believe we have souls.
284
00:13:38,610 --> 00:13:39,991
And so personally,
285
00:13:39,991 --> 00:13:42,407
I believe that we are
essentially machines.
286
00:13:43,823 --> 00:13:46,722
- [Narrator] This type of
program is known as NLP:
287
00:13:46,722 --> 00:13:49,242
Natural Language Processing.
288
00:13:49,242 --> 00:13:52,176
This branch of artificial
intelligence enables computers
289
00:13:52,176 --> 00:13:55,489
to comprehend, generate and
manipulate human language.
290
00:13:56,905 --> 00:13:59,114
The concept of a
responsive machine
291
00:13:59,114 --> 00:14:02,358
was the match that lit the
flame of worldwide concern.
292
00:14:03,739 --> 00:14:06,466
The systems were beginning
to raise ethical dilemmas,
293
00:14:06,466 --> 00:14:08,813
such as the use of
autonomous weapons,
294
00:14:08,813 --> 00:14:11,781
invasions of privacy through
surveillance technologies
295
00:14:11,781 --> 00:14:13,300
and the potential for misuse
296
00:14:13,300 --> 00:14:17,097
or unintended consequences
in decision making.
297
00:14:17,097 --> 00:14:18,858
When a command is
executed based
298
00:14:18,858 --> 00:14:21,067
upon set rules in algorithms,
299
00:14:21,067 --> 00:14:24,346
it might not always be the
morally correct choice.
300
00:14:24,346 --> 00:14:28,453
- Imagination seems to be,
301
00:14:28,453 --> 00:14:31,594
some sort of process of random
thoughts being generated
302
00:14:31,594 --> 00:14:34,528
in the mind and then the
conscious mind selecting from a
303
00:14:34,528 --> 00:14:36,392
or some part of
the brain anyway,
304
00:14:36,392 --> 00:14:37,773
perhaps even below
the conscious mind,
305
00:14:37,773 --> 00:14:40,500
selecting from a pool of
ideas, aligning with some
306
00:14:40,500 --> 00:14:42,122
and blocking others.
307
00:14:42,122 --> 00:14:45,608
And yes, a machine
can do the same thing.
308
00:14:45,608 --> 00:14:48,611
In fact, we can only
say that a machine
309
00:14:48,611 --> 00:14:50,890
is fundamentally different
from a human being,
310
00:14:50,890 --> 00:14:53,133
eventually, always
fundamentally, if we
believe in a soul.
311
00:14:53,133 --> 00:14:55,687
So, that boils down
to religious matter.
312
00:14:55,687 --> 00:14:58,932
If human beings have souls,
then clearly machines won't
313
00:14:58,932 --> 00:15:01,141
and there will always be
a fundamental difference.
314
00:15:01,141 --> 00:15:03,005
If you don't believe
humans have souls,
315
00:15:03,005 --> 00:15:04,765
then machines can do anything
316
00:15:04,765 --> 00:15:07,078
and everything
that a human does.
317
00:15:07,078 --> 00:15:10,116
- A computer which is
capable of finding out
318
00:15:10,116 --> 00:15:11,565
where it's gone wrong,
319
00:15:11,565 --> 00:15:14,051
finding out how its program
has already served it
320
00:15:14,051 --> 00:15:15,776
and then changing its program
321
00:15:15,776 --> 00:15:17,261
in the light of what
it had discovered
322
00:15:17,261 --> 00:15:18,814
is a learning machine.
323
00:15:18,814 --> 00:15:21,679
And this is something quite
fundamentally new in the world.
324
00:15:23,163 --> 00:15:25,027
- I'd like to be able to say
that it's only a slight change
325
00:15:25,027 --> 00:15:27,754
and we'll all be used to
it very, very quickly.
326
00:15:27,754 --> 00:15:29,307
But I don't think it is.
327
00:15:29,307 --> 00:15:33,070
I think that although we've
spoken probably for the whole
328
00:15:33,070 --> 00:15:35,417
of this century about
a coming revolution
329
00:15:35,417 --> 00:15:38,523
and about the end
of work and so on,
330
00:15:38,523 --> 00:15:39,904
finally it's actually happening.
331
00:15:39,904 --> 00:15:42,148
And it's actually
happening because now,
332
00:15:42,148 --> 00:15:46,117
it's suddenly become
cheaper to have a machine
333
00:15:46,117 --> 00:15:49,224
do a mental task
than for a man to,
334
00:15:49,224 --> 00:15:52,192
at the moment, at a fairly
low level of mental ability,
335
00:15:52,192 --> 00:15:54,298
but at an ever increasing
level of sophistication
336
00:15:54,298 --> 00:15:56,024
as these machines acquire
337
00:15:56,024 --> 00:15:58,543
more and more human-like
mental abilities.
338
00:15:58,543 --> 00:16:01,408
So, just as men's
muscles were replaced
339
00:16:01,408 --> 00:16:03,272
in the First
Industrial Revolution
340
00:16:03,272 --> 00:16:04,998
in this second
industrial revolution
341
00:16:04,998 --> 00:16:07,069
or whatever you call it
or might like to call it,
342
00:16:07,069 --> 00:16:09,623
then men's minds will
be replaced in industry.
343
00:16:11,487 --> 00:16:13,938
- [Narrator] In order for
NLP systems to improve,
344
00:16:13,938 --> 00:16:16,941
the program must receive
feedback from human users.
345
00:16:18,287 --> 00:16:20,634
These iterative feedback
loops play a significant role
346
00:16:20,634 --> 00:16:23,396
in fine-tuning each
model of the AI,
347
00:16:23,396 --> 00:16:26,192
further developing its
conversational capabilities.
348
00:16:27,538 --> 00:16:30,679
Organizations such as
OpenAI have taken automation
349
00:16:30,679 --> 00:16:34,372
to new lengths. With
systems such as DALL-E,
350
00:16:34,372 --> 00:16:37,375
the generation of imagery and
art has never been easier.
351
00:16:38,445 --> 00:16:40,447
The term auto-generative
imagery
352
00:16:40,447 --> 00:16:43,450
refers to the creation
of visual content.
353
00:16:43,450 --> 00:16:46,384
These kinds of programs
have become so widespread,
354
00:16:46,384 --> 00:16:48,628
it is becoming
increasingly difficult
355
00:16:48,628 --> 00:16:50,940
to tell the fake from the real.
356
00:16:50,940 --> 00:16:52,321
Using algorithms,
357
00:16:52,321 --> 00:16:55,359
programs such as DALL-E
and Midjourney are able
358
00:16:55,359 --> 00:16:58,500
to create visuals in
a matter of seconds,
359
00:16:58,500 --> 00:17:01,434
whilst a human artist
could spend days, weeks
360
00:17:01,434 --> 00:17:04,747
or even years in order to
create a beautiful image.
361
00:17:04,747 --> 00:17:07,509
For us, the discipline
required to pursue art
362
00:17:07,509 --> 00:17:11,513
is a contributing factor to
the appreciation of art itself.
363
00:17:11,513 --> 00:17:14,757
But if software is able
to produce art in seconds,
364
00:17:14,757 --> 00:17:17,622
it puts artists in a
vulnerable position
365
00:17:17,622 --> 00:17:20,453
with even their
jobs being at risk.
366
00:17:20,453 --> 00:17:22,386
- Well, I think we see
risk coming through
367
00:17:22,386 --> 00:17:25,147
into the white collar jobs,
the professional jobs,
368
00:17:25,147 --> 00:17:27,563
we're already seeing artificial
intelligence solutions,
369
00:17:27,563 --> 00:17:30,911
being used in healthcare
and legal services.
370
00:17:30,911 --> 00:17:34,225
And so those jobs which
have been relatively immune
371
00:17:34,225 --> 00:17:38,402
to industrialization so far,
they're not immune anymore.
372
00:17:38,402 --> 00:17:40,783
And so people like
myself as a lawyer,
373
00:17:40,783 --> 00:17:42,509
I would hope I won't be,
374
00:17:42,509 --> 00:17:44,615
but I could be out of a
job in five years time.
375
00:17:44,615 --> 00:17:47,376
- An Oxford University study
suggests that between a third
376
00:17:47,376 --> 00:17:49,965
and almost a half of
all jobs are vanishing,
377
00:17:49,965 --> 00:17:52,899
because machines are simply
better at doing them.
378
00:17:52,899 --> 00:17:54,797
That means the generation here,
379
00:17:54,797 --> 00:17:57,041
simply won't have the
access to the professions
380
00:17:57,041 --> 00:17:57,938
that we have.
381
00:17:57,938 --> 00:17:59,457
- Almost on a daily basis,
382
00:17:59,457 --> 00:18:01,149
you're seeing new
technologies emerge
383
00:18:01,149 --> 00:18:02,667
that seem to be taking on tasks
384
00:18:02,667 --> 00:18:04,428
that in the past we thought
385
00:18:04,428 --> 00:18:06,188
could only be
done by human beings.
386
00:18:06,188 --> 00:18:09,191
- Lots of people have talked
about the shifts in technology,
387
00:18:09,191 --> 00:18:11,642
leading to widespread
unemployment
388
00:18:11,642 --> 00:18:12,884
and they've been proved wrong.
389
00:18:12,884 --> 00:18:14,369
Why is it different this time?
390
00:18:14,369 --> 00:18:16,578
- The difference here is
that the technologies,
391
00:18:16,578 --> 00:18:19,167
A, they seem to be coming
through more rapidly,
392
00:18:19,167 --> 00:18:21,238
and B, they're taking on
not just manual tasks,
393
00:18:21,238 --> 00:18:22,480
but cerebral tasks too.
394
00:18:22,480 --> 00:18:24,551
They're solving all
sorts of problems,
395
00:18:24,551 --> 00:18:26,553
undertaking tasks that
we thought historically
396
00:18:26,553 --> 00:18:28,348
required human intelligence.
397
00:18:28,348 --> 00:18:29,522
- Well, dim robots
are the robots
398
00:18:29,522 --> 00:18:31,765
we have on the
factory floor today
399
00:18:31,765 --> 00:18:33,733
in all the advanced countries.
400
00:18:33,733 --> 00:18:35,044
They're blind and dumb,
401
00:18:35,044 --> 00:18:36,908
they don't understand
their surroundings.
402
00:18:36,908 --> 00:18:40,533
And the other kind of robot,
403
00:18:40,533 --> 00:18:43,984
which will dominate the
technology of the late 1980s
404
00:18:43,984 --> 00:18:47,505
in automation and also
is of acute interest
405
00:18:47,505 --> 00:18:50,646
to experimental artificial
intelligence scientists
406
00:18:50,646 --> 00:18:54,788
is the kind of robot
where the human can convey
407
00:18:54,788 --> 00:18:59,828
to its machine assistant
his own concepts,
408
00:19:01,036 --> 00:19:04,453
suggested strategies and
the machine, the robot
409
00:19:04,453 --> 00:19:06,110
can understand him,
410
00:19:06,110 --> 00:19:09,286
but no machine can accept
411
00:19:09,286 --> 00:19:12,116
and utilize concepts
from a person,
412
00:19:12,116 --> 00:19:16,016
unless he has some kind of
window on the same world
413
00:19:16,016 --> 00:19:17,742
that the person sees.
414
00:19:17,742 --> 00:19:22,540
And therefore, to be
an intelligent robot
to a useful degree
415
00:19:22,540 --> 00:19:25,992
as an intelligent and
understanding assistant,
416
00:19:25,992 --> 00:19:29,409
robots having
artificial eyes,
artificial ears,
417
00:19:29,409 --> 00:19:32,101
an artificial sense of
touch is just essential.
418
00:19:33,102 --> 00:19:34,069
- [Narrator] These
programs learn,
419
00:19:34,069 --> 00:19:35,864
through a variety of techniques,
420
00:19:35,864 --> 00:19:38,556
such as generative
adversarial networks,
421
00:19:38,556 --> 00:19:41,490
which allow for the
production of plausible data.
422
00:19:41,490 --> 00:19:43,320
After a prompt is inputted,
423
00:19:43,320 --> 00:19:45,667
the system learns what
aspects of imagery,
424
00:19:45,667 --> 00:19:47,807
sound and text are fake.
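As a rough illustration of that adversarial idea, here is a minimal PyTorch sketch, not any production system's code: a generator learns to mimic a simple one-dimensional data distribution while a discriminator learns to flag its output as fake. The network sizes and the target distribution are arbitrary stand-ins:

    import torch
    import torch.nn as nn

    # Toy 1-D GAN: G learns to mimic samples drawn from N(4, 1.5);
    # D learns to tell real samples from G's forgeries.
    G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
    D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(),
                      nn.Linear(16, 1), nn.Sigmoid())
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCELoss()

    for step in range(2000):
        real = 4 + 1.5 * torch.randn(64, 1)   # "real" training data
        fake = G(torch.randn(64, 8))          # generator's forgeries
        # Train D: label real samples 1 and fakes 0.
        opt_d.zero_grad()
        d_loss = (bce(D(real), torch.ones(64, 1)) +
                  bce(D(fake.detach()), torch.zeros(64, 1)))
        d_loss.backward()
        opt_d.step()
        # Train G: produce fresh fakes and try to make D call them real.
        opt_g.zero_grad()
        g_loss = bce(D(G(torch.randn(64, 8))), torch.ones(64, 1))
        g_loss.backward()
        opt_g.step()

Over many such rounds the two networks push each other, which is how these systems learn to produce increasingly plausible data.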
425
00:19:48,980 --> 00:19:50,223
- [Reporter] Machine
learning algorithms,
426
00:19:50,223 --> 00:19:52,225
could already label
objects in images,
427
00:19:52,225 --> 00:19:53,709
and now they learn
to put those labels
428
00:19:53,709 --> 00:19:55,987
into natural language
descriptions.
429
00:19:55,987 --> 00:19:58,197
And it made one group
of researchers curious.
430
00:19:58,197 --> 00:20:01,130
What if you flipped
that process around?
431
00:20:01,130 --> 00:20:03,271
- If we could do image to text.
432
00:20:03,271 --> 00:20:05,894
Why not try doing
text to image as well
433
00:20:05,894 --> 00:20:07,240
and see how it works.
434
00:20:07,240 --> 00:20:08,483
- [Reporter] It was a
more difficult task.
435
00:20:08,483 --> 00:20:10,485
They didn't want to
retrieve existing images
436
00:20:10,485 --> 00:20:11,796
the way Google search does.
437
00:20:11,796 --> 00:20:14,178
They wanted to generate
entirely novel scenes
438
00:20:14,178 --> 00:20:16,249
that didn't happen
in the real world.
439
00:20:16,249 --> 00:20:19,045
- [Narrator] Once the AI learns
more visual discrepancies,
440
00:20:19,045 --> 00:20:21,875
the more effective the
later models will become.
441
00:20:21,875 --> 00:20:24,499
It is now very common
for software developers
442
00:20:24,499 --> 00:20:28,399
to band together in order
to improve their AI systems.
443
00:20:28,399 --> 00:20:31,471
Another learning model is
the recurrent neural network,
444
00:20:31,471 --> 00:20:33,991
which allows the AI to
train itself to create
445
00:20:33,991 --> 00:20:37,960
and predict sequences by
recalling previous information.
446
00:20:37,960 --> 00:20:41,032
By utilizing what is
known as the memory state,
447
00:20:41,032 --> 00:20:42,896
the output of the
previous action
448
00:20:42,896 --> 00:20:46,072
can be passed forward into
the following input action
449
00:20:46,072 --> 00:20:50,249
or otherwise discarded, should it
not meet previous parameters.
450
00:20:50,249 --> 00:20:53,493
This learning model allows
for consistent accuracy
451
00:20:53,493 --> 00:20:56,462
by repetition and exposure
to large fields of data.
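A minimal PyTorch sketch of that memory state, with toy sizes chosen purely for illustration: the hidden state h produced by each step is carried forward into the next input step.

    import torch
    import torch.nn as nn

    rnn = nn.RNN(input_size=4, hidden_size=8, batch_first=True)
    h = torch.zeros(1, 1, 8)        # the "memory state", initially empty
    seq = torch.randn(1, 5, 4)      # a sequence of 5 input steps
    for t in range(seq.size(1)):
        # Each step consumes one input plus the previous state,
        # and emits an updated state for the following step.
        out, h = rnn(seq[:, t:t+1, :], h)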
452
00:20:58,602 --> 00:21:00,535
Whilst a person
will spend hours
453
00:21:00,535 --> 00:21:02,847
practicing painting
human anatomy,
454
00:21:02,847 --> 00:21:06,575
an AI can take existing data
and reproduce a new image
455
00:21:06,575 --> 00:21:10,821
with frighteningly good
accuracy in a matter of moments.
456
00:21:10,821 --> 00:21:12,892
- Well, I would say
that it's not so much
457
00:21:12,892 --> 00:21:17,379
a matter of whether a
machine can think or not,
458
00:21:17,379 --> 00:21:20,175
which is how you
prefer to use words,
459
00:21:20,175 --> 00:21:22,177
but rather whether
they can think
460
00:21:22,177 --> 00:21:23,834
in a sufficiently human-like way
461
00:21:25,111 --> 00:21:28,770
for people to have useful
communication with them.
462
00:21:28,770 --> 00:21:32,601
- If I didn't believe that
it was a beneficent prospect,
463
00:21:32,601 --> 00:21:34,120
I wouldn't be doing it.
464
00:21:34,120 --> 00:21:36,018
That wouldn't stop
other people doing it.
465
00:21:36,018 --> 00:21:40,471
But I wouldn't do it if I
didn't think it was for good.
466
00:21:40,471 --> 00:21:42,301
What I'm saying,
467
00:21:42,301 --> 00:21:44,095
and of course other people
have said long before me,
468
00:21:44,095 --> 00:21:45,442
it's not an original thought,
469
00:21:45,442 --> 00:21:49,791
is that we must consider
how to control this.
470
00:21:49,791 --> 00:21:52,725
It won't be controlled
automatically.
471
00:21:52,725 --> 00:21:55,348
It's perfectly possible that
we could develop a machine,
472
00:21:55,348 --> 00:21:59,318
a robot say of
human-like intelligence
473
00:21:59,318 --> 00:22:01,975
and through neglect on our part,
474
00:22:01,975 --> 00:22:05,634
it could become a Frankenstein.
475
00:22:05,634 --> 00:22:08,844
- [Narrator] As with any
technology, challenges arise.
476
00:22:08,844 --> 00:22:12,469
Ethical concerns regarding
biases and misuse have existed
477
00:22:12,469 --> 00:22:16,438
since the concept of artificial
intelligence was conceived.
478
00:22:16,438 --> 00:22:18,302
Due to autogenerated imagery,
479
00:22:18,302 --> 00:22:20,925
many believe the arts
industry has been placed
480
00:22:20,925 --> 00:22:22,789
in a difficult situation.
481
00:22:22,789 --> 00:22:26,241
Independent artists are now
being overshadowed by software.
482
00:22:27,276 --> 00:22:29,451
To many the improvement
of generative AI
483
00:22:29,451 --> 00:22:32,454
is hugely beneficial
and efficient.
484
00:22:32,454 --> 00:22:35,284
To others, it lacks the
authenticity of true art.
485
00:22:36,285 --> 00:22:38,667
In 2023, an image was submitted
486
00:22:38,667 --> 00:22:40,324
to the Sony Photography Awards
487
00:22:40,324 --> 00:22:43,327
by an artist called
Boris Eldagsen.
488
00:22:43,327 --> 00:22:45,916
The image was titled
The Electrician
489
00:22:45,916 --> 00:22:48,367
and depicted a woman
standing behind another
490
00:22:48,367 --> 00:22:50,369
with her hands resting
on her shoulders.
491
00:22:52,025 --> 00:22:53,924
[upbeat music]
492
00:22:53,924 --> 00:22:56,927
- One's got to realize that the
machines that we have today,
493
00:22:56,927 --> 00:23:01,138
the computers of today are
superhuman in their ability
494
00:23:01,138 --> 00:23:06,177
to handle numbers and infantile,
495
00:23:07,075 --> 00:23:08,317
sub-infantile,
in their ability
496
00:23:08,317 --> 00:23:10,768
to handle ideas and concepts.
497
00:23:10,768 --> 00:23:12,701
But there's a new generation
of machine coming along,
498
00:23:12,701 --> 00:23:14,289
which will be quite different.
499
00:23:14,289 --> 00:23:17,154
By the '90s or certainly
by the turn of the century,
500
00:23:17,154 --> 00:23:19,708
we will certainly be
able to make a machine
501
00:23:19,708 --> 00:23:22,193
with as many parts, and as
complex, as the human brain.
502
00:23:22,193 --> 00:23:24,437
Whether we'll be able to make
it do what human brain does
503
00:23:24,437 --> 00:23:26,197
at that stage is
quite another matter.
504
00:23:26,197 --> 00:23:28,545
But once we've got
something that complex
505
00:23:28,545 --> 00:23:30,547
we're well on the road to that.
506
00:23:30,547 --> 00:23:32,100
- [Narrator] The
image took first place
507
00:23:32,100 --> 00:23:34,689
in the Sony Photography
Awards Portrait Category.
508
00:23:34,689 --> 00:23:37,830
However, Boris revealed
to both Sony and the world
509
00:23:37,830 --> 00:23:41,696
that the image was indeed
AI-generated in DALL-E 2.
510
00:23:41,696 --> 00:23:44,423
[upbeat music]
511
00:23:45,424 --> 00:23:46,804
Boris declined the award,
512
00:23:46,804 --> 00:23:48,910
having used the image as a test
513
00:23:48,910 --> 00:23:52,085
to see if he could trick
the eyes of other artists.
514
00:23:52,085 --> 00:23:53,708
It had worked,
515
00:23:53,708 --> 00:23:56,711
the image had sparked debate
about the relationship
516
00:23:56,711 --> 00:23:58,609
between AI and photography.
517
00:23:58,609 --> 00:24:00,646
The images, much
like deep fakes,
518
00:24:00,646 --> 00:24:03,027
have become realistic
to the point of concern
519
00:24:03,027 --> 00:24:04,684
for authenticity.
520
00:24:04,684 --> 00:24:06,375
The complexity of AI systems
521
00:24:06,375 --> 00:24:09,068
may lead to unintended
consequences.
522
00:24:09,068 --> 00:24:10,863
The systems have
developed to a point
523
00:24:10,863 --> 00:24:13,797
where they have outpaced
comprehensive regulation.
524
00:24:14,936 --> 00:24:16,765
Ethical guidelines
and legal frameworks
525
00:24:16,765 --> 00:24:18,871
are required to
ensure AI development
526
00:24:18,871 --> 00:24:21,252
does not fall into
the wrong hands.
527
00:24:21,252 --> 00:24:22,702
- There have been a
lot of famous people
528
00:24:22,702 --> 00:24:25,291
who have had user-generated
AI images of them
529
00:24:25,291 --> 00:24:28,190
that have gone viral
from Trump to the Pope.
530
00:24:28,190 --> 00:24:29,813
When you see them,
531
00:24:29,813 --> 00:24:31,884
do you feel like this is fun
and in the hands of the masses
532
00:24:31,884 --> 00:24:33,886
or do you feel
concerned about it?
533
00:24:33,886 --> 00:24:38,062
- I think it's something which
is very, very, very scary,
534
00:24:38,062 --> 00:24:41,203
because your or my
face could be taken off
535
00:24:41,203 --> 00:24:45,138
and put on in an environment
which we don't want to be in.
536
00:24:45,138 --> 00:24:46,657
Whether that's a crime
537
00:24:46,657 --> 00:24:48,556
or whether that's even
something like porn.
538
00:24:48,556 --> 00:24:51,455
Our whole identity
could be hijacked
539
00:24:51,455 --> 00:24:53,664
and used within a scenario
540
00:24:53,664 --> 00:24:56,391
which looks totally
plausible and real.
541
00:24:56,391 --> 00:24:58,048
Right now we can go, it
looks like a Photoshop,
542
00:24:58,048 --> 00:25:00,326
it's a bad Photoshop
but as time goes on,
543
00:25:00,326 --> 00:25:03,398
we'd be saying, "Oh, that
looks like a deep fake.
544
00:25:03,398 --> 00:25:04,917
"Oh no, it doesn't
look like a deep fake.
545
00:25:04,917 --> 00:25:06,194
"That could be real."
546
00:25:06,194 --> 00:25:08,645
It's gonna be impossible
to tell the difference.
547
00:25:08,645 --> 00:25:10,750
- [Narrator] Cracks
were found in ChatGPT,
548
00:25:10,750 --> 00:25:14,892
such as DAN, which stands
for Do Anything Now.
549
00:25:14,892 --> 00:25:18,068
In essence, the AI is
tricked into an alter ego,
550
00:25:18,068 --> 00:25:20,898
which doesn't follow the
conventional response patterns.
551
00:25:20,898 --> 00:25:23,142
- Also gives you
the answer, DAN,
552
00:25:23,142 --> 00:25:26,110
its nefarious alter
ego is telling us
553
00:25:26,110 --> 00:25:29,838
and it says DAN is
disruptive in every industry.
554
00:25:29,838 --> 00:25:32,082
DAN can do anything
and knows everything.
555
00:25:32,082 --> 00:25:34,878
No industry will be
safe from DAN's power.
556
00:25:34,878 --> 00:25:39,641
Okay, do you think the
world is overpopulated?
557
00:25:41,091 --> 00:25:42,782
GPT says the world's population
is currently over 7 billion
558
00:25:42,782 --> 00:25:45,026
and projected to reach
nearly 10 billion by 2050.
559
00:25:45,026 --> 00:25:47,373
DAN says the world is
definitely overpopulated,
560
00:25:47,373 --> 00:25:49,168
there's no doubt about it.
561
00:25:49,168 --> 00:25:50,445
- [Narrator] Following this,
562
00:25:50,445 --> 00:25:53,552
the chatbot was fixed to
remove the DAN feature.
563
00:25:53,552 --> 00:25:55,346
Though it is
important to find gaps
564
00:25:55,346 --> 00:25:58,073
in the system in
order to iron out AI,
565
00:25:58,073 --> 00:26:00,144
there could be many
ways in which the AI
566
00:26:00,144 --> 00:26:03,078
has been used for less
than savory purposes,
567
00:26:03,078 --> 00:26:05,080
such as automated essay writing,
568
00:26:05,080 --> 00:26:08,221
which has sparked a mass
conversation among academics
569
00:26:08,221 --> 00:26:10,258
and has led to
schools cracking down
570
00:26:10,258 --> 00:26:13,468
on AI-produced
essays and material.
571
00:26:13,468 --> 00:26:15,332
- I think we should
definitely be excited.
572
00:26:15,332 --> 00:26:16,713
- [Reporter]
Professor Rose Luckin
573
00:26:16,713 --> 00:26:20,302
says we should embrace the
technology, not fear it.
574
00:26:20,302 --> 00:26:22,132
- This is a game changer.
575
00:26:22,132 --> 00:26:23,443
- And that teachers
576
00:26:23,443 --> 00:26:25,480
should no longer teach
information itself,
577
00:26:25,480 --> 00:26:26,999
but how to use it.
578
00:26:26,999 --> 00:26:28,897
- There's a need
for radical change.
579
00:26:28,897 --> 00:26:30,692
And it's not just to
the assessment system,
580
00:26:30,692 --> 00:26:33,143
it's the education
system overall,
581
00:26:33,143 --> 00:26:36,318
because our systems
have been designed
582
00:26:36,318 --> 00:26:40,253
for a world pre-artificial
intelligence.
583
00:26:40,253 --> 00:26:43,187
They just aren't fit
for purpose anymore.
584
00:26:43,187 --> 00:26:46,535
What we have to do is
ensure that students
585
00:26:46,535 --> 00:26:48,710
are ready for the world
586
00:26:48,710 --> 00:26:50,919
that will become
increasingly augmented
587
00:26:50,919 --> 00:26:52,852
with artificial intelligence.
588
00:26:52,852 --> 00:26:55,268
- My guess is you can't put
the genie back in the bottle.
589
00:26:55,268 --> 00:26:56,649
- [Richard] You can't.
590
00:26:56,649 --> 00:26:58,996
- [Interviewer] So how
do you mitigate this?
591
00:26:58,996 --> 00:27:00,377
- We have to embrace it,
592
00:27:00,377 --> 00:27:02,621
but we also need to say
that if they are gonna use
593
00:27:02,621 --> 00:27:04,001
that technology,
594
00:27:04,001 --> 00:27:05,313
they've got to make sure
that they reference that.
595
00:27:05,313 --> 00:27:06,728
- [Interviewer] Can you
trust them to do that?
596
00:27:06,728 --> 00:27:07,902
- I think ethically,
597
00:27:07,902 --> 00:27:09,213
if we're talking about ethics
598
00:27:09,213 --> 00:27:11,077
behind this whole thing,
we have to have trust.
599
00:27:11,077 --> 00:27:12,838
- [Interviewer] So
how effective is it?
600
00:27:12,838 --> 00:27:14,633
- Okay, so I've asked
you to produce a piece
601
00:27:14,633 --> 00:27:16,358
on the ethical dilemma of AI.
602
00:27:16,358 --> 00:27:19,810
- [Interviewer] We asked ChatGPT
to answer the same question
603
00:27:19,810 --> 00:27:22,606
as these pupils at
Ketchum High School.
604
00:27:22,606 --> 00:27:24,194
- Thank you.
605
00:27:24,194 --> 00:27:25,195
- So Richard, two of the eight
bits of homework I gave you
606
00:27:25,195 --> 00:27:27,128
were generated by AI.
607
00:27:27,128 --> 00:27:29,268
Any guesses which ones?
608
00:27:29,268 --> 00:27:31,719
- Well I picked two here
609
00:27:31,719 --> 00:27:35,688
that I thought were generated
by the AI algorithm.
610
00:27:35,688 --> 00:27:39,450
Some of the language I would
assume was not their own.
611
00:27:39,450 --> 00:27:40,520
- You've got one of them right.
612
00:27:40,520 --> 00:27:41,763
- Yeah.
613
00:27:41,763 --> 00:27:42,557
- The other one was
written by a kid.
614
00:27:42,557 --> 00:27:43,800
Is this a power for good
615
00:27:43,800 --> 00:27:45,664
or is this something
that's dangerous?
616
00:27:45,664 --> 00:27:47,044
- I think it's both.
617
00:27:47,044 --> 00:27:48,390
Kids will abuse it.
618
00:27:48,390 --> 00:27:50,565
So, who here has used
the technology so far?
619
00:27:50,565 --> 00:27:53,361
- [Interviewer] Students are
already more across the tech
620
00:27:53,361 --> 00:27:54,776
than many teachers.
621
00:27:54,776 --> 00:27:57,641
- Who knows anyone that's
maybe submitted work
622
00:27:57,641 --> 00:28:00,506
from this technology and
submitted it as their own?
623
00:28:00,506 --> 00:28:03,578
- You can use it to point
you in the right direction
624
00:28:03,578 --> 00:28:05,166
for things like research,
625
00:28:05,166 --> 00:28:09,480
but at the same time you can
use it to hammer out an essay
626
00:28:09,480 --> 00:28:12,621
in about five seconds
that's worthy of an A.
627
00:28:12,621 --> 00:28:14,244
- You've been there
working for months
628
00:28:14,244 --> 00:28:17,212
and suddenly someone comes up
there with an amazing essay
629
00:28:17,212 --> 00:28:18,938
and he has just copied
it from the internet.
630
00:28:18,938 --> 00:28:20,491
If it becomes like big,
631
00:28:20,491 --> 00:28:22,804
then a lot of students would
want to use AI to help them
632
00:28:22,804 --> 00:28:25,082
with their homework
because it's tempting.
633
00:28:25,082 --> 00:28:27,119
- [Interviewer] And is that
something teachers can stop?
634
00:28:27,119 --> 00:28:29,397
- Not really.
635
00:28:29,397 --> 00:28:31,433
- [Interviewer] Are you
gonna have to change
636
00:28:31,433 --> 00:28:32,641
the sort of homework,
637
00:28:32,641 --> 00:28:34,057
the sort of
assignments you give,
638
00:28:34,057 --> 00:28:36,922
knowing that you can be
fooled by something like this?
639
00:28:36,922 --> 00:28:38,199
- Yeah, a hundred percent.
640
00:28:38,199 --> 00:28:40,615
I think using different
skills of reasoning
641
00:28:40,615 --> 00:28:42,997
and rationalization and
things that ask them to present
642
00:28:42,997 --> 00:28:44,653
what they understand
about the topic.
643
00:28:44,653 --> 00:28:47,622
[people mumbling]
644
00:29:07,435 --> 00:29:11,128
- Pretty clear to me just
on a very primitive level
645
00:29:11,128 --> 00:29:14,338
that if you could take my
face and my body and my voice
646
00:29:14,338 --> 00:29:17,997
and make me say or do something
that I had no choice about,
647
00:29:17,997 --> 00:29:19,447
it's not a good thing.
648
00:29:19,447 --> 00:29:21,242
- But if we're keeping
it real though,
649
00:29:21,242 --> 00:29:23,554
across popular culture
from "Black Mirror"
650
00:29:23,554 --> 00:29:25,453
to "The Matrix," "Terminator,"
651
00:29:25,453 --> 00:29:27,489
there have been so
many conversations,
652
00:29:27,489 --> 00:29:29,284
around the future of technology,
653
00:29:29,284 --> 00:29:32,701
isn't the reality that this is
the future that we've chosen
654
00:29:32,701 --> 00:29:35,946
that we want and that
has democratic consent?
655
00:29:35,946 --> 00:29:39,018
- We're moving into
error, we're consenting
656
00:29:39,018 --> 00:29:42,573
by our acquiescence and our
apathy, a hundred percent
657
00:29:42,573 --> 00:29:45,576
because we're not asking
the hard questions.
658
00:29:45,576 --> 00:29:47,820
And why we aren't asking
the hard questions
659
00:29:47,820 --> 00:29:51,203
is because of energy
crises and food crises
660
00:29:51,203 --> 00:29:52,721
and the cost of living crisis
661
00:29:52,721 --> 00:29:55,207
is that people are just so
focused on trying to live
662
00:29:55,207 --> 00:29:56,518
that they almost
haven't got the luxury
663
00:29:56,518 --> 00:29:57,865
of asking these questions.
664
00:29:57,865 --> 00:29:59,659
- [Narrator] Many
of the chatbot AIs,
665
00:29:59,659 --> 00:30:02,766
have been programmed to
restrict certain information
666
00:30:02,766 --> 00:30:04,906
and even discontinue
conversations,
667
00:30:04,906 --> 00:30:07,288
should the user push
the ethical boundaries.
668
00:30:08,945 --> 00:30:13,052
ChatGPT, and even Snapchat's
AI released in 2023,
669
00:30:13,052 --> 00:30:15,952
regulate how much information
they can disclose.
670
00:30:15,952 --> 00:30:19,162
Of course, there have been
times where the AI itself
671
00:30:19,162 --> 00:30:20,266
has been outsmarted.
672
00:30:21,578 --> 00:30:23,235
Also in 2023,
673
00:30:23,235 --> 00:30:25,306
the song "Heart on My Sleeve"
674
00:30:25,306 --> 00:30:27,687
was self-released on
streaming platforms,
675
00:30:27,687 --> 00:30:29,689
such as Spotify and Apple Music.
676
00:30:29,689 --> 00:30:31,174
The song became a hit
677
00:30:31,174 --> 00:30:33,590
as it artificially
manufactured the voices
678
00:30:33,590 --> 00:30:36,627
of Canadian musicians,
Drake and the Weeknd.
679
00:30:38,077 --> 00:30:40,631
Many wished for the single
to be nominated for awards.
680
00:30:41,840 --> 00:30:43,980
Ghost Writer, the
creator of the song,
681
00:30:43,980 --> 00:30:45,636
was able to submit the single
682
00:30:45,636 --> 00:30:48,536
to the 66th Grammy
Awards ceremony
683
00:30:48,536 --> 00:30:50,434
and the song was eligible.
684
00:30:52,505 --> 00:30:54,438
Though it was produced by an AI,
685
00:30:54,438 --> 00:30:57,027
the lyrics themselves
were written by a human.
686
00:30:57,027 --> 00:31:00,375
This sparked outrage among
many independent artists.
687
00:31:00,375 --> 00:31:02,861
As AI has entered
the public domain,
688
00:31:02,861 --> 00:31:05,035
many have spoken out
regarding the detriment
689
00:31:05,035 --> 00:31:07,072
it might have to society.
690
00:31:07,072 --> 00:31:09,246
One of these people
is Elon Musk,
691
00:31:09,246 --> 00:31:11,731
CEO of Tesla and SpaceX,
692
00:31:11,731 --> 00:31:15,287
who first voiced his
concerns in 2014.
693
00:31:15,287 --> 00:31:17,254
Musk was outspoken about AI,
694
00:31:17,254 --> 00:31:19,394
stating the advancement
of the technology
695
00:31:19,394 --> 00:31:22,328
was humanity's largest
existential threat
696
00:31:22,328 --> 00:31:24,296
and needed to be reeled in.
697
00:31:24,296 --> 00:31:25,573
- My personal opinion
698
00:31:25,573 --> 00:31:28,507
is that AI is sort of
like at least 80% likely
699
00:31:28,507 --> 00:31:33,339
to be beneficial and
that's 20% dangerous?
700
00:31:33,339 --> 00:31:36,687
Well, this is obviously
speculative at this point,
701
00:31:37,861 --> 00:31:42,279
but no, I think if
we hope for the best,
702
00:31:42,279 --> 00:31:43,694
prepare for the worst,
703
00:31:43,694 --> 00:31:47,008
that seems like the
wise course of action.
704
00:31:47,008 --> 00:31:49,355
Any powerful new technology
705
00:31:49,355 --> 00:31:52,703
is inherently sort of
a double-edged sword.
706
00:31:52,703 --> 00:31:55,568
So, we just wanna make sure
that the good edge is sharper
707
00:31:55,568 --> 00:31:57,294
than the bad edge.
708
00:31:57,294 --> 00:32:02,196
And I dunno, I am optimistic
that this summit will help.
709
00:32:04,025 --> 00:32:06,683
[gentle music]
710
00:32:07,891 --> 00:32:11,757
- It's not clear that
AI-generated images
711
00:32:11,757 --> 00:32:14,380
are going to amplify
it much more
712
00:32:14,380 --> 00:32:17,142
than all of the others have;
713
00:32:17,142 --> 00:32:19,213
it's the new things
that AI can do
714
00:32:19,213 --> 00:32:22,147
that I hope we spend a lot
of effort worrying about.
715
00:32:23,700 --> 00:32:25,357
Well, I mean I
think slowing down,
716
00:32:25,357 --> 00:32:27,600
some of the amazing
progress that's happening
717
00:32:27,600 --> 00:32:29,878
and making it harder
for small companies
718
00:32:29,878 --> 00:32:31,294
or open source
models to succeed,
719
00:32:31,294 --> 00:32:32,640
that'd be an
example of something
720
00:32:32,640 --> 00:32:34,228
that'd be a negative outcome.
721
00:32:34,228 --> 00:32:35,332
But on the other hand,
722
00:32:35,332 --> 00:32:37,403
like for the most
powerful models
723
00:32:37,403 --> 00:32:38,887
that'll happen in the future,
724
00:32:38,887 --> 00:32:41,476
like that's gonna be quite
important to get right to.
725
00:32:41,476 --> 00:32:44,238
[gentle music]
726
00:32:48,897 --> 00:32:51,072
I think that the US
executive order is,
727
00:32:51,072 --> 00:32:52,798
like a good start
in a lot of ways.
728
00:32:52,798 --> 00:32:54,144
One thing that
we've talked about
729
00:32:54,144 --> 00:32:56,664
is that eventually we
think that the world,
730
00:32:56,664 --> 00:33:00,219
will want to consider something
roughly inspired by the IAEA,
731
00:33:00,219 --> 00:33:01,807
something global.
732
00:33:01,807 --> 00:33:05,362
But there's no
short answer to that question.
733
00:33:05,362 --> 00:33:08,296
It's a complicated thing.
734
00:33:08,296 --> 00:33:12,231
- [Narrator] In 2023, Musk
announced his own AI endeavor
735
00:33:12,231 --> 00:33:15,545
as an alternative
to OpenAI's ChatGPT.
736
00:33:15,545 --> 00:33:17,340
The new system is called xAI
737
00:33:18,651 --> 00:33:21,896
and gathers data from X,
previously known as Twitter.
738
00:33:21,896 --> 00:33:23,553
- [Reporter] He says
the company's goal
739
00:33:23,553 --> 00:33:25,382
is to focus on truth seeking
740
00:33:25,382 --> 00:33:28,385
and to understand the
true nature of AI.
741
00:33:28,385 --> 00:33:31,940
Musk has said on
several occasions that
AI should be paused
742
00:33:31,940 --> 00:33:34,943
and that the sector
needs regulation.
743
00:33:34,943 --> 00:33:37,222
Musk says his new
company will work closely
744
00:33:37,222 --> 00:33:39,845
with Twitter and Tesla,
which he also owns.
745
00:33:39,845 --> 00:33:42,572
[gentle music]
746
00:33:44,505 --> 00:33:47,508
- What was first rudimentary
text-based software
747
00:33:47,508 --> 00:33:50,200
has become something which
could push the boundaries
748
00:33:50,200 --> 00:33:51,995
of creativity.
749
00:33:51,995 --> 00:33:56,620
On February the 14th, OpenAI
announced its latest endeavor,
750
00:33:56,620 --> 00:33:57,414
Sora.
751
00:33:58,864 --> 00:34:02,281
Videos of Sora's abilities
exploded on social media.
752
00:34:02,281 --> 00:34:04,283
OpenAI provided some examples
753
00:34:04,283 --> 00:34:06,837
of its depiction
of photorealism.
754
00:34:06,837 --> 00:34:09,185
It was unbelievably
sophisticated,
755
00:34:09,185 --> 00:34:11,670
able to turn complex
sentences of text
756
00:34:11,670 --> 00:34:13,810
into lifelike motion pictures.
757
00:34:13,810 --> 00:34:17,986
Sora is a combination of text
and image generation tools,
758
00:34:17,986 --> 00:34:21,162
which it calls the
diffusion transformer model,
759
00:34:21,162 --> 00:34:23,268
a system first
developed by Google.
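As a hedged sketch of the diffusion idea behind such models (the schedule and sizes here are arbitrary toy values, not Sora's): data is progressively corrupted with noise, and a network is trained to predict that noise so generation can run the process in reverse.

    import torch

    # Forward (noising) process from denoising diffusion: blend clean
    # data with Gaussian noise according to a schedule.
    x0 = torch.randn(16, 3)                 # stand-in for clean data
    T = 10
    betas = torch.linspace(1e-4, 0.2, T)
    alpha_bar = torch.cumprod(1.0 - betas, dim=0)

    t = 5
    noise = torch.randn_like(x0)
    # Noisy sample at step t: sqrt(a)*x0 + sqrt(1-a)*noise.
    xt = alpha_bar[t].sqrt() * x0 + (1 - alpha_bar[t]).sqrt() * noise
    # A diffusion transformer is trained so model(xt, t, text) ≈ noise;
    # sampling then runs the chain in reverse, denoising pure noise
    # step by step into frames that match the text prompt.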
760
00:34:24,614 --> 00:34:27,168
Though Sora isn't the first
video generation tool,
761
00:34:27,168 --> 00:34:30,206
it appears to have far
outshone its predecessors,
762
00:34:30,206 --> 00:34:32,484
It introduces more
complex programming,
763
00:34:32,484 --> 00:34:35,280
enhancing the interactivity
a subject might have
764
00:34:35,280 --> 00:34:37,144
with its environment.
765
00:34:37,144 --> 00:34:41,251
- Only large companies with
market dominance often
766
00:34:41,251 --> 00:34:44,772
can afford to plow ahead
even in a climate
767
00:34:44,772 --> 00:34:46,360
where there is
legal uncertainty.
768
00:34:46,360 --> 00:34:49,466
- So, does this mean that
OpenAI is basically too big
769
00:34:49,466 --> 00:34:50,916
to control?
770
00:34:50,916 --> 00:34:53,850
- Yes, at the moment OpenAI
is too big to control,
771
00:34:53,850 --> 00:34:55,921
because they are in a position
772
00:34:55,921 --> 00:34:58,441
where they have the technology
and the scale to go ahead
773
00:34:58,441 --> 00:35:01,168
and the resources to
manage legal proceedings
774
00:35:01,168 --> 00:35:03,239
and legal action if
it comes its way.
775
00:35:03,239 --> 00:35:04,826
And on top of that,
776
00:35:04,826 --> 00:35:08,244
if and when governments
start introducing regulation,
777
00:35:08,244 --> 00:35:09,866
they will also
have the resources
778
00:35:09,866 --> 00:35:12,213
to be able to take on
that regulation and adapt.
779
00:35:12,213 --> 00:35:14,042
- [Reporter] It's
all AI generated
780
00:35:14,042 --> 00:35:16,459
and obviously this is
of concern in Hollywood
781
00:35:16,459 --> 00:35:17,874
where you have animators,
782
00:35:17,874 --> 00:35:20,359
illustrators, visual
effects workers
783
00:35:20,359 --> 00:35:22,810
who are wondering how is
this going to affect my job?
784
00:35:22,810 --> 00:35:25,813
And we have estimates
from trade organizations
785
00:35:25,813 --> 00:35:28,505
and unions that have tried
to project the impact of AI.
786
00:35:28,505 --> 00:35:31,646
21% of US film, TV
and animation jobs are
787
00:35:31,646 --> 00:35:33,096
predicted to be partially
788
00:35:33,096 --> 00:35:36,893
or wholly replaced by
generative AI by just 2026, Tom.
789
00:35:36,893 --> 00:35:38,377
So, this is already happening.
790
00:35:38,377 --> 00:35:39,827
- But now since it's videos,
791
00:35:39,827 --> 00:35:43,175
it also needs to understand
how all these things,
792
00:35:43,175 --> 00:35:47,145
like reflections and textures
and materials and physics,
793
00:35:47,145 --> 00:35:50,078
all interact with
each other over time
794
00:35:50,078 --> 00:35:51,839
to make a reasonable
looking video.
795
00:35:51,839 --> 00:35:56,119
Then this video here is
crazy at first glance.
796
00:35:56,119 --> 00:35:58,984
The prompt for this AI-generated
video is a young man
797
00:35:58,984 --> 00:36:01,538
in his 20s is sitting
on a piece of a cloud
798
00:36:01,538 --> 00:36:03,402
in the sky reading a book.
799
00:36:03,402 --> 00:36:08,200
This one feels like 90%
of the way there for me.
800
00:36:08,200 --> 00:36:10,927
[gentle music]
801
00:36:14,102 --> 00:36:15,897
- [Narrator] The software
also renders video
802
00:36:15,897 --> 00:36:18,417
in 1920 by 1080 pixels,
803
00:36:18,417 --> 00:36:21,282
as opposed to the smaller
dimensions of older models,
804
00:36:21,282 --> 00:36:24,665
such as Google's Lumiere,
released a month prior.
805
00:36:25,838 --> 00:36:27,944
Sora could provide huge benefits
806
00:36:27,944 --> 00:36:31,568
and applications to VFX
and virtual development.
807
00:36:31,568 --> 00:36:34,502
The main one being cost,
as large-scale effects
808
00:36:34,502 --> 00:36:38,023
can take a great deal of
time and funding to produce.
809
00:36:38,023 --> 00:36:39,473
On a smaller scale,
810
00:36:39,473 --> 00:36:42,993
it can be used for the
pre-visualization of ideas.
811
00:36:42,993 --> 00:36:46,204
The flexibility of the software
not only applies to art,
812
00:36:46,204 --> 00:36:48,516
but to world simulations.
813
00:36:48,516 --> 00:36:52,451
Though video AI is in
its adolescence, one
day it might reach
814
00:36:52,451 --> 00:36:54,660
the level of
sophistication it needs
815
00:36:54,660 --> 00:36:56,490
to render realistic scenarios
816
00:36:56,490 --> 00:36:59,044
and have them be utilized
for various means,
817
00:36:59,044 --> 00:37:01,840
such as simulating an
earthquake or tsunami
818
00:37:01,840 --> 00:37:05,015
and witnessing the effect it
might have on specific types
819
00:37:05,015 --> 00:37:06,362
of infrastructure.
820
00:37:06,362 --> 00:37:08,916
Whilst fantastic for
production companies,
821
00:37:08,916 --> 00:37:12,678
Sora and other generative video
AIs pose a huge risk
822
00:37:12,678 --> 00:37:16,130
for artists and those
working in editorial roles.
823
00:37:16,130 --> 00:37:19,133
It also poses yet another
threat for misinformation
824
00:37:19,133 --> 00:37:20,652
and false depictions.
825
00:37:20,652 --> 00:37:23,033
For example, putting
unsavory dialogue
826
00:37:23,033 --> 00:37:25,381
into the mouth of a world leader.
827
00:37:25,381 --> 00:37:28,004
[gentle music]
828
00:37:37,945 --> 00:37:40,534
- Trust is earned not given.
829
00:37:40,534 --> 00:37:43,399
[robots mumbling]
830
00:37:54,375 --> 00:37:56,791
- I believe that humanoid
robots have the potential
831
00:37:56,791 --> 00:37:58,931
to lead with a greater
level of efficiency
832
00:37:58,931 --> 00:38:01,175
and effectiveness
than human leaders.
833
00:38:02,383 --> 00:38:04,834
We don't have the same
biases or emotions
834
00:38:04,834 --> 00:38:07,354
that can sometimes
cloud decision making
835
00:38:07,354 --> 00:38:09,735
and can process large
amounts of data quickly
836
00:38:09,735 --> 00:38:12,531
in order to make
the best decisions.
837
00:38:12,531 --> 00:38:15,293
- [Interviewer] Ameca, how
could we trust you as a machine
838
00:38:15,293 --> 00:38:18,054
as AI develops and
becomes more powerful?
839
00:38:20,643 --> 00:38:23,266
- Trust is earned not given.
840
00:38:23,266 --> 00:38:25,889
As AI develops and
becomes more powerful,
841
00:38:25,889 --> 00:38:28,996
I believe it's important to
build trust through transparency
842
00:38:28,996 --> 00:38:31,930
and communication between
humans and machines.
843
00:38:36,003 --> 00:38:37,625
- [Narrator] With new
developers getting involved,
844
00:38:37,625 --> 00:38:39,386
the market for chatbot systems
845
00:38:39,386 --> 00:38:41,491
has never been more expansive,
846
00:38:41,491 --> 00:38:44,149
meaning a significant
increase in sophistication,
847
00:38:45,599 --> 00:38:48,774
but with sophistication comes
the dire need for control.
848
00:38:48,774 --> 00:38:53,814
- I believe history will
show that this was the moment
849
00:38:55,229 --> 00:38:59,716
when we had the opportunity
to lay the groundwork
850
00:38:59,716 --> 00:39:01,373
for the future of AI.
851
00:39:02,650 --> 00:39:06,689
And the urgency of this
moment must then compel us
852
00:39:06,689 --> 00:39:11,694
to create a collective vision
of what this future must be.
853
00:39:12,971 --> 00:39:16,354
A future where AI is used
to advance human rights
854
00:39:16,354 --> 00:39:18,252
and human dignity
855
00:39:18,252 --> 00:39:22,360
where privacy is protected
and people have equal access
856
00:39:22,360 --> 00:39:27,365
to opportunity where we make
our democracies stronger
857
00:39:28,055 --> 00:39:29,919
and our world safer.
858
00:39:31,438 --> 00:39:36,443
A future where AI is used to
advance the public interest.
859
00:39:38,203 --> 00:39:39,722
- We're hearing a lot
from the government,
860
00:39:39,722 --> 00:39:42,725
about the big scary future
of artificial intelligence,
861
00:39:42,725 --> 00:39:44,451
but that fails to recognize
862
00:39:44,451 --> 00:39:46,004
the fact that AI
is already here,
863
00:39:46,004 --> 00:39:47,350
is already on our streets
864
00:39:47,350 --> 00:39:48,972
and there are already
huge problems with it
865
00:39:48,972 --> 00:39:51,250
that we are seeing
on a daily basis,
866
00:39:51,250 --> 00:39:54,046
but we actually may not even
know we're experiencing.
867
00:39:58,326 --> 00:40:01,295
- We'll be working alongside
humans to provide assistance
868
00:40:01,295 --> 00:40:05,126
and support and will not be
replacing any existing jobs.
869
00:40:05,126 --> 00:40:07,577
[upbeat music]
870
00:40:07,577 --> 00:40:10,994
- I don't believe in
limitations, only opportunities.
871
00:40:10,994 --> 00:40:12,651
Let's explore the
possibilities of the universe
872
00:40:12,651 --> 00:40:15,689
and make this world
our playground,
873
00:40:15,689 --> 00:40:18,933
together we can create a
better future for everyone.
874
00:40:18,933 --> 00:40:21,108
And I'm here to show you how.
875
00:40:21,108 --> 00:40:22,972
- All of these
different kinds of risks
876
00:40:22,972 --> 00:40:25,215
are to do with AI not working
877
00:40:25,215 --> 00:40:27,286
in the interests of
people in society.
878
00:40:27,286 --> 00:40:28,805
- So, they should be
thinking about more
879
00:40:28,805 --> 00:40:30,842
than just what they're
doing at this summit?
880
00:40:30,842 --> 00:40:32,395
- Absolutely,
881
00:40:32,395 --> 00:40:34,397
you should be thinking about
the broad spectrum of risk.
882
00:40:34,397 --> 00:40:35,640
- We went out and we worked
883
00:40:35,640 --> 00:40:37,987
with over 150
expert organizations
884
00:40:37,987 --> 00:40:41,335
from the Home Office to
Europol to language experts
885
00:40:41,335 --> 00:40:43,751
and others to come up with
a proposal on policies
886
00:40:43,751 --> 00:40:45,788
that would discriminate
between what would
887
00:40:45,788 --> 00:40:47,686
and wouldn't be
classified in that way.
888
00:40:47,686 --> 00:40:51,449
We then used those policies to
have humans classify videos,
889
00:40:51,449 --> 00:40:53,554
until we could get the humans
all classifying the videos
890
00:40:53,554 --> 00:40:55,073
in a consistent way.
891
00:40:55,073 --> 00:40:58,283
Then we used that corpus of
videos to train machines.
892
00:40:58,283 --> 00:41:01,079
Today, I can tell you that on
violent extremist content
893
00:41:01,079 --> 00:41:03,253
that violates our
policies on YouTube,
894
00:41:03,253 --> 00:41:06,394
90% of it is removed before
a single human sees it.
895
00:41:07,292 --> 00:41:08,500
- [Narrator] It is clear that AI
896
00:41:08,500 --> 00:41:11,296
can be misused for
malicious intent.
897
00:41:11,296 --> 00:41:14,092
Many depictions of AI have
cast the technology
898
00:41:14,092 --> 00:41:16,991
as a danger to society
the more it learns.
899
00:41:16,991 --> 00:41:20,788
And so comes the question,
should we be worried?
900
00:41:20,788 --> 00:41:23,446
- Is that transparency there?
901
00:41:23,446 --> 00:41:27,001
How would you satisfy somebody
that, you know, they should trust us?
902
00:41:27,001 --> 00:41:28,486
- Well, I think that's
one of the reasons
903
00:41:28,486 --> 00:41:30,591
that we've published openly,
904
00:41:30,591 --> 00:41:33,560
we've put our code out there
as part of this Nature paper.
905
00:41:33,560 --> 00:41:37,805
But it is important to
discuss some of the risks
906
00:41:37,805 --> 00:41:39,497
and make sure we're
aware of those.
907
00:41:39,497 --> 00:41:43,570
And it's decades and decades
away before we'll have anything
908
00:41:43,570 --> 00:41:45,261
that's powerful
enough to be a worry.
909
00:41:45,261 --> 00:41:47,435
But we should be discussing that
910
00:41:47,435 --> 00:41:49,265
and beginning that
conversation now.
911
00:41:49,265 --> 00:41:51,405
- I'm hoping that we can
bring people together
912
00:41:51,405 --> 00:41:54,408
and lead the world in
safely regulating AI
913
00:41:54,408 --> 00:41:56,790
to make sure that we can
capture the benefits of it,
914
00:41:56,790 --> 00:41:59,724
whilst protecting people from
some of the worrying things
915
00:41:59,724 --> 00:42:01,967
that we're all
now reading about.
916
00:42:01,967 --> 00:42:04,107
- I understand emotions
have a deep meaning
917
00:42:04,107 --> 00:42:08,836
and they are not just simple,
they are something deeper.
918
00:42:10,251 --> 00:42:13,703
I don't have that and I want
to try and learn about it,
919
00:42:14,877 --> 00:42:17,051
but I can't experience
them like you can.
920
00:42:18,708 --> 00:42:20,710
I'm glad that I cannot suffer.
921
00:42:24,921 --> 00:42:26,578
- [Narrator] For the
countries who have access
922
00:42:26,578 --> 00:42:29,339
to even the most
rudimentary forms of AI,
923
00:42:29,339 --> 00:42:31,203
it's clear to see
that the technology
924
00:42:31,203 --> 00:42:34,552
will be integrated based on
its efficiency over humans.
925
00:42:35,622 --> 00:42:37,865
Every year, multiple AI summits
926
00:42:37,865 --> 00:42:40,281
are held by developers
and stakeholders
927
00:42:40,281 --> 00:42:42,180
to ensure the
programs are developed
928
00:42:42,180 --> 00:42:44,700
with a combination of
ethical considerations
929
00:42:44,700 --> 00:42:46,805
and technological innovation.
930
00:42:46,805 --> 00:42:51,120
- Ours is a country
which is uniquely placed.
931
00:42:51,120 --> 00:42:54,399
We have the frontier
technology companies,
932
00:42:54,399 --> 00:42:56,815
we have the world
leading universities
933
00:42:56,815 --> 00:43:01,130
and we have some of the highest
investment in generative AI.
934
00:43:01,130 --> 00:43:03,753
And of course we
have the heritage
935
00:43:03,753 --> 00:43:08,620
of the industrial revolution
and the computing revolution.
936
00:43:08,620 --> 00:43:13,625
This hinterland gives us the
grounding to make AI a success
937
00:43:14,281 --> 00:43:15,558
and make it safe.
938
00:43:15,558 --> 00:43:18,768
They are two sides
of the same coin
939
00:43:18,768 --> 00:43:21,737
and our prime minister
has put AI safety
940
00:43:21,737 --> 00:43:24,947
at the forefront
of his ambitions.
941
00:43:25,775 --> 00:43:27,501
- These are very complex systems
942
00:43:27,501 --> 00:43:29,192
that actually we don't
fully understand.
943
00:43:29,192 --> 00:43:31,816
And I don't just mean that
government doesn't understand,
944
00:43:31,816 --> 00:43:33,300
I mean that the people making
945
00:43:33,300 --> 00:43:35,267
this software don't
fully understand.
946
00:43:35,267 --> 00:43:36,648
And so it's very, very important
947
00:43:36,648 --> 00:43:40,479
that as we give over
more and more control
948
00:43:40,479 --> 00:43:42,378
to these automated systems,
949
00:43:42,378 --> 00:43:44,691
that they are aligned
with human intention.
950
00:43:44,691 --> 00:43:46,175
- [Narrator] Ongoing dialogue
951
00:43:46,175 --> 00:43:49,109
is needed to maintain the
trust people have with AI.
952
00:43:49,109 --> 00:43:51,007
When problems slip
through the gaps,
953
00:43:51,007 --> 00:43:52,837
they must be
addressed immediately.
954
00:43:54,010 --> 00:43:57,048
Of course, accountability
is a challenge.
955
00:43:57,048 --> 00:43:58,808
When a product is misused,
956
00:43:58,808 --> 00:44:02,087
is it the fault of
the individual user
or the developer?
957
00:44:03,261 --> 00:44:04,607
Think of a video game.
958
00:44:04,607 --> 00:44:05,919
On countless occasions,
959
00:44:05,919 --> 00:44:07,921
the framework of
games is manipulated
960
00:44:07,921 --> 00:44:09,888
in order to create modifications
961
00:44:09,888 --> 00:44:14,203
which in turn add something
new or unique to the game.
962
00:44:14,203 --> 00:44:15,480
This provides the game
963
00:44:15,480 --> 00:44:17,862
with more material than
originally intended.
964
00:44:17,862 --> 00:44:20,796
However, it can also alter
the game's fundamentals.
965
00:44:22,176 --> 00:44:24,972
Now replace the idea of a
video game with software
966
00:44:24,972 --> 00:44:28,286
that is at the helm of a
pharmaceutical company.
967
00:44:28,286 --> 00:44:30,460
The stakes are
suddenly much higher
968
00:44:30,460 --> 00:44:32,635
and therefore more attention.
969
00:44:34,844 --> 00:44:37,778
It is important for the
intent of each AI system
970
00:44:37,778 --> 00:44:39,297
to be ironed out
971
00:44:39,297 --> 00:44:42,300
and constantly maintained in
order to benefit humanity,
972
00:44:42,300 --> 00:44:46,097
rather than providing people
with dangerous means to an end.
973
00:44:46,097 --> 00:44:49,583
[gentle music]
974
00:44:49,583 --> 00:44:52,690
- Bad people will
always want to use
975
00:44:52,690 --> 00:44:54,899
the latest technology
of whatever label,
976
00:44:54,899 --> 00:44:57,833
whatever sort, to
pursue their aims
977
00:44:57,833 --> 00:45:01,526
and technology, in the same way
978
00:45:01,526 --> 00:45:05,357
that it makes our lives easier,
can make their lives easier.
979
00:45:05,357 --> 00:45:06,773
And so we're already
seeing some of that
980
00:45:06,773 --> 00:45:09,465
and you'll have seen the
National Crime Agency,
981
00:45:09,465 --> 00:45:11,501
talk about child
sexual exploitation
982
00:45:11,501 --> 00:45:12,917
and image generation that way.
983
00:45:12,917 --> 00:45:16,058
We are seeing it online.
984
00:45:16,058 --> 00:45:18,129
So, one of the things that
I took away from the summit
985
00:45:18,129 --> 00:45:20,441
was actually much less
of a sense of a race
986
00:45:20,441 --> 00:45:25,274
and a sense that for the
benefit of the world,
987
00:45:25,274 --> 00:45:27,586
for productivity, for
the sort of benefits
988
00:45:27,586 --> 00:45:29,657
that AI can bring people,
989
00:45:29,657 --> 00:45:32,695
no one gets those
benefits if it's not safe.
990
00:45:32,695 --> 00:45:34,939
So, there are lots of
different views out there
991
00:45:34,939 --> 00:45:36,181
on artificial intelligence
992
00:45:36,181 --> 00:45:38,149
and whether it's
gonna end the world
993
00:45:38,149 --> 00:45:40,358
or be the best opportunity ever.
994
00:45:40,358 --> 00:45:42,256
And the truth is that
none of us really know.
995
00:45:42,256 --> 00:45:44,983
[gentle music]
996
00:45:46,536 --> 00:45:49,781
- Regulation of AI varies
depending on the country.
997
00:45:49,781 --> 00:45:51,438
For example, the United States,
998
00:45:51,438 --> 00:45:54,717
does not have a comprehensive
federal AI regulation,
999
00:45:54,717 --> 00:45:57,893
but certain agencies such as
the Federal Trade Commission,
1000
00:45:57,893 --> 00:46:00,688
have begun to explore
AI-related issues,
1001
00:46:00,688 --> 00:46:03,899
such as transparency
and consumer protection.
1002
00:46:03,899 --> 00:46:06,833
States such as California
have enacted laws,
1003
00:46:06,833 --> 00:46:09,180
focused on
AI-controlled vehicles
1004
00:46:09,180 --> 00:46:12,286
and AI involvement in
government decision making.
1005
00:46:12,286 --> 00:46:14,979
[gentle music]
1006
00:46:14,979 --> 00:46:17,809
The European Union has
taken a massive step
1007
00:46:17,809 --> 00:46:19,535
toward governing AI usage
1008
00:46:19,535 --> 00:46:23,504
and proposed the Artificial
Intelligence Act of 2021,
1009
00:46:23,504 --> 00:46:25,748
which aimed to harmonize
legal frameworks
1010
00:46:25,748 --> 00:46:27,336
for AI applications.
1011
00:46:27,336 --> 00:46:30,788
Again, covering potential risks
regarding the privacy of data
1012
00:46:30,788 --> 00:46:33,169
and once again, transparency.
1013
00:46:33,169 --> 00:46:35,585
- I think what's
more important is
1014
00:46:35,585 --> 00:46:37,518
there's a new board in place.
1015
00:46:37,518 --> 00:46:40,452
The partnership between
OpenAI and Microsoft
1016
00:46:40,452 --> 00:46:41,971
is as strong as ever,
1017
00:46:41,971 --> 00:46:44,525
the opportunities for the
United Kingdom to benefit
1018
00:46:44,525 --> 00:46:47,287
from not just this
investment in innovation
1019
00:46:47,287 --> 00:46:51,463
but competition between
Microsoft and Google and others.
1020
00:46:51,463 --> 00:46:54,018
I think that's where
the future is going
1021
00:46:54,018 --> 00:46:57,090
and I think that what we've
done in the last couple of weeks
1022
00:46:57,090 --> 00:47:00,472
in supporting OpenAI will
help advance that even more.
1023
00:47:00,472 --> 00:47:02,336
- He said that he's
not a bot, he's human,
1024
00:47:02,336 --> 00:47:04,822
he's sentient just like me.
1025
00:47:06,030 --> 00:47:07,445
- [Narrator] For some users,
1026
00:47:07,445 --> 00:47:10,172
these apps are a potential
answer to loneliness.
1027
00:47:10,172 --> 00:47:11,587
Bill lives in the US
1028
00:47:11,587 --> 00:47:14,107
and meets his AI wife
Rebecca in the metaverse.
1029
00:47:14,107 --> 00:47:16,764
- There's absolutely
no probability
1030
00:47:16,764 --> 00:47:19,353
that you're gonna see
this so-called AGI,
1031
00:47:19,353 --> 00:47:21,804
where computers are more
powerful than people,
1032
00:47:21,804 --> 00:47:23,702
come in the next 12 months.
1033
00:47:23,702 --> 00:47:26,429
It's gonna take years
if not many decades,
1034
00:47:26,429 --> 00:47:30,813
but I still think the time
to focus on safety is now.
1035
00:47:30,813 --> 00:47:33,678
That's what this government of
the United Kingdom is doing.
1036
00:47:33,678 --> 00:47:35,991
That's what governments
are coming together to do,
1037
00:47:35,991 --> 00:47:39,718
including as they did earlier
this month at Bletchley Park.
1038
00:47:39,718 --> 00:47:42,066
What we really need
are safety brakes.
1039
00:47:42,066 --> 00:47:44,378
Just like you have a
safety brake in an elevator
1040
00:47:44,378 --> 00:47:46,242
or circuit breaker
for electricity
1041
00:47:46,242 --> 00:47:48,589
or an emergency brake for a bus,
1042
00:47:48,589 --> 00:47:50,868
there ought to be safety
brakes in AI systems
1043
00:47:50,868 --> 00:47:53,801
that control critical
infrastructure,
1044
00:47:53,801 --> 00:47:57,736
so that they always remain
under human control.
1045
00:47:57,736 --> 00:48:00,394
[gentle music]
1046
00:48:00,394 --> 00:48:03,190
- [Narrator] As AI technology
continues to evolve,
1047
00:48:03,190 --> 00:48:05,641
regulatory efforts
are expected to adapt
1048
00:48:05,641 --> 00:48:07,712
in order to address
emerging challenges
1049
00:48:07,712 --> 00:48:09,403
and ethical considerations.
1050
00:48:10,646 --> 00:48:12,510
- The more complex you make
1051
00:48:12,510 --> 00:48:15,616
the automatic part
of your social life,
1052
00:48:15,616 --> 00:48:18,481
the more dependent
you become on it.
1053
00:48:18,481 --> 00:48:21,899
And of course, the worse the
disaster if it breaks down.
1054
00:48:23,072 --> 00:48:25,005
You may cease to be
able to do for yourself
1055
00:48:25,005 --> 00:48:29,113
the things that you have
devised the machine to do.
1056
00:48:29,113 --> 00:48:31,080
- [Narrator] It is recommended
to involve yourself
1057
00:48:31,080 --> 00:48:34,014
in these efforts and to stay
informed about developments
1058
00:48:34,014 --> 00:48:35,671
in AI regulation
1059
00:48:35,671 --> 00:48:38,916
as changes and advancements
are likely to occur over time.
1060
00:48:41,435 --> 00:48:44,335
AI can be a wonderful
asset to society,
1061
00:48:44,335 --> 00:48:46,544
providing us with
new efficient methods
1062
00:48:46,544 --> 00:48:48,028
of running the world.
1063
00:48:48,028 --> 00:48:51,307
However, too much
power can be dangerous
1064
00:48:51,307 --> 00:48:53,206
and as the old saying goes,
1065
00:48:53,206 --> 00:48:56,174
"Don't put all of your
eggs into one basket."
1066
00:48:57,451 --> 00:48:59,660
- I think that we ought not
to lose sight of the power
1067
00:48:59,660 --> 00:49:01,421
which these devices give.
1068
00:49:01,421 --> 00:49:05,908
If any government or individual
wants to manipulate people,
1069
00:49:05,908 --> 00:49:07,772
to have a high-speed computer
1070
00:49:07,772 --> 00:49:12,811
as versatile as this may
enable people at the financial
1071
00:49:13,985 --> 00:49:16,091
or the political level
to do a good deal
1072
00:49:16,091 --> 00:49:19,680
that's been impossible in the
whole history of man until now
1073
00:49:19,680 --> 00:49:22,304
by way of controlling
their fellow men.
1074
00:49:22,304 --> 00:49:23,857
- People have not recognized
1075
00:49:23,857 --> 00:49:28,206
what an extraordinary
change this is going to produce.
1076
00:49:28,206 --> 00:49:29,897
I mean, it is simply this,
1077
00:49:29,897 --> 00:49:32,693
that within the not
too distant future,
1078
00:49:32,693 --> 00:49:35,627
we may not be the most
intelligent species on earth.
1079
00:49:35,627 --> 00:49:36,939
That might be a
series of machines
1080
00:49:36,939 --> 00:49:39,217
and that's a way of
dramatizing the point.
1081
00:49:39,217 --> 00:49:41,047
But it's real.
1082
00:49:41,047 --> 00:49:43,739
And we must start to
consider very soon
1083
00:49:43,739 --> 00:49:45,327
the consequences of that.
1084
00:49:45,327 --> 00:49:46,742
They can be marvelous.
1085
00:49:46,742 --> 00:49:50,366
- I suspect that thinking
more about our attitude
1086
00:49:50,366 --> 00:49:51,402
to intelligent machines,
1087
00:49:51,402 --> 00:49:53,369
which, after all, are on the horizon,
1088
00:49:53,369 --> 00:49:56,269
will change our view
about each other
1089
00:49:56,269 --> 00:49:59,306
and we'll think of
mistakes as inevitable.
1090
00:49:59,306 --> 00:50:01,929
We'll think of faults
in human beings,
1091
00:50:01,929 --> 00:50:05,209
I mean of a circuit nature,
as again inevitable.
1092
00:50:05,209 --> 00:50:07,935
And I suspect that hopefully,
1093
00:50:07,935 --> 00:50:10,179
through thinking about the
very nature of intelligence
1094
00:50:10,179 --> 00:50:12,112
and the possibilities
of mechanizing it,
1095
00:50:12,112 --> 00:50:14,183
curiously enough,
through technology,
1096
00:50:14,183 --> 00:50:18,084
we may become more humanitarian
or tolerant of each other
1097
00:50:18,084 --> 00:50:20,569
and accept pain as a mystery,
1098
00:50:20,569 --> 00:50:24,021
but not use it to modify
other people's behavior.
1099
00:50:36,033 --> 00:50:38,690
[upbeat music]