1
00:00:00,120 --> 00:00:03,960
What would it take for a machine to jam?
2
00:00:03,960 --> 00:00:06,320
This question was first posed
in Computer Music Journal
3
00:00:06,320 --> 00:00:08,720
in 1988 as a form of musical
4
00:00:08,720 --> 00:00:11,840
Turing test, a way of identifying human-like
5
00:00:11,840 --> 00:00:14,640
intelligence in machines. And this test is
6
00:00:14,640 --> 00:00:18,269
very simple. Blues in E flat,
7
00:00:18,269 --> 00:00:19,760
One, two, a-one two three four.
8
00:00:23,400 --> 00:00:24,760
What would it take in this case for a
9
00:00:24,760 --> 00:00:27,480
machine to convince me and you that it was
10
00:00:27,480 --> 00:00:30,280
human? Well, very quickly, a number of
11
00:00:30,280 --> 00:00:32,920
things need to happen in real time. First,
12
00:00:32,920 --> 00:00:35,360
the machine would need to identify where one
13
00:00:35,360 --> 00:00:38,400
was and entrain to my pulse using
14
00:00:38,400 --> 00:00:42,160
not only auditory but visual cues. We're very
15
00:00:42,160 --> 00:00:44,720
good at figuring out if other people are
16
00:00:44,720 --> 00:00:47,600
locked in with our pulse. And so I would
17
00:00:47,600 --> 00:00:49,880
need to sense that the machine was feeling
18
00:00:49,880 --> 00:00:53,320
the groove. Second, the machine would
19
00:00:53,320 --> 00:00:55,440
need to identify what it was that I was
20
00:00:55,440 --> 00:00:58,760
doing on my bass guitar, and then fit that
21
00:00:58,760 --> 00:01:02,560
within the paradigm of a twelve-bar blues.
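A minimal sketch of what that paradigm might look like to a machine, with the quick IV and jazz-turnaround variants that come up next written as functions. The chord spellings are one common convention, assumed here purely for illustration:

```python
# A rough template of a 12-bar blues in Eb: the kind of harmonic
# paradigm a machine would have to match the bass line against.
# Chord spellings are one common convention, assumed for illustration.

BASIC_BLUES_EB = [
    "Eb7", "Eb7", "Eb7", "Eb7",  # bars 1-4:  I
    "Ab7", "Ab7", "Eb7", "Eb7",  # bars 5-8:  IV, back to I
    "Bb7", "Ab7", "Eb7", "Bb7",  # bars 9-12: V, IV, I, turnaround to V
]

def with_quick_iv(form):
    """Variant: move to the IV chord in bar 2 (the 'quick IV')."""
    variant = list(form)
    variant[1] = "Ab7"
    return variant

def with_jazz_turnaround(form):
    """Variant: ii-V (Fm7-Bb7) in bars 9-10 instead of V-IV."""
    variant = list(form)
    variant[8], variant[9] = "Fm7", "Bb7"
    return variant

print(with_jazz_turnaround(with_quick_iv(BASIC_BLUES_EB)))
```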
22
00:01:02,560 --> 00:01:06,040
Am I doing the quick IV, for example,
23
00:01:06,040 --> 00:01:09,360
or am I playing a ii-V on the turnaround
24
00:01:09,360 --> 00:01:12,400
instead of a V-IV? So am I playing a
25
00:01:12,400 --> 00:01:15,480
jazz blues versus a delta blues versus a
26
00:01:15,480 --> 00:01:18,600
Chicago style blues? It would then need to
27
00:01:18,600 --> 00:01:21,920
take that information and respond in kind in
28
00:01:21,920 --> 00:01:26,600
an improvised solo, using meaningful blues
29
00:01:26,600 --> 00:01:30,440
vocabulary. Now, music is not really a
30
00:01:30,440 --> 00:01:32,600
language, but it certainly feels like a
31
00:01:32,600 --> 00:01:35,080
language to those who play music. And so if
32
00:01:35,080 --> 00:01:38,520
I ask a musical question, does it feel
33
00:01:38,520 --> 00:01:41,160
like I'm getting a meaningful answer in
34
00:01:41,160 --> 00:01:45,520
response? Does it feel like I'm
35
00:01:45,520 --> 00:01:48,280
connecting with somebody, that there is a
36
00:01:48,280 --> 00:01:51,360
real ghost in the machine on the other side
37
00:01:51,360 --> 00:01:55,160
of the algorithm?
38
00:01:55,160 --> 00:01:56,920
I'm fairly convinced that this will never
39
00:01:56,920 --> 00:02:00,840
happen. No AI will be able to pass a true
40
00:02:00,840 --> 00:02:03,680
musical Turing test. Call Ray Kurzweil.
41
00:02:03,680 --> 00:02:05,680
Tell him he's a hack. Singularity, my ass.
42
00:02:05,680 --> 00:02:07,520
Now, I'm not going to bullBASS you in this video.
43
00:02:07,520 --> 00:02:09,840
I'm not going to appeal to some vague sense
44
00:02:09,840 --> 00:02:12,400
of the uniqueness of human musical
45
00:02:12,400 --> 00:02:17,880
creativity - "AI has no soul" - because it's
46
00:02:17,880 --> 00:02:20,040
gonna get there, right? I mean, Red Lobster
47
00:02:20,040 --> 00:02:22,680
is already using AI generated music in its
48
00:02:22,680 --> 00:02:25,320
ad campaigns. And who are we to question the
49
00:02:25,320 --> 00:02:30,720
aesthetic sensibilities of Red Lobster? Red
50
00:02:30,720 --> 00:02:32,580
Lobster, you got the magic
touch. "Cheddar Bay Biscuits
51
00:02:32,580 --> 00:02:35,280
I love you so much" There have
been some incredible advances in generative
52
00:02:35,280 --> 00:02:38,600
AI from companies like Udio and Suno AI.
53
00:02:38,600 --> 00:02:40,480
They let you generate full pieces of music
54
00:02:40,480 --> 00:02:42,720
from text prompts. Just type in what you
55
00:02:42,720 --> 00:02:45,280
want and it will do a pretty good job of
56
00:02:45,280 --> 00:02:46,840
giving it to you.
57
00:02:46,840 --> 00:02:50,760
"In Tommy's Shack Grill late at night.
58
00:02:50,760 --> 00:02:59,840
Big Joe flippin' those patties, fine.
American Cheddar melting just right."
59
00:03:00,480 --> 00:03:04,080
That's horrifying. It's like spitting
60
00:03:04,080 --> 00:03:07,480
on Muddy Waters' grave. Upon hearing these
61
00:03:07,480 --> 00:03:10,200
kinds of results, many techno-optimists have
62
00:03:10,200 --> 00:03:12,400
breathlessly proclaimed that the musical
63
00:03:12,400 --> 00:03:14,640
Turing Test has been passed.
64
00:03:14,640 --> 00:03:16,760
Machine musical intelligence is upon us.
65
00:03:16,760 --> 00:03:18,680
Daddy Elon is so excited.
66
00:03:18,680 --> 00:03:20,720
So why am I so doubtful here, right?
67
00:03:20,720 --> 00:03:24,240
Why am I saying that music AI
will never pass the Turing Test?
68
00:03:24,240 --> 00:03:27,040
Well, I think there's a
pretty profound category error
69
00:03:27,040 --> 00:03:29,120
that's going on here that we need to always
70
00:03:29,120 --> 00:03:31,360
be aware of going forward.
71
00:03:31,360 --> 00:03:36,800
What generative AI does is not music.
72
00:03:36,800 --> 00:03:37,680
Let me explain.
73
00:03:37,680 --> 00:03:40,320
This video was brought to you by Nebula.
74
00:03:40,320 --> 00:03:41,520
Hope of an extraordinary
75
00:03:41,520 --> 00:03:43,560
aesthetic success based on extraordinary
76
00:03:43,560 --> 00:03:45,840
technology is a cruel deceit.
77
00:03:45,840 --> 00:03:57,360
Iannis Xenakis, 1985.
78
00:03:57,360 --> 00:03:59,960
In 1950, Alan Turing first wrote about what
79
00:03:59,960 --> 00:04:02,400
he called the Imitation Game, what we now
80
00:04:02,400 --> 00:04:05,920
call the Turing Test. In this test, an
81
00:04:05,920 --> 00:04:07,880
interlocutor asks questions of two
82
00:04:07,880 --> 00:04:11,120
entities, one machine and one human. And if
83
00:04:11,120 --> 00:04:13,640
at the end of a conversation - traditionally
84
00:04:13,640 --> 00:04:15,920
done through a text prompt - the interlocutor
85
00:04:15,920 --> 00:04:18,240
is unable to tell which one is the machine
86
00:04:18,240 --> 00:04:20,320
and which one is the human, then we say that
87
00:04:20,320 --> 00:04:22,800
the machine passed the Turing Test. It
88
00:04:22,800 --> 00:04:26,640
displays human-level intelligence. Now,
89
00:04:26,640 --> 00:04:28,560
how might we take that idea and expand it
90
00:04:28,560 --> 00:04:31,040
out into the world of music? Well, one
91
00:04:31,040 --> 00:04:32,680
scenario that we showed at the beginning of
92
00:04:32,680 --> 00:04:36,120
this video involves a "Turing Jam Session".
93
00:04:36,120 --> 00:04:37,680
During an improvisation among an
94
00:04:37,680 --> 00:04:40,040
interlocutor and two musicians, the
95
00:04:40,040 --> 00:04:42,280
interlocutor's task would be to identify who
96
00:04:42,280 --> 00:04:44,840
is the machine and who is the human. I'm
97
00:04:44,840 --> 00:04:46,600
really drawn to this test because jam
98
00:04:46,600 --> 00:04:48,920
sessions are a great way to get to know
99
00:04:48,920 --> 00:04:50,520
people, know what they're about, know their
100
00:04:50,520 --> 00:04:53,200
musical taste. They're a lot of fun. They're a
101
00:04:53,200 --> 00:04:55,520
social activity. But there are other
102
00:04:55,520 --> 00:04:58,400
possible musical Turing tests. Christopher
103
00:04:58,400 --> 00:05:01,280
Ariza loosely categorizes them as either
104
00:05:01,280 --> 00:05:04,000
musical directive tests, which involve
105
00:05:04,000 --> 00:05:06,200
ongoing musical interaction between the
106
00:05:06,200 --> 00:05:09,200
interlocutor and two agents, and musical
107
00:05:09,200 --> 00:05:12,560
output tests, which involve no interaction.
108
00:05:12,560 --> 00:05:15,120
The listener simply judges the human-like
109
00:05:15,120 --> 00:05:18,400
quality of a given musical output.
110
00:05:18,400 --> 00:05:20,520
I have no doubt that generative AI will be
111
00:05:20,520 --> 00:05:23,280
able to pass musical output tests. Red
112
00:05:23,280 --> 00:05:24,960
Lobster's marketing team apparently thinks
113
00:05:24,960 --> 00:05:27,400
so, too. But musical directive tests, where
114
00:05:27,400 --> 00:05:29,760
there's a continuous interaction between
115
00:05:29,760 --> 00:05:32,520
agents, put the emphasis on process, not
116
00:05:32,520 --> 00:05:34,920
product. Alan Turing's original imitation
117
00:05:34,920 --> 00:05:37,120
game envisioned a conversation between
118
00:05:37,120 --> 00:05:39,720
agents, not just somebody passively looking
119
00:05:39,720 --> 00:05:41,960
at text and determining whether or not it
120
00:05:41,960 --> 00:05:44,760
was computer-generated, whether ChatGPT
121
00:05:44,760 --> 00:05:47,200
did your homework, in other words. And so to
122
00:05:47,200 --> 00:05:49,520
pass a musical Turing test, a machine has to
123
00:05:49,520 --> 00:05:52,240
do what musicians do when they make music.
124
00:05:52,240 --> 00:05:55,680
The process has to feel human. One way that
125
00:05:55,680 --> 00:05:57,520
visual artists have already started to push
126
00:05:57,520 --> 00:06:00,000
back at the flood of generative AI images is
127
00:06:00,000 --> 00:06:02,200
to document their process and tell the story
128
00:06:02,200 --> 00:06:04,280
of making the art. This approach will become
129
00:06:04,280 --> 00:06:06,080
more and more common with musicians over the
130
00:06:06,080 --> 00:06:08,400
next couple of years as a means of pushing
131
00:06:08,400 --> 00:06:10,440
back against the inevitable tide of
132
00:06:10,440 --> 00:06:14,840
generative AI slop. So back in 2008,
133
00:06:14,840 --> 00:06:17,160
Jack Conte, current CEO of Patreon, was
134
00:06:17,160 --> 00:06:19,120
releasing music with Nataly Dawn under the
135
00:06:19,120 --> 00:06:21,200
name Pomplamoose. They were releasing these
136
00:06:21,200 --> 00:06:23,800
YouTube videos in a format that they called
137
00:06:23,800 --> 00:06:27,680
video song. Video songs had two rules: one,
138
00:06:27,680 --> 00:06:30,520
what you see is what you hear, and two, if
139
00:06:30,520 --> 00:06:33,160
you hear it, at some point, you saw it.
140
00:06:33,160 --> 00:06:35,240
Now, this seems kind of obvious
141
00:06:35,240 --> 00:06:37,760
because this format is so ubiquitous. But
142
00:06:37,760 --> 00:06:40,040
back then, back in 2008, it was a
143
00:06:40,040 --> 00:06:42,520
revolutionary approach to releasing art.
144
00:06:42,520 --> 00:06:44,960
You were seeing the music as it was actually
145
00:06:44,960 --> 00:06:47,760
made - an appeal that I imagine will carry
146
00:06:47,760 --> 00:06:49,800
some resonance in the future in the face of
147
00:06:49,800 --> 00:06:52,240
generative AI. The music theorist William
148
00:06:52,240 --> 00:06:55,040
O'Hara expands on this idea of showing your
149
00:06:55,040 --> 00:06:57,240
work for musicians. Borrowing the
150
00:06:57,240 --> 00:06:59,360
ancient Greek word for craft, he calls it
151
00:06:59,360 --> 00:07:02,960
the techne of YouTube. A techne of YouTube
152
00:07:02,960 --> 00:07:05,520
performance, then, is a form of music
153
00:07:05,520 --> 00:07:07,680
theoretical knowledge that exists at the
154
00:07:07,680 --> 00:07:10,160
intersection of analytical detail,
155
00:07:10,160 --> 00:07:12,680
virtuosic performance ability, practical
156
00:07:12,680 --> 00:07:14,600
instrumental considerations, and an
157
00:07:14,600 --> 00:07:16,680
awareness of one's audience and the
158
00:07:16,680 --> 00:07:19,320
communicative tendencies of social media.
159
00:07:19,320 --> 00:07:21,520
Pomplamoose has since expanded on this idea
160
00:07:21,520 --> 00:07:24,040
of techne in social media musical
161
00:07:24,040 --> 00:07:26,720
performance by releasing these short-form
162
00:07:26,720 --> 00:07:29,560
video things where you see the musicians
163
00:07:29,560 --> 00:07:32,040
actually working out the music in the studio
164
00:07:32,040 --> 00:07:34,320
and joking around. You get to see how the
165
00:07:34,320 --> 00:07:36,320
sausage is made, so to speak. Can we try
166
00:07:36,320 --> 00:07:38,520
snapping a tight one? Let's try a tight one.
167
00:07:38,520 --> 00:07:41,520
Sorry, you want to snap a tight one?
168
00:07:41,520 --> 00:07:43,600
You also get to see what
169
00:07:43,600 --> 00:07:46,120
exactly would be necessary for a machine
170
00:07:46,120 --> 00:07:48,840
intelligence to do if it was to pass
171
00:07:48,840 --> 00:07:50,760
the musical directive test.
172
00:07:50,760 --> 00:07:54,020
It would need to understand
and respond to musical jokes.
173
00:07:54,020 --> 00:07:56,040
It was just like, we're feeling this in two.
174
00:07:56,040 --> 00:07:58,560
I just think about everything in one. One.
175
00:07:58,560 --> 00:08:04,040
One is a 400 bar form.
176
00:08:04,040 --> 00:08:10,640
It would need to understand
and take musical direction.
177
00:08:10,640 --> 00:08:13,400
It would need to vibe with Jack Conte,
178
00:08:13,400 --> 00:08:14,120
so to speak.
179
00:08:14,120 --> 00:08:16,160
I talk about him all the time on this channel,
180
00:08:16,160 --> 00:08:18,360
but the musicologist Christopher Small
181
00:08:18,360 --> 00:08:21,920
talks about how music is not really a noun,
182
00:08:21,920 --> 00:08:24,840
but in fact a verb. In his book Musicking,
183
00:08:24,840 --> 00:08:27,320
he writes that music is not a thing at all,
184
00:08:27,320 --> 00:08:30,320
but an activity, something that people do.
185
00:08:30,320 --> 00:08:32,039
The apparent thing "Music" is
186
00:08:32,039 --> 00:08:34,399
a figment, an abstraction of the action,
187
00:08:34,400 --> 00:08:36,640
whose reality vanishes as soon as we examine
188
00:08:36,640 --> 00:08:38,480
it at all closely.
189
00:08:38,480 --> 00:08:40,360
Singing Protestant hymns in a church,
190
00:08:40,360 --> 00:08:43,200
dialing in sick lead tones on a Quad Cortex,
191
00:08:43,200 --> 00:08:45,000
freestyling live on the radio,
192
00:08:45,000 --> 00:08:47,880
facing the wall of death at a metal festival,
193
00:08:47,880 --> 00:08:51,240
watching singers perform "O sole
mio" in a concert hall,
194
00:08:51,240 --> 00:08:53,320
sitting in at a jazz jam session,
195
00:08:53,320 --> 00:08:56,360
streaming yourself on Twitch
producing music in FL Studio,
196
00:08:56,360 --> 00:08:57,760
it's very difficult to see what these
197
00:08:57,760 --> 00:08:59,040
things have in common
198
00:08:59,040 --> 00:09:03,400
besides somehow reacting to organized sound.
199
00:09:03,400 --> 00:09:05,600
And any one of these activities might be a good
200
00:09:05,600 --> 00:09:07,920
candidate for a musical directive test.
201
00:09:07,920 --> 00:09:10,240
They are all different examples of ongoing
202
00:09:10,240 --> 00:09:12,080
musical interactions.
203
00:09:12,080 --> 00:09:13,840
Generative AI, on the other hand,
204
00:09:13,840 --> 00:09:15,560
is very good at creating products,
205
00:09:15,560 --> 00:09:16,920
musical recordings.
206
00:09:16,920 --> 00:09:18,720
But that product is only ever useful for
207
00:09:18,720 --> 00:09:21,240
passing the output test.
208
00:09:21,240 --> 00:09:23,440
Passing the directive test, though, would
209
00:09:23,440 --> 00:09:25,680
require AI researchers to treat music as a
210
00:09:25,680 --> 00:09:28,200
verb, like Christopher Small suggests: a
211
00:09:28,200 --> 00:09:31,240
process, a thing that two or more people do
212
00:09:31,240 --> 00:09:33,480
together, like in Alan Turing's original
213
00:09:33,480 --> 00:09:35,760
imitation game. And when you do that, you
214
00:09:35,760 --> 00:09:37,360
have to take a look at the dynamic
215
00:09:37,360 --> 00:09:39,680
relationships between audiences and
216
00:09:39,680 --> 00:09:42,520
performers and the technology they use and
217
00:09:42,520 --> 00:09:45,000
the spaces that they make music in. And by
218
00:09:45,000 --> 00:09:46,800
the way, all of this is just for Western
219
00:09:46,800 --> 00:09:49,360
music so far.
220
00:09:49,360 --> 00:09:51,920
The whole thing is a lot.
221
00:09:51,920 --> 00:09:53,640
I am not gonna pretend like I know
222
00:09:53,640 --> 00:09:56,960
how AI works. I am but a simple bass player.
223
00:09:56,960 --> 00:09:59,160
I have tried to read those papers and I am
224
00:09:59,160 --> 00:10:01,360
just not smart enough. I do recommend
225
00:10:01,360 --> 00:10:04,280
Valerio Velardo's videos on AI music if you
226
00:10:04,280 --> 00:10:06,680
want to get into some of the technical weeds
227
00:10:06,680 --> 00:10:09,240
about these things. But basically, the way I
228
00:10:09,240 --> 00:10:12,320
understand it is that a large language model
229
00:10:12,320 --> 00:10:15,080
will train on a bunch of data and then use
230
00:10:15,080 --> 00:10:18,920
what it learned to try and accurately predict what
231
00:10:18,920 --> 00:10:21,640
the next thing will be in a sequence. This
232
00:10:21,640 --> 00:10:23,480
is basically the computational
233
00:10:23,480 --> 00:10:26,560
model of cognition for humans, which says
234
00:10:26,560 --> 00:10:28,160
that we take information in
235
00:10:28,160 --> 00:10:31,040
from the world through our senses as input,
236
00:10:31,040 --> 00:10:33,120
and then we process it in our brain, and
237
00:10:33,120 --> 00:10:36,320
then our brain outputs behavior. This is a
238
00:10:36,320 --> 00:10:38,560
fairly outdated model, and one that I don't
239
00:10:38,560 --> 00:10:40,760
feel applies to how we think about
240
00:10:40,760 --> 00:10:42,920
music. And if the point is to pass the
241
00:10:42,920 --> 00:10:46,480
Turing test, the machine has to think like a
242
00:10:46,480 --> 00:10:59,240
human.
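Here's a toy sketch of that predict-the-next-thing idea. A real large language model learns a neural network over enormous amounts of data; this little bigram counter, and the riff it trains on, are illustrative assumptions only:

```python
from collections import Counter, defaultdict

# Toy next-thing prediction: count which token follows which in the
# training data, then predict the most frequent follower. This is only
# the core idea, not how production models actually work.

def train_bigrams(sequence):
    follows = defaultdict(Counter)
    for current, nxt in zip(sequence, sequence[1:]):
        follows[current][nxt] += 1
    return follows

def predict_next(follows, token):
    candidates = follows.get(token)
    return candidates.most_common(1)[0][0] if candidates else None

# A blues-ish note sequence as "training data" (made up for the demo).
riff = ["Eb", "G", "Bb", "C", "Db", "C", "Bb", "G", "Eb", "G", "Bb", "C"]
model = train_bigrams(riff)
print(predict_next(model, "Bb"))  # -> "C", its most common follower
```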
243
00:11:07,120 --> 00:11:09,360
Anybody who has ever performed knows that
244
00:11:09,360 --> 00:11:11,480
getting stuck inside your head
245
00:11:11,480 --> 00:11:13,240
is the worst possible thing.
246
00:11:13,240 --> 00:11:14,840
Thinking too much means you
247
00:11:14,840 --> 00:11:17,800
cannot react with meaningful musical ideas.
248
00:11:17,800 --> 00:11:20,080
But your mind isn't blank, you're still
249
00:11:20,080 --> 00:11:21,160
thinking about things.
250
00:11:21,160 --> 00:11:23,280
It's just very fragmented.
251
00:11:23,280 --> 00:11:25,200
One hip new theory in philosophy
252
00:11:25,200 --> 00:11:27,840
of mind that accounts for this is called
253
00:11:27,840 --> 00:11:31,720
4E cognition, after the four E's:
254
00:11:31,720 --> 00:11:34,880
Embodied, Extended, Embedded, and Enacted
255
00:11:34,880 --> 00:11:37,160
cognition. They represent a dynamic
256
00:11:37,160 --> 00:11:39,760
relationship between the brain, the body,
257
00:11:39,760 --> 00:11:41,360
and your environment.
258
00:11:41,360 --> 00:11:44,520
The first of these is Embodied cognition.
259
00:11:44,520 --> 00:11:47,680
Your body shapes how you think.
260
00:11:47,680 --> 00:11:49,080
When it comes to music, this means
261
00:11:49,080 --> 00:11:51,040
that if it sounds good, it's because it
262
00:11:51,040 --> 00:11:52,680
feels good.
263
00:11:52,680 --> 00:11:54,560
And this is backed by two decades of
264
00:11:54,560 --> 00:11:56,040
music neuroscience research,
265
00:11:56,040 --> 00:11:58,600
especially when it comes to auditory-motor
266
00:11:58,600 --> 00:12:00,960
coupling. The areas of your brain which
267
00:12:00,960 --> 00:12:02,880
process movement through space are the same
268
00:12:02,880 --> 00:12:05,000
areas of your brain that process rhythm and
269
00:12:05,000 --> 00:12:07,720
music in general. The vestibular system that
270
00:12:07,720 --> 00:12:09,680
governs balance influences your sense of
271
00:12:09,680 --> 00:12:11,880
downbeat, where one is. If you're not
272
00:12:11,880 --> 00:12:14,040
physically balancing your body, you might
273
00:12:14,040 --> 00:12:16,760
lose the downbeat. A very common occurrence,
274
00:12:16,760 --> 00:12:20,000
like what I just did here in this jam with
275
00:12:20,000 --> 00:12:22,640
Rotem Sivan and James Muschler. You see me
276
00:12:22,640 --> 00:12:25,080
swaying maybe a little bit too much there in
277
00:12:25,080 --> 00:12:27,720
the background, and then the rhythm gets all
278
00:12:27,720 --> 00:12:29,160
...floaty.
279
00:12:33,440 --> 00:12:35,600
Cool, but maybe not intentional.
280
00:12:35,600 --> 00:12:38,200
Failure is actually something that Alan Turing
281
00:12:38,200 --> 00:12:40,200
identified as a means of getting a machine
282
00:12:40,200 --> 00:12:42,320
to fool people into thinking that it was
283
00:12:42,320 --> 00:12:44,880
human. If a machine is too good at answering
284
00:12:44,880 --> 00:12:46,600
questions, it won't seem human.
285
00:12:46,600 --> 00:12:48,320
To err is human,
286
00:12:48,320 --> 00:12:49,960
and so to pass a Turing test,
287
00:12:49,960 --> 00:12:53,000
an AI might need to lose where one is.
288
00:12:53,000 --> 00:12:54,920
The second form of cognition is
289
00:12:54,920 --> 00:12:56,840
Extended cognition,
290
00:12:56,840 --> 00:12:59,120
where the world is your brain.
291
00:12:59,720 --> 00:13:02,260
Thinking requires a lot of energy -
292
00:13:02,260 --> 00:13:04,080
evolutionarily speaking - your brain gets
293
00:13:04,080 --> 00:13:06,760
tired sometimes. And so humans have figured
294
00:13:06,760 --> 00:13:08,760
out ways of extending our thought patterns
295
00:13:08,760 --> 00:13:10,560
into the physical world as a means of
296
00:13:10,560 --> 00:13:13,600
reducing cognitive load. The classic example
297
00:13:13,600 --> 00:13:16,120
of this is writing. We write things down so
298
00:13:16,120 --> 00:13:17,640
we don't have to remember them anymore,
299
00:13:17,640 --> 00:13:20,080
freeing up cognitive capacity in our brain.
300
00:13:20,080 --> 00:13:22,000
Plato famously complained about this, how
301
00:13:22,000 --> 00:13:23,760
people were getting lazy because they relied
302
00:13:23,760 --> 00:13:26,160
on writing too much. Smartphones are the
303
00:13:26,160 --> 00:13:28,680
latest example of this, extending our brains
304
00:13:28,680 --> 00:13:31,160
into the world with technology. We could say
305
00:13:31,160 --> 00:13:33,520
that music notation is a form of extended
306
00:13:33,520 --> 00:13:36,000
cognition, letting us remember more
307
00:13:36,000 --> 00:13:37,840
music than we would normally be able to with
308
00:13:37,840 --> 00:13:40,880
our mere brains. Orchestral composers don't
309
00:13:40,880 --> 00:13:42,680
have to remember every note for every
310
00:13:42,680 --> 00:13:44,640
instrument that they have ever written, and
311
00:13:44,640 --> 00:13:46,200
so they are freed from the constraints of
312
00:13:46,200 --> 00:13:48,480
their own memory to imagine larger and
313
00:13:48,480 --> 00:13:51,320
grander musical designs - music shaped by
314
00:13:51,320 --> 00:13:54,160
extending our brains past their limitations.
315
00:13:54,160 --> 00:13:57,320
An AI might need to show forgetfulness if
316
00:13:57,320 --> 00:13:59,840
it's going to pass a musical Turing test.
317
00:13:59,840 --> 00:14:01,800
The third form of cognition is
318
00:14:01,800 --> 00:14:03,720
Embedded cognition.
319
00:14:03,720 --> 00:14:08,000
Patterns of thought are
embedded in external systems.
320
00:14:08,000 --> 00:14:10,960
One way to think of this is
how I have embedded my musical
321
00:14:10,960 --> 00:14:13,960
vocabulary into a system of tuning for bass.
322
00:14:13,960 --> 00:14:16,640
Fourths tuning. Like, I don't have to think
323
00:14:16,640 --> 00:14:19,520
that much to be able to express myself with
324
00:14:19,520 --> 00:14:24,360
my bass tuned like this.
325
00:14:24,360 --> 00:14:26,160
The notes are just there where my fingers
326
00:14:26,160 --> 00:14:27,080
expect them to be.
327
00:14:27,080 --> 00:14:29,040
But if I was to detune my bass a little bit
328
00:14:30,600 --> 00:14:34,160
to an unfamiliar tuning system,
329
00:14:34,160 --> 00:14:35,480
I don't know where anything is anymore.
330
00:14:35,480 --> 00:14:37,920
So my cognitive load has increased
331
00:14:37,920 --> 00:14:40,400
as I hunt and peck for each individual note
332
00:14:40,400 --> 00:14:46,360
on my instrument.
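A small sketch of what that embedding looks like in numbers: in all-fourths tuning, one memorized fingering shape gives the same interval from any string, so the knowledge lives in the tuning itself. The MIDI pitches for standard bass tuning are real; the detuning here is a hypothetical example:

```python
# Pitches as MIDI note numbers. Standard bass tuning stacks perfect
# fourths (5 semitones), so fingering shapes transfer across strings.

FOURTHS = [28, 33, 38, 43]   # E1 A1 D2 G2, each string a fourth apart
DETUNED = [28, 34, 38, 42]   # a hypothetical unfamiliar detuning

def shape_interval(tuning, base_string, shape):
    """Semitone interval produced by a two-note fingering shape."""
    (s1, f1), (s2, f2) = shape
    return (tuning[base_string + s2] + f2) - (tuning[base_string + s1] + f1)

shape = [(0, 3), (1, 5)]     # one memorized shape: next string, two frets up

for base in (0, 1, 2):
    print(shape_interval(FOURTHS, base, shape),   # always 7: a perfect fifth
          shape_interval(DETUNED, base, shape))   # 8, 6, 6: the shapes break
```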
333
00:14:46,360 --> 00:14:48,360
Anybody who's ever tried to type with a
334
00:14:48,360 --> 00:14:50,200
Dvorak keyboard
335
00:14:50,200 --> 00:14:52,480
knows what I'm talking about.
336
00:14:52,480 --> 00:14:54,320
The patterns of thought embedded in
337
00:14:54,320 --> 00:14:56,520
the technology that we use,
338
00:14:56,520 --> 00:14:57,880
like the bass guitar,
339
00:14:57,880 --> 00:14:59,600
guide our musical intuitions.
340
00:14:59,600 --> 00:15:01,080
You can tell if somebody wrote a piece in a
341
00:15:01,080 --> 00:15:03,440
digital audio workstation versus MuseScore,
342
00:15:03,440 --> 00:15:06,400
for example. Would an AI need to mimic this
343
00:15:06,400 --> 00:15:08,880
pattern of embedded cognition to pass a
344
00:15:08,880 --> 00:15:11,960
Turing test? I don't know, but this is how
345
00:15:11,960 --> 00:15:12,960
we do it, you know?
346
00:15:12,960 --> 00:15:14,600
The fourth form of cognition is
347
00:15:14,600 --> 00:15:16,480
Enacted cognition.
348
00:15:16,480 --> 00:15:18,080
Doing is thinking.
349
00:15:18,080 --> 00:15:20,040
You process the world through action.
350
00:15:20,040 --> 00:15:21,440
It basically says that there are
351
00:15:21,440 --> 00:15:24,320
certain activities which are meaningless
352
00:15:24,320 --> 00:15:26,280
if you are passive,
353
00:15:26,280 --> 00:15:28,400
like sports, for example.
354
00:15:28,400 --> 00:15:29,840
We don't say that you're a good
355
00:15:29,840 --> 00:15:31,920
soccer player if you spend a lot of time
356
00:15:31,920 --> 00:15:34,360
thinking about soccer, or because you have seen
357
00:15:34,360 --> 00:15:36,080
a lot of other people do it, you know, you
358
00:15:36,080 --> 00:15:38,760
kind of gotta get out there and actually run
359
00:15:38,760 --> 00:15:40,440
and do the thing yourself.
360
00:15:40,440 --> 00:15:42,600
The very first question from this video,
361
00:15:42,600 --> 00:15:45,640
what would it take for a
machine intelligence to jam,
362
00:15:45,640 --> 00:15:46,640
is like asking,
363
00:15:46,640 --> 00:15:50,920
what would it take for a machine
intelligence to play soccer?
364
00:15:50,920 --> 00:15:53,480
And the answer is, a body.
365
00:15:53,480 --> 00:15:54,360
We need replicants.
366
00:15:54,360 --> 00:15:56,280
We need androids out there on the field.
367
00:15:56,280 --> 00:15:58,080
Otherwise, it's just a supercomputer
368
00:15:58,080 --> 00:15:59,560
thinking about soccer.
369
00:15:59,560 --> 00:16:00,480
Without a body,
370
00:16:00,480 --> 00:16:03,160
AI sports intelligence is meaningless.
371
00:16:03,160 --> 00:16:04,840
You gotta have robots on the field,
372
00:16:04,840 --> 00:16:06,760
processing in real time what's going on
373
00:16:06,760 --> 00:16:08,960
with their robot bodies.
374
00:16:08,960 --> 00:16:09,920
Without a body,
375
00:16:09,920 --> 00:16:13,200
AI music intelligence is meaningless.
376
00:16:13,200 --> 00:16:15,120
You gotta have robots on the bandstand,
377
00:16:15,120 --> 00:16:16,880
processing in real time what's going on
378
00:16:16,880 --> 00:16:19,080
with their robot bodies.
379
00:16:19,080 --> 00:16:21,320
As the great jazz educator Hal Galper said,
380
00:16:21,320 --> 00:16:25,160
we musicians are athletes of the fine muscles,
381
00:16:25,160 --> 00:16:27,240
and like athletes of the larger muscles,
382
00:16:27,240 --> 00:16:31,440
our meaning is created by our bodies doing things.
383
00:16:31,440 --> 00:16:33,360
Like our siblings in sports,
384
00:16:33,360 --> 00:16:36,280
we share, in vivo, the struggle, the joy,
385
00:16:36,280 --> 00:16:38,480
and the experience of our lived selves
386
00:16:38,480 --> 00:16:53,200
moving through the world.
387
00:16:53,200 --> 00:16:56,840
Embodied AI is a long way off, but I don't
388
00:16:56,840 --> 00:16:59,240
see any technical reason why we can't have
389
00:16:59,240 --> 00:17:02,000
musical terminators. Again, I'm not an AI
390
00:17:02,000 --> 00:17:03,480
researcher, so I don't know any of the
391
00:17:03,480 --> 00:17:05,079
actual nitty gritty with any of this stuff,
392
00:17:05,079 --> 00:17:07,239
but, you know, it could happen. And I also
393
00:17:07,240 --> 00:17:09,599
see how it would be possible to treat music
394
00:17:09,599 --> 00:17:11,719
more as a conversation between human and
395
00:17:11,720 --> 00:17:14,520
machine, passing a musical directive test by
396
00:17:14,520 --> 00:17:16,800
valuing and creating meaning in the process
397
00:17:16,800 --> 00:17:19,440
of musicking. So why am I so doubtful?
398
00:17:19,440 --> 00:17:21,040
Why is the thesis of this video that
399
00:17:21,040 --> 00:17:23,839
the musical Turing test will never be passed?
400
00:17:23,839 --> 00:17:28,840
We will never have musical machine intelligence.
401
00:17:35,000 --> 00:17:36,000
[CAPITALISM]
402
00:17:36,000 --> 00:17:38,280
The presumed autonomous thingness of works
403
00:17:38,280 --> 00:17:40,640
of music is, of course, only part of the
404
00:17:40,640 --> 00:17:42,760
prevailing modern philosophy of art in
405
00:17:42,760 --> 00:17:45,720
general. What is valued is not the action of
406
00:17:45,720 --> 00:17:48,520
art, not the act of creating, and even less
407
00:17:48,520 --> 00:17:50,800
that of perceiving and responding, but the
408
00:17:50,800 --> 00:17:54,400
created art object itself.
409
00:17:54,400 --> 00:17:55,760
You can sell a thing.
410
00:17:55,760 --> 00:17:57,440
It's harder to sell the process.
411
00:17:57,440 --> 00:17:59,040
If the process of making quality
412
00:17:59,040 --> 00:18:00,880
recorded music can be made more efficient,
413
00:18:00,880 --> 00:18:02,360
the market is incentivized to make the
414
00:18:02,360 --> 00:18:04,520
process as efficient as possible.
415
00:18:04,520 --> 00:18:06,640
Generative AI creates recorded music
416
00:18:06,640 --> 00:18:09,800
extraordinarily cost-effectively compared to
417
00:18:09,800 --> 00:18:11,640
the other ways you might do it. It fully
418
00:18:11,640 --> 00:18:13,440
automates a process that had previously
419
00:18:13,440 --> 00:18:15,840
required human input, much the same way that
420
00:18:15,840 --> 00:18:19,040
industrial capitalism automated making cars,
421
00:18:19,040 --> 00:18:22,440
making food, and making things,
422
00:18:22,440 --> 00:18:24,520
as long as those things are good enough. In
423
00:18:24,520 --> 00:18:26,160
other words, as long as they pass the
424
00:18:26,160 --> 00:18:29,000
musical output test, the process isn't
425
00:18:29,000 --> 00:18:31,000
relevant, only the product. There are
426
00:18:31,000 --> 00:18:32,840
billions of dollars now being thrown at
427
00:18:32,840 --> 00:18:34,720
developing generative models for language,
428
00:18:34,720 --> 00:18:36,360
images, and now music, because there are
429
00:18:36,360 --> 00:18:38,080
potentially billions of dollars to be made
430
00:18:38,080 --> 00:18:40,400
in the market. Spotify is now in on the
431
00:18:40,400 --> 00:18:42,920
generative AI music trend. There's just, you
432
00:18:42,920 --> 00:18:45,800
know, no money to be made in passing a music
433
00:18:45,800 --> 00:18:47,520
directive test. You're just making the
434
00:18:47,520 --> 00:18:49,280
process to get to the product less
435
00:18:49,280 --> 00:18:51,720
efficient. And I mean that very literally,
436
00:18:51,720 --> 00:18:53,440
too. The current prize for passing an
437
00:18:53,440 --> 00:18:55,280
improvisation-based Turing test is only
438
00:18:55,280 --> 00:18:58,360
$1,000. And with such weak market pressure
439
00:18:58,360 --> 00:19:00,080
to do something like this - I mean, I guess
440
00:19:00,080 --> 00:19:01,760
you could sell tickets to see this as part
441
00:19:01,760 --> 00:19:03,720
of a live show - there's just no reason to
442
00:19:03,720 --> 00:19:06,200
spend that kind of computational energy on
443
00:19:06,200 --> 00:19:08,400
doing this. The cloud computing power
444
00:19:08,400 --> 00:19:10,640
required to run large language models uses
445
00:19:10,640 --> 00:19:13,440
absurd amounts of energy. Energy consumption is a
446
00:19:13,440 --> 00:19:15,360
great bottleneck for AI, based on the
447
00:19:15,360 --> 00:19:17,400
extremely intensive computational demands of
448
00:19:17,400 --> 00:19:19,280
training. The carbon cost of image
449
00:19:19,280 --> 00:19:21,680
generation is staggeringly high, and raw
450
00:19:21,680 --> 00:19:23,880
audio generation is slated to be much
451
00:19:23,880 --> 00:19:25,360
higher. I mean, just to, I guess, put this
452
00:19:25,360 --> 00:19:28,080
in perspective, I think a gigawatt is
453
00:19:28,080 --> 00:19:29,640
around the size of
454
00:19:30,160 --> 00:19:32,400
a meaningful nuclear power plant,
455
00:19:32,400 --> 00:19:34,880
only going towards training a model.
456
00:19:34,880 --> 00:19:36,520
Who is going to build the equivalent
457
00:19:36,520 --> 00:19:40,040
of nuclear power plants so that robots can
458
00:19:40,040 --> 00:19:43,440
jam with me? It's just so much more
459
00:19:43,440 --> 00:19:45,760
efficient to have humans do the jamming.
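To make "more efficient" concrete, here is a deliberately crude back-of-envelope comparison. Every figure is a loose assumption for illustration, with the 1 GW number just taking the nuclear-plant comparison above at face value:

```python
# Crude back-of-envelope: humans jamming vs. big-model compute.
# All numbers are rough assumptions for illustration only.

HUMAN_WATTS = 100       # ballpark metabolic power per musician
BAND_SIZE = 3           # a trio on the bandstand
TRAINING_WATTS = 1e9    # ~1 GW, taking the power-plant comparison literally

ratio = TRAINING_WATTS / (HUMAN_WATTS * BAND_SIZE)
print(f"roughly {ratio:,.0f}x the power of a human trio jamming")
# -> roughly 3,333,333x the power of a human trio jamming
```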
460
00:19:45,760 --> 00:19:51,800
I'm old-fashioned and
461
00:19:51,800 --> 00:19:55,560
very idealistic about that. My feeling is
462
00:19:55,560 --> 00:19:59,920
I'll outplay anybody using the machine
463
00:19:59,920 --> 00:20:02,920
or I'll die. I don't care. The day that
464
00:20:02,920 --> 00:20:05,120
the machine outplays me, they can plant me
465
00:20:05,120 --> 00:20:08,320
in the yard with the corn. And I
466
00:20:08,320 --> 00:20:11,680
mean it. I'm very serious. I will not permit
467
00:20:11,680 --> 00:20:15,000
myself to be outplayed by someone using the
468
00:20:15,000 --> 00:20:18,080
machine. I'm just not going to permit that.
469
00:20:18,080 --> 00:20:19,720
You know, there are people that I respect
470
00:20:19,720 --> 00:20:22,280
that use this technology to make beautiful
471
00:20:22,280 --> 00:20:25,120
music that somehow captures what it means to
472
00:20:25,120 --> 00:20:28,080
be human in the year 2024. And I think
473
00:20:28,080 --> 00:20:29,640
that's exciting. But then there are people
474
00:20:29,640 --> 00:20:32,120
that I do not respect, like the people who
475
00:20:32,120 --> 00:20:35,080
run companies like Suno, Udio, and other AI
476
00:20:35,080 --> 00:20:38,440
companies, who have a very accelerationist
477
00:20:38,440 --> 00:20:41,360
mindset when it comes to this. It feels like
478
00:20:41,360 --> 00:20:44,800
music is just one more box to tick on
479
00:20:44,800 --> 00:20:47,520
the way to the singularity. Music is a
480
00:20:47,520 --> 00:20:50,280
problem that technology can solve. There
481
00:20:50,280 --> 00:20:52,640
seems to be a profound disinterest in the
482
00:20:52,640 --> 00:20:55,920
artistic process, why music sounds the way
483
00:20:55,920 --> 00:20:57,920
that it does. And so you get things like the
484
00:20:57,920 --> 00:20:59,960
beautiful, rich history of the blues, a
485
00:20:59,960 --> 00:21:02,360
Black American tradition, reduced to typing
486
00:21:02,360 --> 00:21:04,520
into a text prompt. Delta blues about
487
00:21:04,520 --> 00:21:06,760
cheeseburgers. That's why I refuse to call
488
00:21:06,760 --> 00:21:08,840
this stuff music, because the technology
489
00:21:08,840 --> 00:21:11,880
behind it is so aggressively anti human,
490
00:21:11,880 --> 00:21:15,240
anti history, anti music. And that's why I
491
00:21:15,240 --> 00:21:17,000
also feel like the musical directive test
492
00:21:17,000 --> 00:21:18,600
will never be passed because the people
493
00:21:18,600 --> 00:21:21,360
running the show just don't care. You know,
494
00:21:21,360 --> 00:21:22,840
one of the things that I've learned over the
495
00:21:22,840 --> 00:21:26,000
years talking about musicking on this channel
496
00:21:26,000 --> 00:21:28,080
is that you can change your relationship to
497
00:21:28,080 --> 00:21:31,040
music that you hate by choosing to musick it
498
00:21:31,040 --> 00:21:33,640
differently. And I kind of hate this Red
499
00:21:33,640 --> 00:21:37,700
Lobster tune because it kinda slaps. "Red
500
00:21:37,700 --> 00:21:49,400
Lobster, you got me up every
501
00:21:49,400 --> 00:21:55,880
single time."
502
00:22:25,040 --> 00:22:31,520
There's like a phrase extension too.
503
00:22:31,520 --> 00:22:38,160
Disgusting.
504
00:22:38,160 --> 00:22:41,280
Now, I owe a massive amount of context for
505
00:22:41,280 --> 00:22:42,720
everything that we've talked about here
506
00:22:42,720 --> 00:22:45,440
today to a video that I saw from the science
507
00:22:45,440 --> 00:22:48,440
creator Tibees - Toby Hendy, where she goes
508
00:22:48,440 --> 00:22:51,800
over Alan Turing's original 1950 paper,
509
00:22:51,800 --> 00:22:55,560
in which he first details his imitation game.
510
00:22:55,560 --> 00:22:58,440
She highlights just how visionary Turing's
511
00:22:58,440 --> 00:23:01,280
paper truly was, like how he predicted that
512
00:23:01,280 --> 00:23:03,360
machines would need a degree of randomness
513
00:23:03,360 --> 00:23:05,280
in them to evolve to have human like
514
00:23:05,280 --> 00:23:07,600
intelligence. This has actually turned out
515
00:23:07,600 --> 00:23:10,120
to be an essential part of modern machine
516
00:23:10,120 --> 00:23:13,160
learning. She also covers how Turing thought
517
00:23:13,160 --> 00:23:15,720
of potential objections to the idea that
518
00:23:15,720 --> 00:23:17,960
machines could become intelligent,
519
00:23:17,960 --> 00:23:20,240
including some weird ones, like how Turing
520
00:23:20,240 --> 00:23:22,760
felt he seriously needed to address the
521
00:23:22,760 --> 00:23:25,640
prospect of extrasensory perception.
522
00:23:25,640 --> 00:23:27,320
Anyway, this video was a great one, giving
523
00:23:27,320 --> 00:23:29,640
me some extra context about the history of
524
00:23:29,640 --> 00:23:31,920
machine learning. And you can find it and
525
00:23:31,920 --> 00:23:34,000
many more like it over on my streaming
526
00:23:34,000 --> 00:23:36,640
service, Nebula. Nebula is a creator-owned
527
00:23:36,640 --> 00:23:38,200
streaming service that was originally
528
00:23:38,200 --> 00:23:40,120
started as a means for creators to make
529
00:23:40,120 --> 00:23:42,800
interesting videos and essays free of the
530
00:23:42,800 --> 00:23:44,880
constraints of the recommendation algorithm.
531
00:23:44,880 --> 00:23:46,680
But it's since organically grown into,
532
00:23:46,680 --> 00:23:48,720
like, one of the genuinely best places on
533
00:23:48,720 --> 00:23:50,960
the Internet for curious folks to find
534
00:23:50,960 --> 00:23:52,920
exclusive, interesting, and enriching
535
00:23:52,920 --> 00:23:55,000
content. Like for example, on there you'll
536
00:23:55,000 --> 00:23:58,440
find science creators like Tibees and Jordan
537
00:23:58,440 --> 00:24:00,600
Harrod, who does some fantastic stuff with
538
00:24:00,600 --> 00:24:03,920
AI. You'll find amazing video essayists like
539
00:24:03,920 --> 00:24:06,840
the OG video essayist herself, Lindsay Ellis
540
00:24:06,840 --> 00:24:09,720
is on Nebula. And also Jacob Geller. If you
541
00:24:09,720 --> 00:24:12,000
have never seen a Jacob Geller video essay,
542
00:24:12,000 --> 00:24:14,120
I highly recommend you check out some Jacob
543
00:24:14,120 --> 00:24:16,840
Geller. Like some of this stuff is so
544
00:24:16,840 --> 00:24:18,640
beautiful. I think he's one of the best
545
00:24:18,640 --> 00:24:21,440
people in the game making video essays. Go
546
00:24:21,440 --> 00:24:23,280
check out some Jacob Geller. You'll also
547
00:24:23,280 --> 00:24:25,320
find some of my fellow music creators that I
548
00:24:25,320 --> 00:24:27,760
deeply love and respect, like the queen of
549
00:24:27,760 --> 00:24:31,040
jazz education herself, Aimee Nolte is
550
00:24:31,040 --> 00:24:33,920
on Nebula. You also have the wonderful music
551
00:24:33,920 --> 00:24:36,200
theorist 12Tone making videos over
552
00:24:36,200 --> 00:24:38,560
there. I genuinely love the community of
553
00:24:38,560 --> 00:24:41,000
creators over on Nebula. They are such a
554
00:24:41,000 --> 00:24:44,440
wealth of inspiration for me, and I know
555
00:24:44,440 --> 00:24:46,360
they will be a wealth of inspiration for you
556
00:24:46,360 --> 00:24:48,640
too. If you're already on Nebula, we're
557
00:24:48,640 --> 00:24:50,640
making it easier to find content for both
558
00:24:50,640 --> 00:24:53,320
new creators and existing favorites. There
559
00:24:53,320 --> 00:24:56,760
are categories now: news, culture, science,
560
00:24:56,760 --> 00:25:00,000
history, podcasts and classes.
561
00:25:00,000 --> 00:25:02,240
Each category is kind of like its own mini
562
00:25:02,240 --> 00:25:04,280
service. Like, I have some classes over
563
00:25:04,280 --> 00:25:06,640
there. I have a class on vlogging, which you
564
00:25:06,640 --> 00:25:08,520
might enjoy. I also have a class that I did
565
00:25:08,520 --> 00:25:11,800
with Aimee Nolte on jazz and blues that I know
566
00:25:11,800 --> 00:25:13,960
you will enjoy if you like my nerdy music
567
00:25:13,960 --> 00:25:16,520
theory channel. If you sign up using my link
568
00:25:16,520 --> 00:25:20,000
at Nebula.tv/adamneely, or use the
569
00:25:20,000 --> 00:25:22,360
link below, you can support me and all the
570
00:25:22,360 --> 00:25:24,400
creators over on Nebula directly and get
571
00:25:24,400 --> 00:25:27,640
Nebula for 40% off annual plans,
572
00:25:27,640 --> 00:25:31,080
which is as little as $2.50 a month.
573
00:25:31,080 --> 00:25:32,800
What's exciting, though, and genuinely
574
00:25:32,800 --> 00:25:35,200
unique to Nebula, I think, is that now
575
00:25:35,200 --> 00:25:37,440
Nebula is offering lifetime
576
00:25:37,440 --> 00:25:39,320
subscriptions, which means, yes, now until
577
00:25:39,320 --> 00:25:41,720
the singularity, the end of time. Thank you,
578
00:25:41,720 --> 00:25:44,080
Ray Kurzweil. You can be enjoying Nebula and
579
00:25:44,080 --> 00:25:46,360
all it has to offer. There are some big
580
00:25:46,360 --> 00:25:49,080
concept, high octane Nebula originals to be
581
00:25:49,080 --> 00:25:51,440
excited for coming this summer. Like
582
00:25:51,440 --> 00:25:54,200
Identiteaze, the debut short film from Jessie
583
00:25:54,200 --> 00:25:56,600
Gender coming this June. And of course, Jet
584
00:25:56,600 --> 00:25:58,960
Lag season ten is now in production. You
585
00:25:58,960 --> 00:26:00,720
actually might remember Toby from season
586
00:26:00,720 --> 00:26:02,880
five of Jet Lag The Game, but she'll also be
587
00:26:02,880 --> 00:26:04,680
appearing in the latest season alongside
588
00:26:04,680 --> 00:26:07,400
Ben, Adam, and Sam from Wendover. I'm a fan
589
00:26:07,400 --> 00:26:08,960
of Jet Lag The Game, by the way. It's kind
590
00:26:08,960 --> 00:26:11,640
of like reminds me of tour. It feels almost
591
00:26:11,640 --> 00:26:14,600
weirdly comfy watching them run around
592
00:26:14,600 --> 00:26:17,040
because it's like me running around Europe.
593
00:26:17,040 --> 00:26:19,960
Anyway, $300 gets you lifetime access to all
594
00:26:19,960 --> 00:26:21,680
of this great stuff and everything that
595
00:26:21,680 --> 00:26:22,880
Nebula will ever produce
596
00:26:22,880 --> 00:26:24,440
from now until the end of time.
597
00:26:24,440 --> 00:26:25,960
I love reading that, by the way.
598
00:26:25,960 --> 00:26:27,360
Now until the end of time.
599
00:26:27,360 --> 00:26:28,240
That's. That's fun.
600
00:26:29,120 --> 00:26:31,520
I'm very excited for the future of Nebula,
601
00:26:31,520 --> 00:26:33,120
and I think you will enjoy this
602
00:26:33,120 --> 00:26:36,000
community that aims to engage the world in a
603
00:26:36,000 --> 00:26:39,320
more meaningful human way.
604
00:26:39,320 --> 00:26:41,240
Thank you so much for watching. You can sign
605
00:26:41,240 --> 00:26:43,800
up today for either 40% off annual plans or
606
00:26:43,800 --> 00:26:47,360
$300 off lifetime access. And until next
607
00:26:47,360 --> 00:26:49,240
time, guys,