Oct. 22, 2025

Why Science and Philosophy Need Each Other | Lauren Ross & Megan Peters

Why do science and philosophy so often stand apart - and what happens when they come back together? In this thought-provoking conversation, Dr Tevin Naidu brings together Prof Lauren Ross (philosopher of science, UC Irvine) and Prof Megan Peters (computational neuroscientist, UC Irvine) to explore why modern science still depends on philosophical insight - and why philosophy needs to engage with empirical rigor.

TIMESTAMPS:
(00:00) - Introduction: Why science and philosophy need each other
(01:45) - Science vs. Philosophy: debunking the "anything goes" myth
(06:22) - What scientists misunderstand about philosophy
(09:38) - Philosophy and science as synergistic collaborators
(34:40) - Brains as model-builders: uncertainty, inference, and subjective experience
(37:47) - How noise & variability reveal links between brain models and experience
(39:39) - What counts as an explanation? Descriptions vs. why-questions in science
(41:19) - Defining the explanatory target in consciousness research (contrast & clarity)
(44:27) - Types of explanation: causal, mechanistic, computational, and mathematical
(47:28) - Levels of analysis: Marr, models, and matching methods to questions
(57:23) - The microprocessor/Mario example: what "perfect access" still fails to explain
(58:50) - Groundbreaking work: metacognition, psychophysics & linking model knobs to experience
(59:59) - Processing "under the hood": what the brain does without subjective access
(01:00:01) - Vision science & the limits of introspection: implications for consciousness studies
(01:26:39) - Is consciousness an epiphenomenon? Debate and conceptual framing
(01:28:24) - Precision of questions: why asking the right question matters for explanation
(01:29:54) - Plurality of explanatory targets: accepting piecemeal explanations for complex systems
(01:42:22) - Community & interdisciplinarity: building networks that bridge science and philosophy
(01:44:32) - Future horizons for consciousness research: what philosophy must confront next
(02:06:36) - Final reflections: how a philosophically informed neuroscience could reshape the field
(02:08:05) - Conclusion

EPISODE LINKS:
- Megan's Website: https://www.meganakpeters.org/
- Megan's Lab: https://www.cnclab.io/
- Neuromatch: https://neuromatch.io/
- Megan's LinkedIn: https://www.linkedin.com/in/megan-peters-58a86133/
- Megan's X: https://twitter.com/meganakpeters
- Could a Neuroscientist Understand a Microprocessor: https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1005268
- Neuromatch's Consciousness Day in the neuroAI course: https://neuroai.neuromatch.io/tutorials/W2D5_Mysteries/chapter_title.html
- Lauren's Website: https://sites.socsci.uci.edu/~rossl/
- Lauren's X: https://twitter.com/proflaurenross
- Lauren's LinkedIn: https://www.linkedin.com/in/lauren-ross-48522843/
- Explanation in Biology: https://www.cambridge.org/core/elements/explanation-in-biology/743A8C5A6E709B1E348FCD4D005C67B3

CONNECT:
- Website: https://mindbodysolution.org
- YouTube: https://youtube.com/@MindBodySolution
- Podcast: https://creators.spotify.com/pod/show/mindbodysolution
- Twitter: https://twitter.com/drtevinnaidu
- Facebook: https://facebook.com/drtevinnaidu
- Instagram: https://instagram.com/drtevinnaidu
- LinkedIn: https://linkedin.com/in/drtevinnaidu
- Website: https://tevinnaidu.com

=============================

Disclaimer: The information provided on this channel is for educational purposes only. The content is shared in the spirit of open discourse and does not constitute, nor does it substitute, professional or medical advice. We do not accept any liability for any loss or damage incurred from acting or not acting as a result of listening to or watching any of our content. You acknowledge that you use the information provided at your own risk. Listeners/viewers are advised to conduct their own research and consult their own experts in the respective fields.

1
00:00:08,280 --> 00:00:10,560
Megan, Lauren, thanks so much
for joining me today.

2
00:00:10,560 --> 00:00:12,400
We have an exciting colloquium
planned.

3
00:00:12,400 --> 00:00:15,440
The topic is why science and
philosophy need each other.

4
00:00:15,560 --> 00:00:18,680
I can't think of two more perfect
guests to be on the show because

5
00:00:19,120 --> 00:00:23,280
Lauren, you could be seen as a
scientifically informed

6
00:00:23,280 --> 00:00:25,240
philosopher.
Megan, you could be seen as a

7
00:00:25,400 --> 00:00:26,880
philosophically informed
scientist.

8
00:00:26,920 --> 00:00:29,680
However, one could argue that
both of you are actually both.

9
00:00:30,080 --> 00:00:32,600
So I think that brings us to
this first question, which is

10
00:00:32,600 --> 00:00:34,160
fundamental to setting the
stage.

11
00:00:34,600 --> 00:00:38,200
Why do both of you think science
and philosophy are often seen as

12
00:00:38,200 --> 00:00:42,400
separate, even antagonistic,
when historically they emerged

13
00:00:42,400 --> 00:00:44,520
together?
Lauren, do you want to perhaps

14
00:00:44,520 --> 00:00:45,720
start?
And then Megan, you can take it

15
00:00:45,720 --> 00:00:49,040
from there.
Sounds great.

16
00:00:49,600 --> 00:00:54,960
It's a very interesting
question, and it partly has to

17
00:00:54,960 --> 00:01:00,520
do with how we think of both
science and philosophy in modern

18
00:01:00,520 --> 00:01:05,400
times.
For science, usually we have

19
00:01:05,400 --> 00:01:13,440
more awareness and understanding
of what we're referring to, but

20
00:01:13,440 --> 00:01:16,520
with philosophy, that's less the
case.

21
00:01:16,600 --> 00:01:21,840
Philosophy is this label that
can be applied to many different

22
00:01:21,840 --> 00:01:24,200
things, many different types of
thinking.

23
00:01:24,200 --> 00:01:26,920
I mean, philosophy is also a
field, of course, that involves

24
00:01:27,240 --> 00:01:32,320
different types of work.
In everyday life conversations,

25
00:01:32,320 --> 00:01:37,760
we use the term philosophy to
sometimes refer to places where

26
00:01:37,760 --> 00:01:43,400
you have questions without
answers, a place where anything

27
00:01:43,400 --> 00:01:47,000
goes, where maybe we're
interested in your own

28
00:01:47,000 --> 00:01:49,880
subjective views, your beliefs
and your thoughts.

29
00:01:50,560 --> 00:01:54,680
And sometimes philosophy in this
space is viewed as the opposite

30
00:01:54,680 --> 00:01:59,120
of pragmatic.
It's your philosophical musings

31
00:01:59,120 --> 00:02:03,080
or you'll sometimes hear
even scientists say that they're

32
00:02:03,080 --> 00:02:06,640
asking a philosophical question
where they mean there's this

33
00:02:06,640 --> 00:02:11,560
sort of unbounded open question.
Or you'll hear the expression

34
00:02:11,560 --> 00:02:16,560
"mere philosophy".
So I think that starts to show

35
00:02:16,560 --> 00:02:20,120
and paint a picture for how
science and philosophy can be

36
00:02:20,120 --> 00:02:22,680
seen as opposites.
And I would say mistakenly,

37
00:02:22,920 --> 00:02:25,720
because that picture of
philosophy is not how we think

38
00:02:25,720 --> 00:02:27,760
of science.
This picture of philosophy

39
00:02:28,120 --> 00:02:31,920
questions without answers and
the sort of unbounded anything

40
00:02:31,920 --> 00:02:34,240
goes, well, science doesn't
operate like that.

41
00:02:34,680 --> 00:02:40,040
And part of what can help here
is to specify different types of

42
00:02:40,040 --> 00:02:42,640
philosophy, and in this case,
philosophy of science,

43
00:02:43,120 --> 00:02:46,440
philosophy of mind, philosophy
of cognitive science, where we

44
00:02:46,440 --> 00:02:48,960
don't have an anything goes type
project.

45
00:02:49,080 --> 00:02:52,880
We have an interest in the
principles of science.

46
00:02:52,880 --> 00:02:57,040
We have an interest in
precision, clarity, rigor.

47
00:02:57,840 --> 00:03:03,360
And so this would be one reason
I would suggest for why they're

48
00:03:03,360 --> 00:03:06,880
sometimes seen as separate or in
contrast.

49
00:03:06,880 --> 00:03:08,720
It'll partly depend on who you
talk to.

50
00:03:09,080 --> 00:03:15,720
Many scientists do view them as
helpful colleagues that kind of

51
00:03:15,720 --> 00:03:20,600
need to work together or that
when they do work together can

52
00:03:20,640 --> 00:03:22,440
lead to various types of
successes.

53
00:03:22,440 --> 00:03:25,520
So that would be a first answer
that I would give.

54
00:03:26,400 --> 00:03:31,320
Megan same question.
Yeah, actually there's a lot of

55
00:03:31,320 --> 00:03:34,760
what Lauren said that I agree
with, but I'm actually kind of

56
00:03:34,760 --> 00:03:38,120
surprised by this question in
general, because from my

57
00:03:38,120 --> 00:03:41,080
perspective, I don't find them
antagonistic at all.

58
00:03:41,080 --> 00:03:45,320
And I guess it's my privilege
that the philosophers and

59
00:03:45,320 --> 00:03:48,480
scientists that I tend to hang
out with might agree with me

60
00:03:48,480 --> 00:03:51,800
that they are not antagonistic.
But again, that's my own

61
00:03:51,800 --> 00:03:55,080
privilege in the space that I
choose to occupy and that I'm

62
00:03:55,080 --> 00:03:59,920
privileged to occupy.
I feel that, yes, there is

63
00:03:59,920 --> 00:04:04,960
this general tenor, this general
feeling that philosophy and

64
00:04:04,960 --> 00:04:08,720
science might be separate,
antagonistic, because there's

65
00:04:08,880 --> 00:04:11,280
the empirical scientists who are
doing the real work.

66
00:04:11,280 --> 00:04:14,240
And it's the philosophers who
are over here in their armchairs

67
00:04:14,240 --> 00:04:17,320
kind of, you know, deciding that
there is a difference when maybe

68
00:04:17,320 --> 00:04:20,360
there isn't really a difference
in the real world.

69
00:04:20,880 --> 00:04:24,400
And so there could be that
tension where it's like, do we

70
00:04:24,400 --> 00:04:27,040
really need to be having that
particular type of conversation?

71
00:04:27,040 --> 00:04:29,760
Does that really matter to the
experiments that I'm going to be

72
00:04:29,760 --> 00:04:33,920
doing?
But I think that again, the

73
00:04:34,080 --> 00:04:37,960
folks that I tend to interact
with within the science and the

74
00:04:37,960 --> 00:04:43,120
philosophy space recognize that
these are truly synergistic, not

75
00:04:43,120 --> 00:04:48,920
even just friendly, that they
both can learn from each other

76
00:04:48,920 --> 00:04:51,560
in the ways that Lauren
pointed out.

77
00:04:52,080 --> 00:04:54,840
But I will disagree, Lauren,
with one thing that you said,

78
00:04:55,160 --> 00:04:59,880
which is that the realm of
questions without answers might

79
00:04:59,880 --> 00:05:03,080
be more the philosophical realm.
And I feel like there's just so

80
00:05:03,080 --> 00:05:04,720
much of science that is that
too.

81
00:05:05,040 --> 00:05:08,800
There's so much where the
purpose of what we're doing as

82
00:05:08,800 --> 00:05:11,840
scientists at the at the cutting
edge, at the forefront of our

83
00:05:11,840 --> 00:05:15,920
knowledge is, well, maybe there
isn't an answer yet, but the

84
00:05:15,920 --> 00:05:19,120
role of philosophy and the role
of philosophically informed

85
00:05:19,120 --> 00:05:22,960
science could be to discover, is
there an answer to be had here?

86
00:05:23,720 --> 00:05:27,640
So we don't know the answer yet
and we need to decide, is this

87
00:05:27,640 --> 00:05:30,240
something that we could actually
go after scientifically?

88
00:05:30,600 --> 00:05:34,680
So a lot of what I do is a
question without an answer yet

89
00:05:34,680 --> 00:05:39,520
too.
So, yeah, I think that, sure,

90
00:05:39,520 --> 00:05:42,880
the general tenor might be that
these are antagonistic, but I

91
00:05:42,880 --> 00:05:46,200
think that there are quite a lot
of people who also disagree with

92
00:05:46,200 --> 00:05:50,240
that general assessment.
Yeah, I think that a lot of

93
00:05:50,240 --> 00:05:54,320
people actually forget that when
you do a PhD in anything

94
00:05:54,320 --> 00:05:56,840
scientific, you're just
fundamentally getting a

95
00:05:57,200 --> 00:06:00,400
doctorate in philosophy because
I mean, this is a

96
00:06:00,840 --> 00:06:03,640
rich history of
philosophy that's expanded over

97
00:06:03,640 --> 00:06:05,400
time.
Natural philosophy has become

98
00:06:05,680 --> 00:06:08,320
what science is today.
Lauren, for you, as a

99
00:06:08,480 --> 00:06:11,520
philosopher of science and a
trained physician, I know as a

100
00:06:11,520 --> 00:06:15,760
medical doctor myself, that
when I worked in the

101
00:06:15,760 --> 00:06:19,480
medical field, and when I do work
in the medical field, it's

102
00:06:19,480 --> 00:06:23,560
very philosophically uninformed.
It's quite a common theme.

103
00:06:23,560 --> 00:06:25,440
It's not that they're not
interested in it.

104
00:06:25,720 --> 00:06:28,560
It's just that they don't
have the time perhaps to explore

105
00:06:28,560 --> 00:06:32,400
this as much as maybe someone
like like myself does.

106
00:06:32,520 --> 00:06:36,200
So what do you think scientists
most often misunderstand about

107
00:06:36,200 --> 00:06:38,080
what philosophy actually
contributes?

108
00:06:40,560 --> 00:06:45,480
I think that scientists, if they
are misunderstanding what

109
00:06:45,480 --> 00:06:50,640
philosophy is here - primarily
philosophy of science and what

110
00:06:50,640 --> 00:06:55,320
it can contribute - it's this
view that philosophy

111
00:06:55,560 --> 00:07:02,880
is sort of so open and so
anything goes that it isn't

112
00:07:02,880 --> 00:07:10,000
useful, and that it's this space
where you can't always get

113
00:07:10,000 --> 00:07:13,880
traction on questions or how to
understand the world.

114
00:07:14,240 --> 00:07:17,720
Part of what's coming up in this
question and part of, as you

115
00:07:17,720 --> 00:07:22,480
mentioned, Tevin, why this is so
interesting, the kind of current

116
00:07:22,480 --> 00:07:26,520
views in some areas that these
are disparate types of study is

117
00:07:26,520 --> 00:07:32,640
that they very much were part of
the same program in early work.

118
00:07:32,640 --> 00:07:36,680
So natural philosophy, we refer
to early scientists as natural

119
00:07:36,680 --> 00:07:39,560
philosophers.
Aristotle is both someone we

120
00:07:39,560 --> 00:07:43,800
think of as an early biologist
and also a philosopher.

121
00:07:43,800 --> 00:07:49,520
And in Darwin's lifetime, the
term scientist was created.

122
00:07:49,520 --> 00:07:52,960
And so he was a natural
philosopher perhaps early on in

123
00:07:52,960 --> 00:07:55,720
his career and then was only
referred to as a scientist

124
00:07:55,720 --> 00:07:58,520
later.
So they very much do have a

125
00:07:58,520 --> 00:08:01,640
shared root and a shared
history.

126
00:08:02,480 --> 00:08:07,480
Currently, the common
misconception I see is that

127
00:08:07,680 --> 00:08:13,680
philosophy is so open that you
can't use it to get guidance.

128
00:08:14,080 --> 00:08:18,360
And that's, as Megan is
suggesting, very much

129
00:08:18,360 --> 00:08:21,880
antithetical to the kind of work
that you see in a lot of

130
00:08:21,880 --> 00:08:25,160
philosophy of cog-sci, a lot of
philosophy of mind, a lot of

131
00:08:25,160 --> 00:08:28,040
philosophy of neuroscience.
This is a space where you have

132
00:08:28,040 --> 00:08:34,080
philosophers who are doing work

133
00:08:34,600 --> 00:08:38,280
where they're interested in, I'd
say, three main things.

134
00:08:38,520 --> 00:08:43,840
There's an interest in getting
precision about foundational

135
00:08:44,200 --> 00:08:49,920
topics and methods in science.
They want to know the principles

136
00:08:49,920 --> 00:08:54,000
and the justification that are
guiding those concepts, those

137
00:08:54,000 --> 00:08:56,400
methods.
And then they want to know, and

138
00:08:56,400 --> 00:08:59,840
they want to be able to specify
how something works.

139
00:08:59,880 --> 00:09:04,040
If you have a scientist giving
an explanation, how do you know

140
00:09:04,160 --> 00:09:08,080
it's a good one?
If you have scientists debating

141
00:09:08,280 --> 00:09:12,640
how we should understand
causation or what the mechanism

142
00:09:12,640 --> 00:09:16,800
is for something, how do you
know when it works and when it

143
00:09:16,800 --> 00:09:19,960
doesn't?
So here we're often looking at

144
00:09:19,960 --> 00:09:22,720
science from a functional
perspective where scientists

145
00:09:22,720 --> 00:09:27,720
have goals and you can assess
the success of science with

146
00:09:27,720 --> 00:09:30,280
respect to when scientists are
reaching those goals.

147
00:09:30,760 --> 00:09:35,920
And so in this space, we think
of science as a practice

148
00:09:35,920 --> 00:09:38,200
that gives us our best
understanding of the world.

149
00:09:38,600 --> 00:09:42,600
And it often involves this
theorizing that we sometimes

150
00:09:42,600 --> 00:09:45,720
call philosophy, that scientists
are very much doing and

151
00:09:45,720 --> 00:09:49,360
philosophers of science are
engaged in as well, where

152
00:09:49,360 --> 00:09:54,720
you're looking at these
fundamental scientific concepts

153
00:09:55,120 --> 00:09:57,800
and practices that scientists
engage in.

154
00:09:58,040 --> 00:10:01,000
If science does give us our best
understanding of the world, we

155
00:10:01,000 --> 00:10:03,400
should be able to say how it
does.

156
00:10:03,920 --> 00:10:08,840
And that's where here it's
helpful to get precision about

157
00:10:09,000 --> 00:10:12,680
what is an explanation in
science, what is causation?

158
00:10:13,000 --> 00:10:15,880
What is getting information
about the causal structure of

159
00:10:15,880 --> 00:10:18,440
the world?
What are the principles that

160
00:10:18,440 --> 00:10:22,400
scientists use that we can
identify to help guide work in

161
00:10:22,400 --> 00:10:24,320
this space?
And then how do you know when it

162
00:10:24,320 --> 00:10:27,360
works?
How do you know when scientists

163
00:10:27,360 --> 00:10:29,480
have met the standards of their
field?

164
00:10:29,560 --> 00:10:33,240
And that partly involves
specifying what they are.

165
00:10:33,240 --> 00:10:38,720
And so I think it's sometimes
surprising to physicians,

166
00:10:39,360 --> 00:10:42,800
healthcare practitioners who are
more in a professional space and

167
00:10:42,800 --> 00:10:50,200
they aren't necessarily
theorizing the way that other

168
00:10:50,200 --> 00:10:54,400
types of scientists are, to hear
that this is a kind of

169
00:10:54,400 --> 00:10:59,760
philosophy - that this is a
kind of work that happens in

170
00:10:59,760 --> 00:11:01,440
philosophy and philosophy of
science.

171
00:11:01,960 --> 00:11:06,160
So, yeah, there's a kind of
difference, I think, across

172
00:11:06,920 --> 00:11:12,160
types of researchers.
Some of them are more on the

173
00:11:12,160 --> 00:11:16,920
front lines of professional
work, and maybe others are more

174
00:11:16,920 --> 00:11:19,240
engaged with research.
And you have some of these

175
00:11:19,240 --> 00:11:23,560
researchers who are working with
philosophers and kind of

176
00:11:23,560 --> 00:11:26,000
interested in these theoretical
questions that show up in

177
00:11:26,000 --> 00:11:30,360
philosophy of science.
Megan, as a neuroscientist deeply

178
00:11:30,360 --> 00:11:33,960
grounded in philosophy, what do
philosophers sometimes overlook

179
00:11:34,080 --> 00:11:36,480
about how scientific practice
really works today?

180
00:11:38,160 --> 00:11:41,160
You're asking me to say
what's wrong with all my

181
00:11:41,160 --> 00:11:47,040
colleagues.
So I think for me this is a

182
00:11:47,080 --> 00:11:50,280
challenge that maybe
philosophers face more than some

183
00:11:50,280 --> 00:11:52,840
scientists, but scientists
certainly face this challenge as

184
00:11:52,840 --> 00:11:56,360
well.
And that is, I hinted at this

185
00:11:56,360 --> 00:11:58,840
earlier, this idea of is this a
difference that makes a

186
00:11:58,840 --> 00:12:01,120
difference?
So philosophers of science,

187
00:12:01,120 --> 00:12:03,840
philosophers of mind,
philosophers of modeling, of

188
00:12:03,840 --> 00:12:09,600
cognitive science, will often try
to drive at the conceptual

189
00:12:09,600 --> 00:12:13,120
distinctions that provide
clarity with respect to the

190
00:12:13,120 --> 00:12:17,240
questions that we're asking and
an assessment of the validity of

191
00:12:17,240 --> 00:12:20,160
the methods that we're using to
answer those questions.

192
00:12:21,120 --> 00:12:24,880
But sometimes I think
philosophers and scientists to a

193
00:12:24,880 --> 00:12:31,160
certain extent as well, we get
so into the details of finding

194
00:12:31,320 --> 00:12:34,800
the joints in nature, you know,
finding the separation between

195
00:12:34,800 --> 00:12:38,720
two concepts that ultimately, if
we were to take a step back and

196
00:12:38,720 --> 00:12:41,600
say, all right, well, maybe
there is this separation in

197
00:12:41,600 --> 00:12:46,760
concepts that you've identified,
this difference between concept

198
00:12:46,760 --> 00:12:50,440
A and concept B that you've
started to really home in on.

199
00:12:50,920 --> 00:12:55,840
How could we ever know if that's
a real difference - and

200
00:12:55,840 --> 00:12:57,960
not just that this is a
difference that we can

201
00:12:58,720 --> 00:13:03,280
conceptualize, that we can come
up with, that we can describe,

202
00:13:03,720 --> 00:13:05,960
but that this is a real
difference in the world.

203
00:13:06,400 --> 00:13:12,920
This is a real joint in nature.
And I think that sometimes the

204
00:13:12,920 --> 00:13:16,400
pushback that scientists will
give towards philosophers is

205
00:13:16,400 --> 00:13:19,600
this like, Oh, well, you're
making distinctions that don't

206
00:13:19,600 --> 00:13:23,160
really have any bearing on
anything that's physical, that's

207
00:13:23,160 --> 00:13:26,920
real, that's empirical.
And so you're really

208
00:13:26,920 --> 00:13:30,360
just kind of in this space, as
Lauren said, where

209
00:13:30,360 --> 00:13:32,600
anything goes. Like, you've
discovered a

210
00:13:32,600 --> 00:13:34,560
difference and you've decided
that that's an important

211
00:13:34,560 --> 00:13:38,080
difference.
But I think that the hard part

212
00:13:38,080 --> 00:13:43,200
is not just dismissing these
differences or these

213
00:13:43,200 --> 00:13:46,000
distinctions and saying, well, I
could never test for them.

214
00:13:46,600 --> 00:13:49,520
So it's not a
meaningful distinction.

215
00:13:50,080 --> 00:13:52,360
The hard part is deciding
whether there is a meaningful

216
00:13:52,360 --> 00:13:56,400
distinction there.
And so deciding whether this is

217
00:13:56,400 --> 00:14:00,560
a problem where a philosopher of
science or cog-sci or modeling

218
00:14:00,960 --> 00:14:04,760
has come up with this
distinction that may or may not

219
00:14:04,760 --> 00:14:08,560
be empirically testable.
And the challenge is to say, do

220
00:14:08,560 --> 00:14:10,400
we care to empirically test
this?

221
00:14:10,440 --> 00:14:14,040
And if we do care to empirically
test it, can we even come up

222
00:14:14,040 --> 00:14:16,800
with something that would allow
us to see whether this joint in

223
00:14:16,800 --> 00:14:21,080
nature is actually present?
And so I think that that's a

224
00:14:21,080 --> 00:14:27,760
hard hump between science
and some philosophy, where some

225
00:14:27,760 --> 00:14:31,440
more pure philosophers of
science will see the intrinsic

226
00:14:31,440 --> 00:14:35,320
value of making the distinction
and clarifying it to begin with.

227
00:14:35,760 --> 00:14:38,720
And some empirical scientists
will say, well, that's great,

228
00:14:39,200 --> 00:14:41,360
you can write it down, that's
lovely.

229
00:14:41,360 --> 00:14:44,800
You can draw a picture.
But like, do I actually care?

230
00:14:45,000 --> 00:14:48,440
Is this a thing that I can go
and find with some sort of

231
00:14:48,440 --> 00:14:51,280
empirical study?
So that, I think would be the

232
00:14:51,280 --> 00:14:55,400
closest thing that I can think
of to a kind of thing that

233
00:14:55,680 --> 00:15:01,360
philosophers might overlook - or
where the relative value

234
00:15:01,360 --> 00:15:05,640
placed on that enterprise is
different between philosophy of

235
00:15:05,640 --> 00:15:10,000
science and empirical science.
Megan, before we started, while

236
00:15:10,000 --> 00:15:12,960
we were waiting - Lauren, I
might have mistakenly

237
00:15:12,960 --> 00:15:15,760
sent the link to the
wrong place - but we were

238
00:15:15,760 --> 00:15:17,480
chatting about one of our
favorite heroes,

239
00:15:17,480 --> 00:15:19,920
Daniel Dennett.

240
00:15:20,960 --> 00:15:24,440
And I often looked at Dan,
growing up, as a

241
00:15:24,440 --> 00:15:25,960
neuroscientist and a
philosopher.

242
00:15:25,960 --> 00:15:29,200
He was someone so ingrained into
both of these fields.

243
00:15:29,520 --> 00:15:33,080
And he often touched on this
deeper reflection culturally.

244
00:15:33,120 --> 00:15:36,800
So does this
reflect something deeper,

245
00:15:36,800 --> 00:15:40,960
perhaps - like objectivity
versus reflection, or "shut up and

246
00:15:40,960 --> 00:15:45,040
calculate" versus "anything goes".
What do both of you think about

247
00:15:45,040 --> 00:15:47,040
this?
And how might we bridge this

248
00:15:47,080 --> 00:15:50,000
divide to transform how we study
the mind and consciousness?

249
00:15:53,720 --> 00:16:00,000
It partly relates to what has
come up already because, as

250
00:16:00,000 --> 00:16:03,800
Megan suggested, it sometimes
is confusing to think of there

251
00:16:03,800 --> 00:16:06,920
being a difference between this
kind of philosophical work and

252
00:16:06,920 --> 00:16:11,880
science, because you see
scientists who are engaged in

253
00:16:11,880 --> 00:16:13,760
philosophical questions and
theorizing.

254
00:16:13,760 --> 00:16:16,760
So from my perspective, I look
at what they're doing and

255
00:16:17,280 --> 00:16:21,240
they're doing philosophy, and
then looking at these

256
00:16:21,240 --> 00:16:25,080
philosophers who are interested
in providing analysis and

257
00:16:25,080 --> 00:16:27,960
accounts that, as Megan was
suggesting, kind of latch onto

258
00:16:27,960 --> 00:16:31,600
the world, that matter.
You can do something with them.

259
00:16:31,680 --> 00:16:36,080
You can show why this would be a
good account to have or not.

260
00:16:36,760 --> 00:16:42,040
You see how both of them are
really interrelated types of

261
00:16:42,040 --> 00:16:45,480
projects.
And so I think it partly boils

262
00:16:45,480 --> 00:16:48,400
down to sometimes we have
cartoon pictures of both.

263
00:16:48,760 --> 00:16:52,920
We have a kind of cartoon
picture of a scientist who just

264
00:16:53,280 --> 00:16:57,080
takes out a measuring device and
goes out and studies the world.

265
00:16:57,480 --> 00:17:01,080
And what you miss if you look at
that picture, is all the

266
00:17:01,080 --> 00:17:03,920
theorizing that took place
before you set up that

267
00:17:03,920 --> 00:17:05,880
experiment.
There's so many assumptions

268
00:17:05,880 --> 00:17:08,359
involved.
There's so many methods you can

269
00:17:08,359 --> 00:17:11,040
choose from.
There's so many questions that

270
00:17:11,040 --> 00:17:15,160
scientists need to and do ask
themselves and answer before they

271
00:17:15,160 --> 00:17:19,119
just go out and get the
objective facts about the world.

272
00:17:19,119 --> 00:17:21,480
And it's going to depend on the
questions they ask, which is

273
00:17:21,480 --> 00:17:25,359
partly what Megan brought up.
And in some cases, you've got to

274
00:17:25,359 --> 00:17:28,840
ask the right kind of question
too, or appreciate that different

275
00:17:28,840 --> 00:17:32,440
questions require different
methods and then they give you

276
00:17:32,440 --> 00:17:35,720
different answers.
This is also just fascinating

277
00:17:35,720 --> 00:17:38,680
from the standpoint of how
complex the world is.

278
00:17:39,080 --> 00:17:42,000
Scientists have to deal with
that and they want order.

279
00:17:42,400 --> 00:17:45,840
And it's fascinating how they're
able to do that given the

280
00:17:45,840 --> 00:17:51,040
complexity of the world.
And so, you know, they are able

281
00:17:51,040 --> 00:17:54,280
to do that.
We kind of look at the places

282
00:17:54,360 --> 00:17:57,720
where they've done it, and then
we're looking at these other

283
00:17:57,720 --> 00:18:01,400
situations where there's a
complex new question, there's

284
00:18:01,400 --> 00:18:04,720
some new territory they're
trying to understand, right?

285
00:18:04,840 --> 00:18:08,520
Is the brain the most
complicated machine on the

286
00:18:08,520 --> 00:18:11,400
planet?
You know, the brain is so

287
00:18:11,400 --> 00:18:13,800
complicated, the world is so
complicated.

288
00:18:13,880 --> 00:18:19,440
And so you see how they're
making decisions about what to

289
00:18:19,640 --> 00:18:23,680
do with that because they can't
cite all the detail that's out there

290
00:18:23,680 --> 00:18:27,200
and not all of it matters.
So they have to figure out what

291
00:18:27,680 --> 00:18:31,800
details of the world matter and
how to carve out questions that

292
00:18:31,800 --> 00:18:38,080
allow them to give principled
answers to those kinds of topics

293
00:18:38,080 --> 00:18:41,080
of interest.
And so this is both science and

294
00:18:41,080 --> 00:18:45,640
philosophy, the way I think
Megan and I often see it.

295
00:18:45,640 --> 00:18:49,200
But if you have a toy picture of
science and a toy picture of

296
00:18:49,200 --> 00:18:51,600
philosophy, they look very
distinct.

297
00:18:51,600 --> 00:18:53,680
And there are types of
philosophy, as Megan is

298
00:18:53,680 --> 00:18:56,760
suggesting, where it's more
armchair type work.

299
00:18:57,320 --> 00:19:03,200
And in this case what we want is
we want philosophy that's useful

300
00:19:03,280 --> 00:19:05,240
for these kinds of scientific
questions.

301
00:19:05,280 --> 00:19:07,560
And we see many examples of
that.

302
00:19:07,920 --> 00:19:12,480
That would be part
of the kind of answer I would

303
00:19:12,480 --> 00:19:17,960
give.
I totally agree with the

304
00:19:18,000 --> 00:19:22,360
cartoonification of science
versus philosophy, and in

305
00:19:22,360 --> 00:19:26,640
particular this version of
science, which is that you pull

306
00:19:26,640 --> 00:19:29,480
out your, you know,
measurement-o-meter or whatever, and you

307
00:19:29,480 --> 00:19:32,040
point it at the thing and you
get some sort of objective

308
00:19:32,320 --> 00:19:35,080
answer.
And so this mischaracterization

309
00:19:35,080 --> 00:19:39,000
of philosophy as this subjective
anything goes and science is

310
00:19:39,000 --> 00:19:41,880
objective and like, definitely
we're just measuring the world.

311
00:19:42,480 --> 00:19:46,720
No, like there is no such thing
as objective science.

312
00:19:46,720 --> 00:19:49,840
Sorry, but there just isn't.

313
00:19:50,040 --> 00:19:55,280
We carry - as you said, Lauren -
the assumptions that we make

314
00:19:55,280 --> 00:19:58,600
about the structure of reality,
about the types of measurements

315
00:19:58,600 --> 00:20:01,760
that are going to be useful,
about the types of models that

316
00:20:01,760 --> 00:20:04,840
we can build that will be useful
to answering a particular type

317
00:20:04,840 --> 00:20:08,080
of question or achieving a
particular type of explanatory

318
00:20:08,080 --> 00:20:12,040
goal.
There's so many cases where if

319
00:20:12,040 --> 00:20:16,600
you actually do kind of a
historical overview of a

320
00:20:16,600 --> 00:20:19,800
particular niche field.
For example, like this

321
00:20:19,800 --> 00:20:24,160
particular type of model of
decisions and reaction times

322
00:20:24,560 --> 00:20:28,040
in neuroscience - you know, how
do people make decisions in

323
00:20:28,040 --> 00:20:30,800
a noisy environment and how long
does it take them to come to a

324
00:20:30,800 --> 00:20:33,360
decision under the conditions of
noise in the world?

325
00:20:33,360 --> 00:20:36,320
You know, you're driving down a
foggy road - it's foggy.

326
00:20:36,360 --> 00:20:40,040
How long do you take to decide
what you're seeing, right?

327
00:20:40,120 --> 00:20:41,840
And what do you decide that
you're seeing?

328
00:20:42,280 --> 00:20:45,520
There's models of that kind of
decision process and those

329
00:20:45,520 --> 00:20:50,200
models have been successful for
literally decades since they

330
00:20:50,200 --> 00:20:51,840
were developed.
And there's been a lot of really

331
00:20:51,840 --> 00:20:54,760
beautiful work to say this is
now like the dominant

332
00:20:54,760 --> 00:20:57,880
explanation of how we make these
types of decisions.
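
[Editor's note: the decision-and-reaction-time models Megan describes here are commonly formalized as drift-diffusion models. The sketch below is a minimal illustration of that general idea - not the specific model from her work - in which noisy evidence accumulates toward a bound, jointly producing a choice and a reaction time. The function name and all parameter values are hypothetical assumptions.]

```python
import random

def simulate_decision(drift=0.5, noise=1.0, bound=1.0, dt=0.01, max_t=5.0):
    """Accumulate noisy evidence until a decision bound is crossed.

    Returns (choice, reaction_time): +1/-1 for the upper/lower bound,
    or (None, max_t) if no bound is reached in time.
    """
    evidence, t = 0.0, 0.0
    while t < max_t:
        # Each step adds signal (drift) plus Gaussian noise - the "fog".
        evidence += drift * dt + random.gauss(0.0, noise) * dt ** 0.5
        t += dt
        if evidence >= bound:
            return +1, t
        if evidence <= -bound:
            return -1, t
    return None, t

# A clearer signal tends to give fast, accurate choices; a foggier one
# gives slower, more error-prone choices - the qualitative pattern these
# models capture in reaction-time data.
print(simulate_decision(drift=2.0, noise=1.0))  # clear view
print(simulate_decision(drift=0.2, noise=1.5))  # foggy road
```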

333
00:20:58,240 --> 00:21:02,160
But they have assumptions, and
those assumptions drive the

334
00:21:02,160 --> 00:21:07,760
experiments that are done to
generate the objective empirical

335
00:21:07,760 --> 00:21:11,800
data that then goes on to
validate or cause those models

336
00:21:11,800 --> 00:21:16,080
to be modified a little bit.
And if you take a step back and

337
00:21:16,080 --> 00:21:20,160
you look at those assumptions,
they have constrained the space

338
00:21:20,160 --> 00:21:25,720
of inquiry in a way that
obscured potential alternative

339
00:21:25,720 --> 00:21:28,880
explanations.
So this is a particular hobby

340
00:21:28,880 --> 00:21:31,120
horse of mine because we've got
a couple papers on this

341
00:21:31,120 --> 00:21:33,000
recently.
But I think the general

342
00:21:33,000 --> 00:21:38,200
principle applies across all of
science, not just cognitive

343
00:21:38,200 --> 00:21:42,080
science and psychology and you
know, complexity sciences within

344
00:21:42,080 --> 00:21:46,120
neuroscience, but in general,
the, the way you think the world

345
00:21:46,120 --> 00:21:49,720
works and the models that you've
built and their relative success

346
00:21:49,720 --> 00:21:54,360
in capturing the components of
the system that you're trying to

347
00:21:54,360 --> 00:22:01,000
explain - that gives you myopia.
And if you don't get out of

348
00:22:01,000 --> 00:22:03,400
that, if you don't take off the
blinders, you're going to miss a

349
00:22:03,400 --> 00:22:07,040
whole lot.
And simply the recognition that

350
00:22:07,040 --> 00:22:10,120
you have blinders on in the
first place allows you to

351
00:22:10,120 --> 00:22:14,040
acknowledge that science is not
an objective enterprise, that

352
00:22:14,040 --> 00:22:18,640
there is always a scientist in
the picture, and that we are

353
00:22:18,640 --> 00:22:21,520
human beings and we have biases
and we have preconceived notions

354
00:22:21,520 --> 00:22:26,920
and we have assumptions, and
we shape the way that we go

355
00:22:26,920 --> 00:22:30,720
about trying to understand the
world in ways that we

356
00:22:30,720 --> 00:22:32,320
may not be fully aware of at
all.

357
00:22:32,320 --> 00:22:35,240
Those biases and implicit
assumptions are, well, implicit.

358
00:22:35,320 --> 00:22:40,080
They are deeply buried, and
they're going to shape the new

359
00:22:40,080 --> 00:22:44,440
models that we build.
So I fully, fully agree with

360
00:22:44,440 --> 00:22:47,720
Lauren here.
And this is another case where I

361
00:22:47,720 --> 00:22:50,240
think there's a, you know,
seeming divide between

362
00:22:50,920 --> 00:22:54,520
objectivity and subjectivity,
philosophy versus science, that

363
00:22:54,520 --> 00:22:56,920
kind of thing.
And we're kidding ourselves

364
00:22:56,920 --> 00:22:59,480
if we think that science is
truly objective, because it just

365
00:22:59,480 --> 00:23:03,520
really is not.
Well, OK, so the stage is set,

366
00:23:03,600 --> 00:23:06,480
and I think now would be a great
time to explore both of your work

367
00:23:07,000 --> 00:23:10,000
together while trying to
illuminate each other's work.

368
00:23:10,240 --> 00:23:13,080
So, with that being
said, let's try this.

369
00:23:13,560 --> 00:23:18,120
Megan, could you perhaps tell us
why Lauren's work helps

370
00:23:18,680 --> 00:23:20,360
illuminate?
So let's say, why does her

371
00:23:20,360 --> 00:23:22,720
philosophical work help
illuminate science?

372
00:23:22,840 --> 00:23:25,280
And then I'm going to ask you,
Lauren, do the same question but

373
00:23:25,280 --> 00:23:27,440
in reverse.
Sure.

374
00:23:28,400 --> 00:23:32,280
So, as was probably said,
you know, in my

375
00:23:32,280 --> 00:23:34,600
introduction - you can go
Google both of us.

376
00:23:35,000 --> 00:23:39,440
So I am a philosopher and
scientist of subjective

377
00:23:39,440 --> 00:23:41,440
experience.
I study the brain and the mind.

378
00:23:41,440 --> 00:23:44,320
I try to reverse engineer the
software that's running on the

379
00:23:44,320 --> 00:23:48,840
wetware of our brains and how
that creates the subjective

380
00:23:48,840 --> 00:23:53,200
experiences that you have of the
world and the models that you

381
00:23:53,200 --> 00:23:57,240
build and query and kind of run
forward to predict what's going

382
00:23:57,240 --> 00:23:59,760
to happen in your environment
and how you're going to interact

383
00:23:59,760 --> 00:24:03,200
with it.
So the kind of work that Lauren

384
00:24:03,200 --> 00:24:07,280
does is really helpful to me
because it brings this

385
00:24:07,280 --> 00:24:09,640
conceptual clarity.
You know, consciousness

386
00:24:09,640 --> 00:24:14,520
science as a broadly writ field
is a little bit all over the

387
00:24:14,520 --> 00:24:16,840
place.
You've got everybody from folks

388
00:24:16,840 --> 00:24:20,120
who are studying this from, you
know, kind of the, the quantum

389
00:24:20,120 --> 00:24:21,880
or mathematical side.
And then you've got the

390
00:24:21,880 --> 00:24:24,160
cognitive neuroscientists who
like to go look at brain

391
00:24:24,160 --> 00:24:25,320
activity.
And then you've got the

392
00:24:25,320 --> 00:24:28,120
theoreticians.
So it's, it's a little bit all

393
00:24:28,120 --> 00:24:29,720
over the place, like a lot of
fields.

394
00:24:29,720 --> 00:24:33,200
Sure, you've got a lot of
interdisciplinarity, but the

395
00:24:33,200 --> 00:24:37,560
nature of what we are studying
as folks who are interested in

396
00:24:37,560 --> 00:24:43,160
subjective experience is even
less objectively identifiable

397
00:24:43,160 --> 00:24:46,440
than basically anything else in
the world because it is the

398
00:24:46,440 --> 00:24:48,880
thing that lives inside your
head by definition.

399
00:24:49,960 --> 00:24:55,360
And so having clarity on those
concepts or seeking clarity on

400
00:24:55,360 --> 00:24:58,160
those concepts, what do I mean
when I say consciousness, when I

401
00:24:58,160 --> 00:25:01,520
say subjective experience, when
I say qualitative experience?

402
00:25:02,640 --> 00:25:08,800
This is what Lauren's work gives us.
And I saw this very clearly

403
00:25:08,800 --> 00:25:10,840
actually at the Southern
California Consciousness

404
00:25:10,840 --> 00:25:14,800
Conference that we both went to,
I don't know, last spring where

405
00:25:14,800 --> 00:25:17,400
Lauren kept pushing the rest of
us scientists in the room to

406
00:25:17,440 --> 00:25:20,800
say: what is it, what actually
are you trying to explain?

407
00:25:22,040 --> 00:25:24,120
What is the target of your
explanation?

408
00:25:24,600 --> 00:25:27,560
Because every time you all say
the word consciousness, I'm

409
00:25:27,560 --> 00:25:30,600
paraphrasing here, Lauren was a
lot more, you know, diplomatic.

410
00:25:30,600 --> 00:25:34,000
But basically, you know, every
time that we said the word

411
00:25:34,000 --> 00:25:36,280
consciousness, everybody in the
room meant something slightly

412
00:25:36,280 --> 00:25:38,400
different.
And it wasn't -

413
00:25:38,400 --> 00:25:41,640
this isn't just a taxonomic or
linguistic problem.

414
00:25:41,680 --> 00:25:43,640
This is a conceptual clarity
problem.

415
00:25:44,560 --> 00:25:51,480
And so I think that for the kind
of work that I do and, even more

416
00:25:51,480 --> 00:25:54,080
expansively, the kind of work that
any cognitive scientist or

417
00:25:54,080 --> 00:25:56,640
computational neuroscientist
does, where we're really trying

418
00:25:56,640 --> 00:25:59,200
to reverse engineer the software
of the mind.

419
00:25:59,480 --> 00:26:04,680
In a lot of ways, the target of
the explanation itself is

420
00:26:04,680 --> 00:26:08,760
unclear from the beginning.
And it's really hard to come up

421
00:26:08,760 --> 00:26:12,960
with a nicely constrained little
box to live in and say that is

422
00:26:12,960 --> 00:26:14,760
the thing that I want to
explain.

423
00:26:15,280 --> 00:26:19,120
And so this is where someone
like Lauren comes in - and Lauren is

424
00:26:19,120 --> 00:26:22,640
particularly good at doing this
in a way that corrals the cats

425
00:26:22,640 --> 00:26:25,360
and herds the cats into coming
up with something useful.

426
00:26:26,960 --> 00:26:30,680
It's really valuable
because without that clarity,

427
00:26:31,080 --> 00:26:34,120
we're just going to have the
same conversations over and over

428
00:26:34,120 --> 00:26:36,920
and over again and they will
always devolve into what is it

429
00:26:36,920 --> 00:26:38,440
that we're even trying to
understand.

430
00:26:40,480 --> 00:26:42,360
Lauren, same question, but about
Megan's work.

431
00:26:44,120 --> 00:26:48,240
Perfect.
It's so important as a

432
00:26:48,240 --> 00:26:55,200
philosopher of science to talk
to actual scientists to make

433
00:26:55,200 --> 00:27:00,000
sure that the way you're
characterizing what they do

434
00:27:00,760 --> 00:27:06,760
makes sense, is accurate, and it
sort of keeps you in check a

435
00:27:06,760 --> 00:27:10,920
bit.
One of the challenges of my

436
00:27:10,920 --> 00:27:14,920
field is that sometimes
philosophers will have toy

437
00:27:15,720 --> 00:27:20,120
simplified characterizations of
what scientists are interested

438
00:27:20,120 --> 00:27:23,560
in, what they want to explain,
and then what they're doing in

439
00:27:23,560 --> 00:27:27,280
the first place.
And so one of the areas I work

440
00:27:27,280 --> 00:27:30,600
on is scientific explanation.
How do scientists give

441
00:27:30,600 --> 00:27:32,920
explanations?
How do you know they've got a

442
00:27:32,920 --> 00:27:35,800
real one?
What are the standards that need

443
00:27:35,800 --> 00:27:38,400
to be met?
Well, one thing you need as a

444
00:27:38,400 --> 00:27:41,240
philosopher of science, if
you're going to do that well, is

445
00:27:41,240 --> 00:27:44,120
you need to capture the actual
explanatory targets that

446
00:27:44,120 --> 00:27:49,160
scientists are interested in.
And so one of the many values of

447
00:27:49,160 --> 00:27:55,600
talking to Megan is looking at
the types of explanatory targets

448
00:27:55,840 --> 00:28:01,000
that she's interested in, in her
work and then in her field,

449
00:28:01,480 --> 00:28:06,040
they're far more complicated
than a lot of the more simple

450
00:28:06,040 --> 00:28:08,920
models we have for how
explanations work.

451
00:28:09,400 --> 00:28:14,520
And so if we're going to provide
hopefully accurate accounts of

452
00:28:14,520 --> 00:28:19,600
scientific explanation, we need
to make sure that we're not just

453
00:28:19,600 --> 00:28:25,280
talking about explaining how if
you throw a rock at a bottle,

454
00:28:25,280 --> 00:28:28,600
it shatters, which is, you
know, one of these kinds of

455
00:28:28,600 --> 00:28:33,640
classic examples that show up a
lot in philosophy that are often

456
00:28:33,640 --> 00:28:35,680
quite simple.
They have an explanatory target

457
00:28:35,680 --> 00:28:37,560
that's binary.
It sort of happens or it

458
00:28:37,560 --> 00:28:40,240
doesn't.
And you can even think of these

459
00:28:40,240 --> 00:28:44,960
examples that are more sciency.
So you might want to explain eye

460
00:28:44,960 --> 00:28:48,800
color in a fruit fly.
There's different colors that

461
00:28:48,800 --> 00:28:51,280
will show up and you want to
know, well, what explains why it's

462
00:28:51,280 --> 00:28:55,200
got red eyes or white or black.
Or you might want to explain the

463
00:28:55,200 --> 00:28:57,520
height of a plant.
You have genetically identical

464
00:28:57,520 --> 00:28:59,080
plants and they've got different
heights.

465
00:28:59,160 --> 00:29:03,960
What explains that?
Those are getting us real

466
00:29:03,960 --> 00:29:12,000
scientific examples, but those
are so much more simplified and

467
00:29:14,360 --> 00:29:17,880
less complex when you compare them
to something like explaining

468
00:29:18,000 --> 00:29:20,560
subjective experience.
When you look at explaining

469
00:29:20,560 --> 00:29:23,720
consciousness, even when you
look at explaining disease

470
00:29:23,720 --> 00:29:27,360
outcomes that are harder to
identify and measure.

471
00:29:27,360 --> 00:29:31,320
And so keeping us honest, right?
And so that's one of the main

472
00:29:31,320 --> 00:29:37,600
advantages of working with
Megan is it keeps your

473
00:29:37,600 --> 00:29:47,320
philosophy honest, both in terms
of are we actually capturing the

474
00:29:47,920 --> 00:29:51,000
phenomena in the world that
scientists are interested in, that

475
00:29:51,000 --> 00:29:52,800
they're studying and then how
they do it.

476
00:29:52,800 --> 00:29:57,600
So another nice thing that Megan
mentioned is that scientists,

477
00:29:58,080 --> 00:30:01,360
you know, and humans, when we're
reasoning in everyday life and

478
00:30:01,360 --> 00:30:05,120
in scientific contexts, we have
limited information about the

479
00:30:05,120 --> 00:30:10,400
world.
We don't have that picture where

480
00:30:10,400 --> 00:30:14,040
you've got information about all
of the details.

481
00:30:15,000 --> 00:30:20,760
And so one of the features we
need to include in our accounts

482
00:30:20,760 --> 00:30:24,560
is that limitation.
When humans reason, there's

483
00:30:24,560 --> 00:30:28,560
limitations in terms of
computational abilities,

484
00:30:28,560 --> 00:30:31,280
computational power, the time
scale in which they're making

485
00:30:31,280 --> 00:30:33,960
decisions.
Scientists are humans and so

486
00:30:35,040 --> 00:30:38,240
what's important is that our
accounts of explanation need to

487
00:30:38,240 --> 00:30:42,080
include those limitations, but
also how they manage to be

488
00:30:42,080 --> 00:30:45,160
successful despite those
constraints.

489
00:30:45,160 --> 00:30:51,720
And so part of what is so
helpful about interdisciplinary

490
00:30:51,720 --> 00:30:55,080
connections - being a
philosopher of science working

491
00:30:55,080 --> 00:30:59,360
with an actual scientist - is that
when we're coming up with

492
00:31:00,360 --> 00:31:03,720
accounts of how scientific
practice and explanations work,

493
00:31:03,920 --> 00:31:08,920
you can actually check it with
the practice of scientists that

494
00:31:08,920 --> 00:31:11,880
are right next door to you.
You can talk to them about it.

495
00:31:13,040 --> 00:31:17,560
You can make sure that you have
clarity on what their goals are,

496
00:31:17,640 --> 00:31:19,160
right?
That's, that's something that's

497
00:31:19,160 --> 00:31:23,960
very important: in order for
us to provide criteria for

498
00:31:23,960 --> 00:31:27,560
explanation or ways of
understanding causality that are

499
00:31:27,560 --> 00:31:31,200
useful, we need to know
what goals scientists have.

500
00:31:31,200 --> 00:31:35,200
And then are these concepts
useful for their goals?

501
00:31:35,320 --> 00:31:42,720
And so there's a whole host of
reasons why working with Megan

502
00:31:42,720 --> 00:31:49,480
and talking with Megan kind of
helps keep my philosophy honest

503
00:31:49,480 --> 00:31:53,000
in a way that I wouldn't be able
to do on my own, right?

504
00:31:53,000 --> 00:31:57,440
Because she's doing that kind of
scientific work in a way that

505
00:31:57,440 --> 00:31:59,280
I'm not.
So it's a big advantage of this

506
00:31:59,280 --> 00:32:03,240
interdisciplinary approach.
Yeah, I completely agree.

507
00:32:03,240 --> 00:32:06,320
I think that both of your work
works together.

508
00:32:06,440 --> 00:32:08,040
It's a very symbiotic
relationship.

509
00:32:08,040 --> 00:32:09,880
It's something that should
be seen as one.

510
00:32:09,880 --> 00:32:12,120
And I think that by the end of
this conversation, hopefully you

511
00:32:12,120 --> 00:32:16,320
both do identify as both
philosopher and neuroscientist.

512
00:32:16,320 --> 00:32:18,680
But Megan, let's go to
your work for a moment.

513
00:32:18,840 --> 00:32:21,880
In computational and cognitive
neuroscience, models attempt to

514
00:32:21,880 --> 00:32:24,280
capture how the brain handles
uncertainty.

515
00:32:24,960 --> 00:32:28,000
What can these models truly
reveal about something you just

516
00:32:28,000 --> 00:32:30,640
touched on earlier: subjective
experience?

517
00:32:31,560 --> 00:32:35,400
So if this is truly subjective,
are these models going to give

518
00:32:35,400 --> 00:32:37,440
us any sort of objective
information?

519
00:32:39,120 --> 00:32:42,600
Yeah, great, great question.
And this is maybe not the hard

520
00:32:42,600 --> 00:32:44,880
problem, but this is one of the
hard questions, right?

521
00:32:44,880 --> 00:32:50,080
So the idea here is: can any
empirical science give us any

522
00:32:50,080 --> 00:32:55,760
sort of foothold or toehold or
fingernail hold on something

523
00:32:55,760 --> 00:32:57,480
that we might refer to as the
hard problem?

524
00:32:57,560 --> 00:32:59,840
And then the nature of
subjective experience.

525
00:33:00,480 --> 00:33:06,000
And I think, you know, I'm
gonna use a couple of overused

526
00:33:06,000 --> 00:33:09,200
examples here maybe to explain
where I'm coming from.

527
00:33:09,200 --> 00:33:15,600
But a lot of folks in the
philosophically informed science

528
00:33:15,600 --> 00:33:20,760
of consciousness might say that
consciousness science right now

529
00:33:20,760 --> 00:33:23,880
is in the state that the life
sciences were in, you know, several

530
00:33:23,880 --> 00:33:26,440
hundred years ago, where there was
this magical force that we

531
00:33:26,440 --> 00:33:29,680
called life.
And it was this vital force.

532
00:33:29,920 --> 00:33:32,400
And we didn't know what it was,
but it was like a thing that was

533
00:33:32,400 --> 00:33:36,280
out there, and it was magic.
And then, as we learned more

534
00:33:36,280 --> 00:33:40,280
about biology, the problem just
kind of dissolved: we found

535
00:33:40,280 --> 00:33:44,360
ways of describing and
explaining what was going on

536
00:33:44,680 --> 00:33:47,560
that made it very clear, well,
this is a thing that's alive and

537
00:33:47,560 --> 00:33:48,720
this is a thing that's not
alive.

538
00:33:48,720 --> 00:33:51,640
And this is a thing that's maybe
halfway in between like viruses

539
00:33:52,000 --> 00:33:53,840
and we're not really sure
whether they're alive or not

540
00:33:53,840 --> 00:33:55,600
alive by different
definitions.

541
00:33:55,600 --> 00:33:58,320
But it kind of doesn't matter
where the bifurcation is, where

542
00:33:58,320 --> 00:34:00,120
we put that binary point
anymore.

543
00:34:00,560 --> 00:34:05,760
And I feel like I agree with the
folks who will state

544
00:34:05,760 --> 00:34:10,239
that consciousness science may
have a similar future ahead of

545
00:34:10,239 --> 00:34:13,480
it, where right now we have this
monolithic thing that we call

546
00:34:13,480 --> 00:34:15,760
consciousness or subjective
experience.

547
00:34:16,719 --> 00:34:23,880
And it seems like there is this
massive explanatory gap, but the

548
00:34:23,880 --> 00:34:27,400
reality very well could be that
as we approach that explanatory

549
00:34:27,400 --> 00:34:31,440
gap, it shrinks - it
appears to be this big chasm

550
00:34:31,440 --> 00:34:33,679
from over here.
But as we take tiny baby steps

551
00:34:33,679 --> 00:34:37,120
towards it, it turns out that
that was an illusion or a

552
00:34:37,120 --> 00:34:41,159
mirage or something.
So I think that the work that

553
00:34:41,159 --> 00:34:45,400
we're doing on how the brain
deals with uncertainty, how it

554
00:34:45,400 --> 00:34:49,520
arrives at the best that it
can - that kind of inference to

555
00:34:49,520 --> 00:34:52,400
the best explanation.
You know, your brain is itself a

556
00:34:52,400 --> 00:34:55,280
natural philosopher, in that
it's trying to understand the

557
00:34:55,280 --> 00:34:58,400
environment and build a model of
the environment all the time.
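
[Editor's note: the "brain as natural philosopher / model-builder" idea here is often cast in Bayesian terms. Below is a minimal sketch of that framing - an assumption of this note, not a model stated in the episode - where a prior belief is combined with a noisy observation to yield an updated belief. All names and numbers are hypothetical.]

```python
def posterior(prior, likelihoods, observation):
    """Bayes' rule over discrete hypotheses: P(state | obs) is
    proportional to P(obs | state) * P(state)."""
    unnorm = {s: prior[s] * likelihoods[s][observation] for s in prior}
    z = sum(unnorm.values())
    return {s: p / z for s, p in unnorm.items()}

# Driving in fog: is the shape ahead a deer or a bush?
prior = {"deer": 0.1, "bush": 0.9}          # background expectation
likelihoods = {
    "deer": {"moves": 0.8, "still": 0.2},   # deer are likely to move
    "bush": {"moves": 0.1, "still": 0.9},   # bushes mostly stay still
}
# Seeing movement shifts belief sharply toward "deer" (from 0.10 to ~0.47).
print(posterior(prior, likelihoods, "moves"))
```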

558
00:34:58,400 --> 00:35:03,800
It's doing what scientists are
trying to do, you know,

559
00:35:03,800 --> 00:35:05,600
with help from philosophers of
science.

560
00:35:05,680 --> 00:35:11,600
And so I think that in a way,
understanding how the brain

561
00:35:11,640 --> 00:35:13,680
is building these models of the
world.

562
00:35:14,120 --> 00:35:18,680
The result of those models is
ultimately somehow magically our

563
00:35:18,680 --> 00:35:21,040
subjective experience.
Unless you want to deny that

564
00:35:21,040 --> 00:35:24,240
subjective experience exists.
And that again, might be like,

565
00:35:24,560 --> 00:35:27,560
OK, I'm going to leave that over
there for the folks who want

566
00:35:28,160 --> 00:35:30,800
to argue that maybe
subjective experience doesn't

567
00:35:30,800 --> 00:35:33,280
exist.
But for me, it's a useful

568
00:35:33,280 --> 00:35:38,000
assumption to say, Yep,
subjective experience exists,

569
00:35:38,280 --> 00:35:43,360
conscious awareness exists.
So I'm going to try to build

570
00:35:43,480 --> 00:35:51,440
ways of capturing variance in it
and linking that variance to

571
00:35:52,920 --> 00:35:56,280
simplified components of models
that I build.

572
00:35:56,600 --> 00:35:59,640
If I twist this knob in my
model, it predicts that some

573
00:35:59,640 --> 00:36:02,960
sort of output on the subjective
experience side is going to

574
00:36:02,960 --> 00:36:06,320
change in a particular way.
I'd go do an experiment.

575
00:36:06,520 --> 00:36:09,840
Yeah, it did. OK. When people say,
oh, I have a stronger subjective

576
00:36:09,840 --> 00:36:12,080
experience.
OK, so maybe I'm on to something

577
00:36:12,080 --> 00:36:14,320
there.
I'll link it up with the brain

578
00:36:14,320 --> 00:36:17,360
and say, OK, if I twist this
knob, then I see like this area

579
00:36:17,360 --> 00:36:20,640
of the brain lights up more or
the pattern changes or

580
00:36:20,640 --> 00:36:24,040
something, then I can say, OK, I
think that this is the nature of

581
00:36:24,040 --> 00:36:27,880
the information being
represented in the patterns of

582
00:36:27,880 --> 00:36:30,840
neural activity.
And it maps onto this component

583
00:36:30,840 --> 00:36:33,520
of the model and it maps onto
this report of your subjective

584
00:36:33,520 --> 00:36:36,000
experience.
So that's how I'm trying to go

585
00:36:36,000 --> 00:36:38,760
about it.
I'm not going to say that any

586
00:36:38,760 --> 00:36:45,240
work that I'm doing is
solving any sort of

587
00:36:45,240 --> 00:36:48,160
hard problem or jumping any sort
of explanatory gap.
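
[Editor's note: a toy sketch of the "twist a knob in the model, predict the report" workflow Megan describes - purely illustrative, not her lab's actual model. A single parameter (a signal "gain" knob) is varied, and the model's predicted confidence reports shift in a direction that can then be checked against experimental data. The function name and all values are hypothetical.]

```python
import random

def model_confidence(signal_gain, noise=1.0, n_trials=10_000):
    """Minimal observer model: 'confidence' is the magnitude of noisy
    internal evidence for a fixed stimulus, scaled by a gain knob."""
    stimulus = 1.0  # fixed stimulus strength - an arbitrary assumption
    total = 0.0
    for _ in range(n_trials):
        evidence = signal_gain * stimulus + random.gauss(0.0, noise)
        total += abs(evidence)
    return total / n_trials  # mean predicted confidence rating

# Twist the knob: the model predicts that higher gain yields stronger
# reported subjective experience (higher mean confidence) - a direction
# of change one could then test in an experiment or link to brain data.
low = model_confidence(signal_gain=0.5)
high = model_confidence(signal_gain=2.0)
print(f"predicted mean confidence: gain 0.5 -> {low:.2f}, gain 2.0 -> {high:.2f}")
assert high > low
```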

588
00:36:48,160 --> 00:36:52,400
But I think that if we sit over
here and we say, hey, look at

589
00:36:52,400 --> 00:36:55,520
that explanatory gap, it's the
size of the Grand Canyon.

590
00:36:55,760 --> 00:36:58,240
I'm not even going to bother
approaching it to see how big it

591
00:36:58,240 --> 00:37:00,080
is.
I don't think that that's a

592
00:37:00,080 --> 00:37:02,800
useful enterprise.
So I want to

593
00:37:02,800 --> 00:37:05,200
create approaches to take those
baby steps.

594
00:37:05,200 --> 00:37:08,320
And that's some of the work that
we're doing on metacognition

595
00:37:08,320 --> 00:37:12,680
specifically: not just
understanding how the brain kind

596
00:37:12,680 --> 00:37:15,680
of builds models of the world or
how the mind builds models of

597
00:37:15,680 --> 00:37:19,680
the world, but how it also puts
itself into those models, how it

598
00:37:19,680 --> 00:37:23,200
builds models of itself.
And the subjective experiences

599
00:37:23,200 --> 00:37:28,360
that we have are ultimately the
reflection of a combination of

600
00:37:28,360 --> 00:37:33,360
the model that we've built of
our environments and kind of our

601
00:37:33,360 --> 00:37:38,080
own understanding or
introspective insight into that

602
00:37:38,080 --> 00:37:41,000
model that we built that we can
query and and evaluate that

603
00:37:41,000 --> 00:37:43,760
model and look at it.
So that's how I use uncertainty

604
00:37:43,880 --> 00:37:51,800
or noise or variation: to look
at how it

605
00:37:51,800 --> 00:37:54,880
interacts with the subjective
experiences that we can report

606
00:37:54,880 --> 00:37:57,160
in these kinds of experimental
approaches.
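[As an illustration of the knob-twisting workflow just described, a hedged sketch: a toy signal-detection observer in which one "knob" (sensory noise) predicts a measurable change in confidence reports. The observer, the Bayesian confidence readout, and the names are assumptions for illustration, not Megan's actual models.]

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_confidence(noise_sd, n_trials=10_000, signal=1.0):
    """Toy observer: stimulus +signal, noisy internal sample, decision by
    sign, confidence read out as the posterior probability that the
    chosen side is correct (a standard signal-detection-style readout)."""
    x = signal + rng.normal(0.0, noise_sd, n_trials)
    conf = 1.0 / (1.0 + np.exp(-2.0 * signal * np.abs(x) / noise_sd**2))
    return conf.mean()

# The "knob" is sensory noise; the model predicts that turning it up
# lowers mean reported confidence. An experiment then checks whether
# subjective reports follow the predicted ordering.
for noise_sd in (0.5, 1.0, 2.0):
    print(noise_sd, mean_confidence(noise_sd))
```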

607
00:37:58,760 --> 00:38:01,560
Lauren, my question for you,
you'll notice that these

608
00:38:01,560 --> 00:38:05,240
questions sort of inform each
other and then bounce back and

609
00:38:05,240 --> 00:38:08,160
forth.
So, as a sort

610
00:38:08,160 --> 00:38:12,200
of reply to Megan, your
research, Lauren, distinguishes

611
00:38:12,200 --> 00:38:15,040
between types of explanation.
So, mechanistic, causal,

612
00:38:15,040 --> 00:38:19,600
unification-based.
When a neuroscientist claims

613
00:38:19,600 --> 00:38:22,680
that they've explained
something, for example

614
00:38:22,680 --> 00:38:25,600
consciousness, which form of
explanation are they actually

615
00:38:25,600 --> 00:38:30,640
offering?
I think the short answer to this

616
00:38:31,080 --> 00:38:37,720
question is that it's still a
bit of an open

617
00:38:37,720 --> 00:38:45,720
question, what we
expect these explanations to

618
00:38:45,720 --> 00:38:50,080
meet in terms of the criteria.
I also think that few people in

619
00:38:50,080 --> 00:38:54,200
this space suggest that they
have a full explanation or

620
00:38:54,640 --> 00:38:58,440
almost any explanation of
consciousness.

621
00:38:58,800 --> 00:39:03,560
So let me back up a little bit
here. In my work in philosophy of

622
00:39:03,560 --> 00:39:07,160
science, we study scientific
explanation.

623
00:39:07,600 --> 00:39:12,320
What does it take for a
scientist to have an explanation

624
00:39:12,320 --> 00:39:15,480
and to give an explanation?
Something that's very important

625
00:39:15,480 --> 00:39:22,120
about this space is
saying a little bit about what

626
00:39:22,160 --> 00:39:25,520
explanation is.
So we often think of explanation

627
00:39:25,520 --> 00:39:29,640
as one of the most important
things that scientists do.

628
00:39:29,800 --> 00:39:31,960
It's a very difficult thing for
them to do.

629
00:39:32,200 --> 00:39:36,280
We think of explanations as
giving deep understanding of the

630
00:39:36,280 --> 00:39:38,920
world.
So in this sense, explanation is

631
00:39:38,920 --> 00:39:42,120
different from other types of
projects that scientists engage

632
00:39:42,120 --> 00:39:45,000
in that are very important
projects, like giving

633
00:39:45,440 --> 00:39:49,520
descriptions of the world.
So I can describe the color of a

634
00:39:49,520 --> 00:39:53,000
leaf on a tree, but I haven't
explained why it has that color.

635
00:39:53,000 --> 00:39:55,280
So that's a description.
Scientists engage in

636
00:39:55,280 --> 00:39:58,920
classification.
They sort things into helpful

637
00:39:58,920 --> 00:40:02,480
categories.
That's also not an explanation

638
00:40:02,520 --> 00:40:05,200
of something in the world.
And in other cases they give

639
00:40:05,200 --> 00:40:08,280
predictions.
And giving a prediction is of

640
00:40:08,280 --> 00:40:11,680
course very useful, but it's not
yet giving an explanation.

641
00:40:11,920 --> 00:40:15,160
We think of explanations as
answering why questions.

642
00:40:15,560 --> 00:40:18,640
So why is it the case that that
leaf is green?

643
00:40:18,880 --> 00:40:22,000
Why does this person have a
disease as opposed to not?

644
00:40:22,280 --> 00:40:28,280
Why does this
plant have a certain height as

645
00:40:28,280 --> 00:40:29,680
opposed to having another
height?

646
00:40:29,960 --> 00:40:37,280
And so a first thing to
point out is that explanations

647
00:40:37,280 --> 00:40:39,440
offer deep understanding of the
world.

648
00:40:39,440 --> 00:40:41,920
We want to know what criteria
they need to meet to know that

649
00:40:41,920 --> 00:40:45,440
we have good ones, right?
How do you know when you have a

650
00:40:45,440 --> 00:40:50,400
good or an appropriate
explanation of any kind of

651
00:40:50,400 --> 00:40:54,680
phenomenon of interest, right? A
disease outcome, social

652
00:40:54,680 --> 00:40:56,920
inequalities, right?
This doesn't just extend to

653
00:40:56,920 --> 00:40:59,080
neuroscience.
This is all scientific domains.

654
00:40:59,400 --> 00:41:03,520
So the two parts that you see
here for an explanation or two

655
00:41:03,520 --> 00:41:07,200
parts that show up: you first
need to ask an explanatory why

656
00:41:07,200 --> 00:41:12,280
question, or you can couch your
explanatory target in terms of a

657
00:41:12,280 --> 00:41:15,400
why question, right?
What explains consciousness is

658
00:41:15,400 --> 00:41:19,560
going to be a sort of start or
you can put in any kind of

659
00:41:20,080 --> 00:41:22,600
target of interest.
So you ask a why question.

660
00:41:22,840 --> 00:41:27,760
The explanation is the answer.
So why is it the case that this

661
00:41:27,760 --> 00:41:30,440
patient has measles as opposed
to not?

662
00:41:30,520 --> 00:41:34,600
Well, part of the explanation is
there's some virus that they

663
00:41:35,040 --> 00:41:38,840
encountered and then there's a
bunch of other interactions in

664
00:41:38,840 --> 00:41:42,320
the immune system that explain
why they have that disease

665
00:41:42,320 --> 00:41:44,760
outcome.
So, two parts of an explanation:

666
00:41:45,880 --> 00:41:49,840
explanatory why question and
then your answer to that

667
00:41:49,840 --> 00:41:53,200
question.
So in order to give an

668
00:41:53,200 --> 00:41:56,680
explanation for something, you
need to say what you want to

669
00:41:56,680 --> 00:42:00,280
explain.
And that's where that why

670
00:42:00,280 --> 00:42:04,320
question shows up.
And there's actually a lot of

671
00:42:04,320 --> 00:42:08,960
features involved in providing a
well defined explanatory target.

672
00:42:09,280 --> 00:42:11,920
And so right now in
consciousness research, there's

673
00:42:11,920 --> 00:42:19,200
debate and investigation and
discussion about what's the

674
00:42:19,200 --> 00:42:21,600
explanatory target and then
what's the answer?

675
00:42:21,600 --> 00:42:24,720
What's the stuff that explains
that target?

676
00:42:24,720 --> 00:42:29,560
And as Megan mentioned, there
are many different explanatory

677
00:42:29,560 --> 00:42:32,160
targets that are showing up in
consciousness research.

678
00:42:32,160 --> 00:42:37,680
And part of the challenge is
being very clear about which one

679
00:42:37,800 --> 00:42:41,960
a scientist is interested in.
So saying you know what explains

680
00:42:41,960 --> 00:42:45,720
consciousness, that's not a well
defined scientific question yet.

681
00:42:45,920 --> 00:42:49,280
It's not yet a well defined
explanatory why question for two

682
00:42:49,280 --> 00:42:51,960
reasons.
First, you need to define

683
00:42:52,120 --> 00:42:55,640
consciousness and we don't have
a consensus definition.

684
00:42:55,640 --> 00:42:58,480
So then you need to be precise
about which one you have in

685
00:42:58,480 --> 00:43:00,520
mind.
And then the second is you need

686
00:43:00,520 --> 00:43:03,160
a contrast.
You always have to say as

687
00:43:03,160 --> 00:43:06,200
opposed to what?
So if I'm interested in

688
00:43:06,200 --> 00:43:10,040
explaining why someone has a
loss of sensation in their hand,

689
00:43:11,200 --> 00:43:13,680
I can't just say what explains
why they have a loss of

690
00:43:13,680 --> 00:43:16,320
sensation in their hand.
I have to say as opposed to

691
00:43:16,320 --> 00:43:19,560
what?
As opposed to full sensation in

692
00:43:19,560 --> 00:43:22,960
their hand, or as opposed to a
loss of sensation in their leg,

693
00:43:23,280 --> 00:43:25,680
right.
If I don't specify the contrast,

694
00:43:26,280 --> 00:43:28,000
you don't know what answer to
give me.

695
00:43:28,160 --> 00:43:32,520
And so part of what
philosophers of science do here

696
00:43:32,520 --> 00:43:35,480
is we're looking at what are the
things that need to be met to

697
00:43:35,480 --> 00:43:38,000
have a well defined explanatory
target.

698
00:43:38,000 --> 00:43:40,400
And you see them in other
scientific fields.

699
00:43:40,400 --> 00:43:44,440
So we're looking at cases where
we have scientists who've

700
00:43:44,440 --> 00:43:48,840
successfully given explanations
and we're looking at the

701
00:43:48,840 --> 00:43:50,360
criteria.
And then we're looking at

702
00:43:50,360 --> 00:43:52,840
consciousness research and these
other spaces where you have

703
00:43:52,840 --> 00:43:57,800
scientists working on answering
really difficult questions that

704
00:43:58,120 --> 00:44:01,880
we don't yet have answers to.
But you first have to ask the

705
00:44:01,880 --> 00:44:05,440
right kind of question before
you can get an answer.

706
00:44:05,880 --> 00:44:08,040
And so there's two main
challenges.

707
00:44:08,040 --> 00:44:11,880
What's the right question?
And then in terms of what's the

708
00:44:11,880 --> 00:44:17,360
right answer, here's where you
start to see what you need to

709
00:44:17,560 --> 00:44:20,320
give an answer.
Do you want causal information?

710
00:44:20,480 --> 00:44:22,160
Do you want a causal
explanation?

711
00:44:22,600 --> 00:44:24,440
Do you want a functional
explanation?

712
00:44:25,120 --> 00:44:30,200
We sometimes think that with
computational explanations,

713
00:44:30,400 --> 00:44:33,440
there's something there that we
need that's going to help answer

714
00:44:33,440 --> 00:44:36,440
that question.
Mechanism, of course, shows up

715
00:44:37,240 --> 00:44:40,560
in philosophy of science.
We have different categories of

716
00:44:40,560 --> 00:44:42,760
explanations.
Causal is a main one.

717
00:44:43,800 --> 00:44:47,000
I would put mechanistic
explanation as just a causal

718
00:44:47,000 --> 00:44:50,160
explanation.
Mechanisms are just saying

719
00:44:50,160 --> 00:44:53,640
you've identified.
Well, in most cases mechanism is

720
00:44:53,640 --> 00:44:57,080
a causal explanation.
In other cases there might be a

721
00:44:57,080 --> 00:44:59,560
non-causal mathematical
explanation.

722
00:45:00,720 --> 00:45:04,960
So I guess 3 categories I would
pin down are causal explanation,

723
00:45:06,200 --> 00:45:08,880
non-causal mathematical.
There's a lot of debate about

724
00:45:08,880 --> 00:45:12,400
what those look like.
Functional explanations you

725
00:45:12,400 --> 00:45:14,560
could think like evolutionary
explanation.

726
00:45:14,560 --> 00:45:16,880
That's not quite what we're
interested in here.

727
00:45:17,920 --> 00:45:22,680
And computational, there's a
question what you know, are

728
00:45:22,680 --> 00:45:25,200
computational explanations
causal?

729
00:45:25,200 --> 00:45:27,160
Are they a subcategory
of causal?

730
00:45:27,720 --> 00:45:33,480
But for the most part, we're
often interested in causal

731
00:45:33,480 --> 00:45:36,080
explanations.
So you're, you're looking for

732
00:45:36,080 --> 00:45:39,520
the main factors that cause
that target of interest.

733
00:45:39,520 --> 00:45:43,720
And there's also debate here
about this in consciousness research.

734
00:45:43,720 --> 00:45:46,360
Do you have the right factors
there?

735
00:45:46,800 --> 00:45:49,960
If you're interested in
correlates, neural correlates,

736
00:45:49,960 --> 00:45:57,840
there's often a bit of slippage
in how that's used.

737
00:45:58,080 --> 00:46:00,960
But if something is a mere
correlation with your target,

738
00:46:01,080 --> 00:46:03,560
then you don't yet have
causality.

739
00:46:04,120 --> 00:46:09,840
So this is where a philosopher
is working with scientists

740
00:46:09,960 --> 00:46:14,320
to help determine what are
your different explanatory

741
00:46:14,320 --> 00:46:19,920
targets, because that's going to
help you get the right answer to

742
00:46:19,920 --> 00:46:23,840
that question.
And what I would say is there

743
00:46:23,840 --> 00:46:27,840
isn't one question here.
There almost never is for

744
00:46:27,840 --> 00:46:34,000
complex systems.
There isn't one complete, full

745
00:46:34,360 --> 00:46:36,240
theory of everything
explanation.

746
00:46:36,800 --> 00:46:39,800
It's piecemeal.
And so you're asking different

747
00:46:40,160 --> 00:46:42,400
why questions about a complex
system.

748
00:46:42,720 --> 00:46:45,920
And that's the sort of trick
that scientists use to manage

749
00:46:45,920 --> 00:46:49,920
this complexity.
But part of what that shows are

750
00:46:49,920 --> 00:46:51,760
those two pieces of an
explanation.

751
00:46:51,960 --> 00:46:57,240
Your explanatory why question,
the fancy word here is

752
00:46:57,800 --> 00:46:59,600
explanandum.
This is what you want to

753
00:46:59,600 --> 00:47:02,080
explain.
And then the explanans is what

754
00:47:02,760 --> 00:47:05,200
answers that question.
What gives you the explanation?

755
00:47:05,200 --> 00:47:08,640
Usually some kind of causal
information; causes explain their

756
00:47:08,640 --> 00:47:10,400
effects.
And so there's a whole challenge

757
00:47:10,400 --> 00:47:13,920
of once you have a well defined
explanatory target going out in

758
00:47:13,920 --> 00:47:19,000
the world and identifying the
main causes that are relevant to

759
00:47:19,000 --> 00:47:23,240
that target.
I think let's try and bridge

760
00:47:23,240 --> 00:47:25,440
these two together.
So Megan, taking all of that

761
00:47:25,440 --> 00:47:28,000
into account, these levels of
explanation, your work in

762
00:47:28,000 --> 00:47:31,840
metacognition or your research
in neuroimaging modelling,

763
00:47:31,840 --> 00:47:35,160
etcetera, how would you then
address what Lauren's talking

764
00:47:35,160 --> 00:47:38,440
about, using your work as a
guide for us?

765
00:47:40,160 --> 00:47:41,960
Yeah, great.
Great question.

766
00:47:42,120 --> 00:47:43,760
That's kind of the whole
enterprise, right?

767
00:47:44,760 --> 00:47:48,760
I think there's a couple things
that Lauren said that really

768
00:47:50,000 --> 00:47:53,200
resonate with me.
And this is the nature of being

769
00:47:53,200 --> 00:47:56,000
very clear about the
questions that you're asking.

770
00:47:56,000 --> 00:48:01,160
So actually, this is
what we try to instill in our

771
00:48:01,160 --> 00:48:04,480
students at Neuromatch: that
asking and answering the right

772
00:48:04,480 --> 00:48:07,360
kind of question is the
primary thing that you should be

773
00:48:07,360 --> 00:48:09,320
looking at.
The technique can come

774
00:48:09,320 --> 00:48:10,920
afterwards.
You have to pick the technique

775
00:48:10,920 --> 00:48:14,480
later in order to answer the
question, but you got to get the

776
00:48:14,480 --> 00:48:18,360
question right first.
And I just had a piece come

777
00:48:18,360 --> 00:48:21,640
out recently about how to come
up with good scientific

778
00:48:21,640 --> 00:48:23,520
questions and what that really
looks like.

779
00:48:24,080 --> 00:48:28,680
And there's been a lot of work
in computational neuroscience

780
00:48:28,680 --> 00:48:31,760
and cognitive neuroscience in
how to think about the

781
00:48:31,760 --> 00:48:34,520
interaction between the
questions that you're asking and

782
00:48:34,520 --> 00:48:37,440
the goals that you have as a
modeler or as a scientist in

783
00:48:37,440 --> 00:48:41,640
general.
And the plurality that Lauren

784
00:48:41,640 --> 00:48:46,440
noted is absolutely right that
there's, you know, depending on

785
00:48:46,440 --> 00:48:49,200
who you ask, there's what, how
and why questions.

786
00:48:49,200 --> 00:48:51,480
That's classic Dayan and Abbott
2005.

787
00:48:51,480 --> 00:48:54,840
There's, you know, Marr's levels
of analysis, which are

788
00:48:55,440 --> 00:48:58,520
computational and algorithmic
and implementation.

789
00:48:58,520 --> 00:49:02,440
You can ask questions, you know, at

790
00:49:02,480 --> 00:49:06,160
each of those levels of inquiry,
you can have questions that

791
00:49:06,520 --> 00:49:08,680
target different levels of
granularity.

792
00:49:08,680 --> 00:49:12,360
So you have micro versus macro
versus organismal versus like

793
00:49:12,360 --> 00:49:15,720
societal.
And so this plurality of

794
00:49:15,720 --> 00:49:18,960
questions and plurality of
approaches I think is really

795
00:49:18,960 --> 00:49:22,600
critical because as Lauren said,
there is no one question to rule

796
00:49:22,600 --> 00:49:24,520
them all.
There is no one answer or one

797
00:49:24,520 --> 00:49:29,560
explanation to rule them all.
There's no one ring to rule them

798
00:49:29,560 --> 00:49:30,800
all.
It's just not going to happen.
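[For readers who want Marr's levels pinned down, a toy sketch separating the three levels for one simple problem. The example, detecting the sign of a weak signal by evidence accumulation, is an illustrative assumption, not taken from the episode.]

```python
import numpy as np

rng = np.random.default_rng(1)

# Computational level: WHAT is computed and WHY -- report the sign of a
# weak signal hidden in noise, because accuracy matters to the organism.

# Algorithmic level: HOW, in terms of representations and operations --
# accumulate samples and answer once the running total crosses a bound.
def decide(samples, bound=5.0):
    evidence = 0.0
    for s in samples:
        evidence += s                # the implementation level would ask
        if abs(evidence) >= bound:   # how neurons realize this accumulation
            return np.sign(evidence)
    return np.sign(evidence)         # forced guess if the bound is never hit

samples = rng.normal(0.3, 1.0, 100)  # weak positive signal in noise
print(decide(samples))               # usually 1.0
```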

799
00:49:31,160 --> 00:49:36,400
So I think that from our
perspective, this is

800
00:49:36,400 --> 00:49:39,920
actually something that I try to
instill in not just my,

801
00:49:39,960 --> 00:49:42,920
you know, doctoral trainees, but
the undergraduates that I teach

802
00:49:42,920 --> 00:49:45,320
and the folks that we reach out
to at Neuromatch as well.

803
00:49:45,840 --> 00:49:50,760
It's the recognition
that the heterogeneity is a

804
00:49:50,760 --> 00:49:55,640
feature, not a bug, that I think
is really, really critical.

805
00:49:55,840 --> 00:49:58,360
There was something else
that Lauren said earlier

806
00:49:58,360 --> 00:50:05,200
though about this, which is in
coming up with your type of

807
00:50:05,200 --> 00:50:07,760
question, you have to have a
little bit of an understanding

808
00:50:07,760 --> 00:50:11,280
of the way in which you
might go about building that

809
00:50:11,280 --> 00:50:15,600
explanans, that explanation of
the target and the level of

810
00:50:15,600 --> 00:50:18,920
visibility that you might have
into the system.

811
00:50:18,920 --> 00:50:21,520
The level of access that you
might have into the system.

812
00:50:22,160 --> 00:50:25,920
Because you can come up with
this amazing question that is

813
00:50:26,200 --> 00:50:30,960
actually unanswerable with the
tools that we have available to

814
00:50:30,960 --> 00:50:35,200
us.
And you can also come up with a

815
00:50:35,200 --> 00:50:39,600
question that might be
answerable.

816
00:50:39,800 --> 00:50:42,440
So it's answerable in
principle, but not in practice.

817
00:50:42,440 --> 00:50:44,880
That's one kind.
But then there are others that

818
00:50:44,880 --> 00:50:48,400
might not be answerable in
principle, at least not yet.

819
00:50:49,320 --> 00:50:54,000
Because it's not
that we don't have the right

820
00:50:54,000 --> 00:50:56,800
tool, the right neuroimaging
technique or the right model or

821
00:50:56,800 --> 00:50:58,320
something.
It's that we don't know how to

822
00:50:58,560 --> 00:51:01,080
ask that question the right
way yet.

823
00:51:01,720 --> 00:51:05,000
And you said something,
Lauren, that really struck

824
00:51:05,000 --> 00:51:08,440
me, this kind of limited
visibility into the world idea

825
00:51:08,440 --> 00:51:15,160
that we always have these
barriers that shape the types of

826
00:51:15,160 --> 00:51:17,800
explanations we seek, the types
of questions that we can shape,

827
00:51:18,080 --> 00:51:22,120
types of answers that we can go
out and look for and the kinds

828
00:51:22,120 --> 00:51:27,080
of data that we can acquire.
But I do think that there are

829
00:51:27,160 --> 00:51:29,440
other kinds of
limitations that are not these

830
00:51:29,440 --> 00:51:32,520
kind of practical like, you
know, the parts of the world are

831
00:51:32,520 --> 00:51:35,160
unobservable.
I think that there are other

832
00:51:35,160 --> 00:51:40,360
limitations that we should
acknowledge in building these

833
00:51:40,360 --> 00:51:44,280
questions as well.
So, you know, imagine a case

834
00:51:44,280 --> 00:51:48,080
where I've built some
sort of magical machine in the

835
00:51:48,080 --> 00:51:51,160
future, some magical brain
imaging device that has perfect

836
00:51:51,160 --> 00:51:54,320
visibility into everything that
every neuron is doing at

837
00:51:54,320 --> 00:51:57,360
every synapse.
I have the morphology, the shape

838
00:51:57,360 --> 00:51:59,360
of every neuron.
I have the structure of the

839
00:51:59,360 --> 00:52:01,040
dendritic tree.
I have all the chemical

840
00:52:01,040 --> 00:52:03,240
interactions.
I have literally everything

841
00:52:03,240 --> 00:52:07,640
about the brain.
I still could shape all sorts of

842
00:52:07,640 --> 00:52:11,160
different kinds of questions.
I can't just take that model of

843
00:52:11,160 --> 00:52:13,600
the brain and shove it into some
artificial intelligence and be

844
00:52:13,600 --> 00:52:15,400
like, poof, great.
I understand.

845
00:52:15,720 --> 00:52:19,120
I have an explanation.
It's still like even if we had

846
00:52:19,120 --> 00:52:22,280
perfect visibility, the
questions are still going to be

847
00:52:22,680 --> 00:52:26,320
the primary driver and the lack
of visibility into certain kinds

848
00:52:26,320 --> 00:52:29,440
of systems is still going to be
the limitation.

849
00:52:30,040 --> 00:52:34,040
And that lack of visibility is
now not coming from the tools

850
00:52:34,040 --> 00:52:36,840
that we have available.
It's like the lack of conceptual

851
00:52:36,840 --> 00:52:41,320
clarity, the lack of being
very precise about the target of

852
00:52:41,320 --> 00:52:45,680
explanation.
So yeah, I think it's, it's all

853
00:52:45,680 --> 00:52:48,680
got to come down to the
questions that you ask,

854
00:52:48,680 --> 00:52:51,640
the shape of those and how those
questions interact with the

855
00:52:51,640 --> 00:52:54,440
goals that you have as a
scientist.

856
00:52:55,560 --> 00:52:59,520
So do you want to build an
explanation that has clinical

857
00:52:59,520 --> 00:53:00,920
impact?
Do you want to build an

858
00:53:00,920 --> 00:53:06,040
explanation that is beautiful
and intuitive and simple and

859
00:53:06,040 --> 00:53:09,400
like easy to explain to others?
You know, so what is

860
00:53:09,400 --> 00:53:12,200
the kind of explanation
that you want to build, too, not

861
00:53:12,200 --> 00:53:13,800
just the kind of question that
you want to ask?

862
00:53:15,200 --> 00:53:16,480
Lauren, anything you want to add
to that?

863
00:53:17,920 --> 00:53:22,200
Absolutely.
We sometimes discuss this in

864
00:53:22,200 --> 00:53:26,720
philosophy in terms of having a
God's eye view of the world or

865
00:53:26,720 --> 00:53:32,120
the Laplacian demon sort of
knowledge about all of the stuff

866
00:53:32,320 --> 00:53:35,960
that's out there.
And it can be very tempting as a

867
00:53:35,960 --> 00:53:40,200
philosopher, sometimes a
scientist too, to think there's

868
00:53:40,200 --> 00:53:44,040
all of this stuff out there.
If I just knew more about all of

869
00:53:44,040 --> 00:53:49,560
the stuff, I would get the
perfect complete explanation.

870
00:53:50,000 --> 00:53:58,480
And the challenge for that
picture is we currently don't

871
00:53:58,480 --> 00:54:03,120
have that information, yet
we're successful at navigating

872
00:54:03,120 --> 00:54:05,960
the world.
So part of what we're looking at

873
00:54:05,960 --> 00:54:09,920
here as philosophers is how
scientists reason and how

874
00:54:09,920 --> 00:54:12,600
they're successful.
But also in everyday life, we

875
00:54:12,600 --> 00:54:16,880
give explanations, we engage in
causal reasoning, and we do that

876
00:54:16,880 --> 00:54:18,520
pretty well.
Are we perfect?

877
00:54:18,520 --> 00:54:24,520
No, but we do it pretty well.
And we just don't have that kind

878
00:54:24,520 --> 00:54:27,080
of full, complete information
about the world.

879
00:54:27,080 --> 00:54:30,080
So the question is, how do we do
that?

880
00:54:30,080 --> 00:54:34,800
It looks like we don't need that
sort of information.

881
00:54:34,800 --> 00:54:38,560
And if you want to provide an
account of how a human or a

882
00:54:38,560 --> 00:54:43,240
scientist ever studies the
world, you can never include

883
00:54:43,600 --> 00:54:49,840
that kind of picture because
that's just a

884
00:54:49,840 --> 00:54:54,600
fantasy story, right?
Whereas all scientists are humans

885
00:54:54,600 --> 00:54:56,160
and they're engaged with the
world.

886
00:54:56,680 --> 00:55:01,120
And if you want to talk about
having all those details, you're

887
00:55:01,120 --> 00:55:03,400
talking about a future science
that doesn't exist.

888
00:55:03,400 --> 00:55:06,840
And I'm not sure any future
science is going to match up to it.

889
00:55:06,880 --> 00:55:12,440
So what we want to talk about is
current science and past, and

890
00:55:12,440 --> 00:55:17,280
what has worked.
And so one of the fun parts of

891
00:55:17,440 --> 00:55:21,560
doing this kind of work for me,
I think Megan has this too, is

892
00:55:21,560 --> 00:55:25,080
you're looking at what has
worked in these different

893
00:55:25,080 --> 00:55:30,600
scientific contexts, and you
have a sort of domain-general

894
00:55:30,600 --> 00:55:35,320
view of real scientific practice
and how scientists manage those

895
00:55:35,320 --> 00:55:39,880
limitations to get information
about the world.

896
00:55:39,920 --> 00:55:41,760
And so, yeah, it can be very
tempting.

897
00:55:42,000 --> 00:55:45,360
There's interesting temptations
and interesting pictures we have

898
00:55:46,440 --> 00:55:50,880
in everyday life, philosophers
and scientists; getting

899
00:55:51,040 --> 00:55:53,840
full detail is very attractive
to us.

900
00:55:53,840 --> 00:55:55,920
Also reduction, which I think
will come up.

901
00:55:56,440 --> 00:55:59,640
If we just could get more
information about stuff at lower

902
00:55:59,640 --> 00:56:04,200
levels, we could get better
explanations or the view that

903
00:56:04,960 --> 00:56:09,440
that's where we should look to
get the right kind of

904
00:56:09,920 --> 00:56:18,120
explanatory account.
So yes, very much, very much

905
00:56:18,120 --> 00:56:22,920
compatible with this kind of
realistic picture of scientific

906
00:56:22,920 --> 00:56:31,120
practice and scientific work, as
opposed to this idealized view

907
00:56:31,120 --> 00:56:33,920
where we'd ever have access to all
of the details.

908
00:56:35,240 --> 00:56:37,800
There's a, let me just follow up
on that for two seconds.

909
00:56:37,800 --> 00:56:42,040
There's a favorite paper that
I like to send students in my

910
00:56:42,040 --> 00:56:45,040
neuroanalytics class, to kind
of highlight this.

911
00:56:45,520 --> 00:56:48,320
If only we had perfect access to
everything, then we would

912
00:56:48,320 --> 00:56:53,040
definitely understand.
And it's this paper that Konrad

913
00:56:53,040 --> 00:56:55,320
Kording and some
colleagues wrote, I don't know,

914
00:56:55,320 --> 00:56:58,480
10 or so years ago.
It's called Could a

915
00:56:58,480 --> 00:57:01,080
Neuroscientist Understand a

916
00:57:01,080 --> 00:57:05,800
Microprocessor?
And they have this toy example

917
00:57:05,800 --> 00:57:08,400
where they say, OK, I've got
this microprocessor and it runs

918
00:57:08,400 --> 00:57:12,440
like Donkey Kong and Sonic the
Hedgehog and Mario or something

919
00:57:12,440 --> 00:57:15,480
like that.
And they go about dissecting

920
00:57:15,480 --> 00:57:20,120
this microprocessor using all of
the fancy available tools, all

921
00:57:20,120 --> 00:57:22,240
of the models that we would use
in neuroscience.

922
00:57:22,240 --> 00:57:24,360
So they measure all the
transistors and they measure all

923
00:57:24,360 --> 00:57:28,080
the synapses between all the,
you know, nodes in the

924
00:57:28,080 --> 00:57:30,440
microprocessor.
And it's a simulated

925
00:57:30,440 --> 00:57:32,800
microprocessor.
So literally they have

926
00:57:32,800 --> 00:57:35,480
perfect access, right?
There's like no noise in the

927
00:57:35,480 --> 00:57:39,640
system.
And they do like inactivation

928
00:57:39,640 --> 00:57:42,080
experiments and they measure
like the network connectivity

929
00:57:42,080 --> 00:57:44,080
and like the state
transitions, and they do all the

930
00:57:44,080 --> 00:57:47,720
tricks.
And they still don't end up with

931
00:57:47,720 --> 00:57:51,000
an explanation for why poking
the thing in this way makes it

932
00:57:51,000 --> 00:57:55,920
unable to run Mario or why
poking the thing in this way

933
00:57:55,920 --> 00:57:59,480
versus that way has no effect on
whether Mario can hop over the

934
00:57:59,480 --> 00:58:03,520
thing or not.
So it's really like a kind of

935
00:58:03,520 --> 00:58:07,960
cheeky demonstration that it
really matters what you think

936
00:58:07,960 --> 00:58:10,080
you're measuring.
Like do you have perfect access

937
00:58:10,080 --> 00:58:11,880
to the system?
Do you have perfect access to

938
00:58:11,880 --> 00:58:16,160
all of the things that are
actually the parts of the

939
00:58:16,160 --> 00:58:17,960
system that you need to have
access to?

940
00:58:17,960 --> 00:58:20,880
And there they have access to
all the physical system, but

941
00:58:20,880 --> 00:58:23,240
they're not like reading the
software.

942
00:58:23,880 --> 00:58:27,720
And they'd have to come up with,
you know, the software in order

943
00:58:27,720 --> 00:58:31,560
to kind of build more of an
explanation for how the software

944
00:58:31,560 --> 00:58:33,960
and the hardware interact.
So I don't know, it's a

945
00:58:33,960 --> 00:58:36,000
fun one.
If you haven't seen that paper,

946
00:58:36,520 --> 00:58:39,600
those of you who are out there
listening, I suggest you go

947
00:58:39,600 --> 00:58:42,840
and have a look.
It's fun and it's cheeky and

948
00:58:42,840 --> 00:58:46,360
it's also quite profound.
I love that concept, Megan, if

949
00:58:46,360 --> 00:58:48,160
you've got the link, please
share it with me so I can put it

950
00:58:48,160 --> 00:58:49,120
in the.
Yeah, I will.
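[For a flavor of what the paper does, a much-simplified sketch of a lesion experiment on a toy network: perfect access to every element, and still only a map of which lesions matter, not an explanation of the computation. The network, weights, and numbers are invented for illustration; the real paper works on a simulated classic microprocessor.]

```python
import numpy as np

rng = np.random.default_rng(2)
W1, W2 = rng.normal(size=(4, 8)), rng.normal(size=(8, 1))
x = rng.normal(size=(1, 4))          # one fixed "stimulus"

def behavior(mask):
    hidden = np.maximum(0.0, x @ W1) * mask   # mask silences "lesioned" units
    return (hidden @ W2).item()

baseline = behavior(np.ones(8))
for unit in range(8):
    mask = np.ones(8)
    mask[unit] = 0.0                          # single-unit inactivation
    print(unit, abs(behavior(mask) - baseline))
# Large deltas look "necessary for the behavior", yet the map of deltas
# is a description of the system, not an explanation of its algorithm.
```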

951
00:58:50,280 --> 00:58:53,240
In your work, from all the work
you've done, what are some of

952
00:58:53,240 --> 00:58:57,440
the groundbreaking things you
guys have figured out during

953
00:58:57,440 --> 00:59:00,920
this time that allow us to ask
deeper philosophical questions?

954
00:59:01,360 --> 00:59:04,600
I think that some of
the things that I am

955
00:59:04,600 --> 00:59:08,160
interested in here is as we
said, the nature of

956
00:59:08,880 --> 00:59:12,320
metacognition and subjective
experience and how those two

957
00:59:12,320 --> 00:59:15,400
interact, with metacognition
being the process and subjective

958
00:59:15,400 --> 00:59:18,720
experience potentially being
like the output of that process.

959
00:59:20,160 --> 00:59:24,880
I like this approach because it
combines

960
00:59:24,880 --> 00:59:31,000
neuroscience and behavior and
psychophysics and psychometrics

961
00:59:31,000 --> 00:59:37,000
and also computational models in
a way that tries to build like

962
00:59:37,560 --> 00:59:41,680
this piecemeal,
small, tiny explanation of why

963
00:59:41,680 --> 00:59:46,000
it is that if I change this
particular aspect of the world,

964
00:59:46,000 --> 00:59:48,360
it changes your subjective
experience in this particular

965
00:59:48,360 --> 00:59:51,200
way, and it changes your
subjective experience in a way

966
00:59:51,200 --> 00:59:55,400
that's different from changing
your ability to just interact

967
00:59:55,400 --> 01:00:01,480
with the world in a meaningful,
goal directed, kind of

968
01:00:01,480 --> 01:00:04,640
evolutionarily optimized way.
And So what I mean by that is

969
01:00:04,640 --> 01:00:12,040
that for us, when we process the
world, you can think of a lot of

970
01:00:12,040 --> 01:00:15,200
what's happening in that
processing is going on under the

971
01:00:15,200 --> 01:00:19,360
hood, so to speak.
There's a lot of heavy lifting

972
01:00:19,400 --> 01:00:22,240
that the brain does that is not
available to us.

973
01:00:22,960 --> 01:00:25,960
Consciously, subjectively,
anything like that.

974
01:00:26,640 --> 01:00:29,360
And I'm not even just talking
reflexes, I'm talking all the

975
01:00:29,360 --> 01:00:32,360
processing that gives rise to
the fact that you see the world

976
01:00:32,360 --> 01:00:35,680
in 3D.
Can you like kind of consciously

977
01:00:35,680 --> 01:00:38,320
intervene on that and say like,
no, I know that it's

978
01:00:38,320 --> 01:00:40,640
actually a 2D image on my
retina.

979
01:00:41,320 --> 01:00:43,600
No, like you just see the world
in 3D.

980
01:00:44,240 --> 01:00:47,040
It just happens magically
somehow.

981
01:00:47,400 --> 01:00:51,160
And so there's a lot of this
complex processing that goes on

982
01:00:51,160 --> 01:00:53,360
under the hood.
I'm a vision scientist.

983
01:00:53,360 --> 01:00:56,920
So vision science is my
typical workhorse here.

984
01:00:57,520 --> 01:01:00,080
But you can play this game for a
lot of other things too.

985
01:01:00,080 --> 01:01:03,600
For any way that you interact
with the world, you see a

986
01:01:03,760 --> 01:01:10,480
complex, noisy, stochastic
dynamic environment and you are

987
01:01:10,560 --> 01:01:13,280
standing on
a sidewalk and you're deciding

988
01:01:13,280 --> 01:01:14,840
whether to cross the road or
not.

989
01:01:15,200 --> 01:01:18,240
And you hear things and you see
things and you have to decide is

990
01:01:18,240 --> 01:01:20,360
it safe?
And, and that decision is going

991
01:01:20,360 --> 01:01:23,240
to impact your ability to
survive, right.

992
01:01:23,240 --> 01:01:25,400
If you get it wrong, you
get hit by a car.

993
01:01:27,960 --> 01:01:32,440
But so much of that could be
said to be done potentially

994
01:01:32,920 --> 01:01:36,600
without conscious awareness.
And so a lot of the work that

995
01:01:36,600 --> 01:01:39,760
we're doing is taking all of
these tools in our tool kit and

996
01:01:39,760 --> 01:01:44,960
pointing them at trying to
dissociate the conscious

997
01:01:44,960 --> 01:01:47,760
experience or subjective
experience part from all the

998
01:01:47,760 --> 01:01:51,880
other stuff that like a Tesla
could do or a Roomba could maybe

999
01:01:51,880 --> 01:01:55,120
do, a very smart Roomba.
And there's nothing that it's

1000
01:01:55,120 --> 01:01:59,640
like to be a Roomba, presumably.
So I think the favorite thing

1001
01:01:59,640 --> 01:02:02,960
that I'm doing right now that
might hopefully have some impact

1002
01:02:03,440 --> 01:02:07,000
is the conceptual and
methodological distinction

1003
01:02:07,080 --> 01:02:11,960
between understanding the
behaviors, computations and

1004
01:02:11,960 --> 01:02:16,240
neural correlates that give rise
to adaptive goal directed

1005
01:02:16,240 --> 01:02:18,240
interaction with the
environment.

1006
01:02:18,480 --> 01:02:22,960
Not dying, not stepping in front
of the car and separating that

1007
01:02:22,960 --> 01:02:25,880
from the computations and neural
circuits and neural

1008
01:02:25,880 --> 01:02:30,560
representations that are
uniquely driving or uniquely

1009
01:02:30,560 --> 01:02:32,920
correlated with the subjective
experience part.
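[To make that dissociation concrete, a minimal sketch: two simulated observers with identical first-order accuracy whose confidence tracks their correctness to different degrees. The extra "metacognitive noise" parameter is an illustrative assumption, loosely in the spirit of signal-detection models of confidence, not Megan's actual model.]

```python
import numpy as np

rng = np.random.default_rng(3)

def observer(meta_noise, n=20_000, signal=1.0, sd=1.0):
    stim = rng.choice([-1.0, 1.0], n) * signal
    sample = stim + rng.normal(0.0, sd, n)       # first-order evidence
    correct = np.sign(sample) == np.sign(stim)   # task performance
    # Confidence comes from a second, noisier read of the same evidence.
    conf = np.abs(sample + rng.normal(0.0, meta_noise, n))
    return correct.mean(), np.corrcoef(conf, correct)[0, 1]

for meta_noise in (0.0, 2.0):
    acc, coupling = observer(meta_noise)
    print(f"meta_noise={meta_noise}: accuracy={acc:.2f}, "
          f"confidence-accuracy coupling={coupling:.2f}")
# Same first-order accuracy in both cases, but the noisier confidence
# read couples less well with being correct: performance and subjective
# report can come apart.
```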

1010
01:02:34,280 --> 01:02:38,640
And I think the reason, so I'm
not the only one who's working

1011
01:02:38,640 --> 01:02:40,760
on this, by the way, there's
like quite a lot of us who see

1012
01:02:40,760 --> 01:02:44,240
that distinction as being really
important, but there's also

1013
01:02:44,240 --> 01:02:47,760
quite a lot of people who don't.
And I would say that there are

1014
01:02:47,760 --> 01:02:50,240
some folks who are in the
artificial intelligence space,

1015
01:02:50,240 --> 01:02:54,720
for example, who conflate
intelligent behavior with

1016
01:02:55,040 --> 01:02:58,000
the probability of someone
being in there, of being

1017
01:02:58,000 --> 01:03:02,960
conscious, or even worse,
conflate intelligent-looking

1018
01:03:02,960 --> 01:03:05,840
behavior with, you know,
subjective experience or

1019
01:03:05,840 --> 01:03:08,440
consciousness, with threat.
You know, we say, oh, no,

1020
01:03:08,440 --> 01:03:10,560
Skynet's going to wake up and
it's going to kill all of us.

1021
01:03:11,440 --> 01:03:15,600
And I think that some of the
work that we're doing,

1022
01:03:15,920 --> 01:03:19,440
admittedly with very simple
stimuli and very simple models

1023
01:03:19,760 --> 01:03:23,040
will help drive at that really
important distinction that just

1024
01:03:23,040 --> 01:03:26,480
because you've got a system
that's seeming intelligent, it's

1025
01:03:26,480 --> 01:03:29,680
seeming like it can solve
problems, that doesn't mean that

1026
01:03:29,680 --> 01:03:32,000
anyone's in there, or
that there's anything that it's

1027
01:03:32,000 --> 01:03:34,600
like to be that system.
It also doesn't mean that it's a

1028
01:03:34,600 --> 01:03:36,560
threat.
Things can be threatening

1029
01:03:36,840 --> 01:03:39,160
without having subjective
experience and without being

1030
01:03:39,160 --> 01:03:41,880
intelligent.
And so all of those things are

1031
01:03:41,880 --> 01:03:44,720
independent.
So I think that was maybe the

1032
01:03:44,720 --> 01:03:47,760
thing that I would say is
potentially the impact of the

1033
01:03:47,760 --> 01:03:52,160
work that we're doing.
Lauren, if you had to envision a

1034
01:03:52,160 --> 01:03:55,120
philosophically informed
neuroscience infused with your

1035
01:03:55,120 --> 01:03:58,880
work regarding mechanism,
explanation, etcetera, what

1036
01:03:58,880 --> 01:04:00,280
would that look like in
practice?

1037
01:04:00,280 --> 01:04:03,280
In experiment design, theory
formation, or even peer review?

1038
01:04:05,840 --> 01:04:13,560
Part of what it would look like
is a kind of neuroscience that

1039
01:04:13,560 --> 01:04:17,040
we partly already see.
But part of what it would

1040
01:04:17,040 --> 01:04:21,760
highlight is clarity about the
types of causes and causal

1041
01:04:21,760 --> 01:04:26,440
systems that neuroscientists
study and that that researchers

1042
01:04:26,440 --> 01:04:32,240
are studying in this space.
One way to see this is we often

1043
01:04:32,240 --> 01:04:37,240
find the term mechanism being
used to refer loosely to any

1044
01:04:37,240 --> 01:04:42,960
kind of causal system.
Part of what my work has done is

1045
01:04:43,040 --> 01:04:46,520
specified that there are
different types of causes out in

1046
01:04:46,520 --> 01:04:50,280
the world that scientists study.
They have very different types

1047
01:04:50,320 --> 01:04:56,440
of features, and those matter
for how we study the systems.

1048
01:04:56,440 --> 01:04:58,600
They matter for the behaviors
they produce.

1049
01:04:59,120 --> 01:05:01,760
And you start to see some of
these distinctions show up

1050
01:05:01,760 --> 01:05:05,080
already when scientists talk
about causes that are

1051
01:05:05,440 --> 01:05:09,360
probabilistic versus
deterministic, causes that are

1052
01:05:09,360 --> 01:05:13,280
more or less strong, more or
less stable, or when scientists

1053
01:05:13,280 --> 01:05:17,440
refer to a
causal system as a pathway or a

1054
01:05:17,440 --> 01:05:20,880
circuit or a cascade.
There's a reference here and

1055
01:05:20,880 --> 01:05:24,440
then analogy to different types
of causal systems.

1056
01:05:24,440 --> 01:05:28,840
So part of what my work is
compatible with and can

1057
01:05:28,840 --> 01:05:35,200
encourage is far more clarity
about the types of causes that

1058
01:05:35,200 --> 01:05:40,200
are out there that we study.
We partly need words to refer to

1059
01:05:40,200 --> 01:05:44,160
the different types we have,
different features that they

1060
01:05:44,760 --> 01:05:52,240
have, and this is partly going
to inform the standards that we

1061
01:05:52,240 --> 01:05:57,480
have, basically being clear
about the standards that we have

1062
01:05:57,480 --> 01:06:00,400
for the kind of causal
information that neuroscientists

1063
01:06:00,400 --> 01:06:03,840
need to provide. Right now,
the standard is usually phrased

1064
01:06:03,840 --> 01:06:06,560
as a mechanism.
Scientists need to provide

1065
01:06:06,920 --> 01:06:09,800
mechanistic information about a
system.

1066
01:06:10,280 --> 01:06:13,640
You see this in grant calls, you
see it in journal publication

1067
01:06:13,640 --> 01:06:16,680
guidelines.
In order to get funded, in order

1068
01:06:16,680 --> 01:06:20,520
to get published, a researcher
needs to identify a mechanism or

1069
01:06:20,520 --> 01:06:23,840
provide mechanistic insights.
But then the editors very

1070
01:06:23,840 --> 01:06:26,800
quickly follow up by saying that
they can't tell you what a

1071
01:06:26,800 --> 01:06:30,800
mechanism is.
And then it's often the case

1072
01:06:30,800 --> 01:06:33,280
that two or more researchers
reviewing the same paper

1073
01:06:33,280 --> 01:06:36,760
completely disagree about
whether the same paper provides

1074
01:06:36,760 --> 01:06:40,160
mechanistic insights or not.
So right now you have a causal

1075
01:06:40,160 --> 01:06:46,440
standard for the field that is
this word mechanism, and we have

1076
01:06:46,440 --> 01:06:50,400
different people defining that
term in different ways, and

1077
01:06:50,400 --> 01:06:53,120
there's no consensus on what
exactly it refers to.

1078
01:06:53,120 --> 01:06:56,680
Is it lower level cellular
details?

1079
01:06:56,680 --> 01:07:00,120
Is it higher level network
information?

1080
01:07:00,160 --> 01:07:03,480
You have researchers pointing to
both as real mechanisms.

1081
01:07:04,160 --> 01:07:07,440
Partly we need to put mechanism
aside, and when we're interested

1082
01:07:07,440 --> 01:07:09,960
in causal explanation, we just
need to talk about these as

1083
01:07:09,960 --> 01:07:13,840
causal systems, causal
relationships, the way that

1084
01:07:13,840 --> 01:07:15,960
causes are organized and
arranged.

1085
01:07:16,600 --> 01:07:20,280
How do you know you have the
right kind of causal information

1086
01:07:20,280 --> 01:07:22,440
that's explanatory, relevant to
your target?

1087
01:07:22,440 --> 01:07:29,440
So part of it is clarity on the
standards for the field and

1088
01:07:29,440 --> 01:07:34,080
getting more clarity on what we
mean by mechanism if that's the

1089
01:07:34,680 --> 01:07:37,400
current specification of the
field standard.

1090
01:07:38,440 --> 01:07:40,600
Yeah, I'm looking forward
to having both of you separately

1091
01:07:40,600 --> 01:07:43,200
on the channel so we can explore
specific aspects of both your

1092
01:07:43,200 --> 01:07:45,200
work.
But I think at this point to get

1093
01:07:45,200 --> 01:07:49,280
back to the main topic here, if
you both had to look at science

1094
01:07:49,280 --> 01:07:51,960
and philosophy, moving into
the future, bringing them both

1095
01:07:51,960 --> 01:07:55,120
together, what new picture of mind

1096
01:07:55,120 --> 01:07:58,480
would emerge from this?
Would it be something different,

1097
01:07:58,480 --> 01:08:00,760
do you think? Do you think it
would change anything?

1098
01:08:01,160 --> 01:08:04,400
And what advantage does this
have for new fields specifically

1099
01:08:04,560 --> 01:08:07,200
that will arise?
I'll take that.

1100
01:08:07,200 --> 01:08:09,800
I love that you said
advantages for new fields

1101
01:08:09,800 --> 01:08:13,920
because I think that one of the
challenges that we have in

1102
01:08:13,920 --> 01:08:16,120
neuroscience, again, I'm a
neuroscientist, so that's where

1103
01:08:16,120 --> 01:08:19,680
I'm coming from, is that this is
still a new field.

1104
01:08:20,080 --> 01:08:23,200
It's really young, especially
like the neuroscience of, you

1105
01:08:23,200 --> 01:08:24,800
know, consciousness or something
like that.

1106
01:08:24,800 --> 01:08:27,399
Like psychology, yeah,
it's been around

1107
01:08:27,399 --> 01:08:31,560
for, you know, 150 years in its
present state and, you

1108
01:08:31,560 --> 01:08:33,960
know, quantitative computational
psychology.

1109
01:08:33,960 --> 01:08:36,479
Yeah.
But like, that's not very long.

1110
01:08:37,319 --> 01:08:41,319
That's really not very long.
Modern science is not very old

1111
01:08:41,319 --> 01:08:47,200
in general, but when it comes to
modern science, philosophy or

1112
01:08:47,479 --> 01:08:52,680
psychology and neuroscience
specifically, like our first

1113
01:08:52,680 --> 01:08:56,800
neural signatures are only about
100-some-odd years old, you

1114
01:08:56,800 --> 01:08:58,479
know, when EEG was first
invented.

1115
01:08:59,040 --> 01:09:01,200
And so this is a really young
field.

1116
01:09:01,720 --> 01:09:05,640
And so I think that new fields
and emerging fields like this is

1117
01:09:05,640 --> 01:09:10,479
where the value is.
This is where we need help

1118
01:09:10,479 --> 01:09:15,840
getting conceptual clarity
because in a lot of cases

1119
01:09:15,840 --> 01:09:21,720
for new and emerging fields,
the major tool that we have

1120
01:09:21,720 --> 01:09:25,800
to say, well, where do we even
begin is something like

1121
01:09:26,560 --> 01:09:29,399
intuition.
I came up with an idea and like,

1122
01:09:29,399 --> 01:09:31,479
let's just run with it and see
what happens.

1123
01:09:32,240 --> 01:09:37,359
And as we, I think, all have
probably discovered at one

1124
01:09:37,359 --> 01:09:40,439
point or another in our lives,
what you think is happening and

1125
01:09:40,439 --> 01:09:43,080
what is actually happening never
match; your first guess is

1126
01:09:43,080 --> 01:09:47,840
never the right one.
And so recognizing the value of

1127
01:09:47,840 --> 01:09:51,120
philosophy of science in young
and emerging fields and fields

1128
01:09:51,120 --> 01:09:54,520
that have yet to emerge, I think
is really powerful.

1129
01:09:54,600 --> 01:10:00,200
And as Lauren said, you
know, this idea that especially

1130
01:10:00,200 --> 01:10:04,400
at the beginning in a young
science, seeing the

1131
01:10:04,560 --> 01:10:09,360
commonalities in the structure
across this new emerging field

1132
01:10:09,400 --> 01:10:12,800
and maybe a more established
discipline that has kind of

1133
01:10:12,800 --> 01:10:16,280
already figured out some stuff.
So we've got a lot of really

1134
01:10:16,280 --> 01:10:20,440
precise terminology in how we
understand mechanism, whatever

1135
01:10:20,440 --> 01:10:23,800
that prestige word is, by the
way, like, yeah, we all

1136
01:10:23,800 --> 01:10:25,640
know that we want to go for a
mechanistic or causal

1137
01:10:25,640 --> 01:10:27,560
explanation, and yet what even is
that?

1138
01:10:28,120 --> 01:10:33,680
But even
among the young modern science

1139
01:10:33,680 --> 01:10:36,800
fields, there are some that are
very, very young, you know,

1140
01:10:36,800 --> 01:10:38,640
they're children.
And then there are some that are

1141
01:10:38,640 --> 01:10:42,520
a little bit more middle-aged.
And so on the surface they're

1142
01:10:42,520 --> 01:10:45,040
all gonna have these extremely
different features, these

1143
01:10:45,040 --> 01:10:47,760
extremely different kind of
surface-level properties or

1144
01:10:47,760 --> 01:10:52,400
observables.
But causal and mechanistic

1145
01:10:52,400 --> 01:10:55,360
explanations are a unifying
principle.

1146
01:10:56,160 --> 01:10:59,280
And so recognizing that the
shape of the problems that we're

1147
01:10:59,280 --> 01:11:03,160
trying to solve might actually
be quite similar in this new and

1148
01:11:03,160 --> 01:11:05,640
emerging field to a more
established field.

1149
01:11:05,840 --> 01:11:08,240
But when you're a scientist and
you are reading the science

1150
01:11:08,240 --> 01:11:12,480
journals and you're kind of like
in your little box,

1151
01:11:12,720 --> 01:11:15,120
you don't have time to pop your
head out and go read some

1152
01:11:15,120 --> 01:11:18,120
astrophysics journal.
It's just not going to happen.

1153
01:11:18,640 --> 01:11:21,040
Or some material Science Journal
or something.

1154
01:11:21,040 --> 01:11:28,600
And so having, though, this target
of building explanatory models,

1155
01:11:29,080 --> 01:11:32,080
of getting conceptual clarity,
of understanding the types of

1156
01:11:32,080 --> 01:11:34,960
causal and mechanistic
explanations that we can go for

1157
01:11:35,440 --> 01:11:39,760
that can provide a bridge.
And you say, OK, well, we're

1158
01:11:39,760 --> 01:11:42,920
talking about completely
different systems, completely

1159
01:11:42,920 --> 01:11:44,800
different targets of
explanation, but the kinds of

1160
01:11:44,800 --> 01:11:47,600
explanations that we're trying
to build might actually be quite

1161
01:11:47,600 --> 01:11:51,240
similar.
And I have experiences with this

1162
01:11:51,480 --> 01:11:56,240
where I wrote this
paper with one of my graduate

1163
01:11:56,240 --> 01:11:59,000
students and another professor
and his graduate student.

1164
01:11:59,240 --> 01:12:05,440
And he's a microbiologist.
He studies the microbiome of

1165
01:12:05,480 --> 01:12:08,960
pregnant women and how the
microbiome of pregnant women

1166
01:12:08,960 --> 01:12:11,880
impacts birth outcomes and
maternal outcomes.

1167
01:12:12,880 --> 01:12:15,920
I don't do that at all.
I have no idea even what half of

1168
01:12:15,920 --> 01:12:19,920
the vocabulary is that he says.
And yet through talking with

1169
01:12:19,920 --> 01:12:22,560
him, we discovered that the
shape of the problem that we are

1170
01:12:22,560 --> 01:12:25,480
trying to solve was actually
very similar.

1171
01:12:25,480 --> 01:12:27,800
And so we wrote a paper about
that and how these kinds of

1172
01:12:27,800 --> 01:12:30,120
modern, you know, machine
learning tools might be able to

1173
01:12:30,120 --> 01:12:33,160
help us with that.
And that's what we need in these

1174
01:12:33,160 --> 01:12:37,120
young emerging fields is to see,
well, someone else solved a

1175
01:12:37,120 --> 01:12:38,760
problem that had a similar
shape.

1176
01:12:39,680 --> 01:12:44,840
And, and if we can get that
right, it will propel the new

1177
01:12:44,840 --> 01:12:47,720
fields that we have right now
forward and emerging fields that

1178
01:12:47,720 --> 01:12:51,320
come in the future.
I think that that will be a

1179
01:12:51,880 --> 01:12:57,080
major step forward in building
better science, building more

1180
01:12:57,080 --> 01:13:08,120
coherent science that is
self-perpetuating. 100% agree with

1181
01:13:08,480 --> 01:13:19,200
Megan here.
We partly already see nice

1182
01:13:19,480 --> 01:13:24,400
features and aspects of current
work in neuroscience that show

1183
01:13:25,120 --> 01:13:29,960
this interdisciplinary aspect.
We've got lots of neuroscientist-

1184
01:13:29,960 --> 01:13:33,960
philosophers who are aware of
both fields.

1185
01:13:34,160 --> 01:13:37,880
You see this in Megan's work.
You see it in the work of Anil

1186
01:13:37,880 --> 01:13:43,840
Seth, Dani Bassett, other
cognitive scientists like Nadia

1187
01:13:43,840 --> 01:13:48,040
Chernyak, Caren Walker.
There really are lots of

1188
01:13:48,840 --> 01:13:52,080
scientists and academics who are
engaged in this

1189
01:13:52,080 --> 01:13:57,200
interdisciplinary approach.
We 100% need it too.

1190
01:13:57,400 --> 01:14:01,200
For many of these challenging
questions, we have an all-hands-

1191
01:14:01,200 --> 01:14:04,040
on-deck type situation.
We

1192
01:14:04,040 --> 01:14:07,760
need many different people from
many different perspectives to

1193
01:14:07,760 --> 01:14:12,920
help out with these questions.
The challenge is, it can be

1194
01:14:12,920 --> 01:14:16,560
pretty uncomfortable to do this
kind of work because you're

1195
01:14:16,560 --> 01:14:19,960
never the main expert.
When I'm talking to scientists,

1196
01:14:20,640 --> 01:14:24,800
I mean, they are always so much
more of a deep expert in their

1197
01:14:24,800 --> 01:14:28,960
area of work than I could
ever be.

1198
01:14:29,240 --> 01:14:31,920
And it's partly the
way it has to be.

1199
01:14:31,920 --> 01:14:33,840
I'm talking to social
scientists, I'm talking to

1200
01:14:33,840 --> 01:14:36,360
cognitive scientists, I'm
talking to neuroscientists.

1201
01:14:37,120 --> 01:14:40,640
It's a bit of a stretch
sometimes, but it's very

1202
01:14:40,640 --> 01:14:45,160
important for me to put myself
in their perspective.

1203
01:14:45,160 --> 01:14:47,440
What are they interested in?
Can I bring philosophy of

1204
01:14:47,440 --> 01:14:49,680
science
that's useful and that's helpful?

1205
01:14:49,960 --> 01:14:54,480
Because to be a
person on the team or a hand on

1206
01:14:54,480 --> 01:14:56,800
your deck,
they do need to be useful and

1207
01:14:56,800 --> 01:14:59,920
they do need to be helpful.
And it's not easy for

1208
01:15:00,680 --> 01:15:03,840
philosophers to fill those shoes
sometimes with respect to

1209
01:15:03,840 --> 01:15:06,520
scientific work because it can
be uncomfortable.

1210
01:15:07,080 --> 01:15:10,440
You have to learn a lot of
science, and you're still never

1211
01:15:10,440 --> 01:15:14,160
going to know. You're never
going to have the same kind of

1212
01:15:14,360 --> 01:15:18,880
picture.
But these discussions can show

1213
01:15:18,880 --> 01:15:21,800
you types of philosophy that
will be really helpful for

1214
01:15:21,800 --> 01:15:23,240
scientists to have.

1215
01:15:23,480 --> 01:15:25,360
We also don't want to reinvent
the wheel.

1216
01:15:25,400 --> 01:15:28,680
And we've seen this in cases
where you have researchers that

1217
01:15:28,680 --> 01:15:30,440
aren't interacting with each
other, right?

1218
01:15:30,440 --> 01:15:33,720
Someone spends a lot of their
career developing an approach

1219
01:15:34,000 --> 01:15:38,200
that someone built basically 3
decades earlier.

1220
01:15:38,200 --> 01:15:39,640
So you don't want to reinvent
the wheel.

1221
01:15:40,920 --> 01:15:44,520
You do want some pushback.
I need it from Megan.

1222
01:15:44,600 --> 01:15:49,360
I try to give it to her too.
I think the standard thing

1223
01:15:49,360 --> 01:15:52,000
you'll probably hear scientists
say about philosophers is

1224
01:15:52,000 --> 01:15:55,000
they're sort of the one asking
that question of, well, what do

1225
01:15:55,000 --> 01:15:59,360
you mean by mechanism?
You know, And then you give an

1226
01:15:59,360 --> 01:16:01,800
answer and then we think of
counter examples and it's like,

1227
01:16:01,800 --> 01:16:04,920
well, if that's what you mean,
then there's a problem that

1228
01:16:04,920 --> 01:16:06,680
shows up.
Or if that's what you mean by

1229
01:16:06,680 --> 01:16:09,600
explanation, you're including
all these cases that you don't

1230
01:16:09,600 --> 01:16:12,960
want to include.
So we are trained to think

1231
01:16:13,640 --> 01:16:18,400
abstractly and we are trained to
kind of want that precision.

1232
01:16:18,400 --> 01:16:21,560
And so that is something that we
can contribute.

1233
01:16:21,720 --> 01:16:28,680
And there are scientists who
lean into this interdisciplinary

1234
01:16:28,680 --> 01:16:30,840
approach by bringing
philosophers on board, and of

1235
01:16:30,840 --> 01:16:33,480
course scientists from all sorts
of other domains.

1236
01:16:34,200 --> 01:16:38,320
There are interesting examples
where philosophy can suggest

1237
01:16:38,320 --> 01:16:41,720
ideas that scientists go on to
study in their empirical work

1238
01:16:42,280 --> 01:16:46,720
that they might not have thought
of originally or as quickly,

1239
01:16:46,720 --> 01:16:49,920
because it's a little easier to
see them in some frameworks.

1240
01:16:49,920 --> 01:16:53,200
We see this in cog sci with
studies of different types of

1241
01:16:53,280 --> 01:16:58,920
causal relationships, things
like stability and strength, for

1242
01:16:58,920 --> 01:17:03,240
example.
But yeah, I think part of what

1243
01:17:03,240 --> 01:17:09,840
I envision for a kind of
future here, and the

1244
01:17:09,840 --> 01:17:14,320
advantages come from this
interdisciplinary work.

1245
01:17:15,760 --> 01:17:19,480
I also think part of what would
be helpful is for scientists to

1246
01:17:19,480 --> 01:17:26,320
have a little bit more time and
space to do theorizing, right?

1247
01:17:26,560 --> 01:17:33,080
So, yeah, you
really start to appreciate the

1248
01:17:33,080 --> 01:17:38,200
challenges of the scientific
work when you look at the sense

1249
01:17:38,200 --> 01:17:41,840
in which they're trying to
tackle new problem spaces.

1250
01:17:43,160 --> 01:17:46,840
You know, often the
funding incentives

1251
01:17:46,840 --> 01:17:49,200
are for this tried and true
method.

1252
01:17:49,200 --> 01:17:51,560
And if you kind of already know
it works, you can do a lot of

1253
01:17:51,560 --> 01:17:53,560
that.
If you're expected to publish a

1254
01:17:53,560 --> 01:17:57,080
lot, that doesn't always
incentivize taking the time to

1255
01:17:57,080 --> 01:18:02,600
think about all these different
routes you could take and being

1256
01:18:02,600 --> 01:18:06,680
able to discuss which
ones you should follow.

1257
01:18:06,680 --> 01:18:10,560
So I think having a little bit
more space for scientists to

1258
01:18:10,560 --> 01:18:14,560
have the time. Like Megan
mentioned, in philosophy we have

1259
01:18:14,680 --> 01:18:17,680
a little less of the pressures
that they have.

1260
01:18:19,720 --> 01:18:26,640
But part of it is in having the
kind of time and incentives

1261
01:18:26,720 --> 01:18:32,000
to take advantage of
interdisciplinary connections in

1262
01:18:32,000 --> 01:18:33,880
their work.
And that's not always easy for

1263
01:18:33,880 --> 01:18:37,160
scientists to do, given the
constraints that they have.

1264
01:18:38,760 --> 01:18:41,480
You know, this question of
time and, you know, the

1265
01:18:41,480 --> 01:18:45,000
publish-or-perish mentality:
there are a lot of people who

1266
01:18:45,000 --> 01:18:46,840
are probably out there listening
right now.

1267
01:18:46,840 --> 01:18:49,120
who say, like, why do we even care
that you're publishing papers?

1268
01:18:49,120 --> 01:18:53,880
Who reads those papers?
And to a certain extent, you're

1269
01:18:53,880 --> 01:18:56,640
absolutely right.
Like, you know, the metric of

1270
01:18:56,640 --> 01:19:00,120
our success and the thing that
allows us as academics to

1271
01:19:00,120 --> 01:19:04,160
proceed through the ranks and
get promoted and, you know, do

1272
01:19:04,160 --> 01:19:08,240
our jobs well and so on is to
get grants and to publish

1273
01:19:08,240 --> 01:19:11,120
papers.
And it feels very insular.

1274
01:19:11,120 --> 01:19:14,240
It feels very much like you're
kind of in a little echo

1275
01:19:14,240 --> 01:19:19,040
chamber.
And I think that that's a

1276
01:19:19,160 --> 01:19:22,200
correct way of looking at this,
that this is an old school way

1277
01:19:22,200 --> 01:19:28,560
of thinking about how we should
go about the enterprise of doing

1278
01:19:28,560 --> 01:19:31,920
science.
And it shouldn't be contrasted

1279
01:19:31,920 --> 01:19:35,320
with the way that industry
professionals are doing science,

1280
01:19:35,320 --> 01:19:40,160
which is to produce products and
to engage in activities

1281
01:19:40,160 --> 01:19:43,480
that have the potential for
clinical or societal benefit.

1282
01:19:44,360 --> 01:19:46,440
Basic science and foundational
science.

1283
01:19:46,440 --> 01:19:49,240
It has to be there in order
for those kinds of more

1284
01:19:49,240 --> 01:19:52,640
applied approaches to have legs,
to have a foundation to stand

1285
01:19:52,640 --> 01:19:55,000
on.
But I do think that the model of

1286
01:19:55,720 --> 01:19:58,000
do a thing and then write a
paper and then get a grant to

1287
01:19:58,000 --> 01:20:01,360
continue doing the thing and
then writing another paper

1288
01:20:01,760 --> 01:20:05,760
is doomed.
Ultimately, to put it bluntly,

1289
01:20:06,200 --> 01:20:10,920
it is striking to me that
in 2025 we are still doing

1290
01:20:10,920 --> 01:20:15,680
science the way we did in the
1800s: we've got scientists

1291
01:20:15,680 --> 01:20:19,280
who are doing science
and then writing a

1292
01:20:19,280 --> 01:20:21,760
little paper that other
scientists will read and that

1293
01:20:21,760 --> 01:20:25,560
maybe makes a big splash and has
some sort of impact on some, you

1294
01:20:25,560 --> 01:20:29,800
know, applied science later.
The basic science has to be done

1295
01:20:29,880 --> 01:20:31,240
right.
The reason that we have

1296
01:20:31,240 --> 01:20:34,840
technologies like GPS, for
example, is because someone at

1297
01:20:34,840 --> 01:20:37,280
some point was like, huh, I
wonder if we can do that.

1298
01:20:37,680 --> 01:20:41,160
And so they figured out how to
do the technological basis, the

1299
01:20:41,160 --> 01:20:45,640
foundation that became GPS.
And it wasn't because they went

1300
01:20:45,640 --> 01:20:49,240
about trying to invent GPS from
the beginning as an applied

1301
01:20:49,320 --> 01:20:51,480
technology.
It's because they did the basic

1302
01:20:51,480 --> 01:20:55,240
science work first.
But this practice of just

1303
01:20:55,560 --> 01:20:58,360
writing a little paper and then
like, you know, packaging it and

1304
01:20:58,360 --> 01:21:00,520
tying it up with a nice little
bow and sending it to a journal

1305
01:21:00,520 --> 01:21:03,480
and paying thousands of dollars
to publish it and then having it

1306
01:21:03,480 --> 01:21:07,400
be locked behind a paywall.
This is a rant, but it's also, I

1307
01:21:07,400 --> 01:21:12,760
think, a recognition that in
order to realize the future that

1308
01:21:12,760 --> 01:21:16,560
Lauren and I have been really
talking about, we need to change

1309
01:21:16,560 --> 01:21:20,480
this.
Because it's not just a time

1310
01:21:20,480 --> 01:21:25,160
constraint, it's a societal and
like expectation constraint on

1311
01:21:25,160 --> 01:21:29,480
the way that we as basic
scientists and academics are

1312
01:21:29,480 --> 01:21:33,240
engaging in this enterprise.
It's hamstringing us.

1313
01:21:33,720 --> 01:21:36,840
It's preventing us from engaging
in this future that Lauren and

1314
01:21:36,840 --> 01:21:40,960
I, I think have laid out and
that we're both very excited

1315
01:21:40,960 --> 01:21:43,080
about and I think that others
are excited about too.

1316
01:21:43,800 --> 01:21:46,840
That we need to find a way to be
more interconnected, to

1317
01:21:46,840 --> 01:21:52,480
capitalize on the fact that we
do have a global scientific

1318
01:21:52,480 --> 01:21:57,200
community that doesn't need to
wait for a paper to get

1319
01:21:57,200 --> 01:21:59,680
published in order to learn
about some new scientific

1320
01:21:59,680 --> 01:22:01,160
finding.
There's got to be a better way.

1321
01:22:01,160 --> 01:22:06,240
And it isn't social media.
We need something in between,

1322
01:22:06,680 --> 01:22:10,160
and it's not, you know,
conferences only.

1323
01:22:10,160 --> 01:22:12,480
I think that there's got to be a
better way to do this.

1324
01:22:13,000 --> 01:22:17,160
And I don't know exactly what it
looks like, but there's a call

1325
01:22:17,160 --> 01:22:21,120
to action for the folks
listening in here that if you

1326
01:22:21,120 --> 01:22:24,520
think that this future sounds
cool and exciting and powerful,

1327
01:22:25,080 --> 01:22:27,000
think about how to make it a
reality.

1328
01:22:27,000 --> 01:22:28,680
And this is something that I
think about a lot.

1329
01:22:29,280 --> 01:22:32,120
And then some of the activities
I'm engaged in are trying to do that,

1330
01:22:32,120 --> 01:22:34,720
but I think we need more
people.

1331
01:22:34,840 --> 01:22:38,520
So there, that's my plea to
get involved in making this

1332
01:22:38,520 --> 01:22:42,600
future a reality.
Let's get back to this idea

1333
01:22:42,600 --> 01:22:44,480
of consciousness, computation,
and causation.

1334
01:22:46,240 --> 01:22:49,000
Megan, you've described the
brain as a probabilistic machine

1335
01:22:49,000 --> 01:22:52,960
navigating uncertainty.
Would you describe consciousness

1336
01:22:52,960 --> 01:22:56,720
as a byproduct of computation or
an adaptive feature of it?

1337
01:22:58,800 --> 01:23:01,920
Yeah, I don't know if I want
to weigh in on that and

1338
01:23:01,920 --> 01:23:04,800
pick a hill to die on.
This is a big question.

1339
01:23:04,800 --> 01:23:07,720
Is consciousness an epiphenomenon?
Is it just kind of

1340
01:23:07,720 --> 01:23:13,040
there as a byproduct, or does it
serve some kind of meaningful

1341
01:23:13,040 --> 01:23:17,880
function in our ability to, you
know, from an evolutionary

1342
01:23:18,160 --> 01:23:21,880
perspective, stay alive,
engage, procreate,

1343
01:23:22,200 --> 01:23:24,840
that kind of thing.
So I think an important

1344
01:23:25,200 --> 01:23:30,200
component of this question is to
differentiate among a, a

1345
01:23:30,200 --> 01:23:34,160
potential function of
consciousness versus a potential

1346
01:23:34,160 --> 01:23:37,680
function for consciousness
versus functions associated with

1347
01:23:37,680 --> 01:23:41,360
consciousness.
So you're asking, is

1348
01:23:41,360 --> 01:23:43,560
consciousness an epiphenomenon?
That would mean there is no

1349
01:23:43,560 --> 01:23:46,040
function at all.
It just kind of happens

1350
01:23:46,040 --> 01:23:49,880
because, you know, that just is
the way the universe is set

1351
01:23:49,880 --> 01:23:52,320
up.
I personally think that it's

1352
01:23:52,560 --> 01:23:55,840
probably the case that it's not
totally an epiphenomenon, that it

1353
01:23:55,880 --> 01:24:00,120
emerges as a component in a
giant functional system that

1354
01:24:00,120 --> 01:24:02,880
probably was evolutionarily
optimized in some way.

1355
01:24:03,400 --> 01:24:06,480
So I think that there is a
function of consciousness: it

1356
01:24:06,480 --> 01:24:11,160
has a purpose, there is
something that it does that is

1357
01:24:11,360 --> 01:24:15,560
adaptive and facilitatory for
the organism that possesses it.

1358
01:24:16,320 --> 01:24:20,520
It allows you to bring
information into a global

1359
01:24:20,520 --> 01:24:23,040
workspace so that you can
manipulate it in a kind of a

1360
01:24:23,040 --> 01:24:28,240
domain general way, or it allows
you to differentiate between

1361
01:24:28,400 --> 01:24:31,280
something that is real out there
in the world and something that

1362
01:24:31,280 --> 01:24:33,840
you just kind of hallucinated or
made up in your head or just

1363
01:24:33,840 --> 01:24:35,320
noise.
So this is sometimes called

1364
01:24:35,320 --> 01:24:38,960
reality monitoring.
And so the presence of

1365
01:24:38,960 --> 01:24:42,840
phenomenal experience is the
result of some reality

1366
01:24:42,840 --> 01:24:46,200
monitoring tagging system that
says these are the components of

1367
01:24:46,200 --> 01:24:47,800
the world that are probably
real.

1368
01:24:47,800 --> 01:24:50,240
And these are the components of
your internal representation

1369
01:24:50,240 --> 01:24:52,960
that are probably just noise or
you just made it up.
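To make that concrete, here is a minimal sketch of the kind of computation such a reality-monitoring "tagging system" could perform, framed as Bayesian inference. This is an illustrative toy with made-up Gaussian likelihoods, prior, and threshold, not a model anyone on this episode has endorsed.

```python
import numpy as np

# Toy reality-monitoring sketch (illustrative assumptions throughout):
# given a scalar internal signal x, infer whether it was externally
# caused ("real") or internally generated ("noise / made up").

def posterior_real(x, p_real=0.5, mu_real=1.0, mu_noise=0.0, sigma=1.0):
    """P(real | x), assuming Gaussian likelihoods with equal variance;
    the shared normalizing constant cancels in the ratio."""
    gauss = lambda v, mu: np.exp(-0.5 * ((v - mu) / sigma) ** 2)
    w_real = gauss(x, mu_real) * p_real
    w_noise = gauss(x, mu_noise) * (1.0 - p_real)
    return w_real / (w_real + w_noise)

# Tag a few internal signals as "probably real" vs "probably noise".
for x in (-0.5, 0.2, 1.5):
    p = posterior_real(x)
    print(f"signal {x:+.1f}: P(real) = {p:.2f} -> "
          f"{'probably real' if p > 0.5 else 'probably noise'}")
```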

1370
01:24:55,040 --> 01:24:57,440
And then there's a
function, you know, for

1371
01:24:57,440 --> 01:25:02,360
consciousness that is the
internal machinations that gave

1372
01:25:02,360 --> 01:25:05,640
rise to the conscious
experience. That's very different

1373
01:25:05,640 --> 01:25:09,040
from the reason that we have it.
And then there would be all the

1374
01:25:09,040 --> 01:25:13,240
other things that go along with
consciousness in us anyway, like

1375
01:25:13,240 --> 01:25:15,720
language and executive
functioning and reasoning and

1376
01:25:15,720 --> 01:25:18,800
problem solving and, you know,
stuff like that, that seem to be

1377
01:25:18,800 --> 01:25:21,440
present when you are conscious
and seem to be absent when

1378
01:25:21,440 --> 01:25:26,280
you're not, or seem to be
present when you are conscious

1379
01:25:26,280 --> 01:25:28,760
of a particular piece of
information and absent when

1380
01:25:28,760 --> 01:25:30,560
you're not.
So there was a big debate for a

1381
01:25:30,560 --> 01:25:32,920
while about can you do math
unconsciously?

1382
01:25:32,920 --> 01:25:35,520
Can you do arithmetic or
addition unconsciously, that

1383
01:25:35,520 --> 01:25:38,880
kind of thing.
So the truth is we don't know if

1384
01:25:38,880 --> 01:25:43,040
consciousness has a function.
I think that something like the

1385
01:25:43,040 --> 01:25:47,920
ability to decide when to update
your model that you've built of

1386
01:25:47,920 --> 01:25:52,400
the world based on new incoming
information, that seems like a

1387
01:25:52,400 --> 01:25:56,320
useful thing for a reality
monitoring or similar mechanism

1388
01:25:56,320 --> 01:25:58,800
to do.
I don't know that phenomenal

1389
01:25:58,800 --> 01:26:03,040
experience per se is the
component that is the

1390
01:26:03,040 --> 01:26:07,440
functional or, like, kind of

1391
01:26:07,440 --> 01:26:10,360
causally efficacious knob in the
system.

1392
01:26:11,240 --> 01:26:18,280
But all indications seem to
point, in my mind, to the idea that

1393
01:26:18,720 --> 01:26:24,680
without phenomenal consciousness
you cannot do some things, that

1394
01:26:24,680 --> 01:26:29,080
it does have some sort of
facilitatory function for us.

1395
01:26:29,080 --> 01:26:30,640
So I think that there is a
function.

1396
01:26:31,200 --> 01:26:34,440
It probably has to do with
learning, adaptive behavior,

1397
01:26:34,440 --> 01:26:39,560
updating of world models.
Pretty hand-wavy answer, but I

1398
01:26:39,560 --> 01:26:40,880
don't think it's an
epiphenomenon.

1399
01:26:40,880 --> 01:26:42,920
I think that there's probably a
reason that it's there.

1400
01:26:44,000 --> 01:26:47,360
Lauren, when it comes to the
philosophy side of this and the

1401
01:26:47,360 --> 01:26:50,080
question of what is
consciousness, are we even

1402
01:26:50,080 --> 01:26:58,520
asking the right question?
I think that there are many

1403
01:26:58,520 --> 01:27:01,720
questions that are being asked
right now in this space.

1404
01:27:02,080 --> 01:27:06,000
It's a mistake, I would
say, to think that there is one

1405
01:27:06,400 --> 01:27:12,200
question.
And it's helpful to consider

1406
01:27:12,200 --> 01:27:16,080
that, even if there are

1407
01:27:16,080 --> 01:27:20,520
many questions that we're
asking, any given question

1408
01:27:20,800 --> 01:27:24,640
involves a lot of boxes
that need to be checked to make

1409
01:27:24,640 --> 01:27:31,520
sure that it's well defined.
And so, as Megan

1410
01:27:31,520 --> 01:27:36,600
has suggested and as we see from
a cursory understanding of

1411
01:27:36,600 --> 01:27:39,880
research in this space, there
are really different types of

1412
01:27:40,160 --> 01:27:43,480
topics of interest that
consciousness researchers are

1413
01:27:43,600 --> 01:27:47,480
focused on.
One helpful thing we can do is

1414
01:27:47,480 --> 01:27:53,000
to separate out those questions.
It would be unhelpful to think

1415
01:27:53,000 --> 01:27:57,320
that there's one.
I'm also skeptical about the

1416
01:27:57,320 --> 01:28:04,280
need for some unifying theory
that they all need to strictly

1417
01:28:04,320 --> 01:28:11,480
fall under, although that might
require a longer set of

1418
01:28:11,480 --> 01:28:13,800
discussions.
I think there is some kind of

1419
01:28:13,800 --> 01:28:17,400
unification that's helpful, but
it's somewhat loose.

1420
01:28:18,240 --> 01:28:23,200
What we do want are very
principled, clear questions.

1421
01:28:23,200 --> 01:28:26,840
And so we don't have this
anything goes, you know, ask

1422
01:28:27,280 --> 01:28:29,560
whatever question you want.
There's all these different

1423
01:28:29,560 --> 01:28:32,040
facets.
No, the questions that we ask in

1424
01:28:32,040 --> 01:28:38,160
this space need to be so precise
that one of the main challenges

1425
01:28:38,160 --> 01:28:39,840
is asking the right question,
right?

1426
01:28:39,840 --> 01:28:43,160
That's something that's been
showing up repeatedly in this

1427
01:28:43,160 --> 01:28:46,920
discussion.
It reminds me of, there's this

1428
01:28:46,920 --> 01:28:52,440
great quote, I think it's from
the band U2, which is we thought

1429
01:28:52,440 --> 01:28:55,560
we knew the answers, it was the
questions we had wrong.

1430
01:28:56,520 --> 01:29:02,080
And so a big challenge in
scientific space is asking the

1431
01:29:02,080 --> 01:29:05,760
right questions.
And we often think of that as

1432
01:29:05,760 --> 01:29:08,280
the starting point for giving an
explanation.

1433
01:29:08,920 --> 01:29:11,480
I can't give you an explanation
for something until you first

1434
01:29:11,480 --> 01:29:13,840
tell me exactly what it is you
want explained.

1435
01:29:14,320 --> 01:29:17,120
And we sometimes start on that
path and we get stuck at the

1436
01:29:17,120 --> 01:29:20,600
first step, specifying
the target.

1437
01:29:21,000 --> 01:29:24,920
And that's where a lot of
discussion is in this space.

1438
01:29:25,360 --> 01:29:29,320
It would be silly to think you
could give the explanation if

1439
01:29:29,320 --> 01:29:33,160
the target isn't sufficiently
precise yet.

1440
01:29:33,240 --> 01:29:37,520
There are different targets of
interest, that's just fine.

1441
01:29:38,080 --> 01:29:42,280
I can't think of many scientific
spaces where that's not the

1442
01:29:42,280 --> 01:29:45,040
standard for any kind of system.
There's so many different

1443
01:29:45,040 --> 01:29:47,680
questions you could ask.
There's some that we might want

1444
01:29:47,680 --> 01:29:50,640
to put outside the space of
interest of a consciousness

1445
01:29:50,640 --> 01:29:53,960
researcher.
So that's up for debate too.

1446
01:29:53,960 --> 01:29:57,360
What are the bounds
on the space of explanatory why

1447
01:29:57,360 --> 01:30:00,360
questions here for consciousness
research?

1448
01:30:00,360 --> 01:30:01,840
We're interested in
consciousness.

1449
01:30:01,840 --> 01:30:06,040
What are we
interested in explaining?

1450
01:30:06,600 --> 01:30:15,120
So I think it's helpful to think
that an important part of

1451
01:30:15,120 --> 01:30:18,720
scientific work is asking the
right questions.

1452
01:30:19,160 --> 01:30:27,240
And I don't think that in this
space there's a lot of fixed

1453
01:30:28,960 --> 01:30:31,840
consensus on exactly what those
are.

1454
01:30:32,320 --> 01:30:34,200
But that's the way science
works.

1455
01:30:34,240 --> 01:30:38,680
And it's helpful to think that
that's the first step that you

1456
01:30:38,680 --> 01:30:42,920
need to accomplish before you
can get the proper answer.

1457
01:30:43,120 --> 01:30:45,880
So if you want to skip that step
and start looking for the

1458
01:30:45,880 --> 01:30:51,160
answer, you're going to be
wading through a mess of stuff

1459
01:30:51,200 --> 01:30:53,120
and you just won't have the
right guidelines because you

1460
01:30:53,120 --> 01:30:54,920
don't yet know what you're
looking for.

1461
01:30:54,920 --> 01:30:58,320
And sometimes in science, we
start with a rough question and

1462
01:30:58,320 --> 01:31:01,480
we go and we look for the causes
and based on what we find, we go

1463
01:31:01,480 --> 01:31:05,400
back and we refine the question.
You see this happen in medicine,

1464
01:31:05,400 --> 01:31:07,440
psychiatric medicine, right?
We start with the disease

1465
01:31:07,440 --> 01:31:10,040
category.
We think we've got the right one, and

1466
01:31:10,040 --> 01:31:12,760
then we go and we look for what
the causes are.

1467
01:31:13,200 --> 01:31:16,600
We might redescribe the target
on the basis of what we find.

1468
01:31:16,880 --> 01:31:20,080
That's a kind of trick.
It's a very smart strategy that

1469
01:31:20,080 --> 01:31:22,280
scientists use to get order in
the world.

1470
01:31:22,280 --> 01:31:29,200
So I don't think we're there
yet, and I don't think

1471
01:31:29,200 --> 01:31:32,520
there's one question in that
space, but a lot of the research

1472
01:31:32,520 --> 01:31:35,120
is focused there as I think it
should be.

1473
01:31:36,800 --> 01:31:38,920
Megan, when it comes to
consciousness, it's almost

1474
01:31:38,920 --> 01:31:41,560
impossible nowadays to have a
conversation about it without

1475
01:31:41,560 --> 01:31:45,680
mentioning AI.
So I feel like we have to touch

1476
01:31:45,680 --> 01:31:48,640
on this.
So can AI systems or large

1477
01:31:48,640 --> 01:31:52,280
language models ever genuinely
experience uncertainty, or will

1478
01:31:52,280 --> 01:31:55,640
they always be simulations
without subjectivity?

1479
01:31:57,480 --> 01:31:59,800
You really want a definitive
answer to this, don't you?

1480
01:32:02,520 --> 01:32:06,800
So there are two big camps
in the

1481
01:32:06,800 --> 01:32:08,640
consciousness science field
about this and you've

1482
01:32:08,800 --> 01:32:13,640
articulated them very nicely. One
is that artificial systems have

1483
01:32:13,640 --> 01:32:16,520
the potential.
I think most people would agree

1484
01:32:16,520 --> 01:32:19,040
that they don't now have some
sort of consciousness, but that

1485
01:32:19,040 --> 01:32:23,640
in the future they have the
potential to manifest subjective

1486
01:32:23,640 --> 01:32:27,280
experience or phenomenal
consciousness or whatever

1487
01:32:27,520 --> 01:32:30,240
terminology you want to use for
someone being in there, the

1488
01:32:30,240 --> 01:32:33,480
lights being on, etcetera.
And then there's the other camp,

1489
01:32:33,480 --> 01:32:35,880
which is kind of the more
biological naturalism camp,

1490
01:32:35,880 --> 01:32:38,280
which says like, no, there's
really something very special

1491
01:32:38,280 --> 01:32:43,200
about biology, and silicon-based
systems or something.

1492
01:32:43,240 --> 01:32:46,320
Something that is not biological
is never going to be able to

1493
01:32:46,320 --> 01:32:49,320
instantiate this
type of thing.

1494
01:32:49,320 --> 01:32:53,120
And you have really smart people
arguing in both

1495
01:32:53,120 --> 01:32:55,480
camps.
So, you know, Anil Seth has just

1496
01:32:55,480 --> 01:33:00,160
written a piece in Behavioral
and Brain Sciences that's one of

1497
01:33:00,160 --> 01:33:02,520
those kinds of target articles.
And then there's a bunch of

1498
01:33:02,520 --> 01:33:06,400
commentaries that come out
associated with it that will say

1499
01:33:06,400 --> 01:33:12,240
things like... So Anil's piece
argues for the point

1500
01:33:12,240 --> 01:33:14,880
that, you know, there is
something special about, as he

1501
01:33:14,880 --> 01:33:18,120
puts it, being a beast machine,
that biology does have

1502
01:33:18,440 --> 01:33:21,760
components that allow it to
maybe manifest the types of

1503
01:33:21,760 --> 01:33:24,960
computations that are necessary
in order to instantiate

1504
01:33:24,960 --> 01:33:27,080
consciousness.
But he actually eschews the idea

1505
01:33:27,080 --> 01:33:30,520
of computational functionalism
in general and says it's not the

1506
01:33:30,520 --> 01:33:33,720
function, that there really is
something special about, you

1507
01:33:33,720 --> 01:33:36,560
know, synapses and biology and
the squishy piece of wetware.

1508
01:33:37,640 --> 01:33:40,640
And the philosopher Ned Block
has, you know, written a

1509
01:33:40,640 --> 01:33:42,720
commentary that kind of agrees
with him that says there's

1510
01:33:42,720 --> 01:33:45,120
something that, you know, might
be, although I don't want to

1511
01:33:45,120 --> 01:33:47,720
mischaracterize Ned.
But then there's other

1512
01:33:47,720 --> 01:33:50,440
philosophers and scientists
who have argued against this.

1513
01:33:50,440 --> 01:33:53,400
And I tend to be more in the
functionalism camp.

1514
01:33:53,800 --> 01:33:57,960
So Matthias Michel also
argues that, yeah, we can say

1515
01:33:57,960 --> 01:34:02,000
that there is something special
about biology, but the special

1516
01:34:02,000 --> 01:34:05,120
thing about biology might be
that it's

1517
01:34:05,160 --> 01:34:07,640
the only kind of
substrate that can instantiate

1518
01:34:07,640 --> 01:34:09,640
that function.
But the function is the key

1519
01:34:09,640 --> 01:34:12,920
component.
The function or the computation

1520
01:34:12,920 --> 01:34:15,840
is the key component that gives
rise to consciousness.

1521
01:34:15,840 --> 01:34:21,920
And so in the future, it is
possible that maybe we figure

1522
01:34:21,920 --> 01:34:24,960
out what it is that might have
been special about biology and

1523
01:34:24,960 --> 01:34:27,360
we actually build an artificial
system that has all those

1524
01:34:27,360 --> 01:34:30,840
special components and now it
can instantiate consciousness as

1525
01:34:30,840 --> 01:34:33,840
well.
So that's a very tight view.

1526
01:34:34,080 --> 01:34:36,800
There's also a more general view
that says, oh, well, maybe

1527
01:34:37,880 --> 01:34:41,720
neuromorphic systems might be
able to instantiate

1528
01:34:41,720 --> 01:34:44,480
consciousness.
Neuromorphic really just is a

1529
01:34:44,480 --> 01:34:48,920
fancy word for brain-inspired,
and it can mean it either

1530
01:34:48,920 --> 01:34:54,040
instantiates the algorithms that
we are discovering in the brain

1531
01:34:54,920 --> 01:34:58,440
or, more likely, neuromorphic
refers to something

1532
01:34:58,440 --> 01:35:01,240
hardware-based: there's this
particular kind of spiking

1533
01:35:01,240 --> 01:35:05,440
neural network that
is manifested or instantiated on

1534
01:35:05,440 --> 01:35:08,200
a particular kind of hardware.
Where we did some materials

1535
01:35:08,200 --> 01:35:11,080
science to come up with the
resistors and stuff that would

1536
01:35:11,080 --> 01:35:15,720
actually, like, look a little bit
more like a brain as opposed to

1537
01:35:16,200 --> 01:35:19,120
traditional.
You know, when you think

1538
01:35:19,120 --> 01:35:24,240
vacuum-tube-style 1960s computers,
memory is over here and

1539
01:35:24,240 --> 01:35:27,760
computation is over here.
And so then you move information

1540
01:35:27,760 --> 01:35:30,280
between memory and computation
and then you put it back in

1541
01:35:30,280 --> 01:35:33,360
memory.
And so, when we talk

1542
01:35:33,360 --> 01:35:37,720
about artificial intelligence,
anything that is not biology is

1543
01:35:37,720 --> 01:35:41,800
in this big pile, but we have to
think about differentiating it a

1544
01:35:41,800 --> 01:35:45,320
little bit more.
And then the very abstract

1545
01:35:45,320 --> 01:35:48,320
version of this is it doesn't
matter what the substrate is.

1546
01:35:48,320 --> 01:35:52,800
It could be a traditional, it
could be a neuromorphic system.

1547
01:35:52,800 --> 01:35:55,000
It could be a von Neumann
architecture, which is like

1548
01:35:55,000 --> 01:35:56,400
this:
you know, memory is over here

1549
01:35:56,400 --> 01:35:59,280
and compute is over here.
It could be your laptop.

1550
01:35:59,520 --> 01:36:03,640
It could be some technology that
we haven't come up with yet, all

1551
01:36:03,640 --> 01:36:05,280
of those.
It could be a large language

1552
01:36:05,280 --> 01:36:07,000
model that runs on a server
farm.

1553
01:36:07,000 --> 01:36:10,440
It could be kind of anything.
And it's the computations that

1554
01:36:10,440 --> 01:36:12,440
matter.
It it doesn't matter what the

1555
01:36:12,440 --> 01:36:15,360
hardware is at all.
It's just the computations and

1556
01:36:15,360 --> 01:36:18,400
the type of like representations
that the system can have.
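As a toy illustration of the substrate-independence intuition being described here (an illustration, not an argument for it), below is one and the same computation, XOR, realized on two different "substrates": a lookup table that is all memory, and a tiny threshold-unit network that is all compute. The function names and the XOR choice are invented for this example.

```python
# Two different realizations of the same computation; from the
# input-output point of view they are indistinguishable, which is
# the computational functionalist's point.

def xor_lookup(a: int, b: int) -> int:
    """Substrate 1: a memory-heavy lookup table ("memory over here")."""
    table = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
    return table[(a, b)]

def xor_neuronish(a: int, b: int) -> int:
    """Substrate 2: a tiny threshold-unit network ("compute over here")."""
    step = lambda x: 1 if x > 0 else 0
    h1 = step(a - b)        # fires when a > b
    h2 = step(b - a)        # fires when b > a
    return step(h1 + h2)    # OR of the two hidden units

# Same function, different realization:
for a in (0, 1):
    for b in (0, 1):
        assert xor_lookup(a, b) == xor_neuronish(a, b)
print("two substrates, one computation")
```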

1557
01:36:19,400 --> 01:36:22,840
And so from that perspective,
maybe large language models are

1558
01:36:22,840 --> 01:36:27,160
like this close to waking up.
I tend to be more on the

1559
01:36:27,160 --> 01:36:28,880
computational functionalist
side.

1560
01:36:29,040 --> 01:36:31,280
So that was a long-winded way of
saying I think it's the

1561
01:36:31,280 --> 01:36:33,560
computations that matter.
I don't think that there is

1562
01:36:33,560 --> 01:36:39,920
anything particularly magical or
special about biology, except

1563
01:36:39,920 --> 01:36:44,000
perhaps that it can instantiate
certain kinds of computations

1564
01:36:44,000 --> 01:36:47,160
that we don't yet know how to do
or that might end up being

1565
01:36:47,160 --> 01:36:51,720
impossible to do in certain
kinds of non-biological systems.

1566
01:36:53,520 --> 01:36:58,280
So from that perspective, I
would say yeah, probably in the

1567
01:36:58,280 --> 01:37:01,240
future artificial systems could
wake up.

1568
01:37:02,040 --> 01:37:04,480
Is it around the corner?
Probably not.

1569
01:37:05,520 --> 01:37:12,000
I don't think that GPT-5 is on the
cusp of having subjective

1570
01:37:12,000 --> 01:37:18,480
experiences. And maybe this is
not the place to go into this

1571
01:37:18,480 --> 01:37:21,560
necessarily.
But let's say that you disagree

1572
01:37:21,560 --> 01:37:26,080
with me and you say no, it does.
How would you test for that?

1573
01:37:26,680 --> 01:37:31,440
How would you know?
That's like a whole
other conversation that maybe we
1574
01:37:31,440 --> 01:37:35,560
other conversation that maybe we
can get into at another time.

1575
01:37:35,560 --> 01:37:41,720
But this idea that we have ways
of evaluating whether someone is

1576
01:37:41,720 --> 01:37:48,960
in there or not works for, you
know, a neurotypical awake

1577
01:37:48,960 --> 01:37:51,960
behaving human versus a
neurotypical asleep human who is

1578
01:37:51,960 --> 01:37:57,520
not behaving or a human who is
in a coma or that kind of thing.

1579
01:37:57,520 --> 01:38:00,880
Like maybe those tests work
pretty well in a clinical,

1580
01:38:00,880 --> 01:38:02,960
bedside setting.
But as soon as you get outside

1581
01:38:02,960 --> 01:38:07,600
of the population on which
they've been validated, like

1582
01:38:07,600 --> 01:38:09,720
what do you do?
You can't apply them to the

1583
01:38:09,720 --> 01:38:12,280
artificial systems.
You fall back on tests of

1584
01:38:12,280 --> 01:38:15,200
intelligence, which is, as we've
discussed, not the same thing.

1585
01:38:15,960 --> 01:38:20,760
So I think it's very possible
that artificial systems will be

1586
01:38:20,760 --> 01:38:23,240
able to have subjective
experiences in the future.

1587
01:38:23,680 --> 01:38:25,400
It is not a hill that I'm going
to die on.

1588
01:38:25,880 --> 01:38:32,480
And the way of answering,
has that happened yet, or at some

1589
01:38:32,480 --> 01:38:37,040
point in the future, when does
it happen, is really, really

1590
01:38:37,040 --> 01:38:40,000
hard.
It's really hard to answer that.

1591
01:38:41,400 --> 01:38:43,080
Lauren, do you have
anything to add to that?

1592
01:38:44,560 --> 01:38:50,920
I do, I think part of what can
be helpful in terms of looking

1593
01:38:50,920 --> 01:38:56,680
at progress, explanations, and
work in this space is that this

1594
01:38:56,680 --> 01:39:01,920
is an explanatory target that is
so much different from many

1595
01:39:01,920 --> 01:39:04,800
others that we're interested
in, in science.

1596
01:39:05,000 --> 01:39:06,760
And that's part of the
challenge.

1597
01:39:06,880 --> 01:39:14,400
And that can partly explain why
we don't yet have an answer,

1598
01:39:14,400 --> 01:39:17,200
but also why specifying the
standards is difficult.

1599
01:39:17,880 --> 01:39:21,560
So this is a type of thing we
want to explain that is

1600
01:39:21,560 --> 01:39:24,640
different from other types of
things we want to explain in

1601
01:39:24,640 --> 01:39:26,080
science, and that we have
explained.

1602
01:39:26,320 --> 01:39:30,400
And so we partly need to figure
out what those differences are.

1603
01:39:30,800 --> 01:39:34,800
And then the second is this
interesting feature where, for

1604
01:39:34,800 --> 01:39:38,040
the types of targets that we do
see across different life

1605
01:39:38,040 --> 01:39:40,720
sciences, that we are interested
in explaining and give

1606
01:39:40,720 --> 01:39:46,600
explanations for,
there is often a set of

1607
01:39:46,600 --> 01:39:49,640
challenges that show up with
respect to how much detail you

1608
01:39:49,640 --> 01:39:52,520
need to cite to give an
explanation.

1609
01:39:52,840 --> 01:39:55,760
And one thing I find is that
there's sometimes a kind of

1610
01:39:55,760 --> 01:40:03,160
confusion between stuff in the
system or stuff that's necessary

1611
01:40:03,440 --> 01:40:07,360
and stuff that's explanatory.
And this partly relates to

1612
01:40:07,360 --> 01:40:10,320
reduction and just figuring out
what details

1613
01:40:10,840 --> 01:40:13,800
a scientist needs to cite
and should cite in their

1614
01:40:13,800 --> 01:40:15,600
explanations.
And this is where we find

1615
01:40:15,600 --> 01:40:22,320
various interesting, confusing
things that show up when we're

1616
01:40:22,320 --> 01:40:23,800
interested in giving
explanations.

1617
01:40:23,800 --> 01:40:32,400
So how low do we need to go
in giving an explanation?

1618
01:40:32,640 --> 01:40:36,440
And how far back in the causal
history of something do we need

1619
01:40:36,440 --> 01:40:38,920
to go
is another question that shows

1620
01:40:38,920 --> 01:40:41,640
up.
Explanations are selective, they

1621
01:40:41,640 --> 01:40:45,320
are choosy and they pick some of
those details, not all of them.

1622
01:40:45,920 --> 01:40:50,920
One confusion that
can show up is you can admit a

1623
01:40:50,920 --> 01:40:55,640
kind of physicalist position for
a biological system, a neural

1624
01:40:55,640 --> 01:40:59,560
system, and agree that there's
physical stuff at lower scales,

1625
01:41:00,520 --> 01:41:02,240
but that doesn't mean it's
explanatory.

1626
01:41:03,240 --> 01:41:07,160
And when someone is saying
factors at a higher scale are

1627
01:41:07,160 --> 01:41:10,800
explanatory, they're not denying
that physicalist picture.

1628
01:41:10,800 --> 01:41:12,680
And sometimes those get
confused.

1629
01:41:13,360 --> 01:41:16,880
And so we need to separate
explanatory relevance from

1630
01:41:17,440 --> 01:41:19,400
physicalism because they're very
different.

1631
01:41:19,760 --> 01:41:23,000
I mean, if we needed to cite all
of that physical stuff, we would

1632
01:41:23,000 --> 01:41:25,640
almost never be able to give an
explanation, but we also don't

1633
01:41:25,960 --> 01:41:28,640
need to.
And so the way I think about

1634
01:41:28,640 --> 01:41:32,560
many of our causal explanations
here is that a causal

1635
01:41:32,560 --> 01:41:36,960
explanation isn't a game of how
low can you go, but a game of

1636
01:41:36,960 --> 01:41:40,120
what gives you control.
And depending on your

1637
01:41:40,120 --> 01:41:43,120
explanatory target of interest,
the factors that give you

1638
01:41:43,120 --> 01:41:44,920
control might be at a higher
scale.

1639
01:41:45,800 --> 01:41:51,040
And so this is partly where we
need to kind of make these

1640
01:41:51,120 --> 01:41:57,000
helpful distinctions to solve
these kinds of things that can

1641
01:41:57,000 --> 01:42:01,680
get tricky, where a scientist
might think that if you include

1642
01:42:01,680 --> 01:42:04,040
more and more lower level
detail, you're always giving a

1643
01:42:04,040 --> 01:42:06,840
better explanation.
Or that network neuroscientists

1644
01:42:06,840 --> 01:42:11,240
deny physicalism when that's not
what they're doing.

1645
01:42:11,240 --> 01:42:12,920
If they're making an explanatory
claim.

1646
01:42:12,920 --> 01:42:18,560
Or there's a puzzle that
philosophers sometimes run into

1647
01:42:18,560 --> 01:42:21,960
where they think The Big Bang,
since it's in the causal history

1648
01:42:21,960 --> 01:42:26,120
of everything, is
something you should cite in

1649
01:42:26,120 --> 01:42:28,880
your explanation.
So do you need to cite The Big

1650
01:42:28,880 --> 01:42:32,840
Bang in explaining why we're all
here today or why a patient has

1651
01:42:32,840 --> 01:42:34,960
a disease?
That sounds so silly to us.

1652
01:42:36,440 --> 01:42:40,120
A philosopher's job is partly to
say why that's silly and why

1653
01:42:40,120 --> 01:42:43,240
that's not explanatory.
But we get stuck on those cases.

1654
01:42:43,480 --> 01:42:47,800
So we get stuck on reductionism
and we get stuck on the entire

1655
01:42:47,800 --> 01:42:51,720
causal history and sometimes
distinctions like physicalism

1656
01:42:51,720 --> 01:42:54,960
and explanatory relevance and
necessity too.

1657
01:42:55,320 --> 01:42:58,480
Something can be necessary for
an outcome; that doesn't mean it

1658
01:42:58,480 --> 01:43:03,400
explains it, right?
The Big Bang is necessary for my

1659
01:43:03,400 --> 01:43:06,880
having asthma, but it doesn't
explain why I have it.

1660
01:43:06,880 --> 01:43:10,160
If I went in to the physician
and I asked, you

1661
01:43:10,160 --> 01:43:12,960
know, why do I have asthma, and
the answer was The Big Bang, that

1662
01:43:12,960 --> 01:43:19,000
doesn't sound right.
So part of what we see in these

1663
01:43:19,200 --> 01:43:26,040
spaces are really important
questions about how

1664
01:43:26,040 --> 01:43:29,280
scientists are making
progress, the types of

1665
01:43:29,280 --> 01:43:32,520
explanatory targets they have
and important distinctions we

1666
01:43:32,520 --> 01:43:38,720
need to make to get over these
puzzles that show up that

1667
01:43:38,720 --> 01:43:43,440
can kind of lead us astray and
that don't capture the rationale

1668
01:43:43,440 --> 01:43:45,640
that does underlie our
explanation.

1669
01:43:45,640 --> 01:43:47,520
I mean, you partly see it with
control, right?

1670
01:43:47,520 --> 01:43:50,720
The Big Bang isn't something that, if
you were to hypothetically

1671
01:43:50,720 --> 01:43:54,520
manipulate it,
controls whether a patient has

1672
01:43:54,520 --> 01:43:57,200
measles or not.
So it doesn't explain that

1673
01:43:57,200 --> 01:43:59,120
outcome.
So, so scientists...

1674
01:44:00,200 --> 01:44:04,160
Sorry, sure.
Are you sure?

1675
01:44:04,160 --> 01:44:06,360
It doesn't explain why they have
measles?

1676
01:44:07,000 --> 01:44:09,800
I'm just being cheeky.
I mean in terms of what's

1677
01:44:09,920 --> 01:44:12,800
currently on offer in...
Fair enough.
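A minimal sketch of the interventionist point in play here, with invented numbers: in a toy causal model, a distal factor can be strictly necessary for the outcome while offering no differential control over it, which is why citing it does not explain the outcome.

```python
import random

# Toy model of the measles example (all probabilities invented):
# a distal cause (a stand-in for the Big Bang) is necessary for anything
# to happen at all, but only virus exposure makes a difference.

def measles(virus_exposed, distal_cause=True):
    if not distal_cause:
        return None          # no world, no patient, no outcome at all
    p = 0.9 if virus_exposed else 0.01
    return random.random() < p

random.seed(0)
n = 10_000
rate_do_virus = sum(measles(True) for _ in range(n)) / n
rate_do_no_virus = sum(measles(False) for _ in range(n)) / n

# Hypothetically intervening on virus exposure changes the outcome:
print(f"P(measles | do(virus=1)) ~ {rate_do_virus:.2f}")
print(f"P(measles | do(virus=0)) ~ {rate_do_no_virus:.2f}")
# The distal cause is necessary (set it to False and there is no outcome
# to speak of), but it gives no differential control over measles, so it
# is not what explains why this patient has measles.
```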

1678
01:44:14,640 --> 01:44:18,040
But part of what philosophy
of science, when it's at its

1679
01:44:18,040 --> 01:44:21,840
best, can help with is a bit of
this science communication

1680
01:44:21,840 --> 01:44:27,480
element, which is what is the
justification for why physicians

1681
01:44:27,480 --> 01:44:30,840
say that there's a virus that
causes measles and not

1682
01:44:30,840 --> 01:44:36,120
fundamental physics or not The
Big Bang. Or, yeah, why

1683
01:44:36,120 --> 01:44:41,000
neuroscientists are working so
hard to explain something like

1684
01:44:41,480 --> 01:44:45,680
consciousness and why this is
actually more difficult than

1685
01:44:45,680 --> 01:44:49,840
explaining just any kind of
trait in biology.

1686
01:44:51,480 --> 01:44:57,880
So yeah, just a bit of a
follow-up there in support of various

1687
01:44:57,880 --> 01:45:03,920
things Megan said.
Megan, I asked Lauren about what

1688
01:45:04,080 --> 01:45:06,520
a philosophically informed
neuroscience would look like, so

1689
01:45:06,520 --> 01:45:08,160
I'm curious to know from your
side, what would a

1690
01:45:08,160 --> 01:45:12,480
neuroscientifically informed
philosophy of mind look like in

1691
01:45:12,480 --> 01:45:22,880
practice for you?
I think I will go back to

1692
01:45:23,360 --> 01:45:25,960
something that Lauren said
actually very much at the

1693
01:45:25,960 --> 01:45:31,560
beginning, which is this
recognition of the complexity of

1694
01:45:31,560 --> 01:45:35,320
the system that we're trying to
understand, that we're trying to

1695
01:45:35,320 --> 01:45:42,280
explain that in some cases toy
examples and simplified

1696
01:45:42,280 --> 01:45:46,760
models are really the only thing
that we have available to us.

1697
01:45:47,240 --> 01:45:51,360
And they can be very powerful.
And sometimes a really highly

1698
01:45:51,360 --> 01:45:55,240
oversimplified explanation or
model or description of what's

1699
01:45:55,240 --> 01:45:58,240
going on is surprisingly
powerful.

1700
01:45:59,200 --> 01:46:03,320
It's really kind of
remarkable how something as

1701
01:46:03,320 --> 01:46:07,600
simple as, well, I'll use an
example from my own field,

1702
01:46:07,720 --> 01:46:11,240
signal detection theory, can
actually do a remarkable job at

1703
01:46:11,240 --> 01:46:15,840
explaining how, or at least
describing, maybe I shouldn't

1704
01:46:15,840 --> 01:46:19,760
use the word explanation, but
describing how an observer like

1705
01:46:19,760 --> 01:46:24,280
you or me is going to deal with
noise in our environment or in

1706
01:46:24,280 --> 01:46:26,760
our own minds.
And signal detection theory, it

1707
01:46:26,760 --> 01:46:28,680
turns out, was not even
developed for psychology.

1708
01:46:28,680 --> 01:46:31,480
It was developed to understand
and characterize the noise in

1709
01:46:31,480 --> 01:46:33,880
electrical circuits in like the
1950s.

1710
01:46:34,960 --> 01:46:36,240
Yeah.
How do you find the signal in

1711
01:46:36,240 --> 01:46:37,800
the noise?
That's basically what it's

1712
01:46:37,800 --> 01:46:41,200
trying to do, really.
Almost stupidly simple

1713
01:46:41,200 --> 01:46:45,280
explanation.
Stupidly simple system does a

1714
01:46:45,280 --> 01:46:52,880
pretty good job at targeting how
and maybe why certain kinds of

1715
01:46:52,880 --> 01:46:56,880
behaviors emerge in certain
kinds of situations from, you

1716
01:46:56,880 --> 01:46:59,520
know, a human or or animal
observer model.
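For the curious, here is a minimal signal detection theory sketch (parameters invented, not tied to any particular experiment): simulate noise-only and signal-plus-noise trials, apply a decision criterion, and recover the sensitivity d' from the hit and false-alarm rates.

```python
import numpy as np
from statistics import NormalDist

# Stupidly simple observer: say "signal" whenever noisy evidence
# exceeds a fixed criterion. Illustrative parameters only.
rng = np.random.default_rng(1)
n = 100_000
d_true, criterion = 1.0, 0.5

noise = rng.normal(0.0, 1.0, n)        # noise-only trials
signal = rng.normal(d_true, 1.0, n)    # signal-plus-noise trials

hit_rate = (signal > criterion).mean()
fa_rate = (noise > criterion).mean()

# d' = z(hit rate) - z(false-alarm rate)
z = NormalDist().inv_cdf
d_prime = z(hit_rate) - z(fa_rate)
print(f"hits={hit_rate:.3f}  false alarms={fa_rate:.3f}  d'={d_prime:.2f}")
# With these settings d' comes out near 1.0, the separation we built in.
```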

1717
01:47:00,000 --> 01:47:04,520
And yet ultimately the thing
that we are trying to capture to

1718
01:47:04,520 --> 01:47:10,920
explain is something that we know
exists.

1719
01:47:10,920 --> 01:47:13,440
It's one of the most complex
things on this planet.

1720
01:47:14,040 --> 01:47:17,360
Brains are really hard.
They're really highly nonlinear

1721
01:47:17,360 --> 01:47:19,520
dynamical systems.
There's a lot of components that

1722
01:47:19,520 --> 01:47:23,080
we have no visibility into.
There's a lot of stuff that we

1723
01:47:23,080 --> 01:47:27,240
are still kind of floundering
around in the dark to try to

1724
01:47:27,240 --> 01:47:34,280
build even just a just-so, post
hoc story of why the system did

1725
01:47:34,280 --> 01:47:35,560
what it did.
What are the kinds of

1726
01:47:35,560 --> 01:47:37,600
informational structures that
are present?

1727
01:47:38,200 --> 01:47:40,240
What even could the software
look like?

1728
01:47:40,240 --> 01:47:44,640
Is it software? Are we, like, what
are we even doing here, man?

1729
01:47:45,320 --> 01:47:53,040
And so the recognition of just
the sheer mind-boggling,

1730
01:47:53,160 --> 01:47:57,280
unfathomable complexity of what
it is that we're trying to

1731
01:47:57,280 --> 01:48:01,120
reverse engineer.
I think that, and the gulf

1732
01:48:01,160 --> 01:48:05,400
between that and billiard balls
on the table, which is a causal

1733
01:48:05,400 --> 01:48:07,720
explanation of why this ball
went into the pocket or didn't

1734
01:48:07,720 --> 01:48:12,360
go into the pocket or something
like that, I think

1735
01:48:12,360 --> 01:48:17,200
we would all do very well to
recognize the size of that gulf

1736
01:48:17,600 --> 01:48:20,880
and to try to shrink it a
little bit.

1737
01:48:22,800 --> 01:48:25,480
So for young researchers who
feel pressure to pick a side,

1738
01:48:25,600 --> 01:48:28,440
scientist or philosopher, what
would both of you tell them

1739
01:48:28,440 --> 01:48:30,960
about integrating both parts
meaningfully?

1740
01:48:32,760 --> 01:48:37,240
Anyone can start.
Don't pick a side.

1741
01:48:37,400 --> 01:48:39,240
Look at me and Lauren.
We didn't pick a side.

1742
01:48:39,240 --> 01:48:43,760
And maybe this discussion has
also highlighted the

1743
01:48:43,760 --> 01:48:47,360
extraordinary value of not
picking a side, of not burying

1744
01:48:47,360 --> 01:48:51,240
your nose in the sand and just
kind of doing the one thing.

1745
01:48:51,240 --> 01:48:55,120
And that, yeah, it's
uncomfortable, as Lauren said,

1746
01:48:55,120 --> 01:48:57,680
to maybe not always be the
expert in the room.

1747
01:48:58,080 --> 01:49:01,480
I'm certainly not the expert
in the room, in a lot of ways.

1748
01:49:02,080 --> 01:49:05,000
There are a lot of things:
I want to have my

1749
01:49:05,000 --> 01:49:07,840
fingers in a lot of pies.
I want to understand a little

1750
01:49:07,840 --> 01:49:10,400
bit about a lot of things.
And I do have deep expertise in

1751
01:49:10,400 --> 01:49:14,000
a couple areas, but there are a
lot of spaces that I have been

1752
01:49:14,000 --> 01:49:19,560
in where the folks around me
know way more about a particular area

1753
01:49:20,280 --> 01:49:24,560
than I do.
And that can

1754
01:49:24,560 --> 01:49:27,720
be the norm, and that's OK.
And that a lot of other people

1755
01:49:27,720 --> 01:49:31,640
in the room might seem like they
are subject matter experts in

1756
01:49:31,640 --> 01:49:34,800
something that you understand.
And they are, but you're also a

1757
01:49:34,800 --> 01:49:37,400
subject matter expert in something
that they're not and you see

1758
01:49:37,400 --> 01:49:39,600
things that they're not able to
see.

1759
01:49:40,200 --> 01:49:45,040
And one example of
this from my own life is that,

1760
01:49:45,080 --> 01:49:47,560
you know, I sometimes go to
these, these conferences or

1761
01:49:47,560 --> 01:49:51,840
workshops that are really
focused on computational and

1762
01:49:51,840 --> 01:49:54,880
theoretical neuroscience,
and even neurotechnology.

1763
01:49:55,040 --> 01:49:59,440
I'm not a neurotechnologist.
I know things about that, but I

1764
01:49:59,440 --> 01:50:04,120
definitely am not that person.
And there are things that I can

1765
01:50:04,120 --> 01:50:06,560
bring to the table as someone
who's a little bit more of a

1766
01:50:06,560 --> 01:50:12,440
generalist.

1767
01:50:12,440 --> 01:50:15,120
I remember recently I actually
brought in some of Lauren's

1768
01:50:15,120 --> 01:50:17,080
work.
I said, what you're doing is

1769
01:50:17,080 --> 01:50:20,160
trying to build an explanation
of, you know, how the brain does

1770
01:50:20,160 --> 01:50:24,080
something in order to drive like
a neuroprosthetic, for example.

1771
01:50:25,400 --> 01:50:27,640
And you really
don't want to just drive the

1772
01:50:27,640 --> 01:50:31,880
neuroprosthetic, which we can do
already using neural recordings.

1773
01:50:32,440 --> 01:50:34,720
But in order to optimize that,
it would be really great if you

1774
01:50:34,720 --> 01:50:37,920
could understand why that kind
of model is working better than

1775
01:50:37,920 --> 01:50:42,920
this other kind of model, or why
one type of model is more or

1776
01:50:42,920 --> 01:50:44,720
less susceptible to neural
drift.

1777
01:50:44,720 --> 01:50:47,200
Like once you put the implant in
and you train the model, you

1778
01:50:47,200 --> 01:50:48,920
come back next week, it doesn't
work anymore.

1779
01:50:48,920 --> 01:50:51,480
Why?
Why did that model fail and this

1780
01:50:51,480 --> 01:50:55,640
other model might not fail?

1781
01:50:56,080 --> 01:50:58,360
Those kinds of explanations
could be really useful from a

1782
01:50:58,360 --> 01:51:00,800
practical perspective.
And a lot of the folks in

1783
01:51:00,800 --> 01:51:03,240
neurotechnology do not think
about explanation.

1784
01:51:03,760 --> 01:51:08,520
They don't. Prediction, the ability to
capture variance in a system,

1785
01:51:08,920 --> 01:51:12,120
that is the target and that is
the thing that matters to them.
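A toy sketch of that prediction-versus-explanation point in the neural-drift setting (the linear decoder, the drift magnitude, and all numbers are invented for illustration): a decoder fit on day-1 recordings captures variance without any account of why it works, and the same weights degrade once tuning drifts a week later.

```python
import numpy as np

# Invented setup: 1-D movement intent read out from n_units neurons
# whose tuning drifts over a week. Fit a least-squares decoder on day 1,
# then reuse it on day 8 without retraining.
rng = np.random.default_rng(7)
n_trials, n_units = 500, 50

def record_day(tuning):
    intent = rng.normal(size=n_trials)
    rates = np.outer(intent, tuning) + rng.normal(0, 1.0, (n_trials, n_units))
    return rates, intent

tuning_day1 = rng.normal(size=n_units)
tuning_day8 = tuning_day1 + rng.normal(0, 0.7, n_units)   # a week of drift

X1, y1 = record_day(tuning_day1)
X8, y8 = record_day(tuning_day8)
w, *_ = np.linalg.lstsq(X1, y1, rcond=None)               # day-1 decoder

def r2(X, y):
    pred = X @ w
    return 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)

print(f"decode R^2, day 1: {r2(X1, y1):.2f}")   # predicts well
print(f"decode R^2, day 8: {r2(X8, y8):.2f}")   # degrades under drift
# The day-1 fit captured variance with no story about the tuning; an
# explanatory account of what drifts (and what stays stable) is what
# would tell you how to build a decoder that survives the week.
```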

1786
01:51:13,080 --> 01:51:16,320
And so differentiating between
prediction and explanation and

1787
01:51:16,320 --> 01:51:20,760
differentiating between, you
know, models and targets of

1788
01:51:20,760 --> 01:51:24,080
different levels of complexity
is something that I can bring to

1789
01:51:24,080 --> 01:51:27,280
the table.
And I can't help them optimize

1790
01:51:27,320 --> 01:51:31,560
their neural implant, but they
can inform me about what they're

1791
01:51:31,560 --> 01:51:33,840
doing and I can inform them
about what I'm doing.

1792
01:51:33,840 --> 01:51:37,760
And so I guess learning to build
calluses and tolerate that

1793
01:51:37,760 --> 01:51:40,320
uncertainty and that discomfort
of not being the expert in the

1794
01:51:40,320 --> 01:51:42,920
room.
No one is the expert in

1795
01:51:42,920 --> 01:51:45,440
everything though.
And so like to a certain extent,

1796
01:51:45,440 --> 01:51:48,720
even if you build deep expertise
in one area, you're going to

1797
01:51:48,720 --> 01:51:51,320
have to navigate spaces where
you're not the expert anyway.

1798
01:51:51,320 --> 01:51:52,600
So you might as well get used to
it now.

1799
01:51:56,840 --> 01:51:57,880
Lauren, anything you want to add
there?

1800
01:52:01,000 --> 01:52:04,040
Yeah, there's a few things I
would add.

1801
01:52:04,840 --> 01:52:13,240
I think that finding work that
you like, finding people and

1802
01:52:13,240 --> 01:52:17,280
researchers that are doing work
that you're interested in, and

1803
01:52:17,280 --> 01:52:22,720
as Megan says, already may be
interested in science and

1804
01:52:22,720 --> 01:52:26,840
philosophy, is helpful.
These are academic fields.

1805
01:52:26,840 --> 01:52:33,560
I mean, academia is still pretty
siloed, so I don't always get

1806
01:52:35,240 --> 01:52:40,240
easy access to scientists and
I'm not always credited for

1807
01:52:40,320 --> 01:52:45,520
working with them or taking the
time to talk to them or even

1808
01:52:45,520 --> 01:52:49,720
writing publications
that get published in scientific

1809
01:52:49,720 --> 01:52:52,480
journals.
So there are interesting

1810
01:52:52,480 --> 01:52:55,160
standards of my field,
philosophy of

1811
01:52:55,160 --> 01:52:57,080
science, that are quite
different from various

1812
01:52:57,080 --> 01:53:00,200
scientific fields where, I mean,
they also might not get credit

1813
01:53:00,200 --> 01:53:02,600
for talking to a philosopher or
writing with one.

1814
01:53:03,040 --> 01:53:06,960
And really different scientific
fields value philosophy very

1815
01:53:06,960 --> 01:53:09,840
differently.
One of the advantages of talking

1816
01:53:09,840 --> 01:53:15,960
to neuroscientists is they
already value philosophy a bit

1817
01:53:15,960 --> 01:53:19,400
and they're already more aware
of it than other scientific

1818
01:53:19,400 --> 01:53:21,880
fields.
So when I talk to a biologist, I

1819
01:53:21,880 --> 01:53:25,320
might have to do a little bit
more legwork to tell them what I

1820
01:53:25,320 --> 01:53:30,320
do and to persuade them that I'm
someone useful to be in the room

1821
01:53:30,320 --> 01:53:33,680
in the first place.
That's not the case with many

1822
01:53:33,680 --> 01:53:38,040
neuroscientists, cognitive
scientists, same kind of thing.

1823
01:53:38,400 --> 01:53:40,640
You know, cog sci is a field
that views itself as

1824
01:53:40,640 --> 01:53:44,280
interdisciplinary, and one of
its areas is philosophy,

1825
01:53:44,280 --> 01:53:47,440
computer science, psychology.
So it really does depend on the

1826
01:53:47,440 --> 01:53:51,960
scientific field that you're
interested in.

1827
01:53:52,560 --> 01:53:55,560
And it helps to talk to people
who work in that space because

1828
01:53:55,560 --> 01:53:58,840
they know a bit about the norms
in the field and the

1829
01:53:58,840 --> 01:54:02,320
expectations.
You know, Megan has different

1830
01:54:02,320 --> 01:54:06,880
expectations on her than I do in
the field of philosophy.

1831
01:54:07,640 --> 01:54:12,200
We're both probably doing more
than the standard person in the

1832
01:54:12,200 --> 01:54:15,800
sense that, you know, people
aren't expecting me to get

1833
01:54:15,800 --> 01:54:17,520
grants.
They're not expecting me to work

1834
01:54:17,520 --> 01:54:19,880
with neuroscientists.
But I care about that work, and

1835
01:54:19,880 --> 01:54:22,760
it's important.
One thing I sometimes say is

1836
01:54:22,760 --> 01:54:25,480
that being a philosopher of
science is a bit odd because you

1837
01:54:25,480 --> 01:54:28,400
sometimes feel like you're
telling scientists why

1838
01:54:28,400 --> 01:54:32,120
philosophy matters and
philosophers why science

1839
01:54:32,120 --> 01:54:35,840
matters.
And so there are also philosophers

1840
01:54:36,200 --> 01:54:42,240
who I'm talking to, and they do
not think that the way to

1841
01:54:42,240 --> 01:54:46,960
understand the fundamental
causal structure of the world is

1842
01:54:46,960 --> 01:54:49,440
to look at anything scientists
are doing.

1843
01:54:49,760 --> 01:54:52,600
Why would you do that?
Why would you?

1844
01:54:52,600 --> 01:54:59,680
So not only do I, you know, not
get credit for interdisciplinary

1845
01:54:59,680 --> 01:55:03,880
work in that sense, but they
don't see why they should care

1846
01:55:03,880 --> 01:55:09,240
about science if they're
interested in causation or

1847
01:55:09,240 --> 01:55:12,360
explanation in some cases or
understanding the world.

1848
01:55:12,600 --> 01:55:18,360
So in all of our fields,
there are different groups of

1849
01:55:18,360 --> 01:55:21,120
people who are approaching
problems in different ways.

1850
01:55:21,120 --> 01:55:26,680
It's helpful to find the work
that speaks to you, the

1851
01:55:26,680 --> 01:55:29,440
researchers who are doing things
you're interested in.

1852
01:55:29,960 --> 01:55:32,520
It also helps to look more pragmatically
at it.

1853
01:55:33,120 --> 01:55:36,440
I mean, it's one thing to study
philosophy and to study

1854
01:55:36,600 --> 01:55:41,720
neuroscience as more of a hobby,
but in terms of going into it as

1855
01:55:41,840 --> 01:55:46,920
a PhD student or a professor,
you know, there's certain types

1856
01:55:46,920 --> 01:55:51,400
of aspects of those
cultures that it's helpful to

1857
01:55:51,400 --> 01:55:53,720
learn a bit about.
And there's also differences,

1858
01:55:53,720 --> 01:55:55,400
right, in terms of different
types of people.

1859
01:55:55,840 --> 01:55:59,720
But it is fascinating the
differences across fields.

1860
01:55:59,720 --> 01:56:03,600
But I have the advantage of, I
mean, Megan started studying

1861
01:56:03,600 --> 01:56:08,280
philosophy before I did, you
know, I started studying

1862
01:56:08,280 --> 01:56:10,640
philosophy.
My first class was

1863
01:56:10,640 --> 01:56:13,960
basically at the end of
undergrad, and then it shows up

1864
01:56:14,280 --> 01:56:20,280
a lot later.
So I don't have to do as much

1865
01:56:20,280 --> 01:56:22,560
legwork when I'm talking to
Megan.

1866
01:56:23,120 --> 01:56:25,720
But when I am working with

1867
01:56:25,720 --> 01:56:29,720
scientists, a main goal is to
bring the philosophy that's

1868
01:56:29,720 --> 01:56:32,040
useful for what they're
interested in.

1869
01:56:32,280 --> 01:56:34,520
If they want to get pulled into
some of the philosophical

1870
01:56:34,520 --> 01:56:38,280
debates, we can do that too.
There's jargon that we're using

1871
01:56:38,280 --> 01:56:42,440
that, you know, I

1872
01:56:42,440 --> 01:56:46,040
don't want to kind of burden
people with, but part of these

1873
01:56:46,040 --> 01:56:48,720
interdisciplinary connections is
learning how to speak to people

1874
01:56:48,840 --> 01:56:50,800
who use very different
vocabularies.

1875
01:56:53,720 --> 01:56:56,320
When you have someone who knows
a bit of the philosophy already,

1876
01:56:56,320 --> 01:56:58,880
they already know the
vocabulary.

1877
01:56:59,120 --> 01:57:03,000
And then, you know, I've trained
in medicine, so I know a bit of

1878
01:57:03,000 --> 01:57:07,400
theirs too.
But there is still this need

1879
01:57:07,400 --> 01:57:10,680
to be comfortable in an
uncomfortable situation where

1880
01:57:10,680 --> 01:57:17,080
you're not the main expert and
you're leaning on other people

1881
01:57:17,160 --> 01:57:21,080
and looking for their input too.
But once you start to see the

1882
01:57:21,080 --> 01:57:25,840
value of that discomfort and
that approach, and you're

1883
01:57:25,840 --> 01:57:30,960
among academics who have the,
you know, ideal disposition

1884
01:57:30,960 --> 01:57:35,440
of being open to being wrong, to
pursuing big ideas and taking

1885
01:57:35,800 --> 01:57:40,120
risks, but also reorienting.
The sky's the limit.

1886
01:57:40,200 --> 01:57:45,800
And then you do have a kind of
team and a kind of group that can

1887
01:57:45,800 --> 01:57:48,960
start to ask the right kinds of
questions so that we can

1888
01:57:49,080 --> 01:57:54,160
ultimately get helpful answers.
But you know, it's very

1889
01:57:54,160 --> 01:57:56,920
interesting to think of the
differences across fields.

1890
01:57:56,920 --> 01:58:04,240
And as a philosopher of science,
it's non-trivial to convey to

1891
01:58:04,240 --> 01:58:08,160
different types of scientists
what it is that I do, how it

1892
01:58:08,160 --> 01:58:10,640
might be useful.
And the same goes for the public

1893
01:58:10,720 --> 01:58:13,520
or any kind of audience.
But part of what those

1894
01:58:13,520 --> 01:58:18,240
interdisciplinary connections
help you learn is doing

1895
01:58:18,240 --> 01:58:23,680
just that, you know, speaking to
different audiences and

1896
01:58:23,680 --> 01:58:25,880
working to do that effectively
or well.

1897
01:58:28,320 --> 01:58:32,400
I think, you know, as you're
talking, Lauren, I

1898
01:58:32,400 --> 01:58:34,960
feel like something's really
just crystallized in my mind

1899
01:58:34,960 --> 01:58:40,680
that in this type of discussion,
we really say, well, you

1900
01:58:40,680 --> 01:58:42,640
know, if you're a
philosophically informed

1901
01:58:42,640 --> 01:58:46,240
scientist or vice versa, you're
the bridge between the, you

1902
01:58:46,240 --> 01:58:48,480
know, subject matter
experts.

1903
01:58:49,560 --> 01:58:52,640
And so maybe you're not the
expert in the room on, you know,

1904
01:58:52,680 --> 01:58:55,880
whatever it is that's being
spoken about, but

1905
01:58:55,880 --> 01:58:57,480
you know what?
You are the expert in the room

1906
01:58:57,480 --> 01:59:02,000
on making bridges, on finding
those connections. Like, that is

1907
01:59:02,000 --> 01:59:05,480
your expertise.
You're not an expert in, you

1908
01:59:05,480 --> 01:59:11,600
know, the measles or
whatever, right? But you are an

1909
01:59:11,600 --> 01:59:14,640
expert at finding the shape
of the problem and building

1910
01:59:14,640 --> 01:59:17,880
those bridges. And this science
communication, this

1911
01:59:17,880 --> 01:59:21,800
ability to translate between
specialized vocabularies.

1912
01:59:22,280 --> 01:59:29,920
That itself is an area of
expertise, and it's valuable not only in

1913
01:59:30,200 --> 01:59:34,080
academic or scientific or even
industry spaces, that kind of

1914
01:59:34,080 --> 01:59:36,680
thing.
It's also valuable in, as you

1915
01:59:36,680 --> 01:59:42,880
said, communication to a broader
audience, translating to make

1916
01:59:42,880 --> 01:59:46,160
sure that the people that you're
listening to and the people that

1917
01:59:46,160 --> 01:59:48,400
you're speaking to can actually
understand each other.

1918
01:59:48,400 --> 01:59:52,440
You're a translator, a
bridge between disciplines.

1919
01:59:52,880 --> 01:59:57,520
You bring a holistic

1920
01:59:57,520 --> 02:00:01,040
forest-level, not trees-level, kind of perspective.
That is the domain expertise

1921
02:00:01,040 --> 02:00:03,560
that someone who wants to occupy
this space will bring.

1922
02:00:04,480 --> 02:00:10,560
And it brings with it the
requirement of developing

1923
02:00:10,560 --> 02:00:13,440
another skill too, which is not
just talking, but also

1924
02:00:13,440 --> 02:00:16,760
listening.
And I think that, as

1925
02:00:16,760 --> 02:00:20,600
subject matter experts, we tend
to want to talk a lot.

1926
02:00:20,600 --> 02:00:25,640
We tend to want to, you know,
come up with our own description

1927
02:00:25,640 --> 02:00:28,640
or explanation for what's
happening and push on that.

1928
02:00:29,120 --> 02:00:32,040
But it's harder to learn how to
listen, especially when you

1929
02:00:32,040 --> 02:00:35,920
don't really speak the language.
And so having essentially a

1930
02:00:35,920 --> 02:00:38,920
translator in the room is such a
valuable asset.

1931
02:00:38,920 --> 02:00:43,520
And being that expert can be
the difference between a

1932
02:00:43,520 --> 02:00:46,360
breakthrough or just kind of
continuing on in parallel with

1933
02:00:46,360 --> 02:00:48,160
our blinders on and reinventing
the wheel.

1934
02:00:50,560 --> 02:00:51,680
Yeah, I think that's
interesting.

1935
02:00:52,280 --> 02:00:56,640
Oh, sorry.
There's something you have said

1936
02:00:56,640 --> 02:01:03,040
before, Megan, about a
researcher showing as opposed to

1937
02:01:03,040 --> 02:01:05,240
just saying they're doing
something.

1938
02:01:06,320 --> 02:01:07,880
What was that?
Do you remember that expression?

1939
02:01:08,480 --> 02:01:10,560
Yeah.
Like show, don't tell, you know,

1940
02:01:11,120 --> 02:01:13,680
like don't tell us that you
found the explanation for

1941
02:01:13,680 --> 02:01:16,680
something.
Show us what that explanation is

1942
02:01:16,680 --> 02:01:18,720
and how you're writing about it
and the story that you're

1943
02:01:18,720 --> 02:01:20,800
telling and the narrative that
you're constructing.

1944
02:01:21,080 --> 02:01:23,400
You know, you want to take
the listener or the

1945
02:01:23,400 --> 02:01:26,720
reader and guide them by the
hand so that they have that aha

1946
02:01:26,720 --> 02:01:30,880
moment along with you.
This is, you know, what

1947
02:01:30,880 --> 02:01:33,160
you want to do in storytelling
and narrative building; it's what

1948
02:01:33,160 --> 02:01:35,200
you want to do in film and
media, right?

1949
02:01:35,200 --> 02:01:37,800
You want to show the audience,
don't tell them.

1950
02:01:38,160 --> 02:01:40,960
No one wants to read a story
that's a list of

1951
02:01:40,960 --> 02:01:43,040
accomplishments.
They want to take the journey

1952
02:01:43,040 --> 02:01:45,360
with you.
So this is the same kind of

1953
02:01:45,360 --> 02:01:45,920
thing.
Yeah.

1954
02:01:46,760 --> 02:01:49,720
So I wonder, in terms of
talking about this

1955
02:01:49,720 --> 02:01:53,480
interdisciplinary approach and
perspective, and an academic who

1956
02:01:53,480 --> 02:01:58,280
does this and does it well,
picking up on what you said, one

1957
02:01:58,280 --> 02:02:01,400
element of it is being a
listener, in part.

1958
02:02:02,480 --> 02:02:06,440
And then another element of it,
I would add, is this sort of

1959
02:02:06,520 --> 02:02:08,920
openness.
I mean, when you think

1960
02:02:08,920 --> 02:02:11,520
something's right, you really
stand by it, but you're also

1961
02:02:11,520 --> 02:02:14,360
open to being wrong.
And one of the challenges of

1962
02:02:14,680 --> 02:02:17,800
some work in philosophy is that
someone becomes known for a view

1963
02:02:18,360 --> 02:02:21,400
and then they don't want to
change it because they're sort

1964
02:02:21,400 --> 02:02:23,800
of known for it.
So they're not really open to

1965
02:02:23,800 --> 02:02:28,160
modifying it or being wrong.
And some of the most impressive

1966
02:02:28,160 --> 02:02:31,920
academics I know are truly open
to that.

1967
02:02:31,920 --> 02:02:35,880
And it allows them to reach
certain types of peaks that they

1968
02:02:35,880 --> 02:02:39,400
wouldn't have had access to.
So listening, being

1969
02:02:39,400 --> 02:02:42,560
open to considering new ideas,
maybe even being wrong.

1970
02:02:42,800 --> 02:02:46,480
But then also there's this
interesting piece where you do

1971
02:02:46,480 --> 02:02:50,640
have to pitch things and you do
have to tell a story. And getting

1972
02:02:50,640 --> 02:02:54,960
a grant, I mean, Megan's more of
the expert here for sure.

1973
02:02:55,400 --> 02:02:59,080
But also, I mean, when we
write papers, you are pitching

1974
02:02:59,080 --> 02:03:01,360
an idea.
When we're writing arguments,

1975
02:03:01,360 --> 02:03:05,920
I'm trying to persuade someone.
It's similar with a grant

1976
02:03:05,920 --> 02:03:09,840
or in any kind of communication,
science communication, there is

1977
02:03:09,840 --> 02:03:15,960
a lot to the story that you
tell, but the best academics,

1978
02:03:16,200 --> 02:03:19,160
they can back it up.
And it isn't just a tell me,

1979
02:03:19,160 --> 02:03:21,880
it's also a show me.
So they can do both.

1980
02:03:21,880 --> 02:03:25,400
And maybe they don't even put
that story together until they

1981
02:03:25,400 --> 02:03:29,000
know that they could show you.
And so sometimes you see, I

1982
02:03:29,000 --> 02:03:34,080
mean, scientists are engaged in
a social, you know, this is a

1983
02:03:34,200 --> 02:03:37,080
social space.
If I tell you I've got a

1984
02:03:37,080 --> 02:03:40,640
mechanism and I tell you I've
got an explanation, and I'm

1985
02:03:40,640 --> 02:03:45,000
coming from a fancy university
and I've done a couple of things,

1986
02:03:45,000 --> 02:03:47,480
you know, that might go a
long way.

1987
02:03:47,480 --> 02:03:50,360
And we do need to be able
to communicate well, and some

1988
02:03:50,360 --> 02:03:54,320
people can check that box.
But if you really want to do the

1989
02:03:54,320 --> 02:03:58,240
best work, it's not just being a
communicator, you've got to back

1990
02:03:58,240 --> 02:04:00,840
it up.
And so then when someone asks

1991
02:04:00,840 --> 02:04:04,640
you, what do you mean when you
say you have an explanation? How

1992
02:04:04,640 --> 02:04:07,040
is this explanatorily
relevant?

1993
02:04:07,040 --> 02:04:10,280
What's your guiding principle?
You need to have an answer.

1994
02:04:10,600 --> 02:04:14,320
Or when they say, what do you
mean by causation here?

1995
02:04:14,720 --> 02:04:16,640
How is this a cause?
What do you mean by mechanism?

1996
02:04:17,280 --> 02:04:19,040
Right?
They need to have an answer.

1997
02:04:19,040 --> 02:04:22,280
And so we have these buzzwords;
they're status terms.

1998
02:04:22,760 --> 02:04:28,320
And part of playing the game
well is knowing how to use words

1999
02:04:28,440 --> 02:04:34,000
that gain some traction.
But if you want to play the game

2000
02:04:34,080 --> 02:04:36,520
the best, you just have to back
that up.

2001
02:04:36,520 --> 02:04:39,800
And really science should be
something that we can back up in

2002
02:04:39,800 --> 02:04:42,560
that way.
So that's a tall order

2003
02:04:42,880 --> 02:04:46,520
for a scientist or a researcher,
but it shows you how they're

2004
02:04:46,520 --> 02:04:51,880
willing to adapt and that they
can really tell you the value of

2005
02:04:51,880 --> 02:04:54,800
their work and the justification
for it.

2006
02:04:55,080 --> 02:04:58,400
But you start to see the kind of
theorizing that a philosopher

2007
02:04:59,160 --> 02:05:03,400
might do and that scientists are
doing within scientific

2008
02:05:03,400 --> 02:05:06,440
practice, and then this
interesting aspect, which is

2009
02:05:06,920 --> 02:05:10,240
their need to pitch this work
right, to communicate it to

2010
02:05:10,240 --> 02:05:14,040
other people in papers, grants
and so on.

2011
02:05:15,120 --> 02:05:16,760
Yeah.
So if you want to do

2012
02:05:16,760 --> 02:05:19,920
that communication well, that
storytelling well, what better

2013
02:05:19,920 --> 02:05:22,840
way than to wear two hats:
philosopher and scientist?

2014
02:05:23,760 --> 02:05:27,480
Yeah, I think you both are
excellent in both fields, and

2015
02:05:27,480 --> 02:05:30,040
and that skill you were talking
about, Megan, the fact that

2016
02:05:30,040 --> 02:05:32,200
Lauren has that skill, I think
you both technically do.

2017
02:05:32,200 --> 02:05:36,120
You both are these translators
in both fields, and I

2018
02:05:36,120 --> 02:05:39,840
think you can see this becoming
a thing where most up-and-coming

2019
02:05:39,840 --> 02:05:42,960
scientists, researchers are
trying to make sure that they

2020
02:05:42,960 --> 02:05:45,120
understand both sides
nowadays.

2021
02:05:45,120 --> 02:05:47,840
So when you do look at young
researchers, they're

2022
02:05:47,840 --> 02:05:51,200
ingrained in multidisciplinary
fields like never before.

2023
02:05:51,560 --> 02:05:54,360
You'll see someone doing
mathematics, AI, consciousness

2024
02:05:54,360 --> 02:05:56,360
research all in one go.
And it's kind of

2025
02:05:56,360 --> 02:05:59,000
surprising, but super
exciting as well because it

2026
02:05:59,000 --> 02:06:01,200
means that the future is kind of
bright in that regard.

2027
02:06:01,920 --> 02:06:05,040
What do you think we should
close off with?

2028
02:06:05,080 --> 02:06:06,680
Anything that you feel you
haven't said?

2029
02:06:06,680 --> 02:06:09,280
Is there anything about this
conversation, why science and

2030
02:06:09,280 --> 02:06:12,560
philosophy need each other, that
you feel you'd like to just hone

2031
02:06:12,560 --> 02:06:14,560
in on and drive home before we
close?

2032
02:06:15,320 --> 02:06:20,280
For me, I think we've covered quite a
lot of ground here, but one

2033
02:06:20,280 --> 02:06:25,120
theme that maybe has been a
common thread throughout this

2034
02:06:25,120 --> 02:06:30,120
is the need for recognizing
that whatever you're doing,

2035
02:06:30,120 --> 02:06:33,160
whether you're a scientist or a
philosopher or some, you know,

2036
02:06:33,160 --> 02:06:35,400
blend of both, you're not doing
it in a vacuum.

2037
02:06:35,760 --> 02:06:38,080
There are all these other folks
around you, and, you know,

2038
02:06:38,120 --> 02:06:44,000
doing good science and good
philosophy is a social and

2039
02:06:44,000 --> 02:06:50,040
networked enterprise, and no
one researcher, no

2040
02:06:50,040 --> 02:06:53,800
one expert is an island.
And this isn't just that you gotta

2041
02:06:53,800 --> 02:06:55,640
read stuff.
Everybody knows you have to read

2042
02:06:55,640 --> 02:06:59,000
the literature and it's gobs
and, you know, piles of

2043
02:06:59,040 --> 02:07:01,800
literature all the time.
And especially if you're in, like,

2044
02:07:02,040 --> 02:07:04,600
artificial intelligence or
machine learning, like good luck

2045
02:07:04,600 --> 02:07:06,600
keeping up with arXiv, good
luck.

2046
02:07:07,280 --> 02:07:10,480
But it's not just that.
It's not just reading and

2047
02:07:10,480 --> 02:07:13,200
thinking and making connections
yourself and working with

2048
02:07:13,440 --> 02:07:15,320
your local research group and so
on.

2049
02:07:15,320 --> 02:07:19,880
It's, I think, really trying to
get out and make your network as

2050
02:07:19,880 --> 02:07:22,440
big and as interdisciplinary as
possible.

2051
02:07:23,160 --> 02:07:27,040
You don't necessarily have to be
the true bridge.

2052
02:07:27,040 --> 02:07:29,440
If that's not your bag, that's
fine.

2053
02:07:30,080 --> 02:07:34,200
But recognizing the value of all
these different kinds of

2054
02:07:34,200 --> 02:07:39,200
approaches and ways of
doing science as a community,

2055
02:07:39,200 --> 02:07:44,480
rather than as a collection of
individuals, that

2056
02:07:44,480 --> 02:07:47,280
there's an emergent property
that we should be going for

2057
02:07:47,280 --> 02:07:52,080
here.
And the way to do that is

2058
02:07:52,080 --> 02:07:57,280
to recognize the value of, and
really celebrate, the different

2059
02:07:57,280 --> 02:07:59,200
kinds of expertise that we can
all bring to the table.

2060
02:07:59,200 --> 02:08:01,560
So the community aspect, I think
is something that's been a

2061
02:08:01,560 --> 02:08:05,680
thread throughout all of this
that maybe I'll just bring to

2062
02:08:05,680 --> 02:08:08,880
the forefront at the end, is
that you too, all of you

2063
02:08:08,880 --> 02:08:11,080
listeners, you can also be part
of this community.

2064
02:08:11,080 --> 02:08:12,320
And I'm sure that you already
are.

2065
02:08:15,520 --> 02:08:17,920
Lauren, you?
Great.

2066
02:08:17,920 --> 02:08:25,720
Yeah, just building on that and
adding to that.

2067
02:08:25,840 --> 02:08:30,120
So why do science and philosophy
need each other?

2068
02:08:30,120 --> 02:08:36,040
Part of the answer is that the
projects that are involved in

2069
02:08:36,480 --> 02:08:41,880
both are intimately related.
Many scientists I know are doing

2070
02:08:42,440 --> 02:08:48,600
theorizing and theoretical work
that is similar to the kinds of

2071
02:08:48,600 --> 02:08:52,080
philosophy of science that I'm
engaged in and that other

2072
02:08:52,080 --> 02:08:55,680
philosophers are engaged in.
So there's a sense in which it's

2073
02:08:55,680 --> 02:09:00,280
hard to separate them if you're
looking at scientific research

2074
02:09:00,280 --> 02:09:02,720
and if you're looking at
scientifically informed

2075
02:09:02,920 --> 02:09:06,080
philosophy.
I think if we're looking at

2076
02:09:07,960 --> 02:09:11,480
current research that scientists
are doing, where they're

2077
02:09:11,480 --> 02:09:16,240
interested in big questions and
they're at the forefront and

2078
02:09:16,240 --> 02:09:22,240
they're trying to uncover and
understand new things that we

2079
02:09:22,240 --> 02:09:25,960
don't yet understand.
If you're looking at those open

2080
02:09:25,960 --> 02:09:29,160
questions in the sort of cutting
edge of science, or if you're

2081
02:09:29,160 --> 02:09:35,480
looking at justifying scientific
practice as it's taken place for

2082
02:09:35,640 --> 02:09:39,320
decades and centuries,
philosophy of science is very

2083
02:09:39,360 --> 02:09:41,520
useful for both of those
projects.

2084
02:09:42,240 --> 02:09:45,680
Philosophy of science here is a
kind of work that's focused on

2085
02:09:46,000 --> 02:09:50,920
foundations of science,
precision in the concepts and

2086
02:09:50,920 --> 02:09:54,000
the methods that scientists use,
the principles that guide their

2087
02:09:54,000 --> 02:09:59,360
research, and how it is
that it works, the success that

2088
02:09:59,360 --> 02:10:01,920
they get, how they reach the
goals that they have.

2089
02:10:02,000 --> 02:10:06,240
And so this isn't something that
a philosopher can do in a

2090
02:10:06,240 --> 02:10:09,720
vacuum, right?
We're studying and hopefully

2091
02:10:09,720 --> 02:10:15,800
working with scientists to get
that precision principles and

2092
02:10:15,800 --> 02:10:19,720
those kinds of goals, to show,
you know, how they actually

2093
02:10:19,720 --> 02:10:22,080
work.
And that's a kind of

2094
02:10:22,080 --> 02:10:25,600
philosophy that scientists do,
that philosophers of science do.

2095
02:10:25,600 --> 02:10:29,960
And it's helpful for both being
able to justify the scientific

2096
02:10:29,960 --> 02:10:34,360
method, how science gives us our
best understanding of the world,

2097
02:10:34,360 --> 02:10:37,960
but also when scientists are
tackling these big questions, it

2098
02:10:37,960 --> 02:10:45,160
helps to look with a clear lens
at scientific practice

2099
02:10:45,160 --> 02:10:50,240
in all of these domains, and at
the principles that you find across

2100
02:10:50,240 --> 02:10:53,360
those contexts and across those
domains.

2101
02:10:53,520 --> 02:11:01,080
So, yeah, these are very much fields that
in some sense can be continuous.

2102
02:11:01,560 --> 02:11:05,160
And I think it's very much the
case that you find many

2103
02:11:05,160 --> 02:11:11,800
scientists engaged in theorizing
that we can think of, and in many

2104
02:11:11,800 --> 02:11:15,280
cases should think of, as
philosophical.

2105
02:11:16,080 --> 02:11:20,040
But of course it's going to
depend on what we mean by

2106
02:11:20,160 --> 02:11:23,160
philosophy.
And that's maybe also one point

2107
02:11:23,160 --> 02:11:27,560
of our discussions is that
philosophy, philosophical work,

2108
02:11:27,560 --> 02:11:30,200
philosophical thinking can mean
very different things in

2109
02:11:30,200 --> 02:11:33,400
different contexts.
But here it's focused on

2110
02:11:33,880 --> 02:11:38,080
critical thinking,
argumentation, and in particular

2111
02:11:38,360 --> 02:11:43,280
kind of foundations of
science and how scientific

2112
02:11:43,280 --> 02:11:46,680
thinking, reasoning and
explanations work and how

2113
02:11:46,680 --> 02:11:51,000
they're so successful.
Well, I just want to say thank

2114
02:11:51,000 --> 02:11:53,200
you both for this wonderful
conversation.

2115
02:11:53,200 --> 02:11:55,080
You both are definitely experts
in the field.

2116
02:11:55,080 --> 02:11:57,840
I can't wait to dissect your
work individually as well,

2117
02:11:58,120 --> 02:11:59,960
showcase it and highlight it as much
as possible.

2118
02:11:59,960 --> 02:12:02,040
It's a true privilege and honor
to have you both.

2119
02:12:02,040 --> 02:12:04,480
And yeah, thank you so much.
This is a wonderful discussion

2120
02:12:04,480 --> 02:12:08,640
and I really enjoyed it.
Thank you so much for

2121
02:12:08,640 --> 02:12:10,600
having us.
This has been really, really fun

2122
02:12:10,600 --> 02:12:12,400
and engaging.
Looking forward to the next

2123
02:12:12,400 --> 02:12:14,400
time.
And it's always fun to hang out

2124
02:12:14,400 --> 02:12:16,640
with Lauren and talk about
science and philosophy.

2125
02:12:16,640 --> 02:12:21,000
It's one of my favorite things.
Oh yeah, always, always fun to

2126
02:12:21,440 --> 02:12:23,520
talk more, learn more from Megan
and Tevin.

2127
02:12:23,760 --> 02:12:26,680
Yeah, thank you so much.
Great to be here and looking

2128
02:12:26,680 --> 02:12:27,320
forward to more.