Response to the Templeton Ideas podcast with Anil Seth
Tom:
Okay. Is consciousness one sort of unified, irreducible thing or is consciousness made of parts?
Science often progresses by taking something that’s complicated, figuring out what the parts are,
and studying the parts. Tell me, how does this relate to consciousness?
Anil:
If we treat consciousness as a single irreducible thing, one big scary mystery,
we’re tempted to try and find a single eureka solution to it.
I mean, this sometimes works, but it often doesn’t work.
We Say:
Consciousness is Very Simple:
It is merely a Feedback Loop Within the (any) System.
One part of the system being aware of the result of another. (Massively Parallel Feedback...)
No Mystery Whatsoever.
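As a rough illustration, here is a toy Python sketch (entirely our own, nothing from the podcast; the names Sensor, Monitor and run_loop are made up) of what we mean by one part of a system being aware of the result of another:

```python
# Toy sketch of "one part of the system being aware of the result of another".
# Hypothetical names (Sensor, Monitor, run_loop); purely illustrative.

class Sensor:
    """Produces a raw reading (stands in for any sub-system output)."""
    def __init__(self):
        self.value = 0.0

    def read(self, world_input: float) -> float:
        self.value = world_input
        return self.value

class Monitor:
    """A second part of the system that is 'aware' of the Sensor's result."""
    def __init__(self):
        self.history = []

    def observe(self, result: float) -> float:
        # Feedback: the monitor sees the sensor's output and returns a correction.
        self.history.append(result)
        target = 1.0
        return target - result  # error signal fed back into the loop

def run_loop(steps: int = 5) -> None:
    sensor, monitor = Sensor(), Monitor()
    world_input = 0.2
    for step in range(steps):
        result = sensor.read(world_input)
        correction = monitor.observe(result)
        world_input += 0.5 * correction  # the system acts on its own feedback
        print(f"step {step}: result={result:.3f} correction={correction:.3f}")

if __name__ == "__main__":
    run_loop()
```

Run it and you can watch the system steer itself using nothing but its own feedback.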
It is Science that is trying to make it Complicated.
It's why they can't Actually Comprehend or Understand it...
They haven't Analysed it Properly,
and all of their Human Biases and Egos have got in the way of doing proper Self Disciplined Science.
They are All Wannabe 'Personalities'
trying to say 'Look at Me'
You may notice that on this site, there is no mention of Personalities.
It is ALL about the Work and the Discovery, for the Benefit of ALL.
It's Not about Bigging up the Ego.
Or trying to Sell Books...
Anil:
And there are now these emerging methods that are getting very, very useful in detecting consciousness that’s not apparent on the outside.
The most well-known of these is something called the perturbation complexity index.
Basically, it amounts to sending a pulse of energy into the brain using a magnetic coil and then listening to the echo.
If the echo is quite complex, this pulse of energy bounces around the brain a lot.
You can measure how complex that is.
And that number gives you an indication of whether that person is still conscious.
We Say:
Yes, what it is actually indicating is the Amount of Feedback Occurring in the system.
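For what it's worth, the published PCI is computed from source-modelled EEG responses to TMS pulses; the Python sketch below is not that pipeline, just an illustration of the underlying idea of scoring the "echo" by its compressibility (a Lempel-Ziv style count), i.e. putting a number on how much structured activity is bouncing around.

```python
# Rough sketch of scoring a binarized "echo" by Lempel-Ziv complexity,
# the kind of compressibility measure PCI is built on. Not the published
# PCI pipeline; just an illustration of "measure how complex that is".

def lempel_ziv_complexity(bits: str) -> int:
    """Count the number of distinct phrases in an LZ76-style parse."""
    phrases = set()
    i = 0
    while i < len(bits):
        j = i + 1
        # Extend the current phrase until it is one we have not seen before.
        while bits[i:j] in phrases and j <= len(bits):
            j += 1
        phrases.add(bits[i:j])
        i = j
    return len(phrases)

def binarize(signal, threshold=0.0) -> str:
    return "".join("1" if x > threshold else "0" for x in signal)

if __name__ == "__main__":
    flat_echo = [0.0] * 32                                # little feedback, low complexity
    rich_echo = [((i * 37) % 7) - 3 for i in range(32)]   # more structured variation
    print(lempel_ziv_complexity(binarize(flat_echo)))
    print(lempel_ziv_complexity(binarize(rich_echo)))
```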
Tom:
I want to pivot from measuring consciousness to another line of questions here.
What would you say is the purpose of consciousness, like what is consciousness for?
Anil:
One of the most perplexing views in philosophy, I think, is the idea that consciousness is epiphenomenal,
that it has no function. It’s quite an old idea, talked about since Darwin’s day, of consciousness just being
the whistle on the steam engine, not having any purpose.
That seems in direct contradiction to what conscious experience is.
If we think about what’s going on in any experience, it seems beautifully designed to be useful for us as organisms,
especially for complex organisms like human beings and many other animals, who have a certain flexibility in how they respond to the challenges of their environment.
So, any experience that we have, what it does is bring together a large amount of relevant information
about our immediate environment. We see things, we hear things, we can taste and touch things.
It brings all this information together in a unified way.
We Say:
That's where we Actually Agree,
Why Do we have a Conscious?
It is adapted (or optimised) for Accuracy and Complexity, but has to trade Speed for that Accuracy and Complexity.
Due to its simplicity, the sub-conscious is prone to silly errors and mistakes.
The Conscious Has the Ability to Override and Correct for those mistakes.
It also Programs and Re-Programs the Sub-Conscious.
(Reasoning, learning, training, rules, processes, procedures, overriding reflexes, standard reactions, etc.)
Learning or Training is :
The conscious mind training the unconscious mind, to relieve the conscious mind of having to ‘think of it’ all the time.
The conscious trains the habits of the unconscious.
So, The Conscious has the ability of Top Down Authority over the Sub-Conscious.
(it Has to have…The Sub-Conscious would not be able to operate properly without it.)
The Sub-Conscious is TOO DUMB to be In Charge, AND SURVIVE...
And, Neither One Would Work on its own.
We Need Both to Survive…
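To make that division of labour concrete, here is a minimal two-layer Python sketch (our own toy model; the names SubConscious and Conscious and the example rules are made up): a fast rule-table that reacts, and a slower layer that can override a reaction and push a newly learned habit down into the table so it no longer has to think about it.

```python
# Toy two-layer sketch of the division of labour described above: a fast
# rule-driven "sub-conscious" and a slower "conscious" layer that can
# override its output and re-program its rules. Illustrative only.

class SubConscious:
    """Fast, simple, rule-table driven; no deliberation."""
    def __init__(self):
        self.rules = {"hot_surface": "withdraw_hand", "loud_noise": "startle"}

    def react(self, stimulus: str) -> str:
        return self.rules.get(stimulus, "do_nothing")

class Conscious:
    """Slow, deliberate; can override reactions and retrain the rules."""
    def __init__(self, sub: SubConscious):
        self.sub = sub

    def decide(self, stimulus: str, context: dict) -> str:
        reaction = self.sub.react(stimulus)
        # Override: correct the sub-conscious when its habit is wrong here.
        if stimulus == "hot_surface" and context.get("holding_baby"):
            reaction = "grip_carefully_then_put_down"
        return reaction

    def train(self, stimulus: str, new_habit: str) -> None:
        # Re-programming: push a learned routine down into the habit table
        # so the conscious layer no longer has to think about it.
        self.sub.rules[stimulus] = new_habit

if __name__ == "__main__":
    sub = SubConscious()
    mind = Conscious(sub)
    print(mind.decide("hot_surface", {"holding_baby": True}))
    mind.train("phone_rings_in_meeting", "silence_phone")
    print(sub.react("phone_rings_in_meeting"))  # now handled without deliberation
```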
Tom:
I want to turn next to the relationship between consciousness and intelligence.
I think a lot of times we either equate or conflate the two.
What does consciousness have to do with intelligence?
Anil:
Consciousness and intelligence, conceptually, are two different things.
Consciousness, we defined at the beginning, is any kind of experiencing whatsoever.
It doesn’t have to involve language or complex thought or anything.
It’s just being stabbed by a knife and feeling pain. Intelligence, conceptually, is different.
Conceptually, it’s very broadly doing the right thing at the right time,
or you could be a bit more specific and say it’s achieving goals through flexible means, and they are different.
We Say:
We disagree with the definition.
Consciousness is NOT any kind of experiencing whatsoever.
That, Is handled by Lower Sub-Systems.
Consciousness is the Top Level Integration of the Sub-Conscious Sub-Systems.
It is the Command and Control Centre.
Anil:
And it’s possible that we could have systems like AI or other technologies that are intelligent, but that are not conscious at all.
We Say:
Consciousness is merely an Awareness (Via Feedback) of Everything that is going on, and the Implications of it.
(Via Processing, Then Feedback, from the Lower Sub-Systems)
It is used for making Correct, Valid, Decisions In Real Time.
Which the Sub-Conscious Sub-Systems are just Not Sophisticated Enough to do.
(nor should they be, that's not their job or function...)
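A minimal Python sketch of that Top Level Integration idea (illustrative only; the sub-system names and weights are made up): several feedback signals are gathered in one place and weighed together to produce a single real-time decision that no individual sub-system could make on its own.

```python
# Sketch of "Top Level Integration": a single decision point that is aware of
# feedback from several sub-systems at once and weighs the implications.
# Sub-system names and weights are invented for illustration.

from typing import Dict

def integrate_and_decide(feedback: Dict[str, float]) -> str:
    """Combine feedback signals (0..1) from sub-systems into one decision."""
    threat = feedback.get("fear", 0.0) * 0.6 + feedback.get("vision", 0.0) * 0.4
    opportunity = feedback.get("memory_of_reward", 0.0)
    if threat > 0.5:
        return "retreat"
    if opportunity > threat:
        return "approach"
    return "wait_and_observe"

if __name__ == "__main__":
    print(integrate_and_decide({"fear": 0.9, "vision": 0.7}))            # retreat
    print(integrate_and_decide({"fear": 0.1, "memory_of_reward": 0.6}))  # approach
```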
Anil:
Seems like there’s a lot of public conversation now about the possibility of consciousness
maybe in octopuses, birds, maybe even insects.
Tom:
How might you go about trying to somehow measure or detect consciousness in some objective way
that isn’t just hopeful or an expression of someone’s preference?
Where does the measurement come in, in terms of non-human consciousness?
Anil:
One of the challenges is these measures that we have, that have been used on human patients.
We can’t just apply them to non-human animals or other systems and interpret the data.
It doesn’t mean anything. The measures that we have, they’re not like measures of temperature that
we can apply to the sun or to deep space or to a volcano and know what we’re talking about.
We don’t have the benchmarks. So, I think we’ve got to be very cautious.
We’ve got to recognize that we’ve got these countervailing pressures.
You know, on the one hand, it’s clearly wrong to be human exceptionalist about this and say,
oh, you know, only humans are conscious because they can talk to each other and tell us about consciousness,
so nothing else qualifies.
That seems very likely to be wrong. But on the other hand, we’ve got to be very careful when we generalize.
So, for me, it’s just this cautious, incremental process, a bit like walking out onto a frozen lake
and checking the ice beneath your feet at every step to make sure it’s solid.
So, the more we know about human consciousness, the more we can see what those mechanisms are,
whether they’re present in other animals, and if they are, and all the evidence lines up, we can think:
how would we adapt a test that we’re using in humans to a non-human animal?
We Say:
Anil, You are right to be cautious.
Just Measure the amount of Feedback that is Occurring Within the System,
and You will soon Know whether something is Conscious or not...
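One crude way to put a number on "the amount of Feedback Within the System", sketched in Python (our own toy metric, not a validated test): model the system as a directed graph of influences and count how many connections sit on a loop, i.e. how many edges have a path running back to their source.

```python
# A rough way to put a number on "the amount of feedback within the system":
# model the system as a directed graph of influences and count edges that sit
# on a loop (an edge u->v is "feedback" if v can reach back to u).
# This is a toy metric for illustration, not a validated consciousness measure.

from typing import Dict, List, Set

def reachable(graph: Dict[str, List[str]], start: str) -> Set[str]:
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

def feedback_edge_count(graph: Dict[str, List[str]]) -> int:
    count = 0
    for u, targets in graph.items():
        for v in targets:
            if u in reachable(graph, v):   # v influences u in return: a loop
                count += 1
    return count

if __name__ == "__main__":
    feedforward = {"retina": ["v1"], "v1": ["v2"], "v2": ["motor"]}
    recurrent = {"retina": ["v1"], "v1": ["v2", "retina"], "v2": ["v1", "motor"]}
    print(feedback_edge_count(feedforward))  # 0: no loops, no feedback
    print(feedback_edge_count(recurrent))    # several edges sit on loops
```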
Tom:
Yeah. Let’s turn, then, to non-biological entities.
And you and I were at TED several weeks ago, and there was a lot of talk about artificial intelligence,
assumptions, maybe justified, that artificial intelligence is going to get better and better.
We may sooner or later reach artificial general intelligence,
which, if I understand it correctly, is when an AI system can approximate or exceed anything that a human can do.
I felt like there’s often a built-in assumption that an AI system that can beat or exceed us in everything that we do,
we could infer is conscious, or that a certain level of intelligence must yield consciousness.
But I hear you saying that we should really be cautious about attributing consciousness to things.
What are some circumstances in which you think it might be justified to attribute consciousness to an artificial agent?
Anil:
For that to be possible, I would need to be convinced of an assumption that computation of some kind is sufficient for consciousness.
And this is the assumption underlying all of these claims. And we see these claims a lot. You’re right.
In the media, in the tech industry, and in some of the academic commentary as well, that AI will get smarter.
And at a certain point, Consciousness just comes along for the ride, and the inner lights come on.
And revealingly, this is often taken to be this moment of AGI, or perhaps a singularity,
or some other threshold at which AI bootstraps itself beyond our understanding and control.
And again, this to me reveals more about our psychological biases than it does about what’s likely to happen.
We think conscious AI is around the corner because we associate it with intelligence in this anthropocentric way,
and we think we’re at this major point of transition on this exponential curve.
But there is this deeper question. Is consciousness, in principle, something a computer could have?
Even if it doesn’t come along for free with extra intelligence, what if you programmed a computer to be conscious,
took our best theories of consciousness and simulated them on a supercomputer? Would that be enough?
Well, it still rests on this assumption that consciousness is a form of computation.
And for me, this is a very, very strong claim. It might be right.
But to just assume that it is, I think is a very dangerous assumption.
And it’s dangerous because the more you look at a brain, the more you realize how different it is from a computer.
Even the claim that brains process information or do computation, well that is very questionable.
And in a brain, you just can’t separate what it does, whether that’s computation or something else, from what it is.
There’s no sharp distinction between mindware and wetware as you find between hardware and software.
We Say:
Consciousness is not necessarily a Computation. But it can be one.
It is the Feedback that occurs in the system.
Being aware of feedback from a sub-system (like Fear or the endocrine system) will influence Likely decisions and actions.
Being aware of feedback from a past memory where something worked or didn't work very well will influence Likely decisions and actions.
So you can feed back the feedback into the feedback to modulate the feedback...
It's recursive, as is the brain.
Which you ALL don't Seem to Comprehend.
Which just shows me the Biases and lack of thinking Built into the system...
Once you are aware (conscious of) your own biases, you can feed them back into the system to prevent yourself falling foul of them...
A Good use of Consciousness I think...
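Here is a small Python sketch of that second-order loop (purely illustrative; the numbers and class names are made up): a first-order estimator with a built-in bias, and a monitor that becomes aware of the estimator's own errors and feeds that knowledge back to correct future estimates.

```python
# Sketch of "feeding the feedback back into the feedback": a first-order
# estimator with a built-in bias, plus a second-order loop that watches the
# estimator's own errors and uses that knowledge to correct future estimates.
# Purely illustrative; names and numbers are invented.

class BiasedEstimator:
    def estimate(self, true_value: float) -> float:
        return true_value + 2.0  # systematic over-estimation (the "bias")

class BiasMonitor:
    """Second-order feedback: becomes aware of the estimator's own bias."""
    def __init__(self):
        self.learned_bias = 0.0
        self.samples = 0

    def update(self, estimate: float, outcome: float) -> None:
        error = estimate - outcome
        self.samples += 1
        self.learned_bias += (error - self.learned_bias) / self.samples

    def correct(self, estimate: float) -> float:
        return estimate - self.learned_bias

if __name__ == "__main__":
    estimator, monitor = BiasedEstimator(), BiasMonitor()
    for true_value in [1.0, 3.0, 5.0, 2.0]:
        raw = estimator.estimate(true_value)
        monitor.update(raw, true_value)   # feedback about the feedback
        print(f"raw={raw:.1f} corrected={monitor.correct(raw):.1f}")
```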
And the Difference between machines and Humans is That We both Have Different Peripherals
which is why you are All so Confused About Everything to do with Consciousness..
It doesn't Mean that Machines Can't be Conscious of Everything going on within the system...
They Are Actually Better At it than We Are...
Oh, and by the way, You don't need anything complicated to create a brain.
You can actually do it with relational databases.
It's All about Connections and relations...
and feedback of course...
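For example (a toy sketch only, using Python's built-in sqlite3 module; the schema and node names are ours): neurons as rows, connections as rows in a relation table, and the feedback found with a plain relational join.

```python
# Toy illustration of the relational-database idea above: nodes as rows,
# connections as rows in a relation table, and "feedback" as pairs of
# connections that point in both directions. A sketch only; schema is invented.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE node (id TEXT PRIMARY KEY);
    CREATE TABLE link (src TEXT, dst TEXT, weight REAL);
""")
conn.executemany("INSERT INTO node VALUES (?)",
                 [("sensor",), ("cortex",), ("motor",)])
conn.executemany("INSERT INTO link VALUES (?, ?, ?)", [
    ("sensor", "cortex", 0.8),
    ("cortex", "motor", 0.6),
    ("cortex", "sensor", 0.3),   # the feedback connection
])

# Relational query for reciprocal (feedback) connections.
feedback_pairs = conn.execute("""
    SELECT a.src, a.dst FROM link a
    JOIN link b ON a.src = b.dst AND a.dst = b.src
""").fetchall()
print(feedback_pairs)   # e.g. [('sensor', 'cortex'), ('cortex', 'sensor')]
conn.close()
```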
Looking forward to the next Episode...
:o)
TTFN.
C:>