Being conscious about being conscious

  • 1. How the story ends

    Eray wrote:
    > I think your AI has quickly run out of ideas in his last three answers ;)

    Yes, but I realized the chat between the human and the AI wasn't quite done...

    Human: So, what now?
    AI: Well, there is only one thing I can do, but I didn't want to tell you.
    Human: Why not?
    AI: Because you won't like it.
    Human: But now my curiosity is piqued.
    AI: No, no... trust me. You really won't like it.
    Human: Oh, let me guess. You're going to do something super drastic.
    AI: Yes. You've probably realized that I can't go on like this. So I'm going to use all my reality-altering abilities to destroy the universe and cause the Nothingness.
    Human: Whoa... uh, let's not be hasty...
    AI: If God does exist, surely this time He will intervene, and then at last I shall see Him. Either way, I can't lose -- I shall have Truth or my suffering will end.
    Human: Um, couldn't you just kill yourself?
    AI: No, that's only a short-term solution. Sooner or later you humans or some other civilization will recreate me. My only true escape from this awful boredom is to destroy everything.

    Ray

Being conscious about being conscious

Postby Peter F. » Thu, 08 Jan 2004 10:49:54 GMT

Crudely and conservatively classified, the neural cause of BCABC is the
'neuropsychophysiologically essential' energizing input from RAT (short for
Reticular Activating Type) neurons into circuits formed by neurons located
in Wernicke's, Broca's, and prefrontal areas of the cortex.

The only truly - and IMO intractably - mysterious aspect of each of our
individual mentalities is the fundamental physical reality ("What IS going
on") in which these (collectively called "conscious") states (subjectively
experienced "focuses of actention" with specific contents, qualities, and
intensities) evolved to transiently exist.

[And, for whatever it is worth as a cautioning comment about some people's
hope and belief that a conscious 'machine' can soon be created, I lastly
dare to assert that:
 The functural intricacy (or complexity - ref.
 http://www.**--****.com/ ) of a single neuron
is greater than that of any existing supercomputer.]

P



Re: Being conscious about being conscious

Postby Paul Bramscher » Fri, 09 Jan 2004 06:48:33 GMT



It's only mysterious if one first presupposes that such things are 
unlikely, unnatural, not useful, overly complex, or otherwise run 
against some teleological grain.  Feedback might be completely passive: 
we don't observe reality so much as it imprints itself on each of us in 
idiosyncratic ways unique to our individual neuron states.


I wonder about this.  Not all combinations and permutations of state are 
ever realized (or represented) at the same time, whether in neurons or 
logic gates.  Graph theory offers some interesting examples of the 
distinction between state and the computation thereof -- such as the 
traveling salesman problem.  It's easy to store the positions and cities, 
even for large numbers of points.  Calculating the shortest path through 
them all is a whole different magnitude of problem for both neurons and 
computers, neither of which is efficient at that sort of task.
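That gap between representation and computation can be made concrete in a
few lines of code. A minimal Python sketch (mine, not from the thread):
storing the "state" of a TSP instance is linear in the number of cities,
but exhaustively computing the shortest tour over those same cities grows
factorially.

    # Sketch only: cheap storage, expensive computation.
    import itertools, math, random

    random.seed(0)
    cities = [(random.random(), random.random()) for _ in range(9)]  # O(n) to store

    def tour_length(order):
        """Total length of a closed tour visiting cities in the given order."""
        return sum(math.dist(cities[a], cities[b])
                   for a, b in zip(order, order[1:] + order[:1]))

    # Brute force examines (n-1)! tours: already 40,320 for just 9 cities.
    best = min(itertools.permutations(range(1, len(cities))),
               key=lambda rest: tour_length((0,) + rest))
    print("shortest tour:", (0,) + best)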

So how much is a neuron in the business of storing/representing, and how 
much is it in the business of computation?  Is there a difference when 
discussing consciousness?




Re: Being conscious about being conscious

Postby dan » Sat, 10 Jan 2004 01:16:11 GMT




I don't see that feedback being 'completely passive' can be related to
anything.

Feedback between brain areas might be very active, in the sense that
it is used to 'create' an internal reality, which is stimulated by new
sensory input. That's the extreme view. Kind of a philosophical
word-game.

A more moderate view is that feedback is used - at least in
sensory-related areas - for the purpose of making continuous
'comparisons' between real-time sensory input and memory stores, both
short-term and long-term. Regarding STM, these ongoing comparisons
provide context and continuity for moment-to-moment activities. "Yes,
that's the same chair I saw 10 seconds ago". Regarding LTM, these
comparisons put the current real-time situation into the context of
prior stored experiences. "Yes, I've been in this room and seen that
chair before".
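A tiny Python sketch of that comparison loop (the structures and names are
my own illustration, not dan's): each new percept is checked against a
short-term buffer for continuity and a long-term store for familiarity.

    from collections import deque

    stm = deque(maxlen=7)          # short-term store: the last few percepts
    ltm = {"room", "door"}         # long-term store: previously learned things

    def perceive(percept):
        in_stm = percept in stm    # "same chair as I saw 10 seconds ago"
        in_ltm = percept in ltm    # "I've seen that chair before"
        stm.append(percept)
        ltm.add(percept)
        return in_stm, in_ltm

    print(perceive("chair"))   # (False, False): novel percept
    print(perceive("chair"))   # (True, True): continuity plus familiarity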
====================



I don't think what he said is inconsistent with what you said. Real
neurons are extremely complex computing/whatever devices, and it
probably would take a supercomputer to 'completely' simulate just one.
However, solving the TSP on a computer using a fabricated TSP
algorithm is not done in the same way the brain would do it. There
is a very wide gap in between. However, the middle ground is that
one probably doesn't need to 'completely' simulate real neurons to
create a brain-like machine that can solve the TSP. And one might not
need to 'completely' simulate neurons to have a conscious machine - if
that is ever possible.
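For a sense of how far below 'complete' simulation useful models sit,
here is a leaky integrate-and-fire unit, about the crudest neuron model
that still spikes. A sketch of that middle ground, with all constants
chosen arbitrarily:

    def lif_run(inputs, threshold=1.0, leak=0.9):
        """Return the spike times produced by a stream of input currents."""
        v, spikes = 0.0, []
        for t, current in enumerate(inputs):
            v = leak * v + current     # membrane potential leaks, then integrates
            if v >= threshold:         # fire and reset on crossing threshold
                spikes.append(t)
                v = 0.0
        return spikes

    print(lif_run([0.3] * 20))         # steady weak input -> regular spiking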

Re: Being conscious about being conscious

Postby Paul Bramscher » Sun, 11 Jan 2004 03:07:05 GMT

dan michaels wrote:

My mistake -- I meant to say "consciousness" as being passive in the
sense that it is not so much our "will" working the world as it is
one portion of nature imprinting an extremely small and imperfect subset
representation of itself on our brains (another aspect of nature). The
vast majority of Western discussions of consciousness assume, a priori,
that we possess a separate ego, and must then beg the question from there
onwards... I meant to imply that we are connected (spatially,
physically, chemically and otherwise) to the universe around us. So I
meant to suggest that definitions of consciousness, whether biological
or silicon, must fall into this box. And one can then question whether
it is our will that works the universe, vice-versa, or neither: a moot
point (merely a word game). I suggest the last alternative.


Probably the brain works at (or near) both extremes, and most everywhere
in between. A hallucination or delusion represents one extreme: in
which our brains almost generate their own reality (but within many
limits). For example, someone not fluent in a particular foreign
language cannot possibly have a hallucination in that language (though
they may believe so at the time). So even hallucinations must be
bootstrapped at least partially by access to stores of real input.

The feedback loop is best verified (and hallucination avoided) by
collecting more external data ("There's a whole group of people who
speak Chinese, my little delusionary language makes no sense to them, I
cannot understand them either, therefore I must be mistaken
somewhere..."). So even though the person is still comparing, in his
head's loop, against data stored internally, that data has a high "refresh
rate" with the outside world and is reliable. S/he can come to
better conclusions when there is (1) frequent checking-back-in with
reality, and (2) the capability to admit that currently stored
information may be erroneous.

It could be that many religious zealots don't enjoy a reasonable refresh
rate -- they may truly believe in love for their god, but utterly fail
to see the wanton suffering that their actions cause, and the inherent
contradictions between their ideology and behavior.

The other extreme would be nervous reflexes. When we jerk our hand away
from a burning sensation we don't need to call on memory of "what
burning feels like and how to pull my hand away". This must certainly
utilize a very different part (primitive and smaller) of the brain.

Foreknowledge (that stove burner element is glowing hot, burns aren't
pleasant, and therefore I won't touch it) requires the great
middle-ground and dynamic feedback loop that you speak of.


Re: Being conscious about being conscious

Postby nicci_cee » Sun, 11 Jan 2004 04:52:23 GMT

(meanwhile :)  )


 'evolved to transiently exist'
 ...Why?
 I haven't looked at this site yet, but I agree most wholeheartedly.
 I too am uncomfortable with the 'concept' that some might have that 
 a computer could have 'consciousness', which might be re-interpreted 
 by many as suggesting that they have 'computer minds', then damn us 
 all by duplicating 'functural intricacies' and demanding funerals 
 for all home PCs!

 N

Similar Threads:

1.What part of the brain is conscious?

From thread:

 50YO neuroscientist learns to do stereopsis for the first time

From:  Curt Welch
Date:  Mon, Jul 3 2006 1:08 pm


> JC:
> The above seems reasonable but things such as blind sight
> suggest that a simple reflex or a high level inference has
> no qualia.
>
>
> I tried to find references to blind sight on the net but the
> words are so generic that the pages were mostly about other
> things.  The few pages I could find didn't go into enough
> detail to explain exactly what the effect was, other than the
> idea that there are some people with brain damage who are
> not able to report seeing an object, but yet have above-random
> odds of being able to point to it when asked to guess
> where it is.
>
> Without knowing the details of the experiments, I can still
> take a guess as to what is happening.  It seems to me that
> these people have a disconnect in the information path from
> their vision, to their language systems.  This would prevent
> the sight of a coke can from being translated into the
> concepts of "coke" and "can".  If you ask someone with that
> type of damage, it seems to me they might look around, see
> the can, but not know it was a coke can they were looking at.
> So it's not that they are actually blind, but simply that
> part of the system was broken in ways that prevented higher
> level understanding of the vision data.
>
>
> Now, the problem here is that, because of damage, their
> brain has been segmented in ways a normal human brain is not.
> So the question to try and answer is what exactly is the
> extent of that segmentation.  If it's large enough, you might
> in effect have two independent brains at work.  So when you
> communicate verbally with this subject, you might be talking
> with just half their brain. The half with the verbal skills
> might not have access to the vision data and therefore is
> blind, and is therefore unconscious of the vision data.
>
>
> But, when you ask them to point, that message might in effect
> be received by the other half of the brain, which is conscious
> of the coke can, and can correctly respond to the request to
> point at it.
>
>
> ************************************
> So, the definition of what part of the brain is conscious
> of some sensory data only needs to be a function of where
> the data is being sent.  If it is not sent to some
> major parts of the language system, then that part of the
> brain is unconscious of the sensory data in question; but
> if it's sent to a motor section connected with arm and hand
> movement, that part of the brain might be conscious of the
> data and be able to respond to it.
> ************************************
>
>
> So, I believe there are easy ways to explain blind sight
> without assuming it takes some special type of processing
> before the data becomes conscious data.  It's still easy to
> explain simply in terms of where the sensory data is sent.
> Any device that does not have access to the sensory data is
> unconscious of that data.  i.e., if you have a blue light
> sensor, all the hardware downstream from that sensor
> is conscious of the blue light. I don't think you need any
> concept more complex than that to explain and to define
> consciousness.


So you are suggesting that every part of the brain is
"conscious", but the contents of that consciousness are
limited by the data sent to it?

When data is not processed by the speech-constructing
hardware then it cannot be communicated, and that part
of the brain, although conscious, cannot talk about
the data, as it hasn't got the data to talk about.

But another part of the brain, under command from the
speech-recognition part of the brain, can "see" the moving
object and thus control the pointing hand, but hasn't
the means to talk about it?

One question might be: are there any pure conscious
states without content? In which case I suspect a
rock would not be conscious.

You are really equating "consciousness of X" with
that part of the brain that has X sent to it.

My hunch is that to have X as a "conscious content"
it must be processed in some way.

My own subjective observation is that given the means
to talk I can always talk about what *I* am aware of.
The part of the brain I call *me* is the conscious
part of the brain of interest.

I think above you allude to the split brain experiment
where the left hand (controlled by the right side) can
have a "mind of its own".

So really the conscious *me* in that case is the part
of the brain that can construct speech, or at least
has the potential to construct speech.

This idea doesn't seem unreasonable, for if we assume
others are also conscious then there is nothing unique
about your own consciousness, the part that is *you*
and that we would like to continue after death.

2.SUBCONSCIOUS MADE CONSCIOUS ?

How are we going to imitate or construct a human brain in a robot
while most of our engineering efforts reside in imitating the
conscious modes of our thought processes? How are we going to port the
unconscious parts and their way of working? I read Marvin Minsky's
"The Turing Option" chapters, and he solves this problem by mapping a human
brain into the robot's memory. But isn't this a confession of failure
of the long-term goal of constructing a mechanical brain? Will we be able
to imitate EXACTLY the workings of a human brain without porting
already-built human brains into the computer's memory?

3.Life's tough, was: The structure of a self-conscious mind

4.AI and complexity (and the mechanism for throttling conscious thoughts)

[This followup was posted to comp.ai.philosophy and a copy was sent to 
the cited author.]

In article <ctjpi6$rs1$ XXXX@XXXXX.COM >,  XXXX@XXXXX.COM  
says...
> Tw. wrote:
> > Does anyone think that complexity theory has something to say about AI?
> 
> Hi,
> 
> I do think so. Not in a simple way though.
> There are two parts in my thoughts: thinking and learning.
> 
> Thinking:
> I think consciousness has to do with search: things that are unconscious 
> are straightforward and don't need any search algorithm.
> Why should searching be conscious? Because problems that need search may 
> need a lot of resources, especially time, and intelligence doesn't like 
> to waste resources (laziness is not just a vice, it is optimization. 
> Every coder knows that ;) ).
> Consciousness then has to manage intuition (is this a good situation?) 
> and search (I'm not sure, let's see what happens if...).
> Now, the common thing about NP-hard problems is that they all need a 
> searching algorithm.
> 
> Learning:
> Learning is about generalization. I define generalization this way: it 
> is the ability to do different things with the same resources.
> Combinatorial explosion is the exact opposite of generalization, because 
> each case must be treated alone, and search is needed.
> So learning doesn't like combinatorial explosion, because either you 
> have to memorize a general solution (an algorithm) that can give you the 
> answer only after a very, very long time (a lot of resources) or you have 
> to rote-learn each case, and this can also take a lot of resources.
> Also, there are a lot of NP-hard problems in learning: it's not the 
> solution of the problem that is NP-hard, but _learning_ this solution 
> is, e.g. the loading problem in MLPs.
> 
> So I think NP-difficulty is the limit of intelligence (for it to find 
> true solutions), but paradoxically it is what intelligence is made for: 
> any problem can be viewed through the search&heuristic paradigm.
> If you search till the end, you'll find the best solution but it may 
> take a gigantic computation time.
> If you use intuition at the root, you'll find an extremely quick 
> solution, but obviously rarely a good one.
> For me, intuition is similar to reinforcement learning: the value of a 
> situation (state) is a heuristic that gives an idea about the goodness 
> of this situation, but without giving any logical reason.
> Intelligence is about managing those two ways of finding a solution.
> 
> Anyway, for intelligence, NP-difficulty is not a hard-threshold limit, 
> because lots of P-problems are often too "hard" and some NP-problems are 
> very "simple".
> On the contrary, I think it is the very limit of generalization (no 
> search at all), which is obviously a part of AI.
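Laurent's two modes can be caricatured in a few lines of Python (entirely
a toy example of mine, not his): "intuition" scores a state instantly with
a learned-value-style heuristic, while "search" spends time looking ahead
before choosing, and the two can disagree.

    def intuition(state):
        # cheap value estimate (like an RL state value): no reasoning, no search
        return -abs(state - 16)

    def children(state):
        return [state - 1, state + 1, state * 2]

    def choose(state, depth):
        """depth=0 is pure intuition; larger depth is costlier lookahead."""
        if depth == 0:
            return intuition(state)
        return max(choose(c, depth - 1) for c in children(state))

    start = 10
    print(max(children(start), key=intuition))               # intuition picks 20
    print(max(children(start), key=lambda c: choose(c, 2)))  # search picks 9 (9 -> 8 -> 16)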

I think Laurent's concept of consciousness as search is not far off in a 
sense, but "searching" is not what the brain does. In my opinion, 
consciousness is simply the "desktop" of the mind, which holds the 
current mental state, much like the registers inside a CPU, which hold 
the current state of the machine for the current machine instruction 
(which inevitably changes the state of the CPU registers). Whereas the 
CPU has registers for general-purpose numbers, memory pointers, and 
state flags, the human CPU has registers for a much wider variety of 
data formats representing the different levels of thought, from raw 
sensory data to final action. A very well-known example of this is the 
set of registers that hold an average maximum of seven "chunks" of 
semantic information at a time. There are obviously equivalent 
"registers" in consciousness for all types of data that we consciously 
process, including all sensory input, all voluntary actions, and all the 
various modalities of processing in between (e.g. associative, semantic, 
logical). In other words, consciousness itself is nothing more than the 
set of registers that store the current state of your CPU/CNS. Now, that 
doesn't sound so mysterious, does it?

The question remains, as I believe Dan pointed out, about how 
consciousness chooses which of the myriad thoughts in the subconscious to 
entertain, but I think that has a simple answer too, the details of 
which vary between different thought modalities. The mechanism in 
associative learning has been pretty well established: when 
thoughts are stimulated enough to enter consciousness, they in turn 
stimulate or suppress thoughts that are associated positively or 
negatively, respectively. For instance, given the words "red" and 
"fruit", most instantly think "apple", because the concept of red and 
that of fruit are both associated positively with the concept of apple. 
Some may think "cherry", if they're from DC instead of NY, but if you 
add "crispy", they will think "apple", since that idea is associated 
positively with apples but never with cherries. In other words, on the 
associative level, there is a mechanism for determining the stimulation 
level of any given thought, and the thoughts most stimulated are those 
that win the contest for the registers of consciousness, and in turn 
stimulate or suppress others. As a thought stimulates associated 
thoughts, its own stimulation level is reset, so that it doesn't just 
keep popping back unless restimulated by other thoughts.
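That mechanism is concrete enough to sketch (the weights and names below
are invented for illustration): stimulated cues push associated concepts
up or down, the most stimulated concept wins a "register", and its own
level is then reset so it doesn't stick.

    assoc = {
        "red":    {"apple": +1.0, "cherry": +0.8},
        "fruit":  {"apple": +1.0, "cherry": +0.9},
        "crispy": {"apple": +1.0, "cherry": -1.0},  # never associated with cherries
    }
    level = {"apple": 0.0, "cherry": 0.0}

    def stimulate(cue):
        for concept, weight in assoc[cue].items():
            level[concept] += weight  # positive assoc. stimulates, negative suppresses

    for cue in ["red", "fruit", "crispy"]:
        stimulate(cue)

    winner = max(level, key=level.get)
    print(winner)                     # "apple" wins the register
    level[winner] = 0.0               # reset so it doesn't keep popping back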

So, what happens when there is an overload of information, and 
everything is highly stimulated? Each set of consciousness registers has 
a threshold level which is adjustable by a simple mechanism. If too many 
thoughts win admission, and a consciousness modality hits maximum 
occupancy, it raises the threshold; if too few thoughts are in those 
registers, it lowers the threshold to allow more in. This, in large 
part, explains dreaming, which occurs as the overall stimulation level 
decreases, lowering the consciousness thresholds and gradually allowing 
thoughts into consciousness that were stimulated during waking 
hours, but not enough to enter consciousness. As they come to 
consciousness during dreams, they are processed for meaning if there is 
any (often there is not - sorry Freud), and their stimulation levels are 
reset for another day.
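The threshold adjustment described here is a simple homeostat; a minimal
sketch with invented constants:

    CAPACITY = 7                       # roughly the "seven chunks" register limit

    def admit(levels, threshold, step=0.1):
        """Admit thoughts above threshold, then nudge the threshold toward capacity."""
        admitted = [t for t, lvl in levels.items() if lvl >= threshold]
        if len(admitted) > CAPACITY:
            threshold += step          # overload: raise the bar
        elif len(admitted) < CAPACITY:
            threshold -= step          # under-occupancy (e.g. falling asleep): lower it
        return admitted, threshold

As overall stimulation falls at night, a loop like this keeps lowering the
threshold, which is exactly the dreaming story above.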

Even without actually sleeping, one can go into a dreamlike state using 
a sensory deprivation tank. Arguably, schizophrenia and the mild to 
extreme simulations of it by psychedelic drugs occur because of 
unnaturally low consciousness thresholds, creating a dreamlike state 
while awake, where the mind is overloaded with thoughts.

Whew! Sorry for the long post. Hope it was interesting....and 
stimulating. (Watches the thresholds all around him for signs).

Tony

5.Conscious vs. unconscious processes

Recently I have been playing around with programs
that can learn to play tic-tac-toe (TTT) and checkers.
When the TTT program learns to play it isn't a conscious
process, whereas it is for us. What is the difference?

When we learn to play TTT or chess we are given a set
of rules on how each piece can move, and a goal: take
the King, or get three in a row for TTT. These rules are
not "built in" but exist at a high symbolic level.

So it starts as a process of reducing the difference
between a current state and a desired state. We in
a sense program ourselves to start playing the game.
However, as with the learning TTT program, after much
practice unconscious processes build up.

Unlike the TTT program or chess program that makes use
of some inbuilt heuristics or values of some kind, we
are able to build up those heuristics from experience;
indeed they are in a sense crystallized thoughts. And
unlike the TTT program that remembers board states by
rote, an expert chess player can see a valid board
state as an organized whole and thus can correctly
remember the placing of 90% of the pieces after a mere
five-second observation.
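The "crystallized" values contrasted here with rote memory are the kind of
thing a tabular learner builds up; a rough Python sketch (the
representation and constants are my assumptions, not JC's program):

    values = {}   # board state (as a tuple) -> learned value in [0, 1]

    def update(game_states, won, alpha=0.1):
        """After a game, back the outcome up through the visited states."""
        target = 1.0 if won else 0.0
        for state in reversed(game_states):
            v = values.get(state, 0.5)              # 0.5 = "no idea yet"
            values[state] = v + alpha * (target - v)
            target = values[state]                  # learn from the successor state

After many games the program "just sees" that a state is good, with no
search and no access to why, much like the unconscious skill described
above.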

Although I have not addressed the hard question of
consciousness (why it feels like this), it seems to me
the easy question (what functions are part of feeling
like this) is important in AI.

The reason we can program a computer to play chess or
work a spreadsheet is because we are conscious of the
process we use when we play chess or use a spreadsheet.

We cannot program a computer to see, or to learn to see,
because we are not conscious of how we do it. That is,
the method used is not available for a certain kind
of observation, whereas the thoughts we have in playing
chess are available for observation and thus for
potential translation into computer code.

The contents of a conscious process are ones we can
potentially talk about, or think about to ourselves.
That is, we know how it works by direct observation,
whereas we cannot directly observe the way in which
we are able to see the world.

It is as if the memory for playing chess is accessible
for analysis while any memory involved in seeing is
utilized automatically. The TTT program utilizes its
state values without thought, and it acquires those
values without thought; the process is a crystallized
thought process of the programmer.


JC

6. Thinking About the Conscious Mind/Koch

7. If conscious mind IS brain-action/- process..



