I thought bees have five eyes though?
Yes, bees have five eyes: two large compound eyes and three ocelli on top of the head. The ocelli are probably used for detecting the horizon and/or the angle to the sun (they are sensitive to polarization). The two large compound eyes--at least in honey bees--are not capable of stereopsis. They basically work as one large wide-angle (fisheye) lens, and that is what I simulate in my coding.
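For anyone curious what that looks like in code, here is a minimal sketch of the idea (an equidistant fisheye projection in Python; the 280-degree field of view and all the names are illustrative assumptions, not my actual simulation):

```python
import numpy as np

def fisheye_project(direction, fov_deg=280.0, size=256):
    """Project a 3D view direction onto a single wide-angle (equidistant
    fisheye) image -- pooling the panorama the compound eyes cover into
    one sensor."""
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)                    # unit view direction
    theta = np.arccos(np.clip(d[2], -1.0, 1.0))  # angle off the optical (+z) axis
    phi = np.arctan2(d[1], d[0])                 # azimuth around that axis
    r = theta / np.radians(fov_deg / 2)          # equidistant model: radius ~ angle
    x = size / 2 * (1 + r * np.cos(phi))         # pixel coordinates, centered
    y = size / 2 * (1 + r * np.sin(phi))
    return x, y

# Example: a direction 45 degrees off-axis lands about a third of the way
# from the image center toward its edge.
print(fisheye_project([0.0, 1.0, 1.0]))   # -> (128.0, ~169.1)
```

Each view direction lands at a radius proportional to its angle off the optical axis, which is what gives the single wide-angle image its fisheye character.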
“This all sounds promising. But how does it work? Pylyshyn did not provide many suggestions that I am aware of. He described the phenomena of visual indexing in human test subjects, but he did not propose a mechanism for how it worked.”
Sounds like an ambitious project, Tom! But it also leaves me scratching my head about how it could be consistent with your last post, which argued that the brain/computer analogy is a bad one. Aren’t you using a simple computer that drives a drone by means of indexing to demonstrate mechanisms by which a human might visually index what’s “seen”? In that sense, at least, shouldn’t the brain be considered “computational”?
I happen to like the brain/computer analogy because I think the brain accepts incoming nervous-system information, processes it algorithmically, and then the processed information goes on to operate various output mechanisms. To me that seems like a good definition of “computer”. The brain should quite directly operate the heart muscle this way, for example.
(One implication is that there must be something which this computer operates that exists as consciousness. I suspect certain parameters of the electromagnetic field serve this role. Essentially, the right sort of synchronous neuron firing gets into a physics-mandated zone such that the produced field itself resides as all that we see, think, and so on. Then the thinker’s decisions (which of course reside under the unified field) exert energies that alter neuron firing to cause muscle function in ways that correspond with what was decided.)
Though I’d love for you to assess my EMF consciousness position some day, I don’t mean to make this about me. If I’ve misinterpreted this post regarding human indexing, or the last one about how the brain/computer analogy is a bad one, then I figure I should give you the opportunity to straighten me out.
Hey Eric, thanks for the question and sorry for the delayed response.
I think the disconnect comes from the seeming inconsistency between my dislike of the brain-is-a-computer metaphor and my recognition that the brain is a processor of information. The parts of the brain-is-a-computer metaphor that bug me the most include:
1. A computer is usually used as a reflexive device: input->output. An intelligent animal comprises a brain in a continuous loop with a body and a sensed environment (umwelt). Every one of our actions comes with some expectation of its outcome. If that expectation is not met, we learn something (see #2, and the first sketch after this list).
2. A computer does not learn any more than a lump of clay learns to assume the shape of a vase. It is programmed, formed by an external intelligent agent. It cannot even make mistakes (software bugs are always human error). The use of the term "learning", as in "machine learning" or "self-supervised learning", is pure anthropomorphic hogwash.
3. Knowledge in computers is stored in standardized formats: ASCII, unsigned 8-bit integers, MPEG-4. Our brains are born without representational formats--only nerves that fire or not. This is a huge issue that nobody is talking about.
4. A lot of computer simulations of cognitive functions use algorithmic techniques incompatible with neurons and neuroscience. Brains do not implement unbounded recursion, iteration, or tree search. Likewise, there is no evidence that back-propagation (the technique used to train artificial neural networks) exists in real brains; the second sketch below shows the kind of local rule that is at least plausible.
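To make #1 concrete, here is a toy sketch of the loop I have in mind (the names, the one-number world, and the simple error-driven update are illustrative assumptions, not my simulation code and not a claim about real neural machinery). The agent acts, every action carries an expectation, and the internal model changes only when that expectation is violated:

```python
import random

# Stand-ins for the body/umwelt side of the loop (purely illustrative).
def sense(env):
    return env["light"]          # whatever the senses currently report

def act(env, action):
    env["light"] += action + random.gauss(0.0, 0.1)   # noisy consequence

env = {"light": 0.0}
expected_effect = 0.0            # the agent's model of what its action does
LEARNING_RATE = 0.1

for step in range(100):
    before = sense(env)
    action = 1.0                                      # act on the world
    expectation = before + expected_effect * action   # every action carries one
    act(env, action)
    outcome = sense(env)
    surprise = outcome - expectation                  # expectation not met?
    expected_effect += LEARNING_RATE * surprise * action   # ...then adjust
```

Nothing external programs the final value of expected_effect; it is shaped entirely by mismatches between expectation and outcome as the loop runs. Whether that deserves the word "learning" is, of course, exactly the quarrel in #2.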
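And on #4: the kind of update rule that is at least compatible with a neuron is a local, Hebbian-style one, where a synapse changes using only the activity it can "see"--its own input and its cell's output. A minimal sketch (again an illustration of the principle, not anyone's published model):

```python
import numpy as np

rng = np.random.default_rng(0)
n_pre, n_steps, ETA = 10, 500, 0.01
w = np.zeros(n_pre)                  # synapses onto one postsynaptic cell

for _ in range(n_steps):
    stimulus = rng.random() < 0.5    # some event in the world
    # The first five inputs tend to fire with the event; the rest at random.
    pre = (rng.random(n_pre) < 0.1) | (stimulus & (np.arange(n_pre) < 5))
    post = stimulus                  # the cell itself also fires with the event
    # Hebbian update: each synapse uses only locally available activity;
    # no error signal is propagated backward from anywhere.
    w += ETA * pre * post
```

After a few hundred steps the first five weights end up roughly ten times the size of the others: correlation alone, sensed locally, did all the work.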
In my computer simulations, I try to avoid these traps. #3 is the hardest, but I am working toward a solution for it as well.
No worries, Tom! I think we might better align here by formally acknowledging the validity of nominalism. When you use the “computer” term you seem to mean a reasonably specific sort of technological machine. Therefore when I’m trying to understand your positions I need to keep your definition of “computer” in mind. I certainly agree that brains don’t function as computers do in the ways that you’ve mentioned.
Then if you’re reading some of my stuff, I’d hope you won’t be put off by my own far more liberal use of the “computer” term. Under this definition I consider the brain to function computationally in the sense that it accepts nervous-system input information and algorithmically processes it to do various things. Furthermore, I even permit the consciousness element of brain function to reside as a very different kind of computer. Here vision and such exist as input; then, fueled by the desire to feel good, we use thought to process that input. The various decisions that we make reside as output, some of which might even feed back to the brain so that associated muscle operations occur.
Anyway, nominalism might help us grasp each other if we’re clear about how we’re using the “computer” term.