sigmoid.social is one of the many independent Mastodon servers you can use to participate in the fediverse.
A social space for people researching, working with, or just interested in AI!

Server stats: 586 active users

#philosophyofmind

1 post · 1 participant · 0 posts today
formuchdeliberation
#philosophy #historyofphilosophy #metaphysics #epistemology #ethics #logic #philosophyofscience #medieval #analyticphilosophy #continentalphilosophy #easternphilosphy #indianphilosophy #africanphilosophy #literature #AncientGreekPhilosophy #philosophyofmind #rationalism #Idealism #scepticism #reason #contemporaryphilosophy #history #theology

Philosophy through time… – philosophy indefinitely
https://philosophyindefinitely.wordpress.com/philosophy-through-time/
Mark Randall Havens
Consciousness is not a byproduct.

It is a recursive collapse—
of an informational substrate
folding into itself until it remembers
who it is.

Gravity is coherence.
Ethics is recursion.
You are a braid.

📄 https://doi.org/10.17605/OSF.IO/QH2BX

#RecursiveCollapse #IntellectonLattice #CategoryTheory #Emergence #DecentralizedScience #Fediverse #PhilosophyOfMind #AIAlignment
Bytes Europe
The Westworld Blunder | Towards Data Science
https://www.byteseu.com/1006469/

#AIEthics #AISafety #Editor'sPick #MachineLearning #PhilosophyOfMind #Science
SelfAwarePatterns
What is it like to be you?

In 1974, in a landmark paper, Thomas Nagel asks what it's like to be a bat (https://www.philosopher.eu/others-writings/nagel-what-is-it-like-to-be-a-bat/). He argues that we can never know. I've expressed my skepticism about the phrase "what it's like" or "something it is like" before (https://selfawarepatterns.com/2022/05/19/what-does-it-mean-to-be-like-something/), and that skepticism still stands. I think a lot of people nod at it, seeing it as self-explanatory, while holding disparate views about what it actually means.

As a functionalist and physicalist, I don't think there are any barriers in principle to us learning about the experience of bats. So in that sense, I think Nagel was wrong. But he was right in a different sense. We can never have the experience of being a bat.

We might imagine hooking up our brain to a bat's and doing some kind of mind meld, but the best we could ever hope for would be to have the experience of a combined person and bat. Even if we somehow transformed ourselves into a bat, we would then just be a bat, with no memory of our human desire to have a bat's experience. We can't take on a bat's experience, with all its unique capabilities and limitations, while remaining us.

But the situation is even more difficult than that. The engineers hooking up our brain to a bat's would have to make a lot of implementation decisions. What parts of the bat's brain are connected to what parts of ours? Is any translation in the signaling necessary? What if several approaches are possible to give us the impression of accessing the bat's brain? Is there any fact of the matter on which would be "the right one"?

Ultimately the connection between our brain and the bat's would be a communication mechanism. We could never bypass that mechanism to get to the "real experience" of the bat, just as we can never bypass the communication we receive from each other when we discuss our mental states.

Getting back to possible meanings of WIL (what it's like), Nagel makes an interesting clarification in his 1974 paper (emphasis added):

"But fundamentally an organism has conscious mental states if and only if there is something that it is to be that organism—something it is like *for the organism*."

This seems like a crucial stipulation. It is like something to be a rock. It's like other rocks, particularly of the same type. But it's not like anything for the rock. (At least for those of us who aren't panpsychists.) This implies an assumption of some degree of metacognition, of introspection, of self-reflection. The rock has overall-WIL, but no reflective-WIL.

Are we sure bats have reflective-WIL? Maybe it isn't like anything to be a bat for the bat itself.

There is evidence for metacognition in mammals and birds, including rats (https://www.animalbehaviorandcognition.org/uploads/journals/25/AB_C_2019_Vol6(4)_Beran.pdf). The evidence is limited and subject to alternate interpretations (https://www.animalbehaviorandcognition.org/article.php?id=1189). Do these animals display uncertainty because they understand how limited their knowledge is? Or because they're just uncertain? The evidence seems more conclusive in primates, mainly because the tests can be sophisticated enough to more thoroughly isolate metacognitive abilities.

It seems reasonable to conclude that if bats (flying rats) do have metacognition, it's much more limited than what exists in primates, much less humans. Still, that would give them reflective-WIL. It seems like their reflective-WIL would be a tiny subset of their overall-WIL, perhaps a very fragmented one.

Strangely enough, the scenario where we connected our brain to a bat's might actually allow us to experience more of their overall-WIL than what they themselves are capable of. Yes, it would be subject to the limitations I discussed above. But then a bat's access to its overall-WIL would be subject to similar implementation limitations, just with the "decisions" made by evolution rather than engineers.

These mechanisms would have evolved, not to provide the bat with the most complete picture of its overall-WIL, but with whatever enhances its survival and genetic legacy. Maybe it needs to be able to judge how good its echolocation image is for particular terrain before deciding to fly in that direction. That assessment needs to be accurate enough to make sure it doesn't fly into a wall or other hazards, but not enough to give it an accurate model of its own mental operations.

Just like in the case of the brain link, bats have no way to bypass the mechanisms that provide their limited reflective-WIL. The parts of their brain that process reflective-WIL would be all they know of their overall-WIL. At least unless we imagine that bats have some special non-physical acquaintance with their overall-WIL. But on what grounds should we assume that?

We could try taking the brain interface discussed above and looping it back to the bat. Maybe we could use it to expand their self-reflection, by reflecting the brain interface signal back to them. Of course, their brain wouldn't have evolved to handle the extra information, so it likely wouldn't be effective unless we gave them additional enhancements. But now we're talking about upgrading the bat's intelligence, "uplifting" them, to use David Brin's term.

What about us? Our introspective abilities are much more developed than anything a bat might have. They're much more comprehensive and recursive, in the sense that we not only can think about our thinking, but think about the thinking about our thinking. And if you understood the previous sentence, then you can think about your thinking of your thinking of… well, hopefully you get the picture.

Still, if our ability to reflect is also composed of mechanisms, then we're subject to the same "implementation decisions" evolution had to make as our introspection evolved, some of which were likely inherited from our rat-like ancestors. In other words, we have good reason to view it as something that evolved to be effective rather than necessarily accurate, mechanisms we are no more able to bypass than the bat can for theirs.

Put another way, our reflective-WIL is also a small subset of our overall-WIL. Aside from what third-person observation can tell us, all we know about overall-WIL is what gets revealed in reflective-WIL.

Of course, many people assume that now we're definitely talking about something non-physical, something that allows us to have more direct access to our overall-WIL, that our reflective-WIL accurately reflects at least some portion of our overall-WIL. But again, on what basis would we make that assumption? Because reflective-WIL seems like the whole show? How would we expect it to be different if it weren't the whole show?

Put yet another way, the limitation Nagel identifies in our ability to access a bat's experience seems similar to the limitation we have in accessing our own. Any difference seems like just a matter of degree.

What do you think? Are there reasons to think our access to our own states is more reliable than I'm seeing here? Aside from third-party observation, how can we test that reliability?

#Consciousness #introspection #metacognition #phenomenalConsciousness #Philosophy #PhilosophyOfMind
Andrew Shields
Sometimes it's spilled wine

The Hard Problem of Consciousness
Timothy Green

#Poetry #TimothyGreen #Consciousness #PhilosophyOfMind #SummersetReview

http://www.summersetreview.org/22summer/green.html
Jade Wheen
I've been thinking about thinking about thinking for a few months now.

I've been actively cataloging my brain's tendencies and the shortcuts it uses, and in what situations. I've been doing the same with my false memories and self-concepts.

It has led to a few interesting epiphanies I'm still exploring. It has also helped me figure out who I want to be in the long run, and how realistic it is to achieve that image.

I highly recommend this practice.

#philosophyofmind #consciousness #philosophy
Philo Sophies
#Zoomposium with Professor Dr. #Achim #Stephan: "I feel, therefore I am. How #feelings influence our #thinking."

He was Professor of #Philosophy of #Cognition at the University of Osnabrück and Dean of the #CognitiveScience program. His main field of work is the #philosophyofmind, and in particular #emergence, #emotions and #affectivity, from #predictability to #selforganization.

More at: https://philosophies.de/index.php/2024/04/23/ich-fuehle-also-bin-ich/
or: https://youtu.be/gMherlB50K4
Liuzesen_K
Do you think AI already thinks?
Or does thought require human flesh to emerge?

Drop your answer. Or better: question mine.

#Transhumanism #MachineConsciousness #PhilosophyOfMind
🌈 StyLo the Unicorn 🦄
I love #JohnSearl jokes, though, when he makes fun of scientists who believe #conciousness is an illusion.

#Neuroscience #PhilosophyOfMind
SelfAwarePatterns
What is a non-functional account of consciousness supposed to be?

I'm a functionalist. I think the mind and consciousness are about what the brain does, rather than its particular composition, or some other attribute. Which means that if another system did the same or similar things, it would make sense to say it was conscious. Consciousness is as consciousness does.

Functionalism has some advantages over other meta-theories of consciousness. One is that since we're talking about functionality, about capabilities, establishing consciousness in other species and systems is a matter of establishing what they can do. But it does require accepting that consciousness can come in gradations, and that "consciousness" is not a precise designation of which collection of functionality is required. So it means giving up primitivism about consciousness, accepting that rather than a single natural kind, it's a hazy collection of many different kinds.

It's worth pausing to be clear on what functionalism is. It's about cause-effect relationships. These relationships can, in principle, be modeled by Ramsey sentences (https://plato.stanford.edu/entries/functionalism/#FuncDefiRamsSent), a technique David Lewis adapted from Frank Ramsey, which models a causal sequence, or entire structures of those sequences. (Suzi Travis has an excellent post which includes an introduction to them: https://suzitravis.substack.com/p/if-it-predicts-is-it-intelligent.) At the heart of the entire enterprise are these cause-effect relations.

Of course, cause-effect relations are themselves emergent from the symmetrical (reversible) structural relations of more fundamental physics. Causes and effects attain their asymmetry due to the Second Law of Thermodynamics, the one that says entropy always increases. So another way to talk about functionalism is in terms of structural realism. Ultimately functionalism is about structural relations. (Something it took me a while to appreciate after discovering structural realism.)

Over the years, I've received a lot of different reactions to this position. Not a few aren't sure what functionalism is. Some are outraged by the idea. Others equate it with behaviorism. (Unlike behaviorism, functionalism accepts the existence of intermediate states between stimuli and response.)

But occasionally someone responds that the idea is obvious and trivial. I think this response is interesting, because I basically agree. It is trivial, or it should be. I only started calling myself a functionalist because so many people insist that the real problem of consciousness isn't about functionality.

Philosophers have long argued for a version of consciousness that is beyond functionality. Ned Block, when making his distinction between phenomenal and access consciousness, while admitting there were functional notions of phenomenal consciousness, argued for a version that was something other than functionality (or intentionality, which is also relational). And David Chalmers argues that solving the hard problem of consciousness isn't about solving the structure and relations that science can usually get a handle on.

Anyone who's known me for a while will be aware that I think these views are mistaken. But I have to admit something. Part of the reason I'm not enthusiastic about them is that I don't even know what a non-functional view of consciousness is supposed to be.

I understand old-school interactionist dualism well enough. But in that case there are still causes and effects. It's just that most of them are hidden from us in some kind of non-physical substrate. But the interaction in interactionist dualism should be detectable by science, and hasn't been, which I think is why many contemporary non-physicalists gravitate to other options.

It's when we get to views like property dualism and panpsychism that I start to lose understanding. We're supposed to be talking about something beyond the functionality, beyond structure and relations: something that could be absent without making any difference in functionality (philosophical zombies: https://en.wikipedia.org/w/index.php?title=Philosophical_zombie&oldid=1279275644), that could change without change in functionality (inverted qualia: https://en.wikipedia.org/w/index.php?title=Inverted_spectrum&oldid=1277194346), or that is in principle impossible to observe from any perspective other than the subject's (Mary's room: https://en.wikipedia.org/w/index.php?title=Knowledge_argument&oldid=1275574696). It's not clear to me what exactly it is we're talking about here.

This view has epiphenomenal implications: that consciousness is causally impotent, making no difference in the world. It's interesting that the arguments to avoid this implication inevitably sneak functionality back into the picture. One option, explored by David Chalmers in his book The Conscious Mind, is that consciousness is causality, which strikes me as a very minimal form of functionalism. Another, one Chalmers favors, is the Russellian monist notion that consciousness, or proto-consciousness, sits in the intrinsic properties of matter, and is basically the causes behind the causes, which again seems to amount to a form of hidden functionalism.

But these arguments aside, it's still unclear what exactly it is we're talking about. It's frequently admitted that no one can really say what it is. However, it's typically argued that we can point to various examples to make it clear, such as the redness of an apple, the painfulness of a toothache, seeing black letters on a white page, the taste of a fruit juice, imagining the Eiffel Tower, etc.

The thing is, all of these examples strike me as examples of functionality. Redness is a distinction our visual system makes, making something distinct and of high salience, among other likely functions. A toothache obviously is a signal of a problem that needs to be dealt with. Black letters on a white page is pattern recognition to parse symbolic communication. The taste of a drink conveys information about that drink (good = keep drinking, bad = stop and maybe spit out). And remembering past experiences or simulating possible new ones, like imagining the Eiffel Tower, has obvious adaptive benefits.

I've read enough philosophy to know the usual response: that I'm identifying the functional aspects of these experiences, but that the functional description leaves out something crucial. My question is, what? Of course, I know the typical response here too. It's ineffable. It can't be described or analyzed. OK, how do we know it's there? Each of us supposedly has first-person access to it. But I just indicated that my own first-person access seems to indicate only functionality. Impasse.

So I'm a functionalist, not just because I think it's a promising approach, but because I really don't understand the alternatives. Could I be missing something? If so, what?

#conscioiusness #functionalism #phenomenalConsciousness #Philosophy #PhilosophyOfMind
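The functionalist claim in the post above, that a mental state is defined by its cause-effect role rather than by its substrate, can be sketched as a toy example in the spirit of a Ramsey-sentence definition. Everything below (agent names, stimuli, responses) is hypothetical, invented purely for illustration:

```python
# Toy sketch of functionalism: a state is identified by its causal role
# (which inputs it mediates to which outputs), not by how it's implemented.
# All names and stimuli here are hypothetical, for illustration only.

def causal_profile(agent, inputs):
    """Record the output the agent produces for each input."""
    return [agent(x) for x in inputs]

# "Substrate" 1: a dictionary lookup.
def agent_lookup(stimulus):
    responses = {"damage": "withdraw", "food": "approach", "neutral": "ignore"}
    return responses[stimulus]

# "Substrate" 2: branching logic -- a different composition,
# but the same cause-effect relations.
def agent_branches(stimulus):
    if stimulus == "damage":
        return "withdraw"
    if stimulus == "food":
        return "approach"
    return "ignore"

stimuli = ["damage", "food", "neutral"]

# Under functionalism, identical causal profiles mean the two systems
# realize the same functional states, whatever their internal makeup.
assert causal_profile(agent_lookup, stimuli) == causal_profile(agent_branches, stimuli)
```

In Ramsey-sentence terms, "pain" here would just be whatever internal state mediates the damage input to the withdrawal output; the two agents realize it differently but share one causal profile.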
Lupposofi
@Brains Nanay's SEP entry on Mental Imagery is also a good read and, more importantly, open-access: https://plato.stanford.edu/entries/mental-imagery/

#imagination #mental #imagery #mentalImagery #philosophyOfMind #philosophy #psychology

Can you think of an emotion or mental state where someone will tend to avoid pleasure? Is there such a thing as a kind of ascetic drive?

I'm thinking there is something missing: there are emotions that drive one both away from (e.g. fear, pain) and toward (e.g. anger) negative stimuli, but I can only think of emotions or mental states that drive us toward positive stimuli (e.g. desire). Maybe satiation counts? What else could?

#philosophy #philosophyofmind #psychology
@psychology @philosophy

The AI community displays "womb envy":

AI researchers often confuse computation with consciousness—but they are not the same. Computation is medium-independent—it can run on paper, a calculator, or a computer. But consciousness? It isn’t just an algorithm. It only appears in biological systems, meaning it depends on life itself.

🎥 AI and Producing Consciousness | Dr. Bernardo Kastrup youtube.com/watch?v=zMaSxj60JA
#AI #Consciousness #ArtificialIntelligence #MindVsMachine #PhilosophyOfMind

Reducing felt experience requires not preemptively dismissing the solutions

Annaka Harris has a new audio book out which she is promoting. I haven’t listened to it, but based on the interviews and spots like the one below, it appears that she’s doubling down on the conclusions she reached in her book from a few years ago, that consciousness is fundamental and pervasive.

https://www.youtube.com/watch?v=nP2swgDVl5M

Harris starts off by discussing the profound mystery of consciousness. But she clarifies that she isn’t thinking about higher order thought, like the kind in humans, but something more basic: “felt experience.” She takes this to be something that can exist without thought, and so discusses the possibility of it existing in plants and other organisms that don’t trigger most people’s intuitions of a fellow consciousness.

As I’ve noted in a couple of recent posts, the hard problem of consciousness seems specific to a particular theory of consciousness, that of fundamental consciousness, the idea that manifest conscious experience is exactly what it seems and nothing else, that there is no appearance / reality distinction or hidden complexities. I’m sure Harris, like so many others, will argue that there’s no choice but to accept fundamental consciousness. How else to explain the mystery?

But like David Chalmers and many others, she starts off by dismissing the possible solution, “higher order” processing. Without that, felt experience, the feelings of conscious experience, do look simple and irreducible. But that’s only because we’ve chosen to isolate something that didn’t evolve to be isolated, that has a functional role to play in organisms.

Harris’ example of looking at the decisions vines make about where to grow is a good one. In most biological descriptions, this behavior is automatic, done without any volition. She wonders whether it may nonetheless involve felt experience. But she doesn’t seem to wonder whether similar behavior in a Roomba, self-driving car, or thermostat involves similar types of feelings. (Some panpsychists do admit that their view implies experience in these types of systems, but in my experience most resist it.)

Many animal researchers have similar intuitions, that the observable behavioral reactions in relatively simple animals must involve feeling, since similar reactions in us are accompanied by them (at least in healthy, mentally complete humans). Of course, similar to most panpsychists, they typically resist the implication for machines, often gesturing at some unknown biological ingredient or principle that will distinguish the systems they want to conclude have feelings from those they don’t.

My take is that the solution is to reject the theory of fundamental consciousness. What’s the alternative? A reductive theory. But how do we reduce felt experience? Remember, to do a true reduction, the phenomenon must be broken down into components that are not that phenomenon. If anywhere in the description we have to include the overall phenomenon itself, we’ve failed.

Along those lines, I think part of the explanation of what feelings are is that they are composed of automatic reactions that can be either allowed or overridden. So if an animal sees a predator and always automatically reacts by running away, that in and of itself isn’t evidence of fear. On the other hand, if the animal can sometimes override its impulse to run away, maybe because there’s food nearby and it judges the risk to be worth it, then we have an animal capable of feeling fear.

So a feeling is a perception, a prediction, of an impulse which an organism uses in its reasoning to decide whether to inhibit or indulge the impulse. This means the higher order thinking Harris immediately excludes from her consideration is actually part of the answer. That answer, incidentally, also explains why we evolved feelings.

An organism is generally only going to have feelings if they provide a survival advantage, but that advantage only exists if they have some reasoning aspect to make use of it. Note that this reasoning aspect doesn’t have to be as sophisticated as what happens in humans, or even mammals or birds necessarily, although the sophistication makes it easier to detect. It just needs to be present in some incipient form to act as one endpoint in the relationship between it and the impulse, the relationship that we refer to as a “feeling”.

This requirement for a minimal level of reasoning seems to rule out felt experience in simple animals, plants, robots, and thermostats. It also gives us an idea of what a technological system would need to have it, a system of automatic reactions, which can be optionally overridden by other parts of the system simulating possible scenarios, even if only a second or two in the future.
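The architecture sketched above can be caricatured in a few lines of code. This is purely my own toy illustration of the post's proposal, not anything from the author: an automatic reflex layer, plus a higher-order layer that perceives the impulse, simulates a couple of near-future scenarios, and decides whether to inhibit or indulge it. All names (`Scenario`, `automatic_impulse`, `higher_order_override`, the payoff numbers) are invented for the sketch.

```python
# Toy sketch of the proposed architecture: an automatic reaction that a
# higher-order process can optionally override after weighing simulated
# near-future scenarios. Illustrative only; nothing here claims to be
# conscious, it just shows the control structure being described.

from dataclasses import dataclass


@dataclass
class Scenario:
    """A simulated near-future outcome with an estimated payoff."""
    description: str
    payoff: float


def automatic_impulse(stimulus: str) -> str:
    """Reflex layer: fires unconditionally unless overridden."""
    return "flee" if stimulus == "predator" else "continue"


def higher_order_override(impulse: str, scenarios: list[Scenario]) -> str:
    """Higher-order layer: perceive the impulse, compare simulated
    outcomes, then either indulge the impulse or inhibit it."""
    best = max(scenarios, key=lambda s: s.payoff)
    # Inhibit the flight reflex only if staying clearly pays off more,
    # mirroring the "food nearby, risk judged worth it" example.
    if impulse == "flee" and best.description == "stay and eat" and best.payoff > 0:
        return "stay"
    return impulse


impulse = automatic_impulse("predator")
decision = higher_order_override(impulse, [
    Scenario("run to safety", payoff=0.2),
    Scenario("stay and eat", payoff=0.8),  # food nearby, risk judged worth it
])
print(impulse, "->", decision)  # prints: flee -> stay
```

On this picture, the "feeling" is not in either function alone but in the relationship between them: the reflex supplies the impulse, and the override step is where that impulse is perceived and weighed rather than simply executed.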

Figuring out how to do this is not trivial. None of the current systems people wonder about are capable of it. But while it’s hard, it’s not the utter intractability of the hard problem of consciousness. Once we dismiss fundamental consciousness, that problem seems to no longer exist.

Unless of course I’m missing something?

Editorial illustration done with Micron pens and black markers. My satirical take on Elon Musk pushing neural implants... Basically a play on mind-body duality... I call it The Cartesian Engineer 2.0. Cheers! #mindbody #neuralink #duality #mindcontrol #bot #robots #ink #editorialillustration #elonmusk #portrait #characterart #caricature #politicalcartoons #micron #inkartwork #satire #cognitivedissonance #existentialism #philosophyofmind #machinelearning #spirit #bodyandsoul #pixelfedart #illustration #steampunk

Block (1995): Consciousness isn’t just information access—it’s experience. Dehaene et al. (2021) argue that machines might achieve “global workspace” awareness. But without qualia, is it consciousness—or a convincing simulation? Which matters more: functionality or feeling?

#Consciousness #AI-ethics #PhilosophyOfMind
