“The Matrix presents a version of an old philosophical fable: the brain in a vat. A disembodied brain is floating in a vat, inside a scientist’s laboratory. The scientist has arranged that the brain will be stimulated with the same sort of inputs that a normal embodied brain receives. To do this, the brain is connected to a giant computer simulation of a world. The simulation determines which inputs the brain receives. When the brain produces outputs, these are fed back into the simulation. The internal state of the brain is just like that of a normal brain, despite the fact that it lacks a body. From the brain’s point of view, things seem very much as they seem to you and me.”
Cosmelli, D., & Thompson, E. (2013). Embodiment or Envatment?: Reflections on the Bodily Basis of Consciousness. In Enaction
“Suppose that a team of neurosurgeons and bioengineers were able to remove your brain from your body, suspend it in a life-sustaining vat of liquid nutrients, and connect its neurons and nerve terminals by wires to a supercomputer that would stimulate it with electrical impulses exactly like those it normally receives when embodied. According to this brain-in-a-vat thought experiment, your envatted brain and your embodied brain would have subjectively indistinguishable mental lives. For all you know—so one argument goes—you could be such a brain in a vat right now.”
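The closed loop these passages describe can be sketched in a few lines of Python. Everything here (the world dynamics, the sensory encoding, the stand-in “brain”) is a hypothetical placeholder; the sketch only illustrates the structure of the thought experiment: the simulation determines the brain’s inputs, and the brain’s outputs are fed back into the simulation.

```python
# Toy sketch of the brain-in-a-vat feedback loop. All dynamics are
# hypothetical placeholders, not a model of any real system.

def simulate_world(state, motor_output):
    """Advance the simulated world one step, given the brain's last output."""
    return state + motor_output  # placeholder world dynamics

def sensory_input(state):
    """Derive the stimulation pattern the envatted brain receives."""
    return state * 0.1  # placeholder sensory encoding

def brain(stimulus):
    """Stand-in for the envatted brain: maps input to output."""
    return 1 if stimulus > 0 else -1  # placeholder response rule

state, output = 0, 0
for _ in range(5):
    state = simulate_world(state, output)   # outputs fed back into the simulation
    output = brain(sensory_input(state))    # inputs determined by the simulation
```

The point of the loop is that the brain never touches the world directly: every input it receives is computed from the simulation’s state, which is itself updated by the brain’s previous output.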
Cavallaro, D. (2004). The brain in a vat in cyberpunk: The persistence of the flesh. Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences
“M. Dell’Utri (in ‘Choosing conceptions of realism’, Mind 99, 79–90) presents a reconstruction of Putnam’s argument to show that the hypothesis that we are brains in a vat is self-refuting. I explain why the argument is problematic and offer a resolution of the difficulty.”
Hickey, L. P. (2005). The Brain in a Vat Argument. Internet Encyclopedia of Philosophy
“The brain in a vat thought-experiment is most commonly used to illustrate global or Cartesian skepticism. You are told to imagine the possibility that at this very moment you are actually a brain hooked up to a sophisticated computer program that can perfectly simulate experiences of the outside world. Here is the skeptical argument. If you cannot now be sure that you are not a brain in a vat, then you cannot rule out the possibility that all of your beliefs about the external world are false. Or, to put it in terms of knowledge claims, we can construct the following skeptical argument. Let ‘P’ stand for any belief or claim about the external world, say, that snow is white.”
Huemer, M. (2006). Direct Realism and the Brain-in-a-Vat Argument. Philosophy and Phenomenological Research
“The brain-in-a-vat argument for skepticism is best formulated, not using the closure principle, but using the ‘preference principle,’ which states that in order to be justified in believing h on the basis of e, one must have grounds for preferring h over each alternative explanation of e. When the argument is formulated this way, Dretske’s and Klein’s responses to it fail. However, the strengthened argument can be refuted using a direct realist account of perception. For the direct realist, refuting the BIV scenario is not a precondition on knowledge of the external world, and only the direct realist can give a noncircular account of how we know we’re not brains in vats.”
Smart, J. J. C. (2004). The brain in the vat and the question of metaphysical realism. Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences
Sprevak, M., & McLeish, C. (2004). Magic, semantics, and Putnam’s vat brains. Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences
“Recent literature in epistemology has focused on the following argument for skepticism (SA): I know that I have two hands only if I know that I am not a handless brain in a vat. But I don’t know I am not a handless brain in a vat. Therefore, I don’t know that I have two hands. Part I of this article reviews two responses to skepticism that emerged in the 1980s and 1990s: sensitivity theories and attributor contextualism. Part II considers the more recent ‘neo-Moorean’ response to skepticism and its development in ‘safety’ theories of knowledge. Part III argues that the skeptical argument set out in SA is not of central importance. Specifically, SA is parasitic on skeptical reasoning that is more powerful and more fundamental than that displayed by SA itself. Finally, Part IV reviews a Pyrrhonian argument for skepticism that is not well captured by SA, and considers a promising strategy for responding to it.”
Bostrom, N. (2005). The simulation argument: Reply to Weatherson. Philosophical Quarterly
“I reply to some recent comments by Brian Weatherson on my ‘simulation argument’. I clarify some interpretational matters, and address issues relating to epistemological externalism, the difference from traditional brain-in-a-vat arguments, and a challenge based on ‘grue’-like predicates.”
Manson, N. C. (2004). Brains, vats, and neurally-controlled animats. Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences
“Philosophers love to talk about knowledge. A whole field is devoted to reflection on the topic, with product tie-ins to professorships and weighty conferences. Epistemology is serious business, taught in academies the world over: there is ‘moral’ and ‘social’ epistemology, epistemology of the sacred, the closet, and the family. There is a computational epistemology laboratory at the University of Waterloo, and a center for epistemology at the Free University in Amsterdam. A Google search turns up separate websites for ‘constructivist,’ ‘feminist,’ and ‘evolutionary’ epistemology, of course, but also ‘libidinal,’ ‘android,’ ‘quaker,’ ‘internet,’ and (my favorite) ‘erotometaphysical’ epistemology. Harvard offers a course in the field (without the erotometaphysical part), which (if we are to believe its website) explores the epistemic status of weighty claims like ‘the standard meter is 1 meter long’ and ‘I am not a brain in a vat.’1 We seem to know a lot about knowledge.2”
www.Qbism.art is an interdisciplinary web project that synthesises perspectives from cognitive psychology, neuroscience, quantum physics, philosophy, computer science, and digital art into a holistic transdisciplinary Gestalt. You can view a series of animated digital Qbism artworks below (the neologism ‘Qbism’ is a portmanteau of ‘Quantum’ and ‘Cubism’).
Keywords: syntax versus semantics; the symbol grounding problem; meaning and AI; creativity and AI; intelligence and AI; embodied cognition; disembodied computation.
URL: rintintin.colorado.edu/~vancecd/phil201/Searle.pdf
“The Chinese room argument is a thought experiment of John Searle (1980a) and associated (1984) derivation. It is one of the best known and widely credited counters to claims of artificial intelligence (AI), that is, to claims that computers do or at least can (someday might) think. According to Searle’s original presentation, the argument is based on two key claims: brains cause minds, and syntax doesn’t suffice for semantics. Its target is what Searle dubs ‘strong AI’. According to strong AI, Searle says, the computer is not merely a tool in the study of the mind; rather, ‘the appropriately programmed computer really is a mind in the sense that computers given the right programs can be literally said to understand and have other cognitive states’ (1980a, p. 417). Searle contrasts strong AI with weak AI. According to weak AI, computers just simulate thought: their seeming understanding isn’t real understanding (just as-if), their seeming calculation is only as-if calculation, etc. Nevertheless, computer simulation is useful for studying the mind (as for studying the weather and other things).”
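The “syntax doesn’t suffice for semantics” claim can be made concrete with a toy sketch: a program that answers Chinese questions by pure rule lookup, standing in for the rulebook Searle follows in the room. The rules and strings below are hypothetical illustrations, not part of any source; the point is that the program manipulates symbol strings without anything in it representing what the symbols mean.

```python
# A toy "rulebook": purely formal input -> output mappings. The program
# matches symbol strings and emits symbol strings; no component of it
# represents the meanings of those symbols. (All entries hypothetical.)
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "今天天气好吗？": "今天天气很好。",  # "Is the weather nice?" -> "Yes, very."
}

def room(symbols: str) -> str:
    """Apply the rulebook by pattern matching alone: syntax, no semantics."""
    return RULEBOOK.get(symbols, "对不起，我不明白。")  # fallback reply

print(room("你好吗？"))  # fluent-looking output, zero understanding
```

On Searle’s view, scaling this table up to pass the Turing test would change nothing philosophically: the system would still only be shuffling symbols by shape.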
Damper, R. I. (2006). The logic of Searle’s Chinese room argument. Minds and Machines
“John Searle’s Chinese room argument (CRA) is a celebrated thought experiment designed to refute the hypothesis, popular among artificial intelligence (AI) scientists and philosophers of mind, that ‘the appropriately programmed computer really is a mind’. Since its publication in 1980, the CRA has evoked an enormous amount of debate about its implications for machine intelligence, the functionalist philosophy of mind, theories of consciousness, etc. Although the general consensus among commentators is that the CRA is flawed, and notwithstanding the popularity of the systems reply in some quarters, there is remarkably little agreement on exactly how and why it is flawed. A newcomer to the controversy could be forgiven for thinking that the bewildering collection of diverse replies to Searle betrays a tendency to unprincipled, ad hoc argumentation and, thereby, a weakness in the opposition’s case. In this paper, treating the CRA as a prototypical example of a ‘destructive’ thought experiment, I attempt to set it in a logical framework (due to Sorensen), which allows us to systematise and classify the various objections. Since thought experiments are always posed in narrative form, formal logic by itself cannot fully capture the controversy. On the contrary, much also hinges on how one translates between the informal everyday language in which the CRA was initially framed and formal logic and, in particular, on the specific conception(s) of possibility that one reads into the logical formalism.”
Anderson, D., & Copeland, B. J. (2002). Artificial life and the Chinese room argument. Artificial Life
“‘Strong artificial life’ refers to the thesis that a sufficiently sophisticated computer simulation of a life form is a life form in its own right. Can John Searle’s Chinese room argument [12]—originally intended by him to show that the thesis he dubs ‘strong AI’ is false—be deployed against strong ALife? We have often encountered the suggestion that it can be (even in print; see Harnad [8]). We do our best to transfer the argument from the domain of AI to that of ALife. We do so in order to show once and for all that the Chinese room argument proves nothing about ALife. There may indeed be powerful philosophical objections to the thesis of strong ALife, but the Chinese room argument is not among them.”
Harnad, S. (1989). Minds, machines and Searle. Journal of Experimental and Theoretical Artificial Intelligence
“Searle’s celebrated Chinese room argument has shaken the foundations of artificial intelligence. Many refutations have been attempted, but none seem convincing. This paper is an attempt to sort out explicitly the assumptions and the logical, methodological and empirical points of disagreement. Searle is shown to have underestimated some features of computer modeling, but the heart of the issue turns out to be an empirical question about the scope and limits of the purely symbolic (computational) model of the mind. Nonsymbolic modeling turns out to be immune to the Chinese room argument. The issues discussed include the total Turing test, modularity, neural modeling, robotics, causality and the symbol-grounding problem.”
Nute, D. (2011). A logical hole the Chinese room avoids. Minds and Machines
“Searle’s Chinese room argument (CRA) has been the object of great interest in the philosophy of mind, artificial intelligence and cognitive science since its initial presentation in ‘Minds, Brains and Programs’ in 1980. It is by no means an overstatement to assert that it has been a main focus of attention for philosophers and computer scientists of many stripes. It is then especially interesting to note that relatively little has been said about the detailed logic of the argument, whatever significance Searle intended the CRA to have. The problem with the CRA is that it involves a very strong modal claim, the truth of which is both unproved and highly questionable. So it will be argued here that the CRA does not prove what it was intended to prove.”
Waskan, J. (2006). Views into the Chinese Room: New Essays on Searle and Artificial Intelligence. Philosophical Review
“The most famous challenge to computational cognitive science and artificial intelligence is the philosopher John Searle’s ‘Chinese room’ argument. Searle argued that, although machines can be devised to respond to input with the same output as would a mind, machines, unlike minds, lack understanding of the symbols they process. Nineteen essays by leading scientists and philosophers assess, renew, and respond to this crucial challenge.”
Harnad, S. (2005). Searle’s Chinese Room Argument. In Encyclopedia of Philosophy
“Summary of Searle’s ‘Chinese room argument’ showing that cognition cannot be just computation. Searle implements a computer programme that can pass the Turing test in Chinese. Searle does not understand Chinese in doing so; hence neither does the computer.”
Jacquette, D. (2006). Adventures in the Chinese Room. Philosophy and Phenomenological Research
“John R. Searle’s problem of the Chinese room is criticized for failing to address microlevel functional isomorphisms between intelligent subjects and artificial cognitive simulations in hypothetical Turing test evaluations. Searle’s argument that the mammalian brain is the only known material object with the ‘right causal powers’ to support intrinsic intentional states in a scientific causal-biological ‘naturalization’ of intentionality is refuted as inconsistent and inadequately motivated. Searle’s examples of wetness and elasticity as instances of the causation and realization of macrostructure in microstructure are rejected as unsatisfactory analogies for the way in which intentionality is supposed to be caused by and realized in the microstructure of the brain. An alternative approach to the scientific demystification of intentionality is proposed in accord with a foundational model of conceptual analysis, in which intentionality is seen as a primitive abstract relation rather than a causal-biological product or process.”
Rodríguez, D., Hermosillo, J., & Lara, B. (2012). Meaning in artificial agents: The symbol grounding problem revisited. Minds and Machines
“I argue that John Searle’s (1980) influential Chinese room argument (CRA) against computationalism and strong AI survives existing objections … However, a new ‘essentialist’ reply I construct shows that the CRA as presented by Searle is an unsound argument that relies on a question-begging appeal to intuition. …”
Block, N. (1995). The Mind as the Software of the Brain. In An Invitation to Cognitive Science: Thinking
“Offers a philosophical perspective on the cognitive approach to thinking in general. Cognitive scientists often say that the mind is the software of the brain; this chapter is about what that claim means. It starts with an influential attempt to define ‘intelligence’, considers how human intelligence is to be investigated on the machine model, and discusses the relation between the mental and the biological, intelligence and intentionality, functionalism and the language of thought, arguments for the language of thought, explanatory levels and the syntactic theory of the mind, and J. Searle’s Chinese room argument.”
Teng, N. Y. (2002). A cognitive analysis of the Chinese room argument. Philosophical Psychology
“Searle’s Chinese room argument is analyzed from a cognitive point of view. The analysis is based on a newly developed model of conceptual integration, the many space model proposed by Fauconnier and Turner. The main point of the analysis is that the central inference constructed in the Chinese room scenario is a result of a dynamic, cognitive activity of conceptual blending, with metaphor defining the basic features of the blending. Two important consequences follow: (1) Searle’s recent contention that syntax is not intrinsic to physics turns out to be a slightly modified version of the old Chinese room argument; and (2) the argument itself is still open to debate. It is persuasive but not conclusive, and at bottom it is a topological mismatch in the metaphoric conceptual integration that is responsible for the non-conclusive character of the Chinese room argument.”