Topic: Philosophy/Philosophy of mind

You are looking at all articles with the topic "Philosophy/Philosophy of mind". We found 15 matches.

πŸ”— Moravec's Paradox

πŸ”— Computer science πŸ”— Philosophy πŸ”— Philosophy/Logic πŸ”— Philosophy/Philosophy of science πŸ”— Philosophy/Philosophy of mind

Moravec's paradox is the observation by artificial intelligence and robotics researchers that, contrary to traditional assumptions, reasoning (which is high-level in humans) requires very little computation, but sensorimotor skills (comparatively low-level in humans) require enormous computational resources. The principle was articulated by Hans Moravec, Rodney Brooks, Marvin Minsky and others in the 1980s. As Moravec writes, "it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility".
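
One rough way to see the "easy" half of the paradox: adult-level play at board games such as checkers reduces to a short, general search procedure, whereas no comparably compact program is known for a one-year-old's perception and mobility. The sketch below is a generic negamax (minimax) skeleton in Python; the game interface it assumes (legal_moves, play, is_over, score) is hypothetical and stands in for any two-player, zero-sum game.

```python
# Generic negamax search: the kind of compact procedure behind strong
# computer play at games such as checkers. The game object and its methods
# (legal_moves, play, is_over, score) are assumed for illustration only.
def negamax(game, depth, color=1):
    # Evaluate the position from the point of view of the side to move.
    if depth == 0 or game.is_over():
        return color * game.score()
    best = float("-inf")
    for move in game.legal_moves():
        # Negate the child's value because the opponent moves next.
        best = max(best, -negamax(game.play(move), depth - 1, -color))
    return best
```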

Similarly, Minsky emphasized that the most difficult human skills to reverse engineer are those that are unconscious. "In general, we're least aware of what our minds do best", he wrote, and added "we're more aware of simple processes that don't work well than of complex ones that work flawlessly".

πŸ”— Stochastic Parrot

πŸ”— Computer science πŸ”— Philosophy πŸ”— Philosophy/Contemporary philosophy πŸ”— Philosophy/Philosophy of mind πŸ”— Artificial Intelligence

In machine learning, "stochastic parrot" is a term coined by Emily M. Bender in the 2021 artificial intelligence research paper "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" by Bender, Timnit Gebru, Angelina McMillan-Major, and Margaret Mitchell. The term refers to "large language models that are impressive in their ability to generate realistic-sounding language but ultimately do not truly understand the meaning of the language they are processing."
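
The "stochastic" half of the name refers to how such models produce text: at each step the next token is sampled from a probability distribution learned from training data, with no representation of meaning involved. The toy sketch below (a bigram word model, nothing like a real large language model in scale or architecture) illustrates generating locally fluent text purely from co-occurrence statistics; the corpus and function names are invented for the example.

```python
import random
from collections import defaultdict

def train_bigrams(corpus):
    """Count which word follows which; the 'model' is just these counts."""
    counts = defaultdict(lambda: defaultdict(int))
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def generate(counts, start, length=8):
    """Sample a continuation word by word from the learned counts."""
    word, output = start, [start]
    for _ in range(length):
        followers = counts.get(word)
        if not followers:
            break
        choices, weights = zip(*followers.items())
        word = random.choices(choices, weights=weights)[0]
        output.append(word)
    return " ".join(output)

corpus = [
    "the model predicts the next word",
    "the model generates fluent text",
    "the next word is sampled at random",
]
print(generate(train_bigrams(corpus), "the"))
```

Even this crude sampler can string together plausible-sounding sequences, which is the sense in which the paper likens far larger models to parrots.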

πŸ”— Alan Turing's 100th Birthday - Mathematician, logician, cryptanalyst, scientist

πŸ”— Biography πŸ”— Computing πŸ”— Mathematics πŸ”— London πŸ”— Philosophy πŸ”— Philosophy/Logic πŸ”— England πŸ”— Biography/science and academia πŸ”— Philosophy/Philosophy of science πŸ”— History of Science πŸ”— Computing/Computer science πŸ”— Robotics πŸ”— Philosophy/Philosophers πŸ”— Cryptography πŸ”— LGBT studies/LGBT Person πŸ”— LGBT studies πŸ”— Athletics πŸ”— Greater Manchester πŸ”— Cheshire πŸ”— Cryptography/Computer science πŸ”— Philosophy/Philosophy of mind πŸ”— Molecular and Cell Biology πŸ”— Surrey πŸ”— Running

Alan Mathison Turing (23 June 1912 – 7 June 1954) was an English mathematician, computer scientist, logician, cryptanalyst, philosopher, and theoretical biologist. Turing was highly influential in the development of theoretical computer science, providing a formalisation of the concepts of algorithm and computation with the Turing machine, which can be considered a model of a general-purpose computer. Turing is widely considered to be the father of theoretical computer science and artificial intelligence. Despite these accomplishments, he was not fully recognised in his home country during his lifetime, due to his homosexuality, and because much of his work was covered by the Official Secrets Act.
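
The Turing machine mentioned above is, in essence, a finite table of state transitions acting on an unbounded tape. As a minimal illustrative sketch (not Turing's own notation, and with a deliberately trivial example machine), it can be simulated in a few lines of Python:

```python
# Transition table: (state, read symbol) -> (write symbol, move, next state).
# This toy machine simply flips every bit on the tape and then halts.
FLIP_BITS = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),   # "_" is the blank symbol
}

def run_turing_machine(table, tape, state="start", blank="_", max_steps=1000):
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        new_symbol, move, state = table[(state, symbol)]
        cells[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

print(run_turing_machine(FLIP_BITS, "0110"))  # prints "1001"
```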

During the Second World War, Turing worked for the Government Code and Cypher School (GC&CS) at Bletchley Park, Britain's codebreaking centre that produced Ultra intelligence. For a time he led Hut 8, the section that was responsible for German naval cryptanalysis. Here, he devised a number of techniques for speeding the breaking of German ciphers, including improvements to the pre-war Polish bomba method, an electromechanical machine that could find settings for the Enigma machine.

Turing played a crucial role in cracking intercepted coded messages that enabled the Allies to defeat the Nazis in many crucial engagements, including the Battle of the Atlantic, and in so doing helped win the war. Due to the problems of counterfactual history, it is hard to estimate the precise effect Ultra intelligence had on the war, but at the upper end it has been estimated that this work shortened the war in Europe by more than two years and saved over 14 million lives.

After the war Turing worked at the National Physical Laboratory, where he designed the Automatic Computing Engine, one of the first designs for a stored-program computer. In 1948 Turing joined Max Newman's Computing Machine Laboratory at the Victoria University of Manchester, where he helped develop the Manchester computers and became interested in mathematical biology. He wrote a paper on the chemical basis of morphogenesis and predicted oscillating chemical reactions such as the Belousov–Zhabotinsky reaction, first observed in the 1960s.

Turing was prosecuted in 1952 for homosexual acts; the Labouchere Amendment of 1885 had mandated that "gross indecency" was a criminal offence in the UK. He accepted chemical castration treatment, with DES, as an alternative to prison. Turing died in 1954, 16 days before his 42nd birthday, from cyanide poisoning. An inquest determined his death as a suicide, but it has been noted that the known evidence is also consistent with accidental poisoning.

In 2009, following an Internet campaign, British Prime Minister Gordon Brown made an official public apology on behalf of the British government for "the appalling way he was treated". Queen Elizabeth II granted Turing a posthumous pardon in 2013. The Alan Turing law is now an informal term for a 2017 law in the United Kingdom that retroactively pardoned men cautioned or convicted under historical legislation that outlawed homosexual acts.

πŸ”— Ship of Theseus

πŸ”— Philosophy πŸ”— Philosophy/Logic πŸ”— Philosophy/Contemporary philosophy πŸ”— Philosophy/Ancient philosophy πŸ”— Philosophy/Philosophy of mind πŸ”— Philosophy/Modern philosophy πŸ”— Philosophy/Metaphysics πŸ”— Philosophy/Analytic philosophy πŸ”— Folklore

In the metaphysics of identity, the ship of Theseus is a thought experiment that raises the question of whether an object that has had all of its components replaced remains fundamentally the same object. The concept is one of the oldest in Western philosophy, having been discussed by the likes of Heraclitus and Plato by ca. 500–400 BC.

πŸ”— Bonini's Paradox

πŸ”— Philosophy πŸ”— Philosophy/Logic πŸ”— Philosophy/Philosophy of mind

Bonini's paradox, named after Stanford business professor Charles Bonini, explains the difficulty in constructing models or simulations that fully capture the workings of complex systems (such as the human brain).

πŸ”— The knowledge argument

πŸ”— Philosophy πŸ”— Philosophy/Logic πŸ”— Philosophy/Philosophy of mind

The knowledge argument (also known as Mary's room or Mary the super-scientist) is a philosophical thought experiment proposed by Frank Jackson in his article "Epiphenomenal Qualia" (1982) and extended in "What Mary Didn't Know" (1986). The experiment is intended to argue against physicalismβ€”the view that the universe, including all that is mental, is entirely physical. The debate that emerged following its publication became the subject of an edited volumeβ€”There's Something About Mary (2004)β€”which includes replies from such philosophers as Daniel Dennett, David Lewis, and Paul Churchland.

πŸ”— Bicameralism (Psychology)

πŸ”— Philosophy πŸ”— Skepticism πŸ”— Psychology πŸ”— Philosophy/Contemporary philosophy πŸ”— Philosophy/Philosophy of mind πŸ”— Alternative Views πŸ”— Neuroscience

Bicameralism (the condition of being divided into "two chambers") is a hypothesis in psychology that argues that the human mind once operated in a state in which cognitive functions were divided between one part of the brain which appears to be "speaking", and a second part which listens and obeys—a bicameral mind. The term was coined by Julian Jaynes, who presented the idea in his 1976 book The Origin of Consciousness in the Breakdown of the Bicameral Mind, wherein he made the case that a bicameral mentality was the normal and ubiquitous state of the human mind as recently as 3,000 years ago, near the end of the Mediterranean Bronze Age.

πŸ”— The Hard Problem of Consciousness

πŸ”— Philosophy πŸ”— Philosophy/Philosophy of mind

The hard problem of consciousness is the problem of explaining why and how sentient organisms have qualia or phenomenal experiencesβ€”how and why it is that some internal states are subjective, felt states, such as heat or pain, rather than merely nonsubjective, unfelt states, as in a thermostat or a toaster. The philosopher David Chalmers, who introduced the term "hard problem" of consciousness, contrasts this with the "easy problems" of explaining the ability to discriminate, integrate information, report mental states, focus attention, and so forth. Easy problems are (relatively) easy because all that is required for their solution is to specify a mechanism that can perform the function. That is, regardless of how complex or poorly understood the phenomena of the easy problems may be, they can eventually be understood by relying entirely on standard scientific methodologies. Chalmers claims that the problem of experience is distinct from this set and will "persist even when the performance of all the relevant functions is explained".

The existence of a "hard problem" is controversial. It has been accepted by philosophers of mind such as Joseph Levine, Colin McGinn, and Ned Block and cognitive neuroscientists such as Francisco Varela, Giulio Tononi, and Christof Koch. However, its existence is disputed by philosophers of mind such as Daniel Dennett, Massimo Pigliucci, and Keith Frankish and cognitive neuroscientists such as Stanislas Dehaene and Bernard Baars.

πŸ”— The reason why Blub programmers have such a hard time picking up more powerful languages.

πŸ”— Philosophy πŸ”— Cognitive science πŸ”— Linguistics πŸ”— Linguistics/Applied Linguistics πŸ”— Anthropology πŸ”— Philosophy/Philosophy of mind πŸ”— Neuroscience πŸ”— Philosophy/Philosophy of language πŸ”— Linguistics/Philosophy of language

The hypothesis of linguistic relativity, part of relativism, also known as the Sapir–Whorf hypothesis or Whorfianism, is a principle claiming that the structure of a language affects its speakers' world view or cognition, and thus people's perceptions are relative to their spoken language.

The principle is often defined in one of two versions: the strong hypothesis, which was held by some of the early linguists before World War II, and the weak hypothesis, mostly held by some of the modern linguists.

  • The strong version says that language determines thought and that linguistic categories limit and determine cognitive categories.
  • The weak version says that linguistic categories and usage only influence thought and decisions.

The principle was accepted and then abandoned by linguists over the course of the early 20th century, as perceptions of the social acceptability of characterising "the other" changed, especially after World War II. The formulation of arguments against the acceptance of linguistic relativity is attributed to Noam Chomsky.

πŸ”— Chinese room argument

πŸ”— Philosophy πŸ”— Philosophy/Logic πŸ”— Philosophy/Contemporary philosophy πŸ”— Philosophy/Philosophy of mind πŸ”— Philosophy/Analytic philosophy

The Chinese room argument holds that a digital computer executing a program cannot be shown to have a "mind", "understanding" or "consciousness", regardless of how intelligently or human-like the program may make the computer behave. The argument was first presented by philosopher John Searle in his paper, "Minds, Brains, and Programs", published in Behavioral and Brain Sciences in 1980. It has been widely discussed in the years since. The centerpiece of the argument is a thought experiment known as the Chinese room.

The argument is directed against the philosophical positions of functionalism and computationalism, which hold that the mind may be viewed as an information-processing system operating on formal symbols. Specifically, the argument is intended to refute a position Searle calls strong AI: "The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds."

Although it was originally presented in reaction to the statements of artificial intelligence (AI) researchers, it is not an argument against the behavioural goals of AI research, because it does not limit the amount of intelligence a machine can display. The argument applies only to digital computers running programs and does not apply to machines in general.
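
The set-up is easy to caricature in code: a lookup table of formal rules can map input symbols to plausible replies while nothing in the program refers to what those symbols mean. The phrases below are invented for illustration; the point is only that the procedure operates on symbol shapes alone, which is exactly the situation of the person inside the room.

```python
# A caricature of the room's rule book: purely formal symbol manipulation.
# The entries are invented examples, not from Searle's paper.
RULE_BOOK = {
    "δ½ ε₯½": "δ½ ε₯½οΌ",            # "hello" -> "hello!"
    "δ½ ζ‡‚δΈ­ζ–‡ε—οΌŸ": "懂。",     # "do you understand Chinese?" -> "I do."
}

def room(symbols: str) -> str:
    # Follow the rules by shape alone; no understanding is involved anywhere.
    return RULE_BOOK.get(symbols, "请再θ―΄δΈ€ι。")  # default: "please say it again"

print(room("δ½ ζ‡‚δΈ­ζ–‡ε—οΌŸ"))  # prints "懂。" without any grasp of its meaning
```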
