Topic: Cognitive science (Page 2)

You are looking at all articles with the topic "Cognitive science". We found 23 matches.

πŸ”— Functional Fixedness

πŸ”— Psychology πŸ”— Cognitive science

Functional fixedness is a cognitive bias that limits a person to using an object only in the way it is traditionally used. The concept originated in Gestalt psychology, a movement in psychology that emphasizes holistic processing. Karl Duncker defined functional fixedness as a "mental block against using an object in a new way that is required to solve a problem". This "block" limits an individual's ability to use the components given to them to complete a task, because they cannot move past the original purpose of those components. For example, someone who needs a paperweight but has only a hammer may not see how the hammer can be used as a paperweight. Functional fixedness is this inability to see the hammer as anything other than a tool for pounding nails: the person cannot think of using it outside its conventional function.

When tested, 5-year-old children show no signs of functional fixedness. It has been argued that this is because at age 5, any goal to be achieved with an object is equivalent to any other goal. However, by age 7, children have acquired the tendency to treat the originally intended purpose of an object as special.

πŸ”— Soar (Cognitive Architecture)

πŸ”— Cognitive science

Soar is a cognitive architecture, originally created by John Laird, Allen Newell, and Paul Rosenbloom at Carnegie Mellon University. (Rosenbloom continued to serve as co-principal investigator after moving to Stanford University, then to the University of Southern California's Information Sciences Institute.) It is now maintained and developed by John Laird's research group at the University of Michigan.

The goal of the Soar project is to develop the fixed computational building blocks necessary for general intelligent agents – agents that can perform a wide range of tasks and encode, use, and learn all types of knowledge to realize the full range of cognitive capabilities found in humans, such as decision making, problem solving, planning, and natural language understanding. It is both a theory of what cognition is and a computational implementation of that theory. Since its beginnings in 1983 as John Laird’s thesis, it has been widely used by AI researchers to create intelligent agents and cognitive models of different aspects of human behavior. The most current and comprehensive description of Soar is the 2012 book, The Soar Cognitive Architecture.
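At its core, Soar runs a decision cycle: production rules match against a working memory of facts, an operator is selected, and its effects are applied, repeating until quiescence. The sketch below illustrates such a match-select-apply loop; the rule format, example facts, and first-match selection policy are simplifying assumptions of this sketch, not Soar's actual syntax or preference-based decision procedure.

```python
# Illustrative production-system decision cycle in the spirit of Soar.
# Working memory is a set of (attribute, value) facts; a production fires
# when its conditions are all present and it would add something new.

wm = {("block", "on-table"), ("goal", "stack-block")}

# Productions: (name, condition facts, facts to add).
rules = [
    ("propose-pickup", {("block", "on-table"), ("goal", "stack-block")}, {("operator", "pickup")}),
    ("apply-pickup",   {("operator", "pickup")},                         {("block", "in-hand")}),
    ("propose-stack",  {("block", "in-hand"), ("goal", "stack-block")},  {("operator", "stack")}),
    ("apply-stack",    {("operator", "stack")},                          {("block", "stacked")}),
]

def decision_cycle(wm, rules, max_cycles=10):
    for _ in range(max_cycles):
        # Match phase: rules whose conditions hold and whose effects are new.
        matched = [(name, add) for name, cond, add in rules
                   if cond <= wm and not add <= wm]
        if not matched:
            break  # quiescence; real Soar treats a stall as an impasse and subgoals
        name, add = matched[0]  # naive first-match choice; Soar ranks operators by preferences
        wm |= add
        print("fired", name)
    return wm

final = decision_cycle(wm, rules)
print(("block", "stacked") in final)  # True
```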

πŸ”— Effective Accelerationism

πŸ”— Computer science πŸ”— Disaster management πŸ”— Philosophy πŸ”— Cognitive science πŸ”— Futures studies πŸ”— Effective Altruism

Effective accelerationism, often abbreviated as "e/acc", is a 21st-century philosophical movement advocating an explicit pro-technology stance. Its proponents believe that artificial intelligence-driven progress is a great social equalizer which should be pushed forward. They see themselves as a counterweight to the cautious view that AI is highly unpredictable and needs to be regulated, often giving their opponents the derogatory labels of "doomers" or "decels" (short for deceleration).

Central to effective accelerationism is the belief that propelling technological progress at any cost is the only ethically justifiable course of action. The movement carries utopian undertones and argues that humans need to develop and build faster to ensure their survival and propagate consciousness throughout the universe.

Although effective accelerationism has been described as a fringe movement, it has gained mainstream visibility. A number of high-profile Silicon Valley figures, including investors Marc Andreessen and Garry Tan, explicitly endorsed the movement by adding "e/acc" to their public social media profiles. Yann LeCun and Andrew Ng are seen as further supporters, as they have argued for less restrictive AI regulation.

πŸ”— The reason why Blub programmers have such a hard time picking up more powerful languages.

πŸ”— Philosophy πŸ”— Cognitive science πŸ”— Linguistics πŸ”— Linguistics/Applied Linguistics πŸ”— Anthropology πŸ”— Philosophy/Philosophy of mind πŸ”— Neuroscience πŸ”— Philosophy/Philosophy of language πŸ”— Linguistics/Philosophy of language

The hypothesis of linguistic relativity, a part of relativism, also known as the Sapir–Whorf hypothesis or Whorfianism, is a principle claiming that the structure of a language affects its speakers' world view or cognition, and thus that people's perceptions are relative to their spoken language.

The principle is often stated in one of two versions: the strong hypothesis, held by some of the early linguists before World War II, and the weak hypothesis, mostly held by modern linguists:

  • The strong version says that language determines thought and that linguistic categories limit and determine cognitive categories.
  • The weak version says that linguistic categories and usage only influence thought and decisions.

The principle was accepted and then abandoned by linguists during the early 20th century, as social attitudes toward "the other" changed, especially after World War II. The origin of formulated arguments against linguistic relativity is attributed to Noam Chomsky.

πŸ”— Possible explanations for the slow progress of AI research

πŸ”— Computing πŸ”— Computer science πŸ”— Science Fiction πŸ”— Cognitive science πŸ”— Robotics πŸ”— Transhumanism πŸ”— Software πŸ”— Software/Computing πŸ”— Futures studies

Artificial general intelligence (AGI) is the hypothetical intelligence of a machine that has the capacity to understand or learn any intellectual task that a human being can. It is a primary goal of some artificial intelligence research and a common topic in science fiction and futures studies. AGI can also be referred to as strong AI, full AI, or general intelligent action. (Some academic sources reserve the term "strong AI" for machines that can experience consciousness.)

Some authorities emphasize a distinction between strong AI and applied AI (also called narrow AI or weak AI): the use of software to study or accomplish specific problem solving or reasoning tasks. Weak AI, in contrast to strong AI, does not attempt to perform the full range of human cognitive abilities.

As of 2017, over forty organizations were doing research on AGI.

πŸ”— Sparse Distributed Memory

πŸ”— Cognitive science πŸ”— Neuroscience

Sparse distributed memory (SDM) is a mathematical model of human long-term memory introduced by Pentti Kanerva in 1988 while he was at NASA Ames Research Center. It is a generalized random-access memory (RAM) for long (e.g., 1,000-bit) binary words. These words serve as both addresses to and data for the memory. The main attribute of the memory is sensitivity to similarity, meaning that a word can be read back not only by giving the original write address but also by giving one close to it, as measured by the number of mismatched bits (i.e., the Hamming distance between memory addresses).

SDM implements a transformation from logical space to physical space using distributed data representation and storage, similar to encoding processes in human memory. A value corresponding to a logical address is stored in many physical addresses. This way of storing is robust and not deterministic. A memory cell is not addressed directly. Even if the input data (logical addresses) are partially damaged, we can still get correct output data.

The theory of the memory is mathematically complete and has been verified by computer simulation. It arose from the observation that the distances between points of a high-dimensional space resemble the proximity relations between concepts in human memory. The theory is also practical in that memories based on it can be implemented with conventional RAM-memory elements.
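Those properties are straightforward to demonstrate. The toy implementation below follows the scheme Kanerva describes: a fixed set of random hard locations, a write that adjusts bipolar counters at every location within a Hamming radius of the target address, and a read that sums and thresholds those counters. The specific dimensions, radius, and parameter values are illustrative choices for this sketch, not Kanerva's exact figures.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 256   # word length in bits (Kanerva discusses words of ~1,000 bits)
M = 2000  # number of hard (physical) storage locations
R = 112   # activation radius in Hamming distance; activates roughly 2-3% of locations here

hard_addresses = rng.integers(0, 2, size=(M, N), dtype=np.int8)  # fixed random physical addresses
counters = np.zeros((M, N), dtype=np.int32)                      # bipolar counters per location

def active(addr):
    """Boolean mask of hard locations within Hamming radius R of addr."""
    return np.count_nonzero(hard_addresses != addr, axis=1) <= R

def write(addr, data):
    """Distribute the word across all activated locations (+1 for a 1 bit, -1 for a 0 bit)."""
    counters[active(addr)] += np.where(data == 1, 1, -1).astype(np.int32)

def read(addr):
    """Sum the counters of activated locations and threshold at zero."""
    sums = counters[active(addr)].sum(axis=0)
    return (sums > 0).astype(np.int8)

# Autoassociative use: store a word at its own address, then recall it from a noisy cue.
word = rng.integers(0, 2, size=N, dtype=np.int8)
write(word, word)
noisy = word.copy()
noisy[rng.choice(N, size=20, replace=False)] ^= 1  # corrupt 20 of 256 bits
recalled = read(noisy)
print("bits recovered:", int(np.count_nonzero(recalled == word)), "of", N)
```

Because a nearby cue activates almost the same set of hard locations as the original address, the summed counters still vote for the stored word even though no single memory cell is addressed directly.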

πŸ”— ACT-R: A cognitive architecture

πŸ”— Cognitive science

ACT-R (pronounced /ˌækt ΛˆΙ‘r/; short for "Adaptive Control of Thoughtβ€”Rational") is a cognitive architecture mainly developed by John Robert Anderson and Christian Lebiere at Carnegie Mellon University. Like any cognitive architecture, ACT-R aims to define the basic and irreducible cognitive and perceptual operations that enable the human mind. In theory, each task that humans can perform should consist of a series of these discrete operations.

Most of ACT-R's basic assumptions are also inspired by progress in cognitive neuroscience, and ACT-R can be seen and described as a way of specifying how the brain itself is organized such that individual processing modules produce cognition.
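A concrete instance of these quantitative assumptions is ACT-R's base-level activation, under which a memory chunk becomes easier to retrieve the more recently and frequently it has been used: B_i = ln(Σ_j t_j^(-d)), where the t_j are the times since past uses and d is a decay parameter (0.5 by convention). A minimal sketch, with illustrative usage times:

```python
import math

def base_level_activation(times_since_use, d=0.5):
    """ACT-R base-level learning: B = ln(sum of t_j^(-d)) over past uses.
    times_since_use holds the elapsed time (in seconds) since each use of a
    chunk; d is the decay parameter, conventionally 0.5. Values are illustrative."""
    return math.log(sum(t ** (-d) for t in times_since_use))

# A chunk used recently and often is more active than one last used long ago.
print(base_level_activation([1.0, 5.0, 10.0]))  # ~0.57
print(base_level_activation([100.0, 200.0]))    # ~-1.77
```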

πŸ”— Alex

πŸ”— Biography πŸ”— Psychology πŸ”— Cognitive science πŸ”— Neuroscience πŸ”— Birds

Alex (May 1976 – 6 September 2007) was a grey parrot and the subject of a thirty-year experiment by animal psychologist Irene Pepperberg, initially at the University of Arizona and later at Harvard University and Brandeis University. When Alex was about one year old, Pepperberg bought him at a pet shop. The name Alex was an acronym for "avian language experiment" or "avian learning experiment". He was compared to Albert Einstein, and at two years old he was correctly answering questions designed for six-year-olds.

Before Pepperberg's work with Alex, it was widely believed in the scientific community that a large primate brain was needed to handle complex problems related to language and understanding; birds were not considered intelligent, as their only common use of communication appeared to be mimicking and repeating sounds to interact with one another. However, Alex's accomplishments supported the idea that birds may be able to reason on a basic level and use words creatively. Pepperberg wrote that Alex's intelligence was on a level similar to that of dolphins and great apes. She also reported that Alex seemed to show, in some respects, the intelligence of a five-year-old human, and that he had not even reached his full potential by the time he died. She believed that he possessed the emotional level of a two-year-old human at the time of his death.

Discussed on

  • "Alex" | 2021-07-09 | 17 Upvotes 1 Comments

πŸ”— Plato: Allegory of the Cave

πŸ”— Philosophy πŸ”— Greece πŸ”— Cognitive science πŸ”— Philosophy/Ancient philosophy πŸ”— Alternative Views πŸ”— Philosophy/Epistemology

The Allegory of the Cave, or Plato's Cave, is an allegory presented by the Greek philosopher Plato in his work Republic (514a–520a) to compare "the effect of education (παιδΡία) and the lack of it on our nature". It is written as a dialogue between Plato's brother Glaucon and his mentor Socrates, narrated by the latter. The allegory is presented after the analogy of the sun (508b–509c) and the analogy of the divided line (509d–511e).

In the allegory "The Cave", Plato describes a group of people who have lived chained to the wall of a cave all their lives, facing a blank wall. The people watch shadows projected on the wall from objects passing in front of a fire behind them and give names to these shadows. The shadows are the prisoners' reality, but are not accurate representations of the real world. The shadows represent the fragment of reality that we can normally perceive through our senses, while the objects under the sun represent the true forms of objects that we can only perceive through reason. Three higher levels exist: the natural sciences; mathematics, geometry, and deductive logic; and the theory of forms.

Socrates explains how the philosopher is like a prisoner who is freed from the cave and comes to understand that the shadows on the wall are actually not the direct source of the images seen. A philosopher aims to understand and perceive the higher levels of reality. However, the other inmates of the cave do not even desire to leave their prison, for they know no better life.

Socrates remarks that this allegory can be paired with previous writings, namely the analogy of the sun and the analogy of the divided line.