Topic: Cognitive science

You are looking at all articles with the topic "Cognitive science". We found 23 matches.

Hint: To view all topics, click here. To see the most popular topics, click here instead.

🔗 List of Cognitive Biases

🔗 Lists 🔗 Philosophy 🔗 Philosophy/Logic 🔗 Business 🔗 Psychology 🔗 Cognitive science

Cognitive biases are systematic patterns of deviation from norm or rationality in judgment, and are often studied in psychology and behavioral economics.

Although the reality of most of these biases is confirmed by reproducible research, there are often controversies about how to classify these biases or how to explain them. Some are effects of information-processing rules (i.e., mental shortcuts), called heuristics, that the brain uses to produce decisions or judgments. Biases have a variety of forms and appear as cognitive ("cold") bias, such as mental noise, or motivational ("hot") bias, such as when beliefs are distorted by wishful thinking. Both effects can be present at the same time.

There are also controversies over some of these biases as to whether they count as useless or irrational, or whether they result in useful attitudes or behavior. For example, when getting to know others, people tend to ask leading questions which seem biased towards confirming their assumptions about the person. However, this kind of confirmation bias has also been argued to be an example of social skill: a way to establish a connection with the other person.

Although this research overwhelmingly involves human subjects, some findings that demonstrate bias have been found in non-human animals as well. For example, loss aversion has been shown in monkeys and hyperbolic discounting has been observed in rats, pigeons, and monkeys.

🔗 Cyc

🔗 Computing 🔗 Computer science 🔗 Cognitive science 🔗 Software 🔗 Software/Computing 🔗 Databases 🔗 Databases/Computer science

Cyc (pronounced SYKE) is a long-running artificial intelligence project that aims to assemble a comprehensive ontology and knowledge base spanning the basic concepts and rules about how the world works. Hoping to capture common-sense knowledge, Cyc focuses on implicit knowledge that other AI platforms may take for granted, in contrast with facts one might find somewhere on the internet or retrieve via a search engine or Wikipedia. Cyc enables AI applications to perform human-like reasoning and be less "brittle" when confronted with novel situations.

Douglas Lenat began the project in July 1984 at MCC, where he was Principal Scientist from 1984 to 1994. Since January 1995, the project has been under active development at the Cycorp company, where Lenat is the CEO.

Discussed on

  • "Cyc" | 2022-09-28 | 24 Upvotes 2 Comments
  • "Cyc" | 2019-12-13 | 357 Upvotes 173 Comments

🔗 Fitts's law

🔗 Computing 🔗 Cognitive science 🔗 Computing/Computer science 🔗 Human–Computer Interaction

Fitts's law (often cited as Fitts' law) is a predictive model of human movement primarily used in human–computer interaction and ergonomics. This scientific law predicts that the time required to rapidly move to a target area is a function of the ratio between the distance to the target and the width of the target. Fitts's law is used to model the act of pointing, either by physically touching an object with a hand or finger, or virtually, by pointing to an object on a computer monitor using a pointing device.

Fitts's law has been shown to apply under a variety of conditions: with many different limbs (hands, feet, the lower lip, head-mounted sights), manipulanda (input devices), physical environments (including underwater), and user populations (young, old, special educational needs, and drugged participants).
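
The relationship can be sketched with the Shannon formulation of the law, MT = a + b · log2(D/W + 1). The constants a and b below are illustrative placeholders, not fitted values; in practice they are obtained by regression for a particular device and user population.

```python
import math

def fitts_mt(distance, width, a=0.1, b=0.15):
    """Predicted movement time (seconds) under the Shannon formulation
    of Fitts's law: MT = a + b * log2(D / W + 1).

    a and b are device- and user-specific regression constants; the
    defaults here are illustrative placeholders, not measured values.
    """
    index_of_difficulty = math.log2(distance / width + 1)  # in bits
    return a + b * index_of_difficulty

# A far, small target has a higher index of difficulty, and hence a
# longer predicted movement time, than a near, big one.
near_big = fitts_mt(distance=100, width=50)
far_small = fitts_mt(distance=400, width=25)
assert far_small > near_big
```

The logarithmic term (the "index of difficulty") is why doubling a target's width buys roughly as much time as halving its distance, a trade-off interface designers exploit with edge-of-screen targets.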

🔗 Sussman anomaly

🔗 Cognitive science

The Sussman anomaly is a problem in artificial intelligence, first described by Gerald Sussman, that illustrates a weakness of noninterleaved planning algorithms, which were prominent in the early 1970s. In the problem, three blocks (labeled A, B, and C) rest on a table. The agent must stack the blocks such that A is atop B, which in turn is atop C. However, it may only move one block at a time. The problem starts with B on the table, C atop A, and A on the table.

However, noninterleaved planners typically separate the goal (stack A atop B atop C) into subgoals, such as:

  1. get A atop B
  2. get B atop C

Suppose the planner starts by pursuing Goal 1. The straightforward solution is to move C out of the way, then move A atop B. But while this sequence accomplishes Goal 1, the agent cannot now pursue Goal 2 without undoing Goal 1, since both A and B must be moved atop C.

If instead the planner starts with Goal 2, the most efficient solution is to move B. But again, the planner cannot pursue Goal 1 without undoing Goal 2.
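
The interference between the two subgoals can be shown in a toy blocks-world model. The state representation and move rules below are an illustrative invention, not Sussman's original formulation.

```python
# Toy blocks world: state maps each block to what it rests on; a block
# is "clear" when nothing rests on it.

def clear(state, block):
    return all(support != block for support in state.values())

def move(state, block, dest):
    """Move `block` onto `dest` ('table' or another block); both the
    moved block and a block destination must be clear."""
    assert clear(state, block) and (dest == "table" or clear(state, dest))
    return {**state, block: dest}

initial = {"A": "table", "B": "table", "C": "A"}

# Pursue subgoal 1 (A atop B): clear A by moving C aside, then stack A.
state = move(initial, "C", "table")
state = move(state, "A", "B")
assert state["A"] == "B"        # subgoal 1 achieved ...

# ... but subgoal 2 (B atop C) now requires moving B, and B is no
# longer clear: A must first come off, undoing subgoal 1.
assert not clear(state, "B")
```

A noninterleaved planner that commits to finishing one subgoal before touching the other is stuck in exactly this trap; interleaving the steps of both subgoals (move C aside, stack B on C, stack A on B) solves the problem in three moves.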

The problem was first identified by Sussman as a part of his PhD research. Sussman (and his supervisor, Marvin Minsky) believed that intelligence requires a list of exceptions or tricks, and developed a modular planning system for "debugging" plans. Most modern planning systems can handle this anomaly, but it is still useful for explaining why planning is non-trivial.

🔗 Curse of dimensionality

🔗 Computing 🔗 Mathematics 🔗 Statistics 🔗 Cognitive science

The curse of dimensionality refers to various phenomena that arise when analyzing and organizing data in high-dimensional spaces (often with hundreds or thousands of dimensions) that do not occur in low-dimensional settings such as the three-dimensional physical space of everyday experience. The expression was coined by Richard E. Bellman when considering problems in dynamic programming.

Cursed phenomena occur in domains such as numerical analysis, sampling, combinatorics, machine learning, data mining and databases. The common theme of these problems is that when the dimensionality increases, the volume of the space increases so fast that the available data become sparse. This sparsity is problematic for any method that requires statistical significance. In order to obtain a statistically sound and reliable result, the amount of data needed to support the result often grows exponentially with the dimensionality. Also, organizing and searching data often relies on detecting areas where objects form groups with similar properties; in high dimensional data, however, all objects appear to be sparse and dissimilar in many ways, which prevents common data organization strategies from being efficient.
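
The sparsity argument can be made concrete with a small sketch. The 5% margin below is an arbitrary illustrative choice; the point is the exponent.

```python
# As dimension d grows, the volume of a unit hypercube concentrates
# near its boundary: the fraction lying in the inner cube that
# excludes a 5% margin on every side is (1 - 2 * 0.05) ** d = 0.9 ** d.

def interior_fraction(d, margin=0.05):
    """Fraction of a d-dimensional unit cube farther than `margin`
    from every face."""
    return (1 - 2 * margin) ** d

# In 1 dimension, 90% of the volume is "interior"; by 100 dimensions,
# essentially all of the volume lies within 5% of the boundary, so
# uniformly sampled points become sparse everywhere inside.
fractions = {d: interior_fraction(d) for d in (1, 10, 100)}
```

The same exponential appears on the sampling side: keeping a fixed grid spacing of 0.1 per axis requires 10**d sample points, which is why the data needed for statistically sound results "often grows exponentially with the dimensionality."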

🔗 AI Winter

🔗 United States/U.S. Government 🔗 United States 🔗 Technology 🔗 Computing 🔗 Systems 🔗 Cognitive science 🔗 Linguistics 🔗 Computing/Computer science 🔗 Robotics 🔗 Transhumanism 🔗 Linguistics/Applied Linguistics 🔗 Systems/Cybernetics

In the history of artificial intelligence, an AI winter is a period of reduced funding and interest in artificial intelligence research. The term was coined by analogy to the idea of a nuclear winter. The field has experienced several hype cycles, followed by disappointment and criticism, followed by funding cuts, followed by renewed interest years or decades later.

The term first appeared in 1984 as the topic of a public debate at the annual meeting of AAAI (then called the "American Association for Artificial Intelligence"). It is a chain reaction that begins with pessimism in the AI community, followed by pessimism in the press, followed by a severe cutback in funding, followed by the end of serious research. At the meeting, Roger Schank and Marvin Minsky, two leading AI researchers who had survived the "winter" of the 1970s, warned the business community that enthusiasm for AI had spiraled out of control in the 1980s and that disappointment would certainly follow. Three years later, the billion-dollar AI industry began to collapse.

Hype is common in many emerging technologies, such as the railway mania or the dot-com bubble. The AI winter was a result of such hype, due to over-inflated promises by developers, unnaturally high expectations from end-users, and extensive promotion in the media. Despite the rise and fall of AI's reputation, the field has continued to develop new and successful technologies. AI researcher Rodney Brooks would complain in 2002 that "there's this stupid myth out there that AI has failed, but AI is around you every second of the day." In 2005, Ray Kurzweil agreed: "Many observers still think that the AI winter was the end of the story and that nothing since has come of the AI field. Yet today many thousands of AI applications are deeply embedded in the infrastructure of every industry."

Enthusiasm and optimism about AI have increased since its low point in the early 1990s. Beginning around 2012, interest in artificial intelligence (and especially the sub-field of machine learning) from the research and corporate communities led to a dramatic increase in funding and investment.

🔗 Free energy principle

🔗 Biology 🔗 Cognitive science 🔗 Neuroscience

The free energy principle tries to explain how (biological) systems maintain their order (non-equilibrium steady-state) by restricting themselves to a limited number of states. It says that biological systems minimise a free energy function of their internal states, which entail beliefs about hidden states in their environment. The implicit minimisation of variational free energy is formally related to variational Bayesian methods and was originally introduced by Karl Friston as an explanation for embodied perception in neuroscience, where it is also known as active inference.

The free energy principle holds that systems, namely those defined by their enclosure in a Markov blanket, try to minimise the difference between their model of the world and their sense and associated perception. This difference can be described as "surprise" and is minimised by continuous correction of the system's world model. As such, the principle is based on the Bayesian idea of the brain as an "inference engine". Friston added a second route to minimisation: action. By actively changing the world into the expected state, systems can also minimise their free energy. Friston assumes this to be the principle of all biological reaction. He also believes his principle applies to mental disorders as well as to artificial intelligence; AI implementations based on the active inference principle have shown advantages over other methods.
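
The quantity being minimised can be sketched in a toy discrete setting. The two-state world and its probabilities below are invented for illustration; real active-inference models are far richer, but the decomposition of free energy into a KL divergence plus surprise is the standard variational identity.

```python
import math

# Hypothetical two-state world: hidden state s in {rain, sun}, one
# observation o = "wet". All numbers are invented for illustration.
prior = {"rain": 0.3, "sun": 0.7}        # p(s)
likelihood = {"rain": 0.9, "sun": 0.2}   # p(o | s)

def free_energy(q):
    """Variational free energy F(q) = sum_s q(s) [ln q(s) - ln p(o, s)]
    = KL(q || p(s | o)) + surprise, where surprise = -ln p(o)."""
    return sum(q[s] * (math.log(q[s]) - math.log(prior[s] * likelihood[s]))
               for s in q)

evidence = sum(prior[s] * likelihood[s] for s in prior)        # p(o)
posterior = {s: prior[s] * likelihood[s] / evidence for s in prior}
surprise = -math.log(evidence)

# F is bounded below by the surprise and touches it exactly when the
# beliefs q match the true posterior; any other q pays a KL penalty.
assert abs(free_energy(posterior) - surprise) < 1e-9
assert free_energy({"rain": 0.5, "sun": 0.5}) > surprise
```

Minimising F over q is "perception" in this picture (beliefs move toward the posterior); Friston's second route, action, instead changes the world so that the observations themselves become less surprising.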

The free energy principle has been criticized for being very difficult to understand, even for experts. Discussions of the principle have also been criticized as invoking metaphysical assumptions far removed from a testable scientific prediction, making the principle unfalsifiable. In a 2018 interview, Friston acknowledged that the free energy principle is not properly falsifiable: "the free energy principle is what it is — a principle. Like Hamilton's Principle of Stationary Action, it cannot be falsified. It cannot be disproven. In fact, there's not much you can do with it, unless you ask whether measurable systems conform to the principle."

🔗 Digital Infinity

🔗 Cognitive science

Digital infinity is a technical term in theoretical linguistics. Alternative formulations are "discrete infinity" and "the infinite use of finite means". The idea is that all human languages follow a simple logical principle, according to which a limited set of digits, irreducible atomic sound elements, are combined to produce an infinite range of potentially meaningful expressions.

'Language is, at its core, a system that is both digital and infinite. To my knowledge, there is no other biological system with these properties....'

It remains for us to examine the spiritual element of speech ... this marvelous invention of composing from twenty-five or thirty sounds an infinite variety of words, which, although not having any resemblance in themselves to that which passes through our minds, nevertheless do not fail to reveal to others all of the secrets of the mind, and to make intelligible to others who cannot penetrate into the mind all that we conceive and all of the diverse movements of our souls.

Noam Chomsky cites Galileo as perhaps the first to recognise the significance of digital infinity. This principle, notes Chomsky, is "the core property of human language, and one of its most distinctive properties: the use of finite means to express an unlimited array of thoughts". In his Dialogo, Galileo describes with wonder the discovery of a means to communicate one's "most secret thoughts to any other person ... with no greater difficulty than the various collocations of twenty-four little characters upon a paper." "This is the greatest of all human inventions," Galileo continues, noting it to be "comparable to the creations of a Michelangelo".

🔗 Illusory Superiority

🔗 Skepticism 🔗 Business 🔗 Psychology 🔗 Cognitive science

In the field of social psychology, illusory superiority is a cognitive bias wherein a person overestimates their own qualities and abilities relative to the same qualities and abilities of other people. Illusory superiority is one of many positive illusions, relating to the self, that are evident in the study of intelligence, the effective performance of tasks and tests, and the possession of desirable personal characteristics and personality traits. Overestimation of abilities compared to an objective measure is known as the overconfidence effect.

The term illusory superiority was first used by the researchers Van Yperen and Buunk, in 1991. The phenomenon is also known as the above-average effect, the superiority bias, the leniency error, the sense of relative superiority, the primus inter pares effect, and the Lake Wobegon effect, named after the fictional town where all the children are above average. The Dunning-Kruger effect is a form of illusory superiority shown by people on a task where their level of skill is low.

A vast majority of the literature on illusory superiority originates from studies on participants in the United States. However, research that only investigates the effects in one specific population is severely limited as this may not be a true representation of human psychology. More recent research investigating self-esteem in other countries suggests that illusory superiority depends on culture. Some studies indicate that East Asians tend to underestimate their own abilities in order to improve themselves and get along with others.

🔗 List of cognitive biases

🔗 Philosophy 🔗 Skepticism 🔗 Philosophy/Logic 🔗 Psychology 🔗 Cognitive science

A cognitive bias is a systematic pattern of deviation from norm or rationality in judgment. Individuals create their own "subjective reality" from their perception of the input. An individual's construction of reality, not the objective input, may dictate their behavior in the world. Thus, cognitive biases may sometimes lead to perceptual distortion, inaccurate judgment, illogical interpretation, or what is broadly called irrationality.

Some cognitive biases are presumably adaptive. Cognitive biases may lead to more effective actions in a given context. Furthermore, allowing cognitive biases enables faster decisions which can be desirable when timeliness is more valuable than accuracy, as illustrated in heuristics. Other cognitive biases are a "by-product" of human processing limitations, resulting from a lack of appropriate mental mechanisms (bounded rationality), impact of individual's constitution and biological state (see embodied cognition), or simply from a limited capacity for information processing.

A continually evolving list of cognitive biases has been identified over the last six decades of research on human judgment and decision-making in cognitive science, social psychology, and behavioral economics. Daniel Kahneman and Amos Tversky (1996) argue that cognitive biases have efficient practical implications for areas including clinical judgment, entrepreneurship, finance, and management.
