Topic: Cognitive science
You are looking at all articles with the topic "Cognitive science". We found 18 matches.
ACT-R: A cognitive architecture
ACT-R (pronounced /ˌækt ˈɑr/; short for "Adaptive Control of Thought—Rational") is a cognitive architecture mainly developed by John Robert Anderson and Christian Lebiere at Carnegie Mellon University. Like any cognitive architecture, ACT-R aims to define the basic and irreducible cognitive and perceptual operations that enable the human mind. In theory, each task that humans can perform should consist of a series of these discrete operations.
Most of ACT-R's basic assumptions are also inspired by progress in cognitive neuroscience; ACT-R can be seen as a way of specifying how the brain is organized so that individual processing modules can produce cognition.
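To make the idea of discrete operations concrete, here is a toy production-system loop in Python. It is only an illustrative sketch of the match-select-fire cycle that architectures like ACT-R are built around; the buffer slots and productions are invented for this example and do not reflect ACT-R's actual modules or API.

```python
# A toy production system in the spirit of ACT-R's match-select-fire cycle.
# Illustrative sketch only, not the real ACT-R implementation.

# A "buffer" holds the current goal state as slot -> value pairs.
buffer = {"task": "count", "current": 2, "target": 4}

# Productions: (condition, action) pairs. A condition tests the buffer;
# an action returns an updated buffer.
productions = [
    (lambda b: b["task"] == "count" and b["current"] < b["target"],
     lambda b: {**b, "current": b["current"] + 1}),
    (lambda b: b["task"] == "count" and b["current"] == b["target"],
     lambda b: {**b, "task": "done"}),
]

# The cognitive cycle: on each step, fire the first production whose
# condition matches the current buffer; stop when nothing matches.
while True:
    for condition, action in productions:
        if condition(buffer):
            buffer = action(buffer)
            print(buffer)
            break
    else:
        break  # no production matched: the task is finished
```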
Discussed on
- "ACT-R: A cognitive architecture" | 2015-12-17 | 20 Upvotes 4 Comments
AI Winter
In the history of artificial intelligence, an AI winter is a period of reduced funding and interest in artificial intelligence research. The term was coined by analogy to the idea of a nuclear winter. The field has experienced several hype cycles, followed by disappointment and criticism, followed by funding cuts, followed by renewed interest years or decades later.
The term first appeared in 1984 as the topic of a public debate at the annual meeting of AAAI (then called the "American Association for Artificial Intelligence"). The winter itself is a chain reaction that begins with pessimism in the AI community, followed by pessimism in the press, followed by a severe cutback in funding, followed by the end of serious research. At the meeting, Roger Schank and Marvin Minsky, two leading AI researchers who had survived the "winter" of the 1970s, warned the business community that enthusiasm for AI had spiraled out of control in the 1980s and that disappointment would certainly follow. Three years later, the billion-dollar AI industry began to collapse.
Hype is common in many emerging technologies, as with the railway mania and the dot-com bubble. The AI winter was a result of such hype: over-inflated promises by developers, unnaturally high expectations from end-users, and extensive promotion in the media. Despite the rise and fall of AI's reputation, the field has continued to develop new and successful technologies. AI researcher Rodney Brooks complained in 2002 that "there's this stupid myth out there that AI has failed, but AI is around you every second of the day." In 2005, Ray Kurzweil agreed: "Many observers still think that the AI winter was the end of the story and that nothing since has come of the AI field. Yet today many thousands of AI applications are deeply embedded in the infrastructure of every industry."
Enthusiasm and optimism about AI have generally increased since a low point in the early 1990s. Beginning around 2012, interest in artificial intelligence (and especially in the sub-field of machine learning) from the research and corporate communities led to a dramatic increase in funding and investment.
Discussed on
- "AI Winter" | 2019-11-25 | 55 Upvotes 41 Comments
- "AI winter" | 2007-10-25 | 16 Upvotes 1 Comments
Possible explanations for the slow progress of AI research
Artificial general intelligence (AGI) is the hypothetical intelligence of a machine that has the capacity to understand or learn any intellectual task that a human being can. It is a primary goal of some artificial intelligence research and a common topic in science fiction and futures studies. AGI can also be referred to as strong AI, full AI, or general intelligent action. (Some academic sources reserve the term "strong AI" for machines that can experience consciousness.)
Some authorities emphasize a distinction between strong AI and applied AI (also called narrow AI or weak AI): the use of software to study or accomplish specific problem-solving or reasoning tasks. Weak AI, in contrast to strong AI, does not attempt to perform the full range of human cognitive abilities.
As of 2017, over forty organizations were doing research on AGI.
Discussed on
- "Possible explanations for the slow progress of AI research" | 2019-11-25 | 19 Upvotes 15 Comments
List of cognitive biases
A cognitive bias is a systematic pattern of deviation from norm or rationality in judgment. Individuals create their own "subjective reality" from their perception of the input. An individual's construction of reality, not the objective input, may dictate their behavior in the world. Thus, cognitive biases may sometimes lead to perceptual distortion, inaccurate judgment, illogical interpretation, or what is broadly called irrationality.
Some cognitive biases are presumably adaptive. Cognitive biases may lead to more effective actions in a given context. Furthermore, allowing cognitive biases enables faster decisions, which can be desirable when timeliness is more valuable than accuracy, as illustrated in heuristics. Other cognitive biases are a "by-product" of human processing limitations, resulting from a lack of appropriate mental mechanisms (bounded rationality), the impact of an individual's constitution and biological state (see embodied cognition), or simply from a limited capacity for information processing.
A continually evolving list of cognitive biases has been identified over the last six decades of research on human judgment and decision-making in cognitive science, social psychology, and behavioral economics. Daniel Kahneman and Amos Tversky (1996) argue that cognitive biases have important practical implications for areas including clinical judgment, entrepreneurship, finance, and management.
Discussed on
- "List of cognitive biases" | 2008-09-23 | 36 Upvotes 25 Comments
Curse of dimensionality
The curse of dimensionality refers to various phenomena that arise when analyzing and organizing data in high-dimensional spaces (often with hundreds or thousands of dimensions) that do not occur in low-dimensional settings such as the three-dimensional physical space of everyday experience. The expression was coined by Richard E. Bellman when considering problems in dynamic programming.
Cursed phenomena occur in domains such as numerical analysis, sampling, combinatorics, machine learning, data mining and databases. The common theme of these problems is that when the dimensionality increases, the volume of the space increases so fast that the available data become sparse. This sparsity is problematic for any method that requires statistical significance. In order to obtain a statistically sound and reliable result, the amount of data needed to support the result often grows exponentially with the dimensionality. Also, organizing and searching data often relies on detecting areas where objects form groups with similar properties; in high dimensional data, however, all objects appear to be sparse and dissimilar in many ways, which prevents common data organization strategies from being efficient.
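The sparsity effect can be seen directly in a short simulation. The sketch below (using NumPy; the point counts and dimensions are arbitrary choices for illustration) measures the relative contrast between the nearest and farthest neighbor of a random query point: as the dimensionality grows, the contrast collapses toward zero and nearest-neighbor distinctions lose meaning.

```python
# Distance concentration in high dimensions: as dimensionality grows,
# the nearest and farthest neighbors of a random point become almost
# equally far away, so "closeness" loses meaning.
import numpy as np

rng = np.random.default_rng(0)

for dim in (2, 10, 100, 1000):
    points = rng.random((1000, dim))   # 1000 points in the unit cube
    query = rng.random(dim)            # one random query point
    dists = np.linalg.norm(points - query, axis=1)
    # Relative contrast: (farthest - nearest) / nearest. It shrinks
    # toward 0 as dim grows, one face of the curse of dimensionality.
    contrast = (dists.max() - dists.min()) / dists.min()
    print(f"dim={dim:5d}  relative contrast={contrast:.2f}")
```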
Discussed on
- "Curse of dimensionality" | 2012-05-30 | 89 Upvotes 25 Comments
Cyc
Cyc (pronounced SYKE) is a long-running artificial intelligence project that aims to assemble a comprehensive ontology and knowledge base spanning the basic concepts and rules about how the world works. Hoping to capture common-sense knowledge, Cyc focuses on implicit knowledge that other AI platforms may take for granted, in contrast with facts one might find somewhere on the internet or retrieve via a search engine or Wikipedia. The goal is to enable AI applications to perform human-like reasoning and to be less "brittle" when confronted with novel situations.
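As a rough illustration of the kind of implicit knowledge involved, the Python sketch below hand-codes one "obvious" fact and one rule. It is purely a toy under invented names; Cyc's real ontology and its CycL representation language are vastly larger and more expressive.

```python
# A toy common-sense knowledge base in the spirit of Cyc: explicit facts
# plus a rule that makes implicit knowledge available to reasoning.
# Purely illustrative; not Cyc's actual ontology or CycL syntax.

facts = {
    ("isa", "water", "Liquid"),
    ("isa", "cup", "OpenContainer"),
    ("in", "water", "cup"),
}

# Rule: a liquid in an open container spills if the container tips over.
# This is the kind of "obvious" knowledge humans never state explicitly.
def consequences_of_tipping(container, kb):
    spilled = []
    for rel, x, y in kb:
        if rel == "in" and y == container and ("isa", x, "Liquid") in kb:
            spilled.append(x)
    return spilled

print(consequences_of_tipping("cup", facts))  # ['water']
```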
Douglas Lenat began the project in July 1984 at MCC, where he was Principal Scientist from 1984 to 1994. Since January 1995, the project has been under active development by Cycorp, where Lenat is the CEO.
Fitts’s law
Fitts's law (often cited as Fitts' law) is a predictive model of human movement primarily used in human–computer interaction and ergonomics. This scientific law predicts that the time required to rapidly move to a target area is a function of the ratio between the distance to the target and the width of the target. Fitts's law is used to model the act of pointing, either by physically touching an object with a hand or finger, or virtually, by pointing to an object on a computer monitor using a pointing device.
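In its common Shannon formulation the law reads MT = a + b·log₂(D/W + 1), where D is the distance to the target, W its width, and a and b are empirically fitted constants. A small sketch follows; the constants below are illustrative placeholders, not fitted values.

```python
import math

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    """Predicted movement time (seconds) under the Shannon formulation
    of Fitts's law: MT = a + b * log2(D/W + 1).

    a and b are device- and user-specific constants normally obtained
    by linear regression; the defaults here are illustrative only.
    """
    index_of_difficulty = math.log2(distance / width + 1)  # in bits
    return a + b * index_of_difficulty

# A distant, small target takes longer to acquire than a near, large one.
print(fitts_movement_time(distance=800, width=20))   # small, far target
print(fitts_movement_time(distance=100, width=100))  # large, near target
```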
Fitts's law has been shown to apply under a variety of conditions: with many different limbs (hands, feet, the lower lip, head-mounted sights), manipulanda (input devices), physical environments (including underwater), and user populations (young, old, special educational needs, and drugged participants).
Discussed on
- "Fitts’s law" | 2018-03-18 | 83 Upvotes 2 Comments
- "Fitt's Law" | 2016-03-02 | 70 Upvotes 43 Comments
Functional Fixedness
Functional fixedness is a cognitive bias that limits a person to using an object only in the way it is traditionally used. The concept originated in Gestalt psychology, a movement in psychology that emphasizes holistic processing. Karl Duncker defined functional fixedness as a "mental block against using an object in a new way that is required to solve a problem". This "block" limits the ability of an individual to use the components given to them to complete a task, because they cannot move past the original purpose of those components. For example, someone who needs a paperweight but has only a hammer may not see how the hammer can be used as a paperweight. Functional fixedness is this inability to see a hammer's use as anything other than pounding nails: the person cannot think to use the hammer in any way other than its conventional function.
When tested, 5-year-old children show no signs of functional fixedness. It has been argued that this is because at age 5, any goal to be achieved with an object is equivalent to any other goal. However, by age 7, children have acquired the tendency to treat the originally intended purpose of an object as special.
Discussed on
- "Functional Fixedness" | 2015-08-02 | 34 Upvotes 26 Comments
- "Functional Fixedness - The Candle Problem" | 2010-03-22 | 10 Upvotes 5 Comments
The reason why Blub programmers have such a hard time picking up more powerful languages.
The hypothesis of linguistic relativity (a form of relativism), also known as the Sapir–Whorf hypothesis or Whorfianism, is a principle claiming that the structure of a language affects its speakers' world view or cognition, and thus that people's perceptions are relative to their spoken language.
The principle is often defined in one of two versions: the strong hypothesis, held by some early linguists before World War II, and the weak hypothesis, mostly held by modern linguists.
- The strong version says that language determines thought and that linguistic categories limit and determine cognitive categories.
- The weak version says that linguistic categories and usage only influence thought and decisions.
The principle was accepted and then abandoned by linguists during the early 20th century as perceptions of its social acceptability changed, especially after World War II. The origin of formulated arguments against linguistic relativity is attributed to Noam Chomsky.
Discussed on
- "The reason why Blub programmers have such a hard time picking up more powerful languages." | 2007-09-29 | 7 Upvotes 28 Comments