Topic: Philosophy/Logic (Page 4)

You are looking at all articles with the topic "Philosophy/Logic". We found 51 matches.

πŸ”— Dempster–Shafer theory

πŸ”— Philosophy πŸ”— Philosophy/Logic πŸ”— Philosophy/Epistemology

The theory of belief functions, also referred to as evidence theory or Dempster–Shafer theory (DST), is a general framework for reasoning with uncertainty, with understood connections to other frameworks such as probability, possibility and imprecise probability theories. First introduced by Arthur P. Dempster in the context of statistical inference, the theory was later developed by Glenn Shafer into a general framework for modeling epistemic uncertainty—a mathematical theory of evidence. The theory allows one to combine evidence from different sources and arrive at a degree of belief (represented by a mathematical object called a belief function) that takes into account all the available evidence.

In a narrow sense, the term Dempster–Shafer theory refers to the original conception of the theory by Dempster and Shafer. However, it is more common to use the term in the wider sense of the same general approach, as adapted to specific kinds of situations. In particular, many authors have proposed different rules for combining evidence, often with a view to handling conflicts in evidence better. The early contributions have also been the starting points of many important developments, including the transferable belief model and the theory of hints.
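Evidence from independent sources is classically fused with Dempster's rule of combination: the masses of focal sets that intersect are multiplied and accumulated on their intersection, and the result is renormalized by the mass not lost to conflict. A minimal Python sketch of the rule (illustrative names and a toy two-element frame of discernment, not a library API):

```python
# Minimal sketch of Dempster's rule of combination over a toy frame of
# discernment {"red", "yellow"}. Mass functions are dicts mapping
# frozenset focal elements to their masses (each summing to 1).

from itertools import product

def combine(m1, m2):
    """Fuse two mass functions with Dempster's rule of combination."""
    combined = {}
    conflict = 0.0  # K: total mass falling on empty intersections
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: Dempster's rule is undefined")
    # Renormalize by the non-conflicting mass 1 - K.
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Two sources of evidence about the same question.
m1 = {frozenset({"red"}): 0.6, frozenset({"red", "yellow"}): 0.4}
m2 = {frozenset({"yellow"}): 0.3, frozenset({"red", "yellow"}): 0.7}
print(combine(m1, m2))
```

Here one source favors "red" and the other "yellow"; the conflicting mass (0.18 in this example) is discarded and the remaining beliefs are renormalized.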

πŸ”— Groupthink

πŸ”— Philosophy πŸ”— Philosophy/Logic πŸ”— Business πŸ”— Politics πŸ”— Psychology πŸ”— Politics/Corporatism

Groupthink is a psychological phenomenon that occurs within a group of people in which the desire for harmony or conformity in the group results in an irrational or dysfunctional decision-making outcome. Cohesiveness, or the desire for cohesiveness, in a group may produce a tendency among its members to agree at all costs. This causes the group to minimize conflict and reach a consensus decision without critical evaluation.

Groupthink requires individuals to avoid raising controversial issues or alternative solutions, and it entails a loss of individual creativity, uniqueness, and independent thinking. The dysfunctional group dynamics of the "ingroup" produce an "illusion of invulnerability" (an inflated certainty that the right decision has been made). Thus the "ingroup" significantly overrates its own decision-making abilities and significantly underrates the abilities of its opponents (the "outgroup"). Furthermore, groupthink can produce dehumanizing actions against the "outgroup". Members of a group often feel peer pressure to "go along with the crowd" for fear of rocking the boat or of how speaking up will affect how their teammates perceive them. Group interactions tend to favor clear and harmonious agreement, which becomes a cause for concern when few or no new innovations or arguments for better policies, outcomes, and structures are raised (McLeod). A group in the grip of groupthink is often described as a group of "yes men", because group activities and group projects make it extremely easy to refrain from offering constructive opinions.

Methods that have been used to counteract groupthink include selecting teams from more diverse backgrounds and even mixing men and women in groups (Kamalnath). Groupthink is considered by many to be a detriment to companies, organizations, and any work situation. Most senior-level positions require individuals who think independently, and a positive correlation has been found between outstanding executives and decisiveness (Kelman). Groupthink also prevents an organization from moving forward and innovating if no one ever speaks up to say that something could be done differently.

Antecedent factors such as group cohesiveness, faulty group structure, and situational context (e.g., community panic) play into the likelihood of whether or not groupthink will impact the decision-making process.

Groupthink is a construct of social psychology, but has an extensive reach and influences literature in the fields of communication studies, political science, management, and organizational theory, as well as important aspects of deviant religious cult behaviour.

Groupthink is sometimes stated to occur (more broadly) within natural groups within the community, for example to explain the lifelong different mindsets of those with differing political views (such as "conservatism" and "liberalism" in the U.S. political context) or the purported benefits of teamwork vs. work conducted in solitude. However, this conformity of viewpoints within a group does not mainly involve deliberate group decision-making, and might be better explained by the collective confirmation bias of the individual members of the group.

Most of the initial research on groupthink was conducted by Irving Janis, a research psychologist from Yale University. Janis published an influential book in 1972, which was revised in 1982. Janis used the Bay of Pigs disaster (the failed invasion of Castro's Cuba in 1961) and the Japanese attack on Pearl Harbor in 1941 as his two prime case studies. Later studies have evaluated and reformulated his groupthink model.

πŸ”— Serial-position effect

πŸ”— Philosophy πŸ”— Philosophy/Logic πŸ”— Business πŸ”— Politics πŸ”— Psychology πŸ”— Marketing & Advertising

The serial-position effect is the tendency of a person to recall the first and last items in a series best, and the middle items worst. The term was coined by Hermann Ebbinghaus through studies he performed on himself, and refers to the finding that recall accuracy varies as a function of an item's position within a study list. When asked to recall a list of items in any order (free recall), people tend to begin recall with the end of the list, recalling those items best (the recency effect). Among earlier list items, the first few items are recalled more frequently than the middle items (the primacy effect).

One suggested reason for the primacy effect is that the initial items presented are most effectively stored in long-term memory because of the greater amount of processing devoted to them. (The first list item can be rehearsed by itself; the second must be rehearsed along with the first, the third along with the first and second, and so on.) The primacy effect is reduced when items are presented quickly and enhanced when they are presented slowly (factors that respectively reduce and enhance the processing of each item and thus its permanent storage). Longer presentation lists have been found to reduce the primacy effect.
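This rehearsal account can be made concrete with a toy model (a sketch under the stated assumption that each presentation triggers one rehearsal pass over every item seen so far; the function name is illustrative, and this is not an established simulation):

```python
# Toy model of the cumulative-rehearsal account of the primacy effect:
# after each new item is presented, every item seen so far is rehearsed
# once more, so earlier items accumulate more rehearsals and thus more
# long-term storage.

def rehearsal_counts(list_length):
    counts = [0] * list_length
    for presented in range(list_length):
        for item in range(presented + 1):
            counts[item] += 1  # the first item is rehearsed on every pass
    return counts

print(rehearsal_counts(5))  # [5, 4, 3, 2, 1]: early items dominate
```

The falling counts mirror the primacy gradient described above.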

One theorised reason for the recency effect is that these items are still present in working memory when recall is solicited. Items that benefit from neither effect (the middle items) are recalled most poorly. An additional explanation for the recency effect is related to temporal context: if tested immediately after rehearsal, the current temporal context can serve as a retrieval cue, which would predict that more recent items have a higher likelihood of recall than items studied in a different temporal context (earlier in the list). The recency effect is reduced when an interfering task is given: because intervening tasks occupy working memory, a distractor activity exceeding 15 to 30 seconds in duration can cancel it out. Additionally, if recall comes immediately after study, the recency effect is consistent regardless of the length of the studied list or the presentation rate.

Amnesiacs with poor ability to form permanent long-term memories do not show a primacy effect, but do show a recency effect if recall comes immediately after study. People with Alzheimer's disease exhibit a reduced primacy effect but do not produce a recency effect in recall.

πŸ”— Trivialism

πŸ”— Philosophy πŸ”— Philosophy/Logic

Trivialism (from Latin trivialisΒ 'found everywhere') is the logical theory that all statements (also known as propositions) are true and that all contradictions of the form "p and not p" (e.g. the ball is red and not red) are true. In accordance with this, a trivialist is a person who believes everything is true.

In classical logic, trivialism is in direct violation of Aristotle's law of noncontradiction. In philosophy, trivialism is considered by some to be the complete opposite of skepticism. Paraconsistent logics may use "the law of non-triviality" to abstain from trivialism in logical practices that involve true contradictions.
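The route from one true contradiction to trivialism can be made explicit: in classical (and intuitionistic) logic, the principle of explosion derives any proposition whatsoever from a single contradiction, so a paraconsistent logic must reject this inference if it is to tolerate true contradictions without collapsing into trivialism. A minimal sketch in Lean:

```lean
-- Principle of explosion (ex falso quodlibet): from p ∧ ¬p, any q follows.
-- A logic that admits some true contradictions must block this inference,
-- on pain of trivialism.
example (p q : Prop) (h : p ∧ ¬p) : q :=
  absurd h.1 h.2
```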

Theoretical arguments and anecdotes have been offered for trivialism to contrast it with theories such as modal realism, dialetheism and paraconsistent logics.

πŸ”— Scientism

πŸ”— Philosophy πŸ”— Skepticism πŸ”— Philosophy/Logic πŸ”— Philosophy/Social and political philosophy πŸ”— Philosophy/Philosophy of science πŸ”— Sociology πŸ”— Science

Scientism is the promotion of science as the best or only objective means by which society should determine normative and epistemological values. The term scientism is generally used critically, implying a cosmetic or unwarranted application of science in situations considered not amenable to the scientific method or similar scientific standards.

πŸ”— Colorless green ideas sleep furiously

πŸ”— Philosophy πŸ”— Philosophy/Logic πŸ”— Linguistics πŸ”— Philosophy/Philosophy of language πŸ”— Linguistics/Philosophy of language

Colorless green ideas sleep furiously is a sentence composed by Noam Chomsky in his 1957 book Syntactic Structures as an example of a sentence that is grammatically correct, but semantically nonsensical. The sentence was originally used in his 1955 thesis The Logical Structure of Linguistic Theory and in his 1956 paper "Three Models for the Description of Language". Although the sentence is grammatically correct, no obvious understandable meaning can be derived from it, and thus it demonstrates the distinction between syntax and semantics. As an example of a category mistake, it was used to show the inadequacy of certain probabilistic models of grammar, and the need for more structured models.
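The argument against purely statistical models can be illustrated with a toy unsmoothed bigram model (an illustrative corpus and code sketch, not Chomsky's own formulation): the sentence and its ungrammatical reversal both receive probability zero from corpus counts, so word-sequence statistics alone cannot separate the grammatical string from the ungrammatical one.

```python
# Toy unsmoothed bigram model: both "colorless green ideas sleep furiously"
# and its reversal get probability 0 under counts from a corpus lacking
# the relevant word pairs, even though only the first string is grammatical.

from collections import Counter

corpus = "green ideas are common and ideas sleep in books".split()
bigrams = Counter(zip(corpus, corpus[1:]))
starts = Counter(corpus[:-1])  # how often each word begins a bigram

def bigram_prob(sentence):
    p = 1.0
    words = sentence.split()
    for w1, w2 in zip(words, words[1:]):
        if bigrams[(w1, w2)] == 0:
            return 0.0  # unseen bigram: an unsmoothed model assigns zero
        p *= bigrams[(w1, w2)] / starts[w1]
    return p

print(bigram_prob("colorless green ideas sleep furiously"))  # 0.0
print(bigram_prob("furiously sleep ideas green colorless"))  # 0.0
```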

πŸ”— Magical Thinking

πŸ”— Philosophy πŸ”— Skepticism πŸ”— Philosophy/Logic πŸ”— Psychology

Magical thinking is a term used in anthropology, philosophy and psychology, denoting the attribution of causal relationships between thoughts, actions and events. There are subtle differences in meaning between individual theorists as well as amongst fields of study.

In anthropology, it denotes the attribution of causality between entities grouped with one another (coincidence) or similar to one another.

In psychology, the entities between which a causal relation has to be posited are more strictly delineated; here it denotes the belief that one's thoughts by themselves can bring about effects in the world or that thinking something corresponds with doing it. In both cases, the belief can cause a person to experience fear, seemingly not rationally justifiable to an observer outside the belief system, of performing certain acts or having certain thoughts because of an assumed correlation between doing so and threatening calamities.

In psychiatry, magical thinking is a disorder of thought content; here it denotes the false belief that one's thoughts, actions, or words will cause or prevent a specific consequence in some way that defies commonly understood laws of causality.

πŸ”— John von Neumann

πŸ”— Biography πŸ”— Computing πŸ”— Mathematics πŸ”— Military history πŸ”— Military history/North American military history πŸ”— Military history/United States military history πŸ”— Military history/Military science, technology, and theory πŸ”— Physics πŸ”— Economics πŸ”— Philosophy πŸ”— Philosophy/Logic πŸ”— Biography/science and academia πŸ”— Philosophy/Philosophy of science πŸ”— Philosophy/Contemporary philosophy πŸ”— Military history/Military biography πŸ”— Biography/military biography πŸ”— History of Science πŸ”— Computing/Computer science πŸ”— Philosophy/Philosophers πŸ”— Education πŸ”— Hungary πŸ”— Military history/World War II πŸ”— Military history/Cold War πŸ”— Physics/History πŸ”— Physics/Biographies πŸ”— Game theory πŸ”— Eastern Europe

John von Neumann (Hungarian: Neumann János Lajos, pronounced [ˈnɒjmɒn ˈjaːnoʃ ˈlɒjoʃ]; December 28, 1903 – February 8, 1957) was a Hungarian-American mathematician, physicist, computer scientist, engineer and polymath. Von Neumann was generally regarded as the foremost mathematician of his time and said to be "the last representative of the great mathematicians" who integrated both pure and applied sciences.

He made major contributions to a number of fields, including mathematics (foundations of mathematics, functional analysis, ergodic theory, representation theory, operator algebras, geometry, topology, and numerical analysis), physics (quantum mechanics, hydrodynamics, and quantum statistical mechanics), economics (game theory), computing (Von Neumann architecture, linear programming, self-replicating machines, stochastic computing), and statistics.

He was a pioneer of the application of operator theory to quantum mechanics in the development of functional analysis, and a key figure in the development of game theory and the concepts of cellular automata, the universal constructor and the digital computer.

He published over 150 papers in his life: about 60 in pure mathematics, 60 in applied mathematics, 20 in physics, and the remainder on special mathematical subjects or non-mathematical ones. His last work, an unfinished manuscript written while he was in hospital, was later published in book form as The Computer and the Brain.

His analysis of the structure of self-replication preceded the discovery of the structure of DNA. In a short list of facts about his life he submitted to the National Academy of Sciences, he stated, "The part of my work I consider most essential is that on quantum mechanics, which developed in GΓΆttingen in 1926, and subsequently in Berlin in 1927–1929. Also, my work on various forms of operator theory, Berlin 1930 and Princeton 1935–1939; on the ergodic theorem, Princeton, 1931–1932."

During World War II, von Neumann worked on the Manhattan Project with theoretical physicist Edward Teller, mathematician Stanisław Ulam and others, solving key problems in the nuclear physics involved in thermonuclear reactions and the hydrogen bomb. He developed the mathematical models behind the explosive lenses used in the implosion-type nuclear weapon, and coined the term "kiloton" (of TNT) as a measure of the explosive force generated.

After the war, he served on the General Advisory Committee of the United States Atomic Energy Commission, and consulted for a number of organizations, including the United States Air Force, the Army's Ballistic Research Laboratory, the Armed Forces Special Weapons Project, and the Lawrence Livermore National Laboratory. As a Hungarian Γ©migrΓ©, concerned that the Soviets would achieve nuclear superiority, he designed and promoted the policy of mutually assured destruction to limit the arms race.

πŸ”— Charles Babbage

πŸ”— Biography πŸ”— Computing πŸ”— London πŸ”— Philosophy πŸ”— Philosophy/Logic πŸ”— Business πŸ”— England πŸ”— Biography/science and academia πŸ”— Philosophy/Philosophers πŸ”— Philately πŸ”— Biography/Core biographies

Charles Babbage (26 December 1791 – 18 October 1871) was an English polymath. A mathematician, philosopher, inventor and mechanical engineer, Babbage originated the concept of a digital programmable computer.

Considered by some to be a father of the computer, Babbage is credited with inventing the first mechanical computer that eventually led to more complex electronic designs, though all the essential ideas of modern computers are to be found in Babbage's Analytical Engine. His varied work in other fields has led him to be described as "pre-eminent" among the many polymaths of his century.

Parts of Babbage's incomplete mechanisms are on display in the Science Museum in London. In 1991, a functioning difference engine was constructed from Babbage's original plans. Built to tolerances achievable in the 19th century, the success of the finished engine indicated that Babbage's machine would have worked.
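The difference engine's underlying principle is the method of finite differences: the n-th differences of a degree-n polynomial are constant, so once the machine is seeded with initial values it can extend an entire table by repeated addition alone. A brief Python sketch of the idea (illustrative code, not a model of Babbage's mechanism):

```python
# Method of finite differences: tabulate a degree-n polynomial using only
# additions once the difference columns are seeded, which is the principle
# Babbage's difference engine mechanized.

def tabulate(f, degree, start, count):
    # Seed the columns from the first degree+1 values of the polynomial.
    seed = [f(start + i) for i in range(degree + 1)]
    column = []
    while seed:
        column.append(seed[0])  # current value, then 1st, 2nd, ... differences
        seed = [b - a for a, b in zip(seed, seed[1:])]
    table = []
    for _ in range(count):
        table.append(column[0])
        for i in range(degree):
            column[i] += column[i + 1]  # addition only, as in the engine
    return table

print(tabulate(lambda x: x**2 + x + 41, degree=2, start=0, count=6))
# [41, 43, 47, 53, 61, 71]
```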

πŸ”— Chinese room argument

πŸ”— Philosophy πŸ”— Philosophy/Logic πŸ”— Philosophy/Contemporary philosophy πŸ”— Philosophy/Philosophy of mind πŸ”— Philosophy/Analytic philosophy

The Chinese room argument holds that a digital computer executing a program cannot be shown to have a "mind", "understanding" or "consciousness", regardless of how intelligently or human-like the program may make the computer behave. The argument was first presented by philosopher John Searle in his paper, "Minds, Brains, and Programs", published in Behavioral and Brain Sciences in 1980. It has been widely discussed in the years since. The centerpiece of the argument is a thought experiment known as the Chinese room.

The argument is directed against the philosophical positions of functionalism and computationalism, which hold that the mind may be viewed as an information-processing system operating on formal symbols. Specifically, the argument is intended to refute a position Searle calls strong AI: "The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds."

Although it was originally presented in reaction to the statements of artificial intelligence (AI) researchers, it is not an argument against the behavioural goals of AI research, because it does not limit the amount of intelligence a machine can display. The argument applies only to digital computers running programs and does not apply to machines in general.
