Abductive logic programming (ALP) is a high-level knowledge-representation framework that can be used to solve problems declaratively based on abductive reasoning. It extends normal logic programming by allowing some predicates to be incompletely defined, declared as abducible predicates. Problem solving is effected by deriving hypotheses on these abducible predicates (abductive hypotheses) as solutions of problems to be solved. These problems can be either observations that need to be explained (as in classical abduction) or goals to be achieved (as in normal logic programming). It can be used to solve problems in diagnosis, planning, natural language and machine learning. It has also been used to interpret negation as failure as a form of abductive reasoning.
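The idea of deriving abductive hypotheses as explanations for an observation can be sketched in miniature. The following is a hypothetical propositional example (the rules, atoms, and function names are illustrative, not part of any real ALP system): `grass_wet` can be derived from either of two abducible atoms, and the solver searches for minimal sets of abducibles that entail the observation.

```python
from itertools import combinations

# Rules map a head atom to alternative bodies (lists of atoms).
# Atoms in ABDUCIBLES may be assumed true as abductive hypotheses.
RULES = {
    "grass_wet": [["rained"], ["sprinkler_on"]],
}
ABDUCIBLES = {"rained", "sprinkler_on"}

def holds(atom, hypotheses):
    """True if `atom` follows from the rules given the assumed hypotheses."""
    if atom in hypotheses:
        return True
    return any(all(holds(b, hypotheses) for b in body)
               for body in RULES.get(atom, []))

def explain(observation):
    """Return the minimal sets of abducibles that entail the observation."""
    candidates = [set(c) for r in range(len(ABDUCIBLES) + 1)
                  for c in combinations(sorted(ABDUCIBLES), r)
                  if holds(observation, set(c))]
    return [c for c in candidates
            if not any(other < c for other in candidates)]

print(explain("grass_wet"))  # [{'rained'}, {'sprinkler_on'}]
```

Each returned set is one abductive explanation of the observation; real ALP systems additionally enforce integrity constraints and handle negation as failure, which this sketch omits.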
Alan Mathison Turing (23 June 1912 – 7 June 1954) was an English mathematician, computer scientist, logician, cryptanalyst, philosopher, and theoretical biologist. Turing was highly influential in the development of theoretical computer science, providing a formalisation of the concepts of algorithm and computation with the Turing machine, which can be considered a model of a general-purpose computer. Turing is widely considered to be the father of theoretical computer science and artificial intelligence. Despite these accomplishments, he was not fully recognised in his home country during his lifetime, due to his homosexuality, and because much of his work was covered by the Official Secrets Act.
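The Turing machine mentioned above reduces computation to a head reading and writing symbols on a tape under a finite transition table. A minimal simulator (an illustrative sketch, not Turing's 1936 formalism verbatim) makes the model concrete:

```python
# Minimal single-tape Turing machine sketch.
# A machine is a transition table: (state, symbol) -> (new_state, write, move).
def run(transitions, tape, state="q0", halt="halt", steps=1000):
    cells = dict(enumerate(tape))  # sparse tape; "_" is the blank symbol
    head = 0
    for _ in range(steps):
        if state == halt:
            break
        state, write, move = transitions[(state, cells.get(head, "_"))]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Example machine: flip every bit, halting at the first blank.
FLIP = {
    ("q0", "0"): ("q0", "1", "R"),
    ("q0", "1"): ("q0", "0", "R"),
    ("q0", "_"): ("halt", "_", "R"),
}
print(run(FLIP, "1011"))  # 0100
```

Despite its simplicity, the same machinery (with a suitable transition table) can simulate any algorithm, which is what makes the model a formalisation of general-purpose computation.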
During the Second World War, Turing worked for the Government Code and Cypher School (GC&CS) at Bletchley Park, Britain's codebreaking centre that produced Ultra intelligence. For a time he led Hut 8, the section that was responsible for German naval cryptanalysis. Here, he devised a number of techniques for speeding the breaking of German ciphers, including improvements to the pre-war Polish bombe method, an electromechanical machine that could find settings for the Enigma machine.
Turing played a crucial role in cracking intercepted coded messages that enabled the Allies to defeat the Nazis in many crucial engagements, including the Battle of the Atlantic, and in so doing helped win the war. Due to the problems of counterfactual history, it is hard to estimate the precise effect Ultra intelligence had on the war, but at the upper end it has been estimated that this work shortened the war in Europe by more than two years and saved over 14 million lives.
After the war Turing worked at the National Physical Laboratory, where he designed the Automatic Computing Engine, one of the first designs for a stored-program computer. In 1948 Turing joined Max Newman's Computing Machine Laboratory at the Victoria University of Manchester, where he helped develop the Manchester computers and became interested in mathematical biology. He wrote a paper on the chemical basis of morphogenesis and predicted oscillating chemical reactions such as the Belousov–Zhabotinsky reaction, first observed in the 1960s.
Turing was prosecuted in 1952 for homosexual acts; the Labouchere Amendment of 1885 had mandated that "gross indecency" was a criminal offence in the UK. He accepted chemical castration treatment, with DES (diethylstilbestrol), as an alternative to prison. Turing died in 1954, 16 days before his 42nd birthday, from cyanide poisoning. An inquest determined his death as a suicide, but it has been noted that the known evidence is also consistent with accidental poisoning.
In 2009, following an Internet campaign, British Prime Minister Gordon Brown made an official public apology on behalf of the British government for "the appalling way he was treated". Queen Elizabeth II granted Turing a posthumous pardon in 2013. The Alan Turing law is now an informal term for a 2017 law in the United Kingdom that retroactively pardoned men cautioned or convicted under historical legislation that outlawed homosexual acts.
- "Alan Turing's 100th Birthday - Mathematician, logician, cryptanalyst, scientist" | 2012-06-22 | 146 Upvotes 19 Comments
- "Happy Birthday, Alan Turing" | 2011-06-23 | 78 Upvotes 6 Comments
Bertrand Arthur William Russell, 3rd Earl Russell, (18 May 1872 – 2 February 1970) was a British philosopher, logician, mathematician, historian, writer, essayist, social critic, political activist, and Nobel laureate. At various points in his life, Russell considered himself a liberal, a socialist and a pacifist, although he also confessed that his sceptical nature had led him to feel that he had "never been any of these things, in any profound sense." Russell was born in Monmouthshire into one of the most prominent aristocratic families in the United Kingdom.
In the early 20th century, Russell led the British "revolt against idealism". He is considered one of the founders of analytic philosophy along with his predecessor Gottlob Frege, colleague G. E. Moore and protégé Ludwig Wittgenstein. He is widely held to be one of the 20th century's premier logicians. With A. N. Whitehead he wrote Principia Mathematica, an attempt to create a logical basis for mathematics, the quintessential work of classical logic. His philosophical essay "On Denoting" has been considered a "paradigm of philosophy". His work has had a considerable influence on mathematics, logic, set theory, linguistics, artificial intelligence, cognitive science, computer science (see type theory and type system) and philosophy, especially the philosophy of language, epistemology and metaphysics.
Russell was a prominent anti-war activist and a champion of anti-imperialism. Occasionally he advocated preventive nuclear war, before the opportunity provided by the atomic monopoly had passed, and he said he would "welcome with enthusiasm" world government. He went to prison for his pacifism during World War I. Later, Russell concluded that war against Adolf Hitler's Nazi Germany was a necessary "lesser of two evils", criticised Stalinist totalitarianism, attacked the involvement of the United States in the Vietnam War, and was an outspoken proponent of nuclear disarmament. In 1950, Russell was awarded the Nobel Prize in Literature "in recognition of his varied and significant writings in which he champions humanitarian ideals and freedom of thought".
- "Bertrand Russell" | 2019-05-05 | 12 Upvotes 5 Comments
Bonini's paradox, named after Stanford business professor Charles Bonini, explains the difficulty in constructing models or simulations that fully capture the workings of complex systems (such as the human brain).
- "Bonini's Paradox" | 2019-05-26 | 88 Upvotes 52 Comments
Bullshit (also bullcrap) is a common English expletive which may be shortened to the euphemism bull or the initialism B.S. In British English, "bollocks" is a comparable expletive. It is mostly a slang term and a profanity which means "nonsense", especially as a rebuke in response to communication or actions viewed as deceptive, misleading, disingenuous, unfair or false. As with many expletives, the term can be used as an interjection, or as many other parts of speech, and can carry a wide variety of meanings. A person who communicates nonsense on a given subject may be referred to as a "bullshit artist".
In philosophy and the psychology of cognition, the term "bullshit" is sometimes used to refer specifically to statements produced without particular concern for truth, as distinguished from a deliberate, manipulative lie intended to subvert the truth.
While the word is generally used in a deprecatory sense, it may imply a measure of respect for language skills or frivolity, among various other benign usages. In philosophy, Harry Frankfurt, among others, analyzed the concept of bullshit as related to, but distinct from, lying.
As an exclamation, "Bullshit!" conveys a measure of dissatisfaction with something or someone, but this usage need not be a comment on the truth of the matter.
- "Bullshit asymmetry principle" | 2018-10-10 | 85 Upvotes 37 Comments
Charles Babbage (26 December 1791 – 18 October 1871) was an English polymath. A mathematician, philosopher, inventor and mechanical engineer, Babbage originated the concept of a digital programmable computer.
Considered by some to be a father of the computer, Babbage is credited with inventing the first mechanical computer that eventually led to more complex electronic designs, though all the essential ideas of modern computers are to be found in Babbage's Analytical Engine. His varied work in other fields has led him to be described as "pre-eminent" among the many polymaths of his century.
Parts of Babbage's incomplete mechanisms are on display in the Science Museum in London. In 1991, a functioning difference engine was constructed from Babbage's original plans. Built to tolerances achievable in the 19th century, the success of the finished engine indicated that Babbage's machine would have worked.
- "Charles Babbage" | 2015-08-17 | 17 Upvotes 5 Comments
The Chinese room argument holds that a digital computer executing a program cannot be shown to have a "mind", "understanding" or "consciousness", regardless of how intelligently or human-like the program may make the computer behave. The argument was first presented by philosopher John Searle in his paper, "Minds, Brains, and Programs", published in Behavioral and Brain Sciences in 1980. It has been widely discussed in the years since. The centerpiece of the argument is a thought experiment known as the Chinese room.
The argument is directed against the philosophical positions of functionalism and computationalism, which hold that the mind may be viewed as an information-processing system operating on formal symbols. Specifically, the argument is intended to refute a position Searle calls strong AI: "The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds."
Although it was originally presented in reaction to the statements of artificial intelligence (AI) researchers, it is not an argument against the behavioural goals of AI research, because it does not limit the amount of intelligence a machine can display. The argument applies only to digital computers running programs and does not apply to machines in general.
- "Chinese room argument" | 2017-07-04 | 11 Upvotes 9 Comments
A cognitive bias is a systematic pattern of deviation from norm or rationality in judgment. Individuals create their own "subjective reality" from their perception of the input. An individual's construction of reality, not the objective input, may dictate their behavior in the world. Thus, cognitive biases may sometimes lead to perceptual distortion, inaccurate judgment, illogical interpretation, or what is broadly called irrationality.
Some cognitive biases are presumably adaptive. Cognitive biases may lead to more effective actions in a given context. Furthermore, allowing cognitive biases enables faster decisions, which can be desirable when timeliness is more valuable than accuracy, as illustrated in heuristics. Other cognitive biases are a "by-product" of human processing limitations, resulting from a lack of appropriate mental mechanisms (bounded rationality), the impact of an individual's constitution and biological state (see embodied cognition), or simply from a limited capacity for information processing.
A continually evolving list of cognitive biases has been identified over the last six decades of research on human judgment and decision-making in cognitive science, social psychology, and behavioral economics. Daniel Kahneman and Amos Tversky (1996) argue that cognitive biases have efficient practical implications for areas including clinical judgment, entrepreneurship, finance, and management.
- "List of cognitive biases" | 2008-09-23 | 36 Upvotes 25 Comments
The theory of belief functions, also referred to as evidence theory or Dempster–Shafer theory (DST), is a general framework for reasoning with uncertainty, with understood connections to other frameworks such as probability, possibility and imprecise probability theories. First introduced by Arthur P. Dempster in the context of statistical inference, the theory was later developed by Glenn Shafer into a general framework for modeling epistemic uncertainty—a mathematical theory of evidence. The theory allows one to combine evidence from different sources and arrive at a degree of belief (represented by a mathematical object called belief function) that takes into account all the available evidence.
In a narrow sense, the term Dempster–Shafer theory refers to the original conception of the theory by Dempster and Shafer. However, it is more common to use the term in the wider sense of the same general approach, as adapted to specific kinds of situations. In particular, many authors have proposed different rules for combining evidence, often with a view to handling conflicts in evidence better. The early contributions have also been the starting points of many important developments, including the transferable belief model and the theory of hints.
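The rule for combining evidence from different sources mentioned above can be sketched concretely. The following is an illustrative implementation of Dempster's original rule of combination (the sensor scenario, frame, and mass values are hypothetical): mass functions assign basic probability to subsets of a frame of discernment, products of masses are allocated to set intersections, and mass falling on the empty set (conflict) is normalised away.

```python
from itertools import product

# Dempster's rule of combination over a frame of discernment.
# A mass function maps frozensets of hypotheses to basic probability assignments.
def combine(m1, m2):
    joint = {}
    conflict = 0.0
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        a = b & c
        if a:
            joint[a] = joint.get(a, 0.0) + mb * mc
        else:
            conflict += mb * mc  # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {a: v / (1.0 - conflict) for a, v in joint.items()}

# Two sensors weighing in on whether a fault lies in component x or y;
# mass on the whole frame represents "don't know".
FRAME = frozenset({"x", "y"})
m1 = {frozenset({"x"}): 0.6, FRAME: 0.4}  # sensor 1: some belief in x
m2 = {frozenset({"y"}): 0.5, FRAME: 0.5}  # sensor 2: some belief in y
combined = combine(m1, m2)
print(combined)  # normalised belief masses on {x}, {y}, and the full frame
```

The many alternative combination rules referred to above differ chiefly in how they redistribute this conflict mass instead of normalising it away.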
- "Dempster–Shafer theory" | 2015-03-20 | 42 Upvotes 13 Comments
In social psychology, fundamental attribution error (FAE), also known as correspondence bias or attribution effect, is the tendency for people to under-emphasize situational explanations for an individual's observed behavior while over-emphasizing dispositional and personality-based explanations for their behavior. This effect has been described as "the tendency to believe that what people do reflects who they are".