Random Articles (Page 3)
Take a deep dive into what people are curious about.
Flow-Matic
FLOW-MATIC, originally known as B-0 (Business Language version 0), was the first English-like data processing language. It was developed for the UNIVAC I at Remington Rand under Grace Hopper from 1955 to 1959, and helped shape the development of COBOL.
Discussed on
- "Flow-Matic" | 2024-06-03 | 11 Upvotes 3 Comments
999-Year Lease
A 999-year lease, under historic common law, is an essentially permanent lease of property. Such leases are found mainly in Britain, its former colonies, and the Commonwealth.
One former colony, the Republic of Mauritius, established legal precedent on 30 July 2008 in respect of a 'permanent lease' on St. Brandon (The Raphael Fishing Company Ltd v. The State of Mauritius & Anor (Mauritius) [2008] UKPC 43 (30 July 2008)).
Discussed on
- "999-Year Lease" | 2019-07-18 | 39 Upvotes 28 Comments
August Engelhardt
August Engelhardt (27 November 1875 – 6 May 1919) was a German author and founder of a sect of sun worshipers.
Discussed on
- "August Engelhardt" | 2020-04-08 | 29 Upvotes 4 Comments
Razor 1911
Razor 1911 (RZR) is a warez group and demogroup founded in Norway in 1986. It was the first such group to be founded exclusively as a demogroup, before moving into warez in 1987. According to the US Justice Department, Razor 1911 is the oldest software cracking group still active on the internet. Razor 1911 ran the diskmag 'Propaganda' until 1995.
Discussed on
- "Razor 1911" | 2023-10-30 | 242 Upvotes 118 Comments
Angel Problem
The angel problem is a question in combinatorial game theory proposed by John Horton Conway. The game is commonly referred to as the Angels and Devils game. The game is played by two players called the angel and the devil. It is played on an infinite chessboard (or equivalently the points of a 2D lattice). The angel has a power k (a natural number 1 or higher), specified before the game starts. The board starts empty with the angel at the origin. On each turn, the angel jumps to a different empty square which could be reached by at most k moves of a chess king, i.e. the distance from the starting square is at most k in the infinity norm. The devil, on its turn, may add a block on any single square not containing the angel. The angel may leap over blocked squares, but cannot land on them. The devil wins if the angel is unable to move. The angel wins by surviving indefinitely.
The angel problem is: can an angel with high enough power win?
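The angel's move rule can be sketched as a simple legality check, using the infinity-norm distance described above. This is an illustrative fragment, not a solver; the function names are our own:

```python
# Minimal sketch of move legality in the angel problem.
# A k-angel at `pos` may jump to any square within Chebyshev
# (infinity-norm) distance k that the devil has not blocked.

def angel_moves(pos, k, blocked):
    """All squares a k-angel at `pos` may legally jump to."""
    x, y = pos
    return {
        (x + dx, y + dy)
        for dx in range(-k, k + 1)
        for dy in range(-k, k + 1)
        if (dx, dy) != (0, 0) and (x + dx, y + dy) not in blocked
    }

def devil_wins(pos, k, blocked):
    """The devil wins once the angel has no legal move left."""
    return not angel_moves(pos, k, blocked)

# A 1-angel on an empty board has the 8 king moves; a 2-angel has 24.
print(len(angel_moves((0, 0), 1, set())))  # → 8
print(len(angel_moves((0, 0), 2, set())))  # → 24
```

A 1-angel is simply a chess king, and Conway showed early on that the devil can trap a king; the power of the proofs cited below is that even small k > 1 already suffices for the angel.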
There must exist a winning strategy for one of the players. If the devil can force a win then it can do so in a finite number of moves. If the devil cannot force a win then there is always an action that the angel can take to avoid losing and a winning strategy for it is always to pick such a move. More abstractly, the "pay-off set" (i.e., the set of all plays in which the angel wins) is a closed set (in the natural topology on the set of all plays), and it is known that such games are determined. Of course, for any infinite game, if player 2 doesn't have a winning strategy, player 1 can always pick a move that leads to a position where player 2 doesn't have a winning strategy, but in some games, simply playing forever doesn't confer a win to player 1, and that's why undetermined games may exist.
Conway offered a reward for a general solution to this problem ($100 for a winning strategy for an angel of sufficiently high power, and $1000 for a proof that the devil can win irrespective of the angel's power). Progress was made first in higher dimensions. In late 2006, the original problem was solved when independent proofs appeared, showing that an angel can win. Bowditch proved that a 4-angel (that is, an angel with power k = 4) can win, and Máthé and Kloster gave proofs that a 2-angel can win. It was never confirmed by Conway who was to receive the prize, or whether each published and subsequent solution would also earn $100.
Discussed on
- "Angel Problem" | 2016-04-20 | 128 Upvotes 32 Comments
Karmarkar's algorithm – Patent controversy – can mathematics be patented?
Karmarkar's algorithm is an algorithm introduced by Narendra Karmarkar in 1984 for solving linear programming problems. It was the first reasonably efficient algorithm that solves these problems in polynomial time. The ellipsoid method is also polynomial time but proved to be inefficient in practice.
Denoting by n the number of variables and by L the number of bits of input to the algorithm, Karmarkar's algorithm requires O(n^3.5 L) operations on O(L)-digit numbers, as compared to O(n^6 L) such operations for the ellipsoid algorithm. The runtime of Karmarkar's algorithm is thus O(n^3.5 L^2 · log L · log log L) using FFT-based multiplication (see Big O notation).
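Taken at face value, the two operation counts differ by a factor of order n^2.5, since the L factor cancels in the ratio. A quick back-of-the-envelope check (illustrative only; constants hidden by the big-O are ignored):

```python
# Ratio of ellipsoid to Karmarkar operation counts, n^6 L / n^3.5 L = n^2.5.
# Constants and lower-order terms hidden by big-O notation are ignored.

def speedup(n):
    """Asymptotic operation-count ratio for an LP with n variables."""
    return n ** 6 / n ** 3.5  # = n ** 2.5

for n in (10, 100, 1000):
    print(n, speedup(n))
```

For n = 100 variables the asymptotic ratio is already 100^2.5 = 100,000, which is why Karmarkar's result, unlike the ellipsoid method, was seen as practically (not just theoretically) significant.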
Karmarkar's algorithm falls within the class of interior-point methods: the current guess for the solution does not follow the boundary of the feasible set as in the simplex method, but moves through the interior of the feasible region, improving the approximation of the optimal solution by a definite fraction with every iteration and converging to an optimal solution with rational data.
Corecursion
In computer science, corecursion is a type of operation that is dual to recursion. Whereas recursion works analytically, starting on data further from a base case and breaking it down into smaller data and repeating until one reaches a base case, corecursion works synthetically, starting from a base case and building it up, iteratively producing data further removed from a base case. Put simply, corecursive algorithms use the data that they themselves produce, bit by bit, as they become available, and needed, to produce further bits of data. A similar but distinct concept is generative recursion which may lack a definite "direction" inherent in corecursion and recursion.
Where recursion allows programs to operate on arbitrarily complex data, so long as they can be reduced to simple data (base cases), corecursion allows programs to produce arbitrarily complex and potentially infinite data structures, such as streams, so long as it can be produced from simple data (base cases) in a sequence of finite steps. Where recursion may not terminate, never reaching a base state, corecursion starts from a base state, and thus produces subsequent steps deterministically, though it may proceed indefinitely (and thus not terminate under strict evaluation), or it may consume more than it produces and thus become non-productive. Many functions that are traditionally analyzed as recursive can alternatively, and arguably more naturally, be interpreted as corecursive functions that are terminated at a given stage, for example recurrence relations such as the factorial.
Corecursion can produce both finite and infinite data structures as results, and may employ self-referential data structures. Corecursion is often used in conjunction with lazy evaluation, to produce only a finite subset of a potentially infinite structure (rather than trying to produce an entire infinite structure at once). Corecursion is a particularly important concept in functional programming, where corecursion and codata allow total languages to work with infinite data structures.
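As a minimal sketch, Python generators exhibit exactly this pattern: the stream of factorials is built outward from the base case, and lazy consumption takes only the finite prefix needed (the function name is our own):

```python
# Corecursive factorials: start from the base case 1 and build outward,
# producing an infinite stream that is only consumed lazily.
from itertools import islice

def factorials():
    """Yield 0!, 1!, 2!, ... corecursively, never recursing downward."""
    n, acc = 0, 1
    while True:
        yield acc      # produce the next element of the stream
        n += 1
        acc *= n       # each output is built from the previous one

print(list(islice(factorials(), 6)))  # → [1, 1, 2, 6, 24, 120]
```

Note the contrast with the usual recursive factorial: here nothing is broken down toward a base case; each value is synthesized from the one before it, and termination is imposed from outside by `islice`.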
Discussed on
- "Corecursion" | 2014-05-08 | 167 Upvotes 62 Comments
Sussman anomaly
The Sussman anomaly is a problem in artificial intelligence, first described by Gerald Sussman, that illustrates a weakness of noninterleaved planning algorithms, which were prominent in the early 1970s. In the problem, three blocks (labeled A, B, and C) rest on a table. The agent must stack the blocks such that A is atop B, which in turn is atop C. However, it may only move one block at a time. The problem starts with B on the table, C atop A, and A on the table.
However, noninterleaved planners typically separate the goal (stack A atop B atop C) into subgoals, such as:
- Goal 1: get A atop B
- Goal 2: get B atop C
Suppose the planner starts by pursuing Goal 1. The straightforward solution is to move C out of the way, then move A atop B. But while this sequence accomplishes Goal 1, the agent cannot now pursue Goal 2 without undoing Goal 1, since both A and B must be moved atop C.
If instead the planner starts with Goal 2, the most efficient solution is to move B. But again, the planner cannot pursue Goal 1 without undoing Goal 2.
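The first failure case can be made concrete with a minimal blocks-world simulation (an illustrative sketch; the state representation and helper names are ours):

```python
# Minimal blocks world: each block maps to what it rests on,
# and only a block with nothing on top of it ("clear") may move.

def clear(state, block):
    """A block is clear if no other block rests on it."""
    return all(under != block for under in state.values())

def move(state, block, dest):
    """Move a clear block onto the table or onto another clear block."""
    assert clear(state, block), f"{block} is not clear"
    assert dest == "table" or clear(state, dest), f"{dest} is not clear"
    new = dict(state)
    new[block] = dest
    return new

# Start: B on the table, C atop A, A on the table.
start = {"A": "table", "B": "table", "C": "A"}

# Pursue Goal 1 (A atop B) first: move C aside, then A onto B.
s = move(move(start, "C", "table"), "A", "B")
assert s["A"] == "B"      # Goal 1 achieved...
assert not clear(s, "B")  # ...but B is now buried under A, so Goal 2
                          # (B atop C) cannot proceed without undoing Goal 1.
```

The `assert` in `move` is exactly the constraint a noninterleaved planner trips over: the actions that complete one subgoal make the blocks needed for the other subgoal unmovable.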
The problem was first identified by Sussman as part of his PhD research. Sussman (and his supervisor, Marvin Minsky) believed that intelligence requires a list of exceptions or tricks, and developed a modular planning system for "debugging" plans. Most modern planning systems can handle this anomaly, but it is still useful for explaining why planning is non-trivial.
Discussed on
- "Sussman anomaly" | 2018-01-15 | 151 Upvotes 41 Comments
Palmer Notation
Palmer notation (sometimes called the "Military System" and named for 19th-century American dentist Dr. Corydon Palmer from Warren, Ohio) is a dental notation (tooth numbering system). Despite the adoption of the FDI World Dental Federation notation (ISO 3950) in most of the world and by the World Health Organization, the Palmer notation continued to be the overwhelmingly preferred method used by orthodontists, dental students and practitioners in the United Kingdom as of 1998.
The notation was originally termed the Zsigmondy system after Hungarian dentist Adolf Zsigmondy, who developed the idea in 1861 using a Zsigmondy cross to record quadrants of tooth positions. Adult teeth were numbered 1 to 8, and the child primary dentition (also called deciduous, milk or baby teeth) were depicted with a quadrant grid using Roman numerals I, II, III, IV, V to number the teeth from the midline. Palmer changed this to A, B, C, D, E, which made it less confusing and less prone to errors in interpretation.
The Palmer notation consists of a symbol (⏌⎿ ⏋⎾) designating in which quadrant the tooth is found and a number indicating the position from the midline. Adult teeth are numbered 1 to 8, with deciduous (baby) teeth indicated by a letter A to E. Hence the left and right maxillary central incisors would have the same number, "1", but the right one would have the symbol "⏌" underneath it, while the left one would have "⎿".
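Because both Palmer and FDI count teeth 1–8 (or A–E for deciduous teeth) outward from the midline, converting between them only requires mapping the quadrant. A small illustrative converter (the function and table names are ours, not part of either standard):

```python
# Illustrative Palmer -> FDI (ISO 3950) conversion. Both systems number
# teeth from the midline; only the quadrant encoding differs.

FDI_QUADRANT = {
    ("upper", "right"): 1, ("upper", "left"): 2,
    ("lower", "left"): 3, ("lower", "right"): 4,
}

def palmer_to_fdi(arch, side, tooth):
    """`tooth` is 1-8 for permanent teeth or 'A'-'E' for deciduous ones."""
    q = FDI_QUADRANT[(arch, side)]
    if isinstance(tooth, str):               # deciduous: A-E -> 1-5,
        return (q + 4) * 10 + (ord(tooth.upper()) - ord("A") + 1)  # quadrants 5-8
    return q * 10 + tooth                    # permanent: quadrants 1-4

print(palmer_to_fdi("upper", "right", 1))   # right maxillary central incisor → 11
print(palmer_to_fdi("lower", "left", "E"))  # deciduous lower-left second molar → 75
```

The quadrant table follows the FDI convention of numbering clockwise from the patient's upper right, with 5–8 reserved for the deciduous dentition.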
Discussed on
- "Palmer Notation" | 2022-04-07 | 22 Upvotes 4 Comments
M.2
M.2, formerly known as the Next Generation Form Factor (NGFF), is a specification for internally mounted computer expansion cards and associated connectors. M.2 replaces the mSATA standard, which uses the PCI Express Mini Card physical card layout and connectors. Employing a more flexible physical specification, M.2 allows different module widths and lengths, and, paired with more advanced interfacing features, is generally more suitable than mSATA for solid-state storage applications, particularly in smaller devices such as ultrabooks and tablets.
Computer bus interfaces provided through the M.2 connector are PCI ExpressΒ 3.0 (up to four lanes), Serial ATAΒ 3.0, and USBΒ 3.0 (a single logical port for each of the latter two). It is up to the manufacturer of the M.2 host or module to select which interfaces are to be supported, depending on the desired level of host support and device type. The M.2 connector keying notches denote various purposes and capabilities of both M.2 hosts and devices. The unique key notches of M.2 modules also prevent them from being inserted into incompatible host connectors.
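The keying scheme can be summarized as a small lookup table. The interface lists below paraphrase commonly documented key assignments and are illustrative rather than a restatement of the specification; the names are ours:

```python
# Sketch of common M.2 key notches and the interfaces typically exposed.
# (Summarized key assignments; consult the M.2 specification for the full list.)

M2_KEYS = {
    "B": ["PCIe x2", "SATA", "USB 3.0"],
    "M": ["PCIe x4", "SATA"],
    "A": ["PCIe x2", "USB 2.0"],  # commonly used for Wi-Fi/Bluetooth modules
    "E": ["PCIe x2", "USB 2.0"],  # commonly used for Wi-Fi/Bluetooth modules
}

def fits(module_keys, socket_key):
    """A module physically fits a socket only if notched for that socket's key."""
    return socket_key in module_keys

# A B+M-keyed SATA SSD fits both B- and M-keyed sockets;
# an M-keyed PCIe x4 SSD fits only M-keyed sockets.
print(fits({"B", "M"}, "M"))  # → True
print(fits({"M"}, "B"))       # → False
```

This is why many SATA SSDs are notched B+M (maximizing socket compatibility at two PCIe lanes), while high-speed NVMe drives are M-keyed only.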
The M.2 specification supports NVM Express (NVMe) as the logical device interface for M.2 PCI Express SSDs, in addition to supporting legacy Advanced Host Controller Interface (AHCI) at the logical interface level. While the support for AHCI ensures software-level backward compatibility with legacy SATA devices and legacy operating systems, NVM Express is designed to fully utilize the capability of high-speed PCI Express storage devices to perform many I/O operations in parallel.
Discussed on
- "M.2" | 2018-06-03 | 17 Upvotes 4 Comments