Random Articles

Take a deeper look at what people are curious about.

🔗 Penrose Tiling

🔗 Mathematics

A Penrose tiling is an example of an aperiodic tiling. Here, a tiling is a covering of the plane by non-overlapping polygons or other shapes, and aperiodic means that shifting any tiling with these shapes by any finite distance, without rotation, cannot produce the same tiling. However, despite their lack of translational symmetry, Penrose tilings may have both reflection symmetry and fivefold rotational symmetry. Penrose tilings are named after mathematician and physicist Roger Penrose, who investigated them in the 1970s.

There are several different variations of Penrose tilings with different tile shapes. The original form of Penrose tiling used tiles of four different shapes, but this was later reduced to only two shapes: either two different rhombi, or two different quadrilaterals called kites and darts. The Penrose tilings are obtained by constraining the ways in which these shapes are allowed to fit together. This may be done in several different ways, including matching rules, substitution tiling or finite subdivision rules, cut and project schemes, and coverings. Even constrained in this manner, each variation yields infinitely many different Penrose tilings.

Penrose tilings are self-similar: they may be converted to equivalent Penrose tilings with different sizes of tiles, using processes called inflation and deflation. The pattern represented by every finite patch of tiles in a Penrose tiling occurs infinitely many times throughout the tiling. They are quasicrystals: implemented as a physical structure a Penrose tiling will produce diffraction patterns with Bragg peaks and five-fold symmetry, revealing the repeated patterns and fixed orientations of its tiles. The study of these tilings has been important in the understanding of physical materials that also form quasicrystals. Penrose tilings have also been applied in architecture and decoration, as in the floor tiling shown.
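
To make the inflation and deflation bookkeeping concrete, the sketch below (an illustration of the counting only, not a construction of the geometry) tracks tile counts for the kite-and-dart variant under one common substitution scheme, assumed here: each kite is replaced by two kites and a dart, and each dart by a kite and a dart. The kite-to-dart ratio then approaches the golden ratio.

```python
# Sketch: tile counts under repeated deflation of a kite-and-dart patch.
# The substitution counts (kite -> 2 kites + 1 dart, dart -> 1 kite + 1 dart)
# are one common bookkeeping for this tiling, assumed here for illustration.
def deflate(kites: int, darts: int) -> tuple[int, int]:
    return 2 * kites + darts, kites + darts

kites, darts = 1, 0          # start from a single kite
for step in range(1, 11):
    kites, darts = deflate(kites, darts)
    print(step, kites, darts, round(kites / darts, 6))
# The printed ratio converges to the golden ratio, about 1.618034.
```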

🔗 Nelson Rules

🔗 Statistics

Nelson rules are a method in process control of determining whether some measured variable is out of control (unpredictable versus consistent). Rules for detecting such "out-of-control" or non-random conditions were first postulated by Walter A. Shewhart in the 1920s. The Nelson rules were first published in the October 1984 issue of the Journal of Quality Technology in an article by Lloyd S. Nelson.

The rules are applied to a control chart on which the magnitude of some variable is plotted against time. The rules are based on the mean value and the standard deviation of the samples.

The eight rules apply to a chart of individual variable values; each flags a different non-random pattern in the plotted points.
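
A minimal sketch of how two of the rules are typically checked, assuming the commonly cited forms of rule 1 (a point more than three standard deviations from the mean) and rule 2 (nine points in a row on the same side of the mean):

```python
from statistics import mean, stdev

def rule_1(samples):
    """Indices of points more than three standard deviations from the mean."""
    m, s = mean(samples), stdev(samples)
    return [i for i, x in enumerate(samples) if abs(x - m) > 3 * s]

def rule_2(samples, run_length=9):
    """Indices where a run of `run_length` points sits on one side of the mean."""
    m = mean(samples)
    flagged, run, prev_side = [], 0, None
    for i, x in enumerate(samples):
        side = x > m
        run = run + 1 if side == prev_side else 1
        prev_side = side
        if run >= run_length:
            flagged.append(i)
    return flagged
```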

A second chart, the moving range chart, can also be used, but only with rules 1, 2, 3 and 4. Such a chart plots the range (maximum value minus minimum value) of N adjacent points against the time of the sample.

An example moving range: if N = 3 and values are 1, 3, 5, 3, 3, 2, 4, 5 then the sets of adjacent points are (1,3,5) (3,5,3) (5,3,3) (3,3,2) (3,2,4) (2,4,5) resulting in moving range values of (5-1) (5-3) (5-3) (3-2) (4-2) (5-2) = 4, 2, 2, 1, 2, 3.
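
The same computation as a short sketch (window size and values taken from the example above):

```python
def moving_range(values, n=3):
    """Range (max minus min) of each window of n adjacent points."""
    return [max(values[i:i + n]) - min(values[i:i + n])
            for i in range(len(values) - n + 1)]

print(moving_range([1, 3, 5, 3, 3, 2, 4, 5]))   # [4, 2, 2, 1, 2, 3]
```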

Applying these rules indicates when a potential "out of control" situation has arisen. However, there will always be some false alerts, and the more rules that are applied, the more false alerts will occur. For some processes, it may be beneficial to omit one or more rules. Equally, there may be some missed alerts, where a specific "out of control" situation goes undetected. Empirically, though, the detection accuracy is good.

🔗 Schönhage–Strassen Algorithm

🔗 Computer science

The Schönhage–Strassen algorithm is an asymptotically fast multiplication algorithm for large integers. It was developed by Arnold Schönhage and Volker Strassen in 1971. The run-time bit complexity is, in Big O notation, O(n · log n · log log n) for two n-digit numbers. The algorithm uses recursive fast Fourier transforms in rings with 2^n + 1 elements, a specific type of number theoretic transform.
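
To illustrate the underlying idea only (this is not the Schönhage–Strassen algorithm itself, which uses exact number-theoretic transforms modulo 2^n + 1 rather than floating-point FFTs), a digit-convolution multiply might look like the following sketch:

```python
import numpy as np

def fft_multiply(a: int, b: int, base: int = 10) -> int:
    """Multiply non-negative integers by convolving their digit sequences."""
    da = [int(d) for d in str(a)][::-1]     # digits, least significant first
    db = [int(d) for d in str(b)][::-1]
    n = 1
    while n < len(da) + len(db):
        n *= 2                              # pad to a power of two
    # The pointwise product of the transforms is the convolution of the
    # digit sequences, i.e. long multiplication without carry propagation.
    conv = np.fft.ifft(np.fft.fft(da, n) * np.fft.fft(db, n)).real
    result, carry = 0, 0
    for i, c in enumerate(conv):
        carry, digit = divmod(int(round(c)) + carry, base)
        result += digit * base ** i
    return result + carry * base ** n

assert fft_multiply(123456789, 987654321) == 123456789 * 987654321
```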

The Schönhage–Strassen algorithm was the asymptotically fastest multiplication method known from 1971 until 2007, when a new method, Fürer's algorithm, was announced with lower asymptotic complexity; however, Fürer's algorithm currently only achieves an advantage for astronomically large values and is used only in Basic Polynomial Algebra Subprograms (BPAS) (see Galactic algorithms).

In practice the Schönhage–Strassen algorithm starts to outperform older methods such as Karatsuba and Toom–Cook multiplication for numbers beyond 2^(2^15) to 2^(2^17) (10,000 to 40,000 decimal digits). The GNU Multi-Precision Library uses it for values of at least 1728 to 7808 64-bit words (33,000 to 150,000 decimal digits), depending on architecture. There is a Java implementation of Schönhage–Strassen which uses it above 74,000 decimal digits.

Applications of the Schönhage–Strassen algorithm include mathematical empiricism, such as the Great Internet Mersenne Prime Search and computing approximations of π, as well as practical applications such as Kronecker substitution, in which multiplication of polynomials with integer coefficients can be efficiently reduced to large integer multiplication; this is used in practice by GMP-ECM for Lenstra elliptic curve factorization.
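
A minimal sketch of the Kronecker-substitution idea (the packing width is a simplifying assumption for non-negative coefficients; real implementations such as GMP-ECM handle signed coefficients and choose tighter bounds):

```python
def kronecker_multiply(p, q, coeff_bits=32):
    """Multiply polynomials (coefficient lists, lowest degree first)
    via a single big-integer multiplication."""
    # Space the coefficients far enough apart that no column of the product
    # can overflow into its neighbour.
    shift = 2 * coeff_bits + max(len(p), len(q)).bit_length()
    pack = lambda poly: sum(c << (i * shift) for i, c in enumerate(poly))
    prod, mask, out = pack(p) * pack(q), (1 << shift) - 1, []
    while prod:
        out.append(prod & mask)
        prod >>= shift
    return out

# (1 + 2x + 3x^2)(4 + 5x) = 4 + 13x + 22x^2 + 15x^3
print(kronecker_multiply([1, 2, 3], [4, 5]))   # [4, 13, 22, 15]
```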

🔗 GNU Guix System

🔗 Software 🔗 Software/Computing

GNU Guix System or Guix System (previously GuixSD) is a rolling-release, free and open-source Linux distribution built around the GNU Guix package manager. It enables declarative operating system configuration and allows reliable system upgrades that can easily be rolled back. It uses the GNU Shepherd init system and the Linux-libre kernel, with support for the GNU Hurd kernel under development. On February 3, 2015, the distribution was added to the Free Software Foundation's list of free Linux distributions. The Guix package manager and the Guix System drew inspiration from the Nix package manager and NixOS, respectively.

🔗 List of selected stars for navigation

🔗 Lists 🔗 Transport 🔗 Transport/Maritime

Fifty-eight selected navigational stars are given a special status in the field of celestial navigation. Of the approximately 6,000 stars visible to the naked eye under optimal conditions, the selected stars are among the brightest and span 38 constellations of the celestial sphere, from declination −70° to +89°. Many of the selected stars were named in antiquity by the Babylonians, Greeks, Romans, and Arabs.

The star Polaris, often called the "North Star", is treated specially due to its proximity to the north celestial pole. When navigating in the Northern Hemisphere, special techniques can be used with Polaris to determine latitude or gyrocompass error. The other 57 selected stars have daily positions given in nautical almanacs, aiding the navigator in efficiently performing observations on them. A second group of 115 "tabulated stars" can also be used for celestial navigation, but are often less familiar to the navigator and require extra calculations.

For purposes of identification, the positions of navigational stars — expressed as declination and sidereal hour angle — are often rounded to the nearest degree. In addition to tables, star charts provide an aid to the navigator in identifying the navigational stars, showing constellations, relative positions, and brightness.
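
A small sketch of the relation between catalogue coordinates and almanac coordinates (assuming only the standard conversion; the star and values here are illustrative): sidereal hour angle is measured westward from the vernal equinox, so SHA = 360° minus right ascension expressed in degrees.

```python
def sidereal_hour_angle(ra_hours: float) -> float:
    """Convert right ascension (hours) to sidereal hour angle (degrees)."""
    return (360.0 - ra_hours * 15.0) % 360.0    # 15 degrees per hour of RA

# Vega's right ascension of roughly 18.6 hours gives an SHA near 81 degrees.
print(round(sidereal_hour_angle(18.62), 1))
```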

🔗 Quadratic Voting

🔗 Politics

Quadratic voting is a collective decision-making procedure in which individuals allocate votes to express the degree of their preferences, rather than just the direction of their preferences. In doing so, it helps voters address problems such as the voting paradox and the limits of simple majority rule. Quadratic voting works by allowing users to 'pay' for additional votes on a given matter so as to express their preference on given issues more strongly, resulting in outcomes aligned with the highest willingness to pay, rather than the outcome preferred by the majority regardless of the intensity of individual preferences. The payment for votes may be through either an artificial or a real currency (e.g. with tokens distributed equally among voting members, or with real money). Under various sets of conditions, quadratic voting has been shown to be much more efficient than one-person-one-vote at aligning collective decisions with doing the most good for the most people. Quadratic voting (abbreviated QV) is considered a promising alternative to existing democratic structures for solving some of the known failure modes of one-person-one-vote democracies. Quadratic voting is a variant of cumulative voting in the class of cardinal voting methods. It differs from cumulative voting in that the relation between the cost and the number of votes is quadratic rather than linear.

Quadratic voting is based on market principles: each voter is given a budget of vote credits that they may spend as they see fit to influence the outcome of a range of decisions. If a participant has a strong preference for or against a specific decision, additional votes can be allocated to demonstrate the strength of that preference. A vote pricing rule determines the cost of additional votes, with each additional vote becoming increasingly expensive. This rising cost means the credits a voter spends reveal how strongly they care about the particular decision. The money collected is eventually returned to the voters on a per capita basis. Lalley and Weyl conducted research demonstrating that this decision-making procedure becomes increasingly efficient as the number of voters grows. The simplified formula for how quadratic voting functions is:

cost to the voter = (number of votes)².

The quadratic nature of the voting means that a voter can use their votes more efficiently by spreading them across many issues. For example, a voter with a budget of 16 vote credits can apply 1 vote credit to each of 16 issues. However, if the individual has a stronger passion or sentiment about one issue, they could allocate 4 votes to that single issue, at a cost of 16 credits, effectively using up their entire budget. This mechanism creates a large incentive to buy, sell, or trade votes; using an anonymous ballot provides some protection against vote buying or trading, since such exchanges cannot be verified by the buyer or trader.
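
A minimal sketch of the pricing rule and the budget trade-off described above (illustrative only):

```python
import math

def cost(votes: int) -> int:
    """Credit cost of casting `votes` votes on a single issue."""
    return votes ** 2

def max_votes(budget: int) -> int:
    """Most votes a budget can buy on one issue: floor(sqrt(budget))."""
    return math.isqrt(budget)

budget = 16
print(cost(1))            # 1 credit: one vote on each of 16 issues fits the budget
print(cost(4))            # 16 credits: four votes on a single issue uses it all
print(max_votes(budget))  # 4
```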

🔗 White House Reconstruction

🔗 Architecture 🔗 National Register of Historic Places 🔗 Architecture/Historic houses

The White House Reconstruction, also known as the Truman Reconstruction, was a comprehensive dismantling and rebuilding of the interior of the White House from 1949 to 1952. A century and a half of wartime destruction and rebuilding, hurried renovations, the addition of new services and technologies, the added third floor, and inadequate foundations had brought the Executive Residence portion of the White House Complex to the point of near collapse.

In 1948, architectural and engineering investigations deemed it unsafe for occupancy. President Harry S. Truman, his family, and the entire residence staff were relocated across the street to Blair House. For over three years, the White House was gutted, expanded, and rebuilt.

🔗 Beflix (Bell Labs Flicks)

🔗 Computing 🔗 Animation

BEFLIX is the name of the first embedded domain-specific language for computer animation, invented by Ken Knowlton at Bell Labs in 1963. The name is a contraction of "Bell Flicks". Knowlton used BEFLIX to create animated films for educational and engineering purposes. He also collaborated with the artist Stan Vanderbeek at Bell Labs to create a series of computer-animated films called Poemfields between 1966 and 1969.

BEFLIX was developed on the IBM 7090 mainframe computer, using a Stromberg-Carlson SC4020 microfilm recorder for output. The programming environment targeted by BEFLIX consisted of a FORTRAN II implementation with FORTRAN II Assembly Program (FAP) macros. The first version of BEFLIX was implemented through the FAP macro facility. A later version targeting FORTRAN IV resembled a more traditional subroutine library and lost some of the unique flavor of the language.

Pixels are produced by writing characters to the screen of the microfilm recorder with a defocused electron beam. The SC4020 used a charactron tube to expose microfilm. In BEFLIX, the electron beam is defocused to draw pixels as blurred character shapes. Characters are selected to create a range of grayscale values for pixels. The microfilm recorder is not connected directly to the 7090, but communicates through magnetic tape. BEFLIX writes the magnetic tape output on the 7090 and the film recorder reads the tape to create the film output. BEFLIX also supports a preview mode where selected frames of the output are written to the line printer.
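
As a rough illustration of grayscale-by-character (a modern sketch, not BEFLIX code and not the SC4020's actual character set), each pixel value can be mapped onto a ramp of characters ordered from light to dark:

```python
RAMP = " .:-=+*#%@"        # hypothetical light-to-dark ordering

def to_character(value: int, levels: int = 256) -> str:
    """Map a grayscale value in [0, levels) onto the character ramp."""
    return RAMP[value * (len(RAMP) - 1) // (levels - 1)]

row = [0, 64, 128, 192, 255]
print("".join(to_character(v) for v in row))   # " :=#@"
```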

🔗 Gall's law

🔗 Biography 🔗 Systems 🔗 Biography/arts and entertainment

John Gall (September 18, 1925 – December 15, 2014) was an American author and retired pediatrician. Gall is known for his 1975 book General systemantics: an essay on how systems work, and especially how they fail..., a critique of systems theory. One of the statements from this book has become known as Gall's law: a complex system that works is invariably found to have evolved from a simple system that worked.
