Random Articles (Page 2)
Take a closer look at what people are curious about.
Tide-Predicting Machine No. 2
Tide-Predicting Machine No. 2, also known as Old Brass Brains, was a special-purpose mechanical computer that used gears, pulleys, chains, and other mechanical components to compute the height and time of high and low tides for specific locations. It could perform tide calculations much faster than a person could with pencil and paper. The U.S. Coast and Geodetic Survey put the machine into operation in 1910 and used it until 1965, when it was replaced by an electronic computer.
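The calculation the machine mechanized is harmonic tide prediction: the tide at a port is modeled as a sum of cosine constituents, each with its own amplitude, period, and phase, and the machine summed those terms mechanically. A minimal numerical sketch (the M2 and S2 periods are the real lunar and solar semidiurnal periods; the mean level, amplitudes, and phases are made-up illustrative values, not data for any actual port):

```python
import math

# Harmonic tide model: h(t) = H0 + sum_i A_i * cos(2*pi*t/T_i - phi_i)
# Constituent periods are real (M2 ~ 12.4206 h, S2 = 12 h); the amplitudes
# and phases are illustrative placeholders.
MEAN_LEVEL = 1.50                      # H0, metres above datum (hypothetical)
CONSTITUENTS = [                       # (amplitude m, period h, phase rad)
    (0.80, 12.4206, 0.0),              # M2: principal lunar semidiurnal
    (0.30, 12.0000, 0.0),              # S2: principal solar semidiurnal
]

def tide_height(t_hours):
    """Predicted water level at time t (hours)."""
    return MEAN_LEVEL + sum(
        a * math.cos(2 * math.pi * t_hours / period - phase)
        for a, period, phase in CONSTITUENTS
    )

# Scan one day in 1-minute steps to locate the highest tide -- the kind of
# tabulation the machine produced mechanically.
times = [m / 60 for m in range(24 * 60)]
t_high = max(times, key=tide_height)
print(f"high tide near t={t_high:.2f} h, height {tide_height(t_high):.2f} m")
```

The real machine summed 37 constituents simultaneously; the two-term model above is just the smallest non-trivial instance of the same calculation.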
Discussed on
- "Tide-Predicting Machine No. 2" | 2018-08-12 | 82 Upvotes 13 Comments
Kernel Embedding of Distributions
In machine learning, the kernel embedding of distributions (also called the kernel mean or mean map) comprises a class of nonparametric methods in which a probability distribution is represented as an element of a reproducing kernel Hilbert space (RKHS). A generalization of the individual data-point feature mapping done in classical kernel methods, the embedding of distributions into infinite-dimensional feature spaces can preserve all of the statistical features of arbitrary distributions, while allowing one to compare and manipulate distributions using Hilbert space operations such as inner products, distances, projections, linear transformations, and spectral analysis. This learning framework is very general and can be applied to distributions over any space on which a sensible kernel function (measuring similarity between elements of that space) may be defined. For example, various kernels have been proposed for learning from data such as real-valued vectors, discrete classes/categories, strings, graphs/networks, images, time series, manifolds, dynamical systems, and other structured objects. The theory behind kernel embeddings of distributions has been developed primarily by Alex Smola, Le Song, Arthur Gretton, and Bernhard Schölkopf; reviews of recent work on the topic are available in the literature.
The analysis of distributions is fundamental in machine learning and statistics, and many algorithms in these fields rely on information theoretic approaches such as entropy, mutual information, or Kullback–Leibler divergence. However, to estimate these quantities, one must first either perform density estimation, or employ sophisticated space-partitioning/bias-correction strategies which are typically infeasible for high-dimensional data. Commonly, methods for modeling complex distributions rely on parametric assumptions that may be unfounded or computationally challenging (e.g. Gaussian mixture models), while nonparametric methods like kernel density estimation (Note: the smoothing kernels in this context have a different interpretation than the kernels discussed here) or characteristic function representation (via the Fourier transform of the distribution) break down in high-dimensional settings.
Methods based on the kernel embedding of distributions sidestep these problems and also possess the following advantages:
- Data may be modeled without restrictive assumptions about the form of the distributions and relationships between variables
- Intermediate density estimation is not needed
- Practitioners may specify the properties of a distribution most relevant for their problem (incorporating prior knowledge via choice of the kernel)
- If a characteristic kernel is used, then the embedding can uniquely preserve all information about a distribution, while thanks to the kernel trick, computations on the potentially infinite-dimensional RKHS can be implemented in practice as simple Gram matrix operations
- Dimensionality-independent rates of convergence for the empirical kernel mean (estimated using samples from the distribution) to the kernel embedding of the true underlying distribution can be proven.
- Learning algorithms based on this framework exhibit good generalization ability and finite sample convergence, while often being simpler and more effective than information theoretic methods
Thus, learning via the kernel embedding of distributions offers a principled drop-in replacement for information theoretic approaches and is a framework which not only subsumes many popular methods in machine learning and statistics as special cases, but also can lead to entirely new learning algorithms.
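As a concrete instance of comparing distributions through their embeddings, the maximum mean discrepancy (MMD) is the RKHS distance between two kernel mean embeddings, and its empirical estimate reduces to the Gram-matrix averages promised by the kernel trick. A minimal sketch with a Gaussian RBF kernel on one-dimensional samples (the sample sizes, bandwidth, and the two toy distributions are arbitrary choices for illustration):

```python
import math
import random

def rbf(a, b, bandwidth=1.0):
    """Gaussian RBF kernel k(a, b) = exp(-(a - b)^2 / (2 * bandwidth^2))."""
    return math.exp(-((a - b) ** 2) / (2 * bandwidth ** 2))

def mmd_squared(xs, ys, kernel=rbf):
    """Biased estimate of MMD^2 = ||mu_P - mu_Q||^2 in the RKHS:
    mean k(x, x') + mean k(y, y') - 2 * mean k(x, y)."""
    kxx = sum(kernel(a, b) for a in xs for b in xs) / len(xs) ** 2
    kyy = sum(kernel(a, b) for a in ys for b in ys) / len(ys) ** 2
    kxy = sum(kernel(a, b) for a in xs for b in ys) / (len(xs) * len(ys))
    return kxx + kyy - 2 * kxy

random.seed(0)
x  = [random.gauss(0, 1) for _ in range(200)]   # sample from N(0, 1)
y1 = [random.gauss(0, 1) for _ in range(200)]   # same distribution
y2 = [random.gauss(2, 1) for _ in range(200)]   # shifted distribution

# Same distribution -> MMD^2 near zero; different -> clearly positive.
print(mmd_squared(x, y1), mmd_squared(x, y2))
```

Because the RBF kernel is characteristic, the population MMD is zero exactly when the two distributions coincide; estimators of this form underlie two-sample tests built on kernel embeddings.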
Discussed on
- "Kernel Embedding of Distributions" | 2014-02-15 | 13 Upvotes 3 Comments
Nebraska Furniture Mart
Nebraska Furniture Mart is the largest home furnishing store in North America selling furniture, flooring, appliances and electronics. NFM was founded in 1937 by Belarus-born Rose Blumkin, universally known as Mrs. B., in Omaha, Nebraska, United States. Under the motto "sell cheap and tell the truth," she worked in the business until age 103. In 1983, Mrs. B. sold a majority interest to Berkshire Hathaway in a handshake deal with Warren Buffett.
Discussed on
- "Nebraska Furniture Mart" | 2019-09-21 | 170 Upvotes 72 Comments
C++0x (upcoming C++ standard, includes lambda functions)
C++11 is a version of the standard for the programming language C++. It was approved by the International Organization for Standardization (ISO) on 12 August 2011, replacing C++03, and was in turn superseded by C++14 on 18 August 2014 and later by C++17. The name follows the tradition of naming language versions after the publication year of the specification, though the standard was formerly called C++0x because it was expected to be published before 2010.
Although one of the design goals was to prefer changes to the libraries over changes to the core language, C++11 does make several additions to the core language. Areas of the core language that were significantly improved include multithreading support, generic programming support, uniform initialization, and performance. Significant changes were also made to the C++ Standard Library, incorporating most of the C++ Technical Report 1 (TR1) libraries, except the library of mathematical special functions.
C++11 was published as ISO/IEC 14882:2011 in September 2011 and is available for a fee. The working draft most similar to the published C++11 standard is N3337, dated 16 January 2012; it has only editorial corrections from the C++11 standard.
Discussed on
- "C++0x (upcoming C++ standard, includes lambda functions)" | 2010-08-11 | 19 Upvotes 25 Comments
Panopticon
The panopticon is a type of institutional building and a system of control designed by the English philosopher and social theorist Jeremy Bentham in the 18th century. The concept of the design is to allow all prisoners of an institution to be observed by a single security guard, without the inmates being able to tell whether they are being watched.
Although it is physically impossible for the single guard to observe all the inmates' cells at once, the fact that the inmates cannot know when they are being watched means that they are motivated to act as though they are being watched at all times. Thus, the inmates are effectively compelled to regulate their own behaviour. The architecture consists of a rotunda with an inspection house at its centre. From the centre the manager or staff of the institution are able to watch the inmates. Bentham conceived the basic plan as being equally applicable to hospitals, schools, sanatoriums, and asylums, but he devoted most of his efforts to developing a design for a panopticon prison. It is his prison that is now most widely meant by the term "panopticon".
Discussed on
- "Panopticon" | 2023-10-28 | 33 Upvotes 15 Comments
- "Panopticon" | 2020-01-14 | 92 Upvotes 42 Comments
- "Panopticon" | 2014-06-30 | 38 Upvotes 8 Comments
Colorless green ideas sleep furiously
Colorless green ideas sleep furiously is a sentence composed by Noam Chomsky in his 1957 book Syntactic Structures as an example of a sentence that is grammatically correct, but semantically nonsensical. The sentence was originally used in his 1955 thesis The Logical Structure of Linguistic Theory and in his 1956 paper "Three Models for the Description of Language". Although the sentence is grammatically correct, no obvious understandable meaning can be derived from it, and thus it demonstrates the distinction between syntax and semantics. As an example of a category mistake, it was used to show the inadequacy of certain probabilistic models of grammar, and the need for more structured models.
Discussed on
- "Colorless green ideas sleep furiously" | 2021-02-06 | 15 Upvotes 18 Comments
Rømer's determination of the speed of light (1676)
Rømer's determination of the speed of light was the demonstration in 1676 that light has a finite speed and so does not travel instantaneously. The discovery is usually attributed to Danish astronomer Ole Rømer, who was working at the Royal Observatory in Paris at the time.
By timing the eclipses of the Jovian moon Io, Rømer estimated that light would take about 22 minutes to travel a distance equal to the diameter of Earth's orbit around the Sun. This would give light a velocity of about 220,000 kilometres per second, about 26% lower than the true value of 299,792 km/s.
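Rømer's figure is easy to rerun: 22 minutes per orbital diameter, here with the modern value of the astronomical unit substituted for the smaller 17th-century orbit estimates (an anachronistic assumption, made only to check the order of magnitude):

```python
# Redo Romer's arithmetic with modern constants (an anachronism: his
# contemporaries' estimates of the Earth-Sun distance were smaller, which
# is why the commonly quoted historical result is ~220,000 km/s).
AU_KM = 149.6e6            # modern astronomical unit, km
DELAY_S = 22 * 60          # Romer's estimate: 22 minutes per orbit diameter
TRUE_C = 299_792           # modern speed of light, km/s

speed = 2 * AU_KM / DELAY_S            # orbit diameter / light delay
error = (TRUE_C - speed) / TRUE_C      # fraction below the true value

print(f"{speed:,.0f} km/s, {error:.0%} below the modern value")
```

The 22-minute delay is the real source of the error: the modern light delay across Earth's orbit is closer to 16.7 minutes.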
Rømer's theory was controversial at the time that he announced it and he never convinced the director of the Paris Observatory, Giovanni Domenico Cassini, to fully accept it. However, it quickly gained support among other natural philosophers of the period such as Christiaan Huygens and Isaac Newton. It was finally confirmed nearly two decades after Rømer's death, with the explanation in 1729 of stellar aberration by the English astronomer James Bradley.
Sesame Credit
Zhima Credit (Chinese: 芝麻信用; pinyin: Zhīma Xìnyòng), also known as Sesame Credit, is a private credit scoring and loyalty program developed by Ant Financial Services Group (AFSG), an affiliate of the Chinese Alibaba Group. It uses data from Alibaba's services to compile its score: customers are rated on a variety of factors, including social media interactions and purchases carried out on Alibaba Group websites or paid for with Alipay, Ant Financial's mobile wallet. The rewards of a high score include easier access to loans from Ant Financial and a more trustworthy profile on e-commerce sites within the Alibaba Group.
Discussed on
- "Sesame Credit" | 2017-07-11 | 13 Upvotes 3 Comments
Differential Power Analysis
In cryptography, power analysis is a form of side channel attack in which the attacker studies the power consumption of a cryptographic hardware device (such as a smart card, tamper-resistant "black box", or integrated circuit). The attack can non-invasively extract cryptographic keys and other secret information from the device.
Simple power analysis (SPA) involves visually interpreting power traces, or graphs of electrical activity over time. Differential power analysis (DPA) is a more advanced form of power analysis, which can allow an attacker to compute the intermediate values within cryptographic computations through statistical analysis of data collected from multiple cryptographic operations. SPA and DPA were introduced to the open cryptology community in 1998 by Paul Kocher, Joshua Jaffe and Benjamin Jun.
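A toy simulation can show the statistical core of such an attack (here in its correlation form, often called CPA): model each "trace" as the Hamming weight of an S-box output plus noise, then correlate the traces against the leakage predicted under every key guess; only the correct guess lines up. Everything below is an illustrative toy rather than an attack on a real device — the 4-bit S-box values are those of the PRESENT cipher, used here just as a convenient nonlinear table:

```python
import math
import random

SBOX = [0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD, 3, 0xE, 0xF, 8, 4, 7, 1, 2]  # PRESENT

def hw(x):
    return bin(x).count("1")        # Hamming weight: number of set bits

def pearson(u, v):
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = math.sqrt(sum((a - mu) ** 2 for a in u))
    sv = math.sqrt(sum((b - mv) ** 2 for b in v))
    return cov / (su * sv)

random.seed(1)
SECRET_KEY = 0xB

# Simulated measurements: power draw ~ Hamming weight of S(p ^ key) + noise.
plaintexts = [p for p in range(16) for _ in range(300)]
traces = [hw(SBOX[p ^ SECRET_KEY]) + random.gauss(0, 0.8) for p in plaintexts]

# For each key guess, correlate the predicted leakage with the traces;
# the nonlinear S-box makes wrong guesses decorrelate.
def score(guess):
    predicted = [hw(SBOX[p ^ guess]) for p in plaintexts]
    return abs(pearson(predicted, traces))

recovered = max(range(16), key=score)
print(f"recovered key nibble: {recovered:#x}")   # matches SECRET_KEY
```

Real attacks work the same way but target one intermediate value of a full cipher (e.g. an AES S-box output) and use thousands of measured, not simulated, traces.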
The Two Cultures
The Two Cultures is the first part of an influential 1959 Rede Lecture by British scientist and novelist C. P. Snow, which was published in book form as The Two Cultures and the Scientific Revolution the same year. Its thesis was that science and the humanities, which together represented "the intellectual life of the whole of western society", had become split into "two cultures", and that this division was a major handicap to both in solving the world's problems.
Discussed on
- "The Two Cultures" | 2023-05-11 | 40 Upvotes 8 Comments
- "The Two Cultures" | 2013-10-16 | 149 Upvotes 80 Comments