Topic: Computer science (Page 12)

You are looking at all articles with the topic "Computer science". We found 134 matches.

🔗 Ron Graham has left us

🔗 Biography 🔗 Computer science 🔗 Mathematics

Ronald Lewis Graham (October 31, 1935 – July 6, 2020) was an American mathematician credited by the American Mathematical Society as being "one of the principal architects of the rapid development worldwide of discrete mathematics in recent years". He did important work in scheduling theory, computational geometry, Ramsey theory, and quasi-randomness.

He was Chief Scientist at the California Institute for Telecommunications and Information Technology (also known as Cal-(IT)2) and the Irwin and Joan Jacobs Professor in Computer Science and Engineering at the University of California, San Diego (UCSD).

🔗 Pi Calculus

🔗 Computing 🔗 Computer science

In theoretical computer science, the π-calculus (or pi-calculus) is a process calculus. The π-calculus allows channel names to be communicated along the channels themselves, and in this way it is able to describe concurrent computations whose network configuration may change during the computation.
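
As a concrete illustration (a standard reduction rule of the calculus, not quoted from this page), communication of a name along a channel can be written as:

$$\bar{x}\langle z \rangle.P \;\mid\; x(y).Q \;\longrightarrow\; P \;\mid\; Q[z/y]$$

Here the sender $\bar{x}\langle z \rangle.P$ emits the name $z$ on channel $x$, the receiver $x(y).Q$ binds it, and $z$ replaces $y$ in the continuation $Q$. Since $z$ may itself be a channel, the receiver can use it for further communication, which is exactly how the network configuration evolves during the computation.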

The π-calculus is simple: it has few terms, making it a small yet expressive language. Functional programs can be encoded into the π-calculus, and the encoding emphasises the dialogue nature of computation, drawing connections with game semantics. Extensions of the π-calculus, such as the spi calculus and applied π, have been successful in reasoning about cryptographic protocols. Besides its original use in describing concurrent systems, the π-calculus has also been used to reason about business processes and molecular biology.

🔗 Succinct Data Structures

🔗 Computer science

In computer science, a succinct data structure is a data structure which uses an amount of space that is "close" to the information-theoretic lower bound, but (unlike other compressed representations) still allows for efficient query operations. The concept was originally introduced by Jacobson to encode bit vectors, (unlabeled) trees, and planar graphs. Unlike general lossless data compression algorithms, succinct data structures retain the ability to be used in-place, without decompressing them first. A related notion is that of a compressed data structure, in which the size of the data structure depends upon the particular data being represented.

Suppose that $Z$ is the information-theoretically optimal number of bits needed to store some data. A representation of this data is called:

  • implicit if it takes $Z + O(1)$ bits of space,
  • succinct if it takes $Z + o(Z)$ bits of space, and
  • compact if it takes $O(Z)$ bits of space.

For example, a data structure that uses $2Z$ bits of storage is compact, $Z + \sqrt{Z}$ bits is succinct, $Z + \lg Z$ bits is also succinct, and $Z + 3$ bits is implicit.

Implicit structures are thus usually reduced to storing information using some permutation of the input data; the most well-known example of this is the heap.
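
A minimal sketch of this idea in Python, assuming a binary min-heap (the function names are mine): the array holds only a permutation of the inserted elements, and parent/child links are recovered from index arithmetic alone, so beyond the elements themselves only O(1) extra space is needed.

# Minimal sketch: a binary min-heap stored implicitly in an array.
# No pointers are stored; parent/child links are index arithmetic.

def parent(i): return (i - 1) // 2

def sift_up(heap, i):
    # Restore the heap property by swapping a node with its parent.
    while i > 0 and heap[i] < heap[parent(i)]:
        heap[i], heap[parent(i)] = heap[parent(i)], heap[i]
        i = parent(i)

def heap_push(heap, value):
    heap.append(value)
    sift_up(heap, len(heap) - 1)

heap = []
for v in [5, 1, 4, 2, 3]:
    heap_push(heap, v)
assert heap[0] == 1  # the minimum sits at the root, index 0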

🔗 Expert System

🔗 Computer science 🔗 Systems 🔗 Human–Computer Interaction

In artificial intelligence, an expert system is a computer system emulating the decision-making ability of a human expert. Expert systems are designed to solve complex problems by reasoning through bodies of knowledge, represented mainly as ifโ€“then rules rather than through conventional procedural code. The first expert systems were created in the 1970s and then proliferated in the 1980s. Expert systems were among the first truly successful forms of artificial intelligence (AI) software. An expert system is divided into two subsystems: the inference engine and the knowledge base. The knowledge base represents facts and rules. The inference engine applies the rules to the known facts to deduce new facts. Inference engines can also include explanation and debugging abilities.
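
A toy Python sketch of the two-subsystem split (the rule encoding and the facts below are invented for illustration, not taken from any particular expert system):

# Knowledge base: if-then rules as (antecedents, consequent) plus facts.
rules = [
    ({"has_fever", "has_rash"}, "suspect_measles"),
    ({"suspect_measles"}, "recommend_specialist"),
]
facts = {"has_fever", "has_rash"}

# Inference engine: forward chaining. Fire every rule whose
# antecedents are all known, until no new fact can be deduced.
changed = True
while changed:
    changed = False
    for antecedents, consequent in rules:
        if antecedents <= facts and consequent not in facts:
            facts.add(consequent)
            changed = True

print(facts)  # now includes 'suspect_measles' and 'recommend_specialist'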

🔗 Pareto Front

🔗 Computer science 🔗 Economics 🔗 Engineering

In multi-objective optimization, the Pareto front (also called Pareto frontier or Pareto curve) is the set of all Pareto efficient solutions. The concept is widely used in engineering. It allows the designer to restrict attention to the set of efficient choices, and to make tradeoffs within this set, rather than considering the full range of every parameter.
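
For a finite set of candidate solutions, the Pareto front can be computed by direct comparison. The sketch below is a minimal Python version, assuming (my choice, for illustration) that both objectives are to be minimized:

# Sketch: keep a point only if no other point dominates it, i.e.
# no other point is at least as good everywhere and better somewhere.

def dominates(a, b):
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

points = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 2)]
print(pareto_front(points))  # [(1, 5), (2, 3), (4, 1)]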

🔗 Computer

🔗 Technology 🔗 Video games 🔗 Computing 🔗 Computer science 🔗 Computing/Computer hardware 🔗 Systems 🔗 Computing/Software 🔗 Engineering 🔗 Home Living

A computer is a machine that can be instructed to carry out sequences of arithmetic or logical operations automatically via computer programming. Modern computers have the ability to follow generalized sets of operations, called programs. These programs enable computers to perform an extremely wide range of tasks. A "complete" computer including the hardware, the operating system (main software), and peripheral equipment required and used for "full" operation can be referred to as a computer system. This term may also be used for a group of computers that are connected and work together, in particular a computer network or computer cluster.

Computers are used as control systems for a wide variety of industrial and consumer devices. This includes simple special purpose devices like microwave ovens and remote controls, factory devices such as industrial robots and computer-aided design, and also general purpose devices like personal computers and mobile devices such as smartphones. The Internet is run on computers and it connects hundreds of millions of other computers and their users.

Early computers were conceived only as calculating devices. Since ancient times, simple manual devices like the abacus aided people in doing calculations. Early in the Industrial Revolution, some mechanical devices were built to automate long, tedious tasks, such as guiding patterns for looms. More sophisticated electrical machines did specialized analog calculations in the early 20th century. The first digital electronic calculating machines were developed during World War II. The first semiconductor transistors in the late 1940s were followed by the silicon-based MOSFET (MOS transistor) and monolithic integrated circuit (IC) chip technologies in the late 1950s, leading to the microprocessor and the microcomputer revolution in the 1970s. The speed, power and versatility of computers have been increasing dramatically ever since, with MOS transistor counts increasing at a rapid pace (as predicted by Moore's law), leading to the Digital Revolution during the late 20th to early 21st centuries.

Conventionally, a modern computer consists of at least one processing element, typically a central processing unit (CPU) in the form of a metal-oxide-semiconductor (MOS) microprocessor, along with some type of computer memory, typically MOS semiconductor memory chips. The processing element carries out arithmetic and logical operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices include input devices (keyboards, mice, joysticks, etc.), output devices (monitor screens, printers, etc.), and input/output devices that perform both functions (e.g., the 2000s-era touchscreen). Peripheral devices allow information to be retrieved from an external source and they enable the results of operations to be saved and retrieved.

🔗 Floyd–Steinberg Dithering Algorithm

🔗 Computer science

Floyd–Steinberg dithering is an image dithering algorithm first published in 1976 by Robert W. Floyd and Louis Steinberg. It is commonly used by image manipulation software, for example when an image is converted into the GIF format, which is restricted to a maximum of 256 colors.

The algorithm achieves dithering using error diffusion, meaning it pushes (adds) the residual quantization error of a pixel onto its neighboring pixels, to be dealt with later. It spreads the debt out according to the distribution (shown as a map of the neighboring pixels):

$$\begin{bmatrix} & & * & \frac{7}{16} & \ldots \\ \ldots & \frac{3}{16} & \frac{5}{16} & \frac{1}{16} & \ldots \end{bmatrix}$$

The pixel marked with a star (*) is the one currently being scanned, and the blank pixels are those already scanned. The algorithm scans the image from left to right, top to bottom, quantizing pixel values one by one. At each step, the quantization error is transferred to the neighboring pixels without affecting the pixels that have already been quantized. Hence, if a number of pixels have been rounded downwards, it becomes more likely that the next pixel is rounded upwards, so that on average the quantization error is close to zero.

The diffusion coefficients have the property that if the original pixel values are exactly halfway between the nearest available colors, the dithered result is a checkerboard pattern. For example, 50% grey data could be dithered as a black-and-white checkerboard pattern. For optimal dithering, the quantization errors should be accumulated with sufficient accuracy to prevent rounding errors from affecting the result.

In some implementations, the horizontal direction of scan alternates between lines; this is called "serpentine scanning" or boustrophedon transform dithering.

In the following pseudocode we can see the algorithm described above. This works for any approximately linear encoding of pixel values, such as 8-bit integers, 16-bit integers or real numbers in the range [0,1].

for each y from top to bottom do
    for each x from left to right do
        oldpixel := pixel[x][y]
        newpixel := find_closest_palette_color(oldpixel)
        pixel[x][y] := newpixel
        quant_error := oldpixel - newpixel
        pixel[x + 1][y    ] := pixel[x + 1][y    ] + quant_error × 7 / 16
        pixel[x - 1][y + 1] := pixel[x - 1][y + 1] + quant_error × 3 / 16
        pixel[x    ][y + 1] := pixel[x    ][y + 1] + quant_error × 5 / 16
        pixel[x + 1][y + 1] := pixel[x + 1][y + 1] + quant_error × 1 / 16

When converting 16-bit greyscale to 8-bit, find_closest_palette_color() may perform just a simple rounding, for example:

find_closest_palette_color(oldpixel) = round(oldpixel / 256)

The pseudocode can result in pixel values exceeding the valid values (such as greater than 1 in a [0,1] representation). Such values should ideally be clipped by the find_closest_palette_color() function, rather than clipping the intermediate values, since a subsequent error may bring the value back into range. However, if fixed-width integers are used, wrapping of intermediate values would cause inversion of black and white, and so should be avoided.
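
A runnable translation of the pseudocode, as a Python sketch: it assumes a greyscale image given as a NumPy array of floats in [0, 1], specializes find_closest_palette_color() to a two-colour black-and-white palette, and guards the border pixels explicitly, which the pseudocode leaves implicit.

import numpy as np

def floyd_steinberg(img):
    """Dither a greyscale image (floats in [0, 1]) to black and white."""
    img = img.astype(float).copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = round(old)        # nearest of the 2-colour palette {0, 1}
            img[y, x] = new
            err = old - new
            # Diffuse the error to unvisited neighbours, guarding borders.
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return img

# 50% grey dithers to (approximately) a checkerboard, as described above.
print(floyd_steinberg(np.full((4, 4), 0.5)))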

🔗 Shape Grammar

🔗 Computing 🔗 Computer science

Shape grammars in computation are a specific class of production systems that generate geometric shapes. Typically, shapes are 2- or 3-dimensional, thus shape grammars are a way to study 2- and 3-dimensional languages. Shape grammars were first introduced in a seminal article by George Stiny and James Gips in 1971. The mathematical and algorithmic foundations of shape grammars (in particular, for linear elements in two dimensions) were developed in "Pictorial and Formal Aspects of Shapes and Shape Grammars" (Birkhäuser Basel, 1975) by George Stiny. Applications of shape grammars were first considered in "Shape Grammars and their Uses" (Birkhäuser Basel, 1975) by James Gips. These publications also contain two independent, though equivalent, constructions showing that shape grammars can simulate Turing machines.

🔗 Continuation-Passing Style

🔗 Computing 🔗 Computer science

In functional programming, continuation-passing style (CPS) is a style of programming in which control is passed explicitly in the form of a continuation. This is contrasted with direct style, which is the usual style of programming. Gerald Jay Sussman and Guy L. Steele, Jr. coined the phrase in AI Memo 349 (1975), which sets out the first version of the Scheme programming language. John C. Reynolds gives a detailed account of the numerous discoveries of continuations.

A function written in continuation-passing style takes an extra argument: an explicit "continuation"; i.e., a function of one argument. When the CPS function has computed its result value, it "returns" it by calling the continuation function with this value as the argument. That means that when invoking a CPS function, the calling function is required to supply a procedure to be invoked with the subroutine's "return" value. Expressing code in this form makes a number of things explicit which are implicit in direct style. These include: procedure returns, which become apparent as calls to a continuation; intermediate values, which are all given names; order of argument evaluation, which is made explicit; and tail calls, which simply call a procedure with the same continuation, unmodified, that was passed to the caller.
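
A small illustration in Python (the function names are mine, and Python stands in for Scheme here) of the same computation in direct style and in CPS:

# Direct style: the result is returned to the caller implicitly.
def add(a, b):
    return a + b

def square(x):
    return x * x

def pythagoras(a, b):
    return add(square(a), square(b))

# CPS: each function takes an extra argument k (the continuation)
# and "returns" by calling k with its result.
def add_cps(a, b, k):
    return k(a + b)

def square_cps(x, k):
    return k(x * x)

def pythagoras_cps(a, b, k):
    # The order of evaluation is now explicit: square a, then b, then add.
    return square_cps(a, lambda a2:
           square_cps(b, lambda b2:
           add_cps(a2, b2, k)))

print(pythagoras(3, 4))      # 25
pythagoras_cps(3, 4, print)  # also prints 25, delivered to the continuation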

Programs can be automatically transformed from direct style to CPS. Functional and logic compilers often use CPS as an intermediate representation where a compiler for an imperative or procedural programming language would use static single assignment form (SSA). SSA is formally equivalent to a subset of CPS (excluding non-local control flow, which does not occur when CPS is used as an intermediate representation). Functional compilers can also use A-normal form (ANF), but only for languages requiring eager evaluation, instead of CPS with 'thunks'. CPS is used more frequently by compilers than by programmers as a local or global style.

🔗 Kernel Embedding of Distributions

🔗 Computer science

In machine learning, the kernel embedding of distributions (also called the kernel mean or mean map) comprises a class of nonparametric methods in which a probability distribution is represented as an element of a reproducing kernel Hilbert space (RKHS). A generalization of the individual data-point feature mapping done in classical kernel methods, the embedding of distributions into infinite-dimensional feature spaces can preserve all of the statistical features of arbitrary distributions, while allowing one to compare and manipulate distributions using Hilbert space operations such as inner products, distances, projections, linear transformations, and spectral analysis. This learning framework is very general and can be applied to distributions over any space $\Omega$ on which a sensible kernel function (measuring similarity between elements of $\Omega$) may be defined. For example, various kernels have been proposed for learning from data which are: vectors in $\mathbb{R}^d$, discrete classes/categories, strings, graphs/networks, images, time series, manifolds, dynamical systems, and other structured objects. The theory behind kernel embeddings of distributions has been primarily developed by Alex Smola, Le Song, Arthur Gretton, and Bernhard Schölkopf. A review of recent work on kernel embeddings of distributions can be found in the literature.

The analysis of distributions is fundamental in machine learning and statistics, and many algorithms in these fields rely on information theoretic approaches such as entropy, mutual information, or Kullback–Leibler divergence. However, to estimate these quantities, one must first either perform density estimation, or employ sophisticated space-partitioning/bias-correction strategies which are typically infeasible for high-dimensional data. Commonly, methods for modeling complex distributions rely on parametric assumptions that may be unfounded or computationally challenging (e.g. Gaussian mixture models), while nonparametric methods like kernel density estimation (Note: the smoothing kernels in this context have a different interpretation than the kernels discussed here) or characteristic function representation (via the Fourier transform of the distribution) break down in high-dimensional settings.

Methods based on the kernel embedding of distributions sidestep these problems and also possess the following advantages:

  1. Data may be modeled without restrictive assumptions about the form of the distributions and relationships between variables
  2. Intermediate density estimation is not needed
  3. Practitioners may specify the properties of a distribution most relevant for their problem (incorporating prior knowledge via choice of the kernel)
  4. If a characteristic kernel is used, then the embedding can uniquely preserve all information about a distribution, while thanks to the kernel trick, computations on the potentially infinite-dimensional RKHS can be implemented in practice as simple Gram matrix operations
  5. Dimensionality-independent rates of convergence for the empirical kernel mean (estimated using samples from the distribution) to the kernel embedding of the true underlying distribution can be proven.
  6. Learning algorithms based on this framework exhibit good generalization ability and finite sample convergence, while often being simpler and more effective than information theoretic methods

Thus, learning via the kernel embedding of distributions offers a principled drop-in replacement for information theoretic approaches and is a framework which not only subsumes many popular methods in machine learning and statistics as special cases, but also can lead to entirely new learning algorithms.
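
As a concrete illustration of points 2 and 4 in the list above, here is a minimal NumPy sketch (the kernel, bandwidth, and data are arbitrary choices for illustration): the empirical kernel mean is never materialized as a density, and the distance between two embeddings, the maximum mean discrepancy (MMD), reduces to averages over Gram matrices.

import numpy as np

def rbf(X, Y, sigma=1.0):
    """Gaussian RBF Gram matrix k(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2(X, Y, sigma=1.0):
    """Squared RKHS distance between the empirical embeddings of X and Y:
    ||mu_X - mu_Y||^2 = mean k(X,X) - 2 mean k(X,Y) + mean k(Y,Y),
    computed entirely from Gram matrices (the kernel trick)."""
    return (rbf(X, X, sigma).mean()
            - 2 * rbf(X, Y, sigma).mean()
            + rbf(Y, Y, sigma).mean())

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(500, 1))  # sample from N(0, 1)
Y = rng.normal(0.5, 1.0, size=(500, 1))  # sample from N(0.5, 1)
print(mmd2(X, X))  # ~0: identical samples have identical embeddings
print(mmd2(X, Y))  # > 0: the two embeddings differ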
