Topic: Computer science (Page 6)

You are looking at all articles with the topic "Computer science". We found 134 matches.


🔗 History of the Monte Carlo method

🔗 Computing 🔗 Computer science 🔗 Mathematics 🔗 Physics 🔗 Statistics

Monte Carlo methods, or Monte Carlo experiments, are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results. The underlying concept is to use randomness to solve problems that might be deterministic in principle. They are often used in physical and mathematical problems and are most useful when it is difficult or impossible to use other approaches. Monte Carlo methods are mainly used in three problem classes: optimization, numerical integration, and generating draws from a probability distribution.

In physics-related problems, Monte Carlo methods are useful for simulating systems with many coupled degrees of freedom, such as fluids, disordered materials, strongly coupled solids, and cellular structures (see cellular Potts model, interacting particle systems, McKean–Vlasov processes, kinetic models of gases).

Other examples include modeling phenomena with significant uncertainty in inputs such as the calculation of risk in business and, in mathematics, evaluation of multidimensional definite integrals with complicated boundary conditions. In application to systems engineering problems (space, oil exploration, aircraft design, etc.), Monte Carlo–based predictions of failure, cost overruns and schedule overruns are routinely better than human intuition or alternative "soft" methods.

In principle, Monte Carlo methods can be used to solve any problem having a probabilistic interpretation. By the law of large numbers, integrals described by the expected value of some random variable can be approximated by taking the empirical mean (a.k.a. the sample mean) of independent samples of the variable. When the probability distribution of the variable is parameterized, mathematicians often use a Markov chain Monte Carlo (MCMC) sampler. The central idea is to design a judicious Markov chain model with a prescribed stationary probability distribution. That is, in the limit, the samples being generated by the MCMC method will be samples from the desired (target) distribution. By the ergodic theorem, the stationary distribution is approximated by the empirical measures of the random states of the MCMC sampler.
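
As a concrete illustration of the sample-mean principle underlying these methods, the following minimal Python sketch (hypothetical, not part of the article) estimates π as 4·E[1{X² + Y² ≤ 1}] for X, Y uniform on [0, 1]:

    import random

    # Estimate pi by the sample mean of an indicator function: by the law of
    # large numbers, the fraction of uniform points that fall inside the unit
    # quarter-disc converges to its expected value, pi/4.
    def estimate_pi(n_samples=1_000_000):
        inside = sum(
            1 for _ in range(n_samples)
            if random.random() ** 2 + random.random() ** 2 <= 1.0
        )
        return 4.0 * inside / n_samples

    print(estimate_pi())  # ~3.14; the error shrinks like 1/sqrt(n)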

In other problems, the objective is generating draws from a sequence of probability distributions satisfying a nonlinear evolution equation. These flows of probability distributions can always be interpreted as the distributions of the random states of a Markov process whose transition probabilities depend on the distributions of the current random states (see McKean–Vlasov processes, nonlinear filtering equation). In other instances we are given a flow of probability distributions with an increasing level of sampling complexity (path spaces models with an increasing time horizon, Boltzmann–Gibbs measures associated with decreasing temperature parameters, and many others). These models can also be seen as the evolution of the law of the random states of a nonlinear Markov chain. A natural way to simulate these sophisticated nonlinear Markov processes is to sample multiple copies of the process, replacing in the evolution equation the unknown distributions of the random states by the sampled empirical measures. In contrast with traditional Monte Carlo and MCMC methodologies, these mean-field particle techniques rely on sequential interacting samples. The terminology mean field reflects the fact that each of the samples (a.k.a. particles, individuals, walkers, agents, creatures, or phenotypes) interacts with the empirical measures of the process. When the size of the system tends to infinity, these random empirical measures converge to the deterministic distribution of the random states of the nonlinear Markov chain, so that the statistical interaction between particles vanishes.
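
As a toy illustration of such mean-field interaction, the following hypothetical sketch simulates N particles whose drift depends on the empirical mean of the ensemble, a simple McKean–Vlasov-type dynamic in which the unknown law of the process is replaced by the sampled empirical measure:

    import numpy as np

    # N interacting particles: each drifts toward the empirical mean m_N of
    # the ensemble, dX_i = -(X_i - m_N) dt + sigma dW_i. The empirical
    # measure stands in for the (unknown) law of the nonlinear process.
    rng = np.random.default_rng(0)
    N, steps, dt, sigma = 1000, 500, 0.01, 0.5
    x = rng.normal(loc=2.0, scale=1.0, size=N)   # initial positions

    for _ in range(steps):
        m = x.mean()      # interaction enters through the empirical measure
        x += -(x - m) * dt + sigma * np.sqrt(dt) * rng.normal(size=N)

    print(f"ensemble mean {x.mean():.3f}, spread {x.std():.3f}")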

Despite its conceptual and algorithmic simplicity, the computational cost associated with a Monte Carlo simulation can be staggeringly high. In general the method requires many samples to get a good approximation, which may incur an arbitrarily large total runtime if the processing time of a single sample is high. Although this is a severe limitation in very complex problems, the embarrassingly parallel nature of the algorithm allows this large cost to be reduced (perhaps to a feasible level) through parallel computing strategies on local processors, clusters, cloud platforms, GPUs, FPGAs, etc.


🔗 Zooming User Interface (ZUI)

🔗 Computer science

In computing, a zooming user interface or zoomable user interface (ZUI, pronounced zoo-ee) is a type of graphical user interface (GUI) where users can change the scale of the viewed area in order to see more detail or less, and browse through different documents. Information elements appear directly on an infinite virtual desktop (usually created using vector graphics), instead of in windows. Users can pan across the virtual surface in two dimensions and zoom into objects of interest. For example, as you zoom into a text object it may be represented as a small dot, then a thumbnail of a page of text, then a full-sized page and finally a magnified view of the page.

ZUIs use zooming as the main metaphor for browsing through hyperlinked or multivariate information. Objects present inside a zoomed page can in turn be zoomed themselves to reveal further detail, allowing for recursive nesting and an arbitrary level of zoom.

When the level of detail of a resized object changes to fit the relevant information into the available space, rather than simply showing a proportionally scaled view of the whole object, this is called semantic zooming.

Some consider the ZUI paradigm, a Post-WIMP interface, to be a flexible and realistic successor to the traditional windowing GUI.


🔗 Tupper's formula

🔗 Computer science 🔗 Mathematics

Tupper's self-referential formula is a formula that visually represents itself when graphed at a specific location in the (x, y) plane.
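
The inequality is commonly stated as

    1/2 < ⌊mod(⌊y/17⌋ · 2^(−17⌊x⌋ − mod(⌊y⌋, 17)), 2)⌋

and when the points satisfying it are plotted for 0 ≤ x ≤ 105 and k ≤ y ≤ k + 16, where k is a specific 543-digit constant, the resulting 106 × 17 bitmap is an image of the formula itself.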


🔗 Kahan Summation Algorithm

🔗 Computer science 🔗 Mathematics

In numerical analysis, the Kahan summation algorithm, also known as compensated summation, significantly reduces the numerical error in the total obtained by adding a sequence of finite-precision floating-point numbers, compared to the obvious approach. This is done by keeping a separate running compensation (a variable to accumulate small errors).

In particular, simply summing n numbers in sequence has a worst-case error that grows proportionally to n, and a root mean square error that grows as √n for random inputs (the roundoff errors form a random walk). With compensated summation, the worst-case error bound is effectively independent of n, so a large number of values can be summed with an error that depends only on the floating-point precision.
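
A minimal Python sketch of compensated summation as described above (variable names are illustrative):

    def kahan_sum(values):
        total = 0.0
        c = 0.0                  # running compensation for lost low-order bits
        for x in values:
            y = x - c            # reintroduce the error from the previous step
            t = total + y        # low-order bits of y may be lost here...
            c = (t - total) - y  # ...recover them algebraically
            total = t
        return total

    # sum([0.1] * 10**7) gives 999999.9999998899 in double precision;
    # kahan_sum([0.1] * 10**7) recovers the correctly rounded 1000000.0.
    print(kahan_sum([0.1] * 10**7))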

The algorithm is attributed to William Kahan. Similar, earlier techniques include Bresenham's line algorithm, which keeps track of the accumulated error in integer operations (although it was first documented around the same time), and delta-sigma modulation, which integrates the error rather than merely summing it.


🔗 The Skip list

🔗 Computer science

In computer science, a skip list is a data structure that allows O(log n) search complexity as well as O(log n) insertion complexity within an ordered sequence of n elements. Thus it gets the best feature of an array (fast search) while maintaining a linked-list-like structure that allows fast insertion, which is not possible in an array. Fast search is made possible by maintaining a linked hierarchy of subsequences, with each successive subsequence skipping over fewer elements than the previous one. Searching starts in the sparsest subsequence until two consecutive elements have been found, one smaller and one larger than or equal to the element searched for. Via the linked hierarchy, these two elements link to elements of the next sparsest subsequence, where searching is continued, until finally the search proceeds in the full sequence. The elements that are skipped over may be chosen probabilistically or deterministically, with the former being more common.
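
A minimal hypothetical sketch of the search procedure, assuming a head sentinel node whose forward list holds one next-pointer per level (level 0 being the full sequence):

    class Node:
        def __init__(self, value, level):
            self.value = value
            self.forward = [None] * level    # forward[i] = next node at level i

    def search(head, target):
        # Descend from the sparsest level: move right while the next value is
        # still smaller than the target, then drop one level and repeat.
        node = head
        for level in reversed(range(len(head.forward))):
            while node.forward[level] is not None and node.forward[level].value < target:
                node = node.forward[level]
        node = node.forward[0]               # candidate in the full sequence
        return node is not None and node.value == target

    # Hand-built 3-level list over 1, 3, 4, 7 (4 was assigned the top level):
    head = Node(None, 3)
    n1, n3, n4, n7 = Node(1, 1), Node(3, 2), Node(4, 3), Node(7, 1)
    head.forward = [n1, n3, n4]
    n1.forward = [n3]
    n3.forward = [n4, n4]
    n4.forward = [n7, None, None]
    n7.forward = [None]
    print(search(head, 4), search(head, 5))  # True False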


🔗 Hashlife

🔗 Computer science

Hashlife is a memoized algorithm for computing the long-term fate of a given starting configuration in Conway's Game of Life and related cellular automata, much more quickly than would be possible using alternative algorithms that simulate each time step of each cell of the automaton. The algorithm was first described by Bill Gosper in the early 1980s while he was engaged in research at the Xerox Palo Alto Research Center. Hashlife was originally implemented on Symbolics Lisp machines with the aid of the Flavors extension.
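
The memoization can be illustrated at the algorithm's base case with a hypothetical Python sketch (this shows only the caching idea, not Gosper's full recursive scheme): the one-generation result for the 2×2 center of every 4×4 block is computed once and reused whenever the same subpattern recurs:

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def center_step(block):
        # block: 4x4 tuple of tuples of 0/1. Return the 2x2 center after one
        # Game of Life generation; the 4x4 block supplies the full 3x3
        # neighborhood for each of its four center cells.
        def next_state(r, c):
            neighbors = sum(block[r + dr][c + dc]
                            for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                            if (dr, dc) != (0, 0))
            return 1 if neighbors == 3 or (block[r][c] and neighbors == 2) else 0
        return tuple(tuple(next_state(r, c) for c in (1, 2)) for r in (1, 2))

    square = ((0, 0, 0, 0),
              (0, 1, 1, 0),
              (0, 1, 1, 0),
              (0, 0, 0, 0))
    print(center_step(square))  # ((1, 1), (1, 1)) -- the block still life persists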


🔗 Shunting-yard algorithm

🔗 Computer science

In computer science, the shunting-yard algorithm is a method for parsing mathematical expressions specified in infix notation. It can produce either a postfix notation string, also known as Reverse Polish notation (RPN), or an abstract syntax tree (AST). The algorithm was invented by Edsger Dijkstra and named the "shunting yard" algorithm because its operation resembles that of a railroad shunting yard. Dijkstra first described the shunting-yard algorithm in the Mathematisch Centrum report MR 34/61.

Like the evaluation of RPN, the shunting-yard algorithm is stack-based. Infix expressions are the form of mathematical notation most people are used to, for instance "3 + 4" or "3 + 4 × (2 − 1)". For the conversion there are two text variables (strings): the input and the output. There is also a stack that holds operators not yet added to the output. To convert, the program reads each symbol in order and handles it according to whether it is an operand, an operator, or a parenthesis. The results for the above examples would be (in Reverse Polish notation) "3 4 +" and "3 4 2 1 − × +", respectively.
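
A minimal Python sketch of the conversion, covering only the binary operators and parentheses used in the examples above (function tokens and unary operators are omitted):

    PRECEDENCE = {"+": 1, "-": 1, "*": 2, "/": 2}   # all left-associative here

    def to_rpn(tokens):
        output, stack = [], []                  # stack holds pending operators
        for tok in tokens:
            if tok in PRECEDENCE:
                # pop operators of greater or equal precedence (left-assoc)
                while (stack and stack[-1] in PRECEDENCE
                       and PRECEDENCE[stack[-1]] >= PRECEDENCE[tok]):
                    output.append(stack.pop())
                stack.append(tok)
            elif tok == "(":
                stack.append(tok)
            elif tok == ")":
                while stack[-1] != "(":         # pop back to the matching "("
                    output.append(stack.pop())
                stack.pop()                     # discard the "("
            else:                               # operand: straight to the output
                output.append(tok)
        while stack:                            # flush the remaining operators
            output.append(stack.pop())
        return output

    print(to_rpn("3 + 4".split()))              # ['3', '4', '+']
    print(to_rpn("3 + 4 * ( 2 - 1 )".split()))  # ['3', '4', '2', '1', '-', '*', '+']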

The shunting-yard algorithm was later generalized into operator-precedence parsing.


🔗 CORDIC

🔗 Computer science 🔗 Mathematics

CORDIC (COordinate Rotation DIgital Computer), also known as Volder's algorithm or the digit-by-digit method, is a simple and efficient algorithm to calculate trigonometric functions, hyperbolic functions, square roots, multiplications, divisions, and exponentials and logarithms with arbitrary base, typically converging with one digit (or bit) per iteration. Its variants include circular CORDIC (Jack E. Volder), linear CORDIC, hyperbolic CORDIC (John Stephen Walther), and generalized hyperbolic CORDIC (GH CORDIC, Yuanyong Luo et al.). CORDIC and closely related methods known as pseudo-multiplication and pseudo-division or factor combining are commonly used when no hardware multiplier is available (e.g. in simple microcontrollers and FPGAs), as the only operations they require are addition, subtraction, bit shifts, and table lookups. As such, they all belong to the class of shift-and-add algorithms. In computer science, CORDIC is often used to implement floating-point arithmetic when the target platform lacks a hardware multiplier, for cost or space reasons.
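
An illustrative floating-point Python sketch of circular CORDIC in rotation mode (hypothetical; hardware versions use fixed-point additions and actual bit shifts in place of the multiplications by 2**-i below):

    import math

    N = 32
    ANGLES = [math.atan(2.0 ** -i) for i in range(N)]   # small lookup table
    K = 1.0                                             # pre-scaling for the rotation gain
    for i in range(N):
        K /= math.sqrt(1.0 + 2.0 ** (-2 * i))

    def cordic_sin_cos(angle):
        # Valid for |angle| within ~1.74 rad, the convergence range.
        x, y, z = K, 0.0, angle           # start from a pre-scaled unit vector
        for i in range(N):
            d = 1.0 if z >= 0.0 else -1.0           # rotate toward the residual angle
            x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
            z -= d * ANGLES[i]
        return y, x                       # (sin, cos)

    print(cordic_sin_cos(math.pi / 6))    # ~(0.5, 0.8660)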


🔗 List of Important Publications in Computer Science

🔗 Computing 🔗 Computer science 🔗 Lists 🔗 History of Science 🔗 Bibliographies 🔗 Bibliographies/Science

This is a list of important publications in computer science, organized by field.

Some reasons why a particular publication might be regarded as important:

  • Topic creator – A publication that created a new topic
  • Breakthrough – A publication that changed scientific knowledge significantly
  • Influence – A publication that has significantly influenced the world or has had a massive impact on the teaching of computer science


🔗 Warren Abstract Machine

🔗 Computing 🔗 Computer science 🔗 Software 🔗 Software/Computing

In 1983, David H. D. Warren designed an abstract machine for the execution of Prolog consisting of a memory architecture and an instruction set. This design became known as the Warren Abstract Machine (WAM) and has become the de facto standard target for Prolog compilers.
