Topic: Computer science (Page 11)

You are looking at all articles with the topic "Computer science". We found 134 matches.


🔗 Weissman Score

🔗 Computer science

The Weissman score is an efficiency metric for lossless compression applications, which was developed for fictional use. It compares both the compression ratio and the required compression time of the application being measured against those of a de facto standard compressor for the same type of data. It was developed by Tsachy Weissman, a professor at Stanford University, and Vinith Misra, a graduate student, at the request of producers for HBO's television series Silicon Valley, about a fictional tech start-up.

The formula is

    W = α · (r / r̄) · (log T̄ / log T)

where r is the compression ratio, T is the time required to compress, the overlined quantities r̄ and T̄ are the same metrics for a de facto standard compressor, and α is a scaling constant.
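
A minimal Python sketch of this computation (the numbers are made up for illustration, and the standard compressor is whichever baseline is chosen for the data type):

    import math

    def weissman_score(r, t, r_std, t_std, alpha=1.0):
        """Weissman score of a compressor relative to a de facto standard compressor.

        r, t         -- compression ratio and compression time of the compressor under test
        r_std, t_std -- the same metrics for the standard compressor
        alpha        -- scaling constant
        """
        return alpha * (r / r_std) * (math.log(t_std) / math.log(t))

    # Hypothetical measurements on the same corpus: the candidate compresses a
    # bit better (higher ratio) and a bit faster than the standard compressor.
    print(weissman_score(r=2.8, t=1.5, r_std=2.1, t_std=2.0))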

The Weissman score was used on the Dropbox Tech Blog to explain real-world work on lossless compression.

Discussed on

🔗 Linda coordination language

🔗 Computer science

In computer science, Linda is a model of coordination and communication among several parallel processes operating upon objects stored in and retrieved from shared, virtual, associative memory. It was developed by Sudhir Ahuja at AT&T Bell Laboratories in collaboration with David Gelernter and Nicholas Carriero at Yale University in 1986.
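
Processes in Linda coordinate through a handful of operations on the shared tuple space, conventionally out (write a tuple), in (withdraw a matching tuple), and rd (read a matching tuple without removing it), where patterns match tuples associatively by content. A toy, single-process Python sketch of that matching idea, leaving out the blocking and true concurrency of a real implementation:

    class TupleSpace:
        """Toy, non-concurrent sketch of a Linda-style tuple space."""

        def __init__(self):
            self._tuples = []

        def out(self, *tup):
            """Write a tuple into the space."""
            self._tuples.append(tup)

        def _match(self, pattern, tup):
            # A pattern field of None acts as a wildcard ("formal");
            # any other field must match exactly ("actual").
            return len(pattern) == len(tup) and all(
                p is None or p == t for p, t in zip(pattern, tup)
            )

        def in_(self, *pattern):
            """Withdraw (remove and return) the first matching tuple."""
            for i, tup in enumerate(self._tuples):
                if self._match(pattern, tup):
                    return self._tuples.pop(i)
            raise LookupError("no matching tuple")

        def rd(self, *pattern):
            """Read the first matching tuple without removing it."""
            for tup in self._tuples:
                if self._match(pattern, tup):
                    return tup
            raise LookupError("no matching tuple")

    space = TupleSpace()
    space.out("job", 1, "compress file A")
    space.out("job", 2, "compress file B")
    print(space.in_("job", None, None))   # ('job', 1, 'compress file A')
    print(space.rd("job", 2, None))       # ('job', 2, 'compress file B')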

Discussed on

🔗 Unique Games Conjecture

🔗 Computer science 🔗 Mathematics

In computational complexity theory, the unique games conjecture (often referred to as UGC) is a conjecture made by Subhash Khot in 2002. The conjecture postulates that the problem of determining the approximate value of a certain type of game, known as a unique game, has NP-hard computational complexity. It has broad applications in the theory of hardness of approximation. If the unique games conjecture is true and P ≠ NP, then for many important problems it is not only impossible to get an exact solution in polynomial time (as postulated by the P versus NP problem), but also impossible to get a good polynomial-time approximation. The problems for which such an inapproximability result would hold include constraint satisfaction problems, which crop up in a wide variety of disciplines.
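
To make "unique game" concrete: such a game can be viewed as a system of constraints of the form x_v = π(x_u) over a fixed label set, one permutation π per edge, and the value of the game is the largest fraction of constraints that any labeling satisfies. A toy Python sketch with invented instance data; the exhaustive search below is only feasible at this scale, and the conjecture concerns the hardness of even approximating this optimum:

    from itertools import product

    # A toy unique-game instance over the label set {0, 1, 2} (k = 3).
    # Each constraint (u, v, pi) is satisfied when labels[v] == pi[labels[u]].
    k = 3
    constraints = [
        ("a", "b", {0: 1, 1: 2, 2: 0}),
        ("b", "c", {0: 0, 1: 2, 2: 1}),
        ("a", "c", {0: 2, 1: 0, 2: 1}),
    ]

    def value(labels):
        """Fraction of permutation constraints satisfied by a labeling."""
        satisfied = sum(1 for u, v, pi in constraints if labels[v] == pi[labels[u]])
        return satisfied / len(constraints)

    # Exhaustive search over all labelings of the variables a, b, c.
    best = max(value(dict(zip("abc", assignment)))
               for assignment in product(range(k), repeat=3))
    print(best)   # 1.0 -- this toy instance happens to be fully satisfiable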

The conjecture is unusual in that the academic world seems about evenly divided on whether it is true or not.

Discussed on

🔗 Schwartzian Transform

🔗 Computer science

In computer programming, the Schwartzian transform is a technique used to improve the efficiency of sorting a list of items. This idiom is appropriate for comparison-based sorting when the ordering is actually based on the ordering of a certain property (the key) of the elements, where computing that property is an intensive operation that should be performed a minimal number of times. The Schwartzian transform is notable in that it does not use named temporary arrays.

The Schwartzian transform is a version of a Lisp idiom known as decorate-sort-undecorate, which avoids recomputing the sort keys by temporarily associating them with the input items. This approach is similar to memoization, which avoids repeating the calculation of the key corresponding to a specific input value. By comparison, this idiom ensures that each input item's key is calculated exactly once, which may still result in repeating some calculations if the input data contains duplicate items.

The idiom is named after Randal L. Schwartz, who first demonstrated it in Perl shortly after the release of Perl 5 in 1994. The term "Schwartzian transform" applied solely to Perl programming for a number of years, but it was later adopted by some users of other languages, such as Python, to refer to similar idioms in those languages. However, the algorithm was already in use in other languages (under no specific name) before it was popularized among the Perl community in the form of that particular idiom by Schwartz. The term "Schwartzian transform" refers to the specific idiom, not to the algorithm in general.

For example, to sort the word list ("aaaa","a","aa") according to word length: first build the list (["aaaa",4],["a",1],["aa",2]), then sort it according to the numeric values, getting (["a",1],["aa",2],["aaaa",4]), then strip off the numbers, leaving ("a","aa","aaaa"). That is the algorithm in general, so it does not count as a transform by itself. To make it a true Schwartzian transform, it would be written in Perl as shown below.
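
A sketch of the usual map/sort/map idiom, read from the bottom up: decorate each word with its length, sort on that length, then strip the decoration (the variable name here is illustrative):

    my @sorted = map  { $_->[0] }                 # undecorate: keep only the word
                 sort { $a->[1] <=> $b->[1] }     # sort on the precomputed length
                 map  { [ $_, length($_) ] }      # decorate: pair each word with its length
                      ("aaaa", "a", "aa");

    print "@sorted\n";                            # prints: a aa aaaa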

Discussed on

🔗 DNA Computing

🔗 Computing 🔗 Computer science 🔗 Biology 🔗 Chemistry 🔗 Genetics

DNA computing is a branch of computing which uses DNA, biochemistry, and molecular biology hardware, instead of the traditional silicon-based computer technologies. Research and development in this area concerns theory, experiments, and applications of DNA computing. The term "molectronics" has sometimes been used, but this term has already been used for an earlier technology, a then-unsuccessful rival of the first integrated circuits; this term has also been used more generally, for molecular-scale electronic technology.

Discussed on

🔗 Canadian Traveller Problem

🔗 Computer science 🔗 Mathematics

In computer science and graph theory, the Canadian traveller problem (CTP) is a generalization of the shortest path problem to graphs that are partially observable. In other words, the graph is revealed while it is being explored, and explorative edges are charged even if they do not contribute to the final path.

This optimization problem was introduced by Christos Papadimitriou and Mihalis Yannakakis in 1989 and a number of variants of the problem have been studied since. The name supposedly originates from conversations of the authors who learned of a difficulty Canadian drivers had: traveling a network of cities with snowfall randomly blocking roads. The stochastic version, where each edge is associated with a probability of independently being in the graph, has been given considerable attention in operations research under the name "the Stochastic Shortest Path Problem with Recourse" (SSPPR).
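
As a rough illustration of the online nature of the problem, the sketch below (in Python, with an invented road network) follows one simple policy: repeatedly plan a shortest path over the edges not yet known to be blocked, and replan whenever a planned edge turns out to be blocked. This is just one greedy strategy, not an optimal CTP algorithm:

    import heapq

    def shortest_path(graph, blocked, src, dst):
        """Dijkstra over edges not known to be blocked; returns (cost, path) or None."""
        dist, prev, done = {src: 0}, {}, set()
        heap = [(0, src)]
        while heap:
            d, u = heapq.heappop(heap)
            if u in done:
                continue
            done.add(u)
            if u == dst:
                path = [u]
                while path[-1] != src:
                    path.append(prev[path[-1]])
                return d, path[::-1]
            for v, w in graph[u].items():
                if frozenset((u, v)) in blocked:
                    continue
                if d + w < dist.get(v, float("inf")):
                    dist[v], prev[v] = d + w, u
                    heapq.heappush(heap, (d + w, v))
        return None

    def travel(graph, actually_blocked, src, dst):
        """Greedy replanning traveller: edges already travelled are paid for
        even when they turn out to be detours; blocked edges are discovered
        only when the traveller reaches one of their endpoints."""
        known_blocked, pos, paid = set(), src, 0
        while pos != dst:
            plan = shortest_path(graph, known_blocked, pos, dst)
            if plan is None:
                return None                      # unreachable given current knowledge
            nxt = plan[1][1]
            edge = frozenset((pos, nxt))
            if edge in actually_blocked:         # revealed to be blocked; replan
                known_blocked.add(edge)
                continue
            paid += graph[pos][nxt]
            pos = nxt
        return paid

    # Invented network: the direct road b-d turns out to be snowed in.
    graph = {
        "a": {"b": 1, "c": 4},
        "b": {"a": 1, "c": 1, "d": 1},
        "c": {"a": 4, "b": 1, "d": 2},
        "d": {"b": 1, "c": 2},
    }
    print(travel(graph, {frozenset(("b", "d"))}, "a", "d"))   # pays 4 instead of 2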

Discussed on

🔗 Human-based computation games

🔗 Video games 🔗 Computer science 🔗 Science

A human-based computation game or game with a purpose (GWAP) is a human-based computation technique of outsourcing steps within a computational process to humans in an entertaining way (gamification).

Luis von Ahn first proposed the idea of "human algorithm games", or games with a purpose (GWAPs), in order to harness human time and energy for addressing problems that computers cannot yet tackle on their own. He believes that human intellect is an important resource and contribution to the enhancement of computer processing and human-computer interaction. He argues that games constitute a general mechanism for using brainpower to solve open computational problems. In this technique, human brains are compared to processors in a distributed system, each performing a small task of a massive computation. However, humans require an incentive to become part of a collective computation. Online games are used as a means to encourage participation in the process.

The tasks presented in these games are usually trivial for humans, but difficult for computers. These tasks include labeling images, transcribing ancient texts, common sense or human experience based activities, and more. Human-based computation games motivate people through entertainment rather than an interest in solving computation problems. This makes GWAPs more appealing to a larger audience. GWAPs can be used to help build the semantic web, annotate and classify collected data, crowdsource general knowledge, and improve other general computer processes. GWAPs have a vast range of applications in a variety of areas such as security, computer vision, Internet accessibility, adult content filtering, and Internet search. In applications such as these, games with a purpose have lowered the cost of annotating data and increased the level of human participation.

Discussed on

🔗 Lisp (Book) (1989)

🔗 Computer science 🔗 Books

LISP is a university textbook on the Lisp programming language, written by Patrick Henry Winston and Berthold Klaus Paul Horn. It was first published in 1981, and the third edition of the book was released in 1989. The book is intended to introduce the Lisp programming language and its applications.

Discussed on

🔗 Andreas Raab passed away

🔗 Computer science

The Squeak programming language is a dialect of Smalltalk. It is object-oriented, class-based, and reflective.

It was derived directly from Smalltalk-80 by a group at Apple Computer that included some of the original Smalltalk-80 developers. Its development was continued by the same group at Walt Disney Imagineering, where it was intended for use in internal Disney projects. The group later moved on, supported in turn by HP Labs, SAP Labs, and most recently Y Combinator.

Squeak is cross-platform. Programs produced on one platform run bit-identically on all other platforms, and versions are available for many platforms, including Windows, macOS, and Linux. The Squeak system includes code for generating a new version of the virtual machine (VM) on which it runs, as well as a VM simulator written in Squeak. For these reasons, it is easily ported.

Discussed on

🔗 Pareto Efficiency

🔗 Computer science 🔗 Economics 🔗 Engineering 🔗 Gender Studies

Pareto efficiency or Pareto optimality is a situation where no action or allocation is available that makes one individual better off without making another worse off. The concept is named after Vilfredo Pareto (1848–1923), Italian civil engineer and economist, who used the concept in his studies of economic efficiency and income distribution. The following three concepts are closely related:

  • Given an initial situation, a Pareto improvement is a new situation where some agents will gain, and no agents will lose.
  • A situation is called Pareto-dominated or Pareto-inefficient if there is some possible Pareto improvement that has not been made.
  • A situation is called Pareto-optimal or Pareto-efficient if no change could lead to improved satisfaction for some agent without some other agent losing or, equivalently, if there is no scope for further Pareto improvement (in other words, the situation is not Pareto-dominated).

The Pareto front (also called Pareto frontier or Pareto set) is the set of all Pareto-efficient situations.
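
For a finite set of options, the Pareto front can be computed straight from the definition: keep exactly those options that no other option dominates, where "dominates" means at least as good in every criterion and strictly better in at least one. A small Python sketch, assuming higher scores are better and using invented (speed, fuel economy) scores:

    def dominates(a, b):
        """True if option a is at least as good as b in every criterion
        and strictly better in at least one (higher is better)."""
        return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

    def pareto_front(options):
        """Options that are not Pareto-dominated by any other option."""
        return [p for p in options
                if not any(dominates(q, p) for q in options if q is not p)]

    designs = [(9, 2), (7, 6), (4, 7), (3, 3)]
    print(pareto_front(designs))   # [(9, 2), (7, 6), (4, 7)] -- (3, 3) is dominated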

Pareto originally used the word "optimal" for the concept, but because it describes only a situation in which a limited number of people can be made better off under finite resources, and does not take equality or social well-being into account, it is in effect a definition of efficiency, and "efficiency" captures it better.

In addition to the context of efficiency in allocation, the concept of Pareto efficiency also arises in the context of efficiency in production vs. x-inefficiency: a set of outputs of goods is Pareto-efficient if there is no feasible re-allocation of productive inputs such that output of one product increases while the outputs of all other goods either increase or remain the same.

Pareto efficiency is measured along the production possibility frontier (PPF), which is a graphical representation of all the possible options of output for two products that can be produced using all factors of production.

Besides economics, the notion of Pareto efficiency has been applied to the selection of alternatives in engineering and biology. Each option is first assessed, under multiple criteria, and then a subset of options is ostensibly identified with the property that no other option can categorically outperform the specified option. It is a statement of impossibility of improving one variable without harming other variables in the subject of multi-objective optimization (also termed Pareto optimization).

Discussed on