Topic: Mathematics (Page 19)
You are looking at all articles with the topic "Mathematics". We found 232 matches.
🔗 Itô Calculus
Itô calculus, named after Kiyosi Itô, extends the methods of calculus to stochastic processes such as Brownian motion (see Wiener process). It has important applications in mathematical finance and stochastic differential equations.
The central concept is the Itô stochastic integral, a stochastic generalization of the Riemann–Stieltjes integral in analysis. The integrands and the integrators are now stochastic processes:

Yₜ = ∫₀ᵗ Hₛ dXₛ,
where H is a locally square-integrable process adapted to the filtration generated by X (Revuz & Yor 1999, Chapter IV), and X is a Brownian motion or, more generally, a semimartingale. The result of the integration is then another stochastic process; concretely, the integral from 0 to any particular t is a random variable, defined as a limit of a certain sequence of random variables. The paths of Brownian motion do not satisfy the requirements for applying the standard techniques of calculus: with a stochastic process as integrand, the Itô stochastic integral amounts to an integral with respect to a function that is not differentiable at any point and has infinite variation over every time interval. The main insight is that the integral can nevertheless be defined as long as the integrand H is adapted, which loosely speaking means that its value at time t can depend only on information available up to that time.

Roughly speaking, one chooses a sequence of partitions of the interval from 0 to t and constructs Riemann sums, each of which uses a particular realization of the integrator. It is crucial which point in each of the small intervals is used to compute the value of the function; typically, the left end of the interval is used. The limit is then taken in probability as the mesh of the partition goes to zero. Numerous technical details have to be taken care of to show that this limit exists and is independent of the particular sequence of partitions.
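To make the construction concrete, here is a minimal numerical sketch in Python (parameters and names are ours, chosen for illustration): it approximates the integral of W with respect to W along a simulated Brownian path, once with left endpoints and once with right endpoints. The left-endpoint sums converge to the Itô integral (W_T² − T)/2, while the right-endpoint sums converge to (W_T² + T)/2, showing why the choice of evaluation point matters.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

T, n = 1.0, 200_000                         # horizon and number of partition intervals
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), size=n)   # independent Brownian increments
W = np.concatenate(([0.0], np.cumsum(dW)))  # sampled Brownian path W_0, ..., W_T

# Riemann sums for "integral of W dW" with different evaluation points:
left = np.sum(W[:-1] * dW)    # left endpoints  -> Ito integral
right = np.sum(W[1:] * dW)    # right endpoints -> a different (anticipating) limit

WT = W[-1]
print(f"left-endpoint sum : {left:+.5f}   (W_T^2 - T)/2 = {(WT**2 - T)/2:+.5f}")
print(f"right-endpoint sum: {right:+.5f}   (W_T^2 + T)/2 = {(WT**2 + T)/2:+.5f}")
```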
Important results of Itô calculus include the integration by parts formula and Itô's lemma, which is a change of variables formula. These differ from the formulas of standard calculus, due to quadratic variation terms.
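For instance, for a twice continuously differentiable function f and a Brownian motion W, Itô's lemma reads

df(Wₜ) = f′(Wₜ) dWₜ + ½ f″(Wₜ) dt,

and the extra ½ f″ dt term, which has no counterpart in the classical chain rule, is exactly the quadratic-variation contribution, since [W]ₜ = t.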
In mathematical finance, this evaluation strategy for the integral is interpreted as first deciding what to do, then observing the change in prices. The integrand is how much stock we hold, the integrator represents the movement of prices, and the integral is how much money we have in total, including the value of our stock, at any given moment. The prices of stocks and other traded financial assets can be modeled by stochastic processes such as Brownian motion or, more often, geometric Brownian motion (see Black–Scholes). The Itô stochastic integral then represents the payoff of a continuous-time trading strategy consisting of holding an amount Hₜ of the stock at time t. In this situation, the condition that H is adapted corresponds to the necessary restriction that the trading strategy can make use only of the information available at any given time. This rules out unlimited gains through clairvoyance: buying the stock just before each uptick in the market and selling before each downtick. Similarly, the condition that H is adapted implies that the stochastic integral will not diverge when calculated as a limit of Riemann sums (Revuz & Yor 1999, Chapter IV).
Discussed on
- "Itô Calculus" | 2023-08-03 | 22 Upvotes 3 Comments
🔗 A function that represents all primes
In number theory, a formula for primes is a formula generating the prime numbers, exactly and without exception. No such formula which is efficiently computable is known. A number of constraints are known, showing what such a "formula" can and cannot be.
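One classical example of what such a formula can look like is Willans' formula (1964), which produces the n-th prime exactly but is hopelessly inefficient, since it hides a primality test for every j inside a trigonometric expression. The sketch below (Python, our own function names) evaluates that inner term exactly via Wilson's theorem instead of floating-point cosines:

```python
from math import factorial

def wilson_indicator(j: int) -> int:
    # Equals floor(cos^2(pi * ((j-1)! + 1) / j)) in Willans' formula:
    # 1 if j == 1 or j is prime (Wilson's theorem), else 0.
    return 1 if j == 1 or (factorial(j - 1) + 1) % j == 0 else 0

def willans_prime(n: int) -> int:
    # p_n = 1 + sum_{i=1}^{2^n} floor( (n / sum_{j=1}^{i} indicator(j))^(1/n) );
    # the outer floor is 1 exactly when pi(i) < n, and 0 otherwise.
    total, running = 0, 0
    for i in range(1, 2**n + 1):
        running += wilson_indicator(i)   # running == 1 + pi(i)
        total += 1 if running <= n else 0
    return 1 + total

print([willans_prime(n) for n in range(1, 6)])  # [2, 3, 5, 7, 11]
```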
Discussed on
- "A function that represents all primes" | 2019-10-05 | 16 Upvotes 8 Comments
🔗 Pólya conjecture
In number theory, the Pólya conjecture stated that "most" (i.e., 50% or more) of the natural numbers less than any given number have an odd number of prime factors, counted with multiplicity. The conjecture was posited by the Hungarian mathematician George Pólya in 1919, and proved false in 1958 by C. Brian Haselgrove.
The size of the smallest counterexample (n = 906,150,257) is often used to show how a conjecture can be true for many cases and still be false, providing an illustration of the strong law of small numbers.
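The conjecture is easy to test numerically far beyond hand computation, though the cutoff below (10⁶, our choice) is well short of the first counterexample. A brief sketch using the summatory Liouville function L(n), which stays ≤ 0 for all 2 ≤ n ≤ N exactly when the conjecture holds up to N:

```python
N = 1_000_000  # arbitrary cutoff, far below the first counterexample near 9.06e8

# Smallest-prime-factor sieve, used to count prime factors with multiplicity.
spf = list(range(N + 1))
for p in range(2, int(N**0.5) + 1):
    if spf[p] == p:                       # p is prime
        for m in range(p * p, N + 1, p):
            if spf[m] == m:
                spf[m] = p

L = 1          # L(1) = lambda(1) = +1
worst = None   # largest value of L(n) seen for n >= 2
for k in range(2, N + 1):
    omega, m = 0, k
    while m > 1:                          # factor k via the sieve
        p = spf[m]
        while m % p == 0:
            m //= p
            omega += 1
    L += -1 if omega % 2 else 1           # Liouville lambda(k) = (-1)^omega
    worst = L if worst is None else max(worst, L)

print(f"max of L(n) for 2 <= n <= {N}: {worst}")  # <= 0, as the conjecture predicts
```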
Discussed on
- "Pólya conjecture" | 2009-08-29 | 13 Upvotes 10 Comments
🔗 John von Neumann
John von Neumann (Hungarian: Neumann János Lajos, pronounced [ˈnɒjmɒn ˈjaːnoʃ ˈlɒjoʃ]; December 28, 1903 – February 8, 1957) was a Hungarian-American mathematician, physicist, computer scientist, engineer and polymath. Von Neumann was generally regarded as the foremost mathematician of his time and said to be "the last representative of the great mathematicians", one who integrated both pure and applied sciences.
He made major contributions to a number of fields, including mathematics (foundations of mathematics, functional analysis, ergodic theory, representation theory, operator algebras, geometry, topology, and numerical analysis), physics (quantum mechanics, hydrodynamics, and quantum statistical mechanics), economics (game theory), computing (Von Neumann architecture, linear programming, self-replicating machines, stochastic computing), and statistics.
He was a pioneer of the application of operator theory to quantum mechanics in the development of functional analysis, and a key figure in the development of game theory and the concepts of cellular automata, the universal constructor and the digital computer.
He published over 150 papers in his life: about 60 in pure mathematics, 60 in applied mathematics, 20 in physics, and the remainder on special mathematical subjects or non-mathematical ones. His last work, an unfinished manuscript written while he was in hospital, was later published in book form as The Computer and the Brain.
His analysis of the structure of self-replication preceded the discovery of the structure of DNA. In a short list of facts about his life he submitted to the National Academy of Sciences, he stated, "The part of my work I consider most essential is that on quantum mechanics, which developed in Göttingen in 1926, and subsequently in Berlin in 1927–1929. Also, my work on various forms of operator theory, Berlin 1930 and Princeton 1935–1939; on the ergodic theorem, Princeton, 1931–1932."
During World War II, von Neumann worked on the Manhattan Project with theoretical physicist Edward Teller, mathematician Stanisław Ulam and others, solving key problems in the nuclear physics of thermonuclear reactions and the hydrogen bomb. He developed the mathematical models behind the explosive lenses used in the implosion-type nuclear weapon, and coined the term "kiloton" (of TNT) as a measure of the explosive force generated.
After the war, he served on the General Advisory Committee of the United States Atomic Energy Commission, and consulted for a number of organizations, including the United States Air Force, the Army's Ballistic Research Laboratory, the Armed Forces Special Weapons Project, and the Lawrence Livermore National Laboratory. As a Hungarian émigré, concerned that the Soviets would achieve nuclear superiority, he designed and promoted the policy of mutually assured destruction to limit the arms race.
Discussed on
- "John von Neumann" | 2015-06-26 | 20 Upvotes 3 Comments
🔗 Abstract Nonsense
In mathematics, abstract nonsense, general abstract nonsense, generalized abstract nonsense, and general nonsense are nonderogatory terms used by mathematicians to describe long, theoretical parts of a proof they skip over when readers are expected to be familiar with them. These terms are mainly used for abstract methods related to category theory and homological algebra. More generally, "abstract nonsense" may refer to a proof that relies on category-theoretic methods, or even to the study of category theory itself.
Discussed on
- "Abstract Nonsense" | 2023-08-26 | 19 Upvotes 4 Comments
🔗 Weierstrass Function
In mathematics, the Weierstrass function is an example of a real-valued function that is continuous everywhere but differentiable nowhere. It is an example of a fractal curve. It is named after its discoverer Karl Weierstrass.
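Concretely, Weierstrass's original example is the series

f(x) = Σₙ₌₀^∞ aⁿ cos(bⁿ π x),

where 0 < a < 1, b is a positive odd integer, and ab > 1 + 3π/2 (the conditions from Weierstrass's 1872 paper; Hardy later weakened them to ab ≥ 1).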
The Weierstrass function has historically served the role of a pathological function, being the first published example (1872) specifically concocted to challenge the notion that every continuous function is differentiable except on a set of isolated points. Weierstrass's demonstration that continuity did not imply almost-everywhere differentiability upended mathematics, overturning several proofs that relied on geometric intuition and vague definitions of smoothness. These types of functions were denounced by contemporaries: Henri Poincaré famously described them as "monsters" and called Weierstrass' work "an outrage against common sense", while Charles Hermite wrote that they were a "lamentable scourge". The functions were difficult to visualize until the arrival of computers in the next century, and the results did not gain wide acceptance until practical applications such as models of Brownian motion necessitated infinitely jagged functions (nowadays known as fractal curves).
Discussed on
- "Weierstrass Function" | 2023-08-22 | 21 Upvotes 2 Comments
🔗 Bernoulli number
In mathematics, the Bernoulli numbers Bn are a sequence of rational numbers which occur frequently in number theory. The Bernoulli numbers appear in (and can be defined by) the Taylor series expansions of the tangent and hyperbolic tangent functions, in Faulhaber's formula for the sum of m-th powers of the first n positive integers, in the Euler–Maclaurin formula, and in expressions for certain values of the Riemann zeta function.
The first few Bernoulli numbers are B₀ = 1, B₁ = ∓1/2, B₂ = 1/6, B₃ = 0, B₄ = −1/30, B₅ = 0, B₆ = 1/42. Two conventions are used in the literature, denoted here by Bₙ⁻ and Bₙ⁺; they differ only for n = 1, where B₁⁻ = −1/2 and B₁⁺ = +1/2. For every odd n > 1, Bₙ = 0. For every even n > 0, Bₙ is negative if n is divisible by 4 and positive otherwise. The Bernoulli numbers are special values of the Bernoulli polynomials Bₙ(x), with Bₙ⁻ = Bₙ(0) and Bₙ⁺ = Bₙ(1) (Weisstein 2016).
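A short sketch (Python, exact rational arithmetic) that computes the Bₙ⁻ values from the standard recurrence Σₖ₌₀ᵐ C(m+1, k) Bₖ = 0 for m ≥ 1:

```python
from fractions import Fraction
from math import comb

def bernoulli(n_max: int) -> list:
    # B_0 = 1; for m >= 1, B_m = -1/(m+1) * sum_{k=0}^{m-1} C(m+1, k) * B_k.
    # This recurrence yields the B^- convention, i.e. B_1 = -1/2.
    B = [Fraction(1)]
    for m in range(1, n_max + 1):
        s = sum(comb(m + 1, k) * B[k] for k in range(m))
        B.append(-s / (m + 1))
    return B

print([str(b) for b in bernoulli(8)])
# ['1', '-1/2', '1/6', '0', '-1/30', '0', '1/42', '0', '-1/30']
```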
The Bernoulli numbers were discovered around the same time by the Swiss mathematician Jacob Bernoulli, after whom they are named, and independently by Japanese mathematician Seki Kōwa. Seki's discovery was posthumously published in 1712 (Selin 1997, p. 891; Smith & Mikami 1914, p. 108) in his work Katsuyo Sampo; Bernoulli's, also posthumously, in his Ars Conjectandi of 1713. Ada Lovelace's note G on the Analytical Engine from 1842 describes an algorithm for generating Bernoulli numbers with Babbage's machine (Menabrea 1842, Note G). As a result, the Bernoulli numbers have the distinction of being the subject of the first published complex computer program.
🔗 Shor's Algorithm
Shor's algorithm is a polynomial-time quantum computer algorithm for integer factorization. Informally, it solves the following problem: given an integer N, find its prime factors. It was invented in 1994 by the American mathematician Peter Shor.
On a quantum computer, to factor an integer N, Shor's algorithm runs in polynomial time (the time taken is polynomial in log N, the size of the integer given as input). Specifically, it takes quantum gates of order O((log N)²(log log N)(log log log N)) using fast multiplication, thus demonstrating that the integer-factorization problem can be efficiently solved on a quantum computer and is consequently in the complexity class BQP. This is almost exponentially faster than the most efficient known classical factoring algorithm, the general number field sieve, which works in sub-exponential time, O(e^(1.9 (log N)^(1/3) (log log N)^(2/3))). The efficiency of Shor's algorithm is due to the efficiency of the quantum Fourier transform, and modular exponentiation by repeated squarings.
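To make the division of labor concrete, here is a sketch (Python, our own naming; not a quantum implementation) of the classical reduction from factoring to order-finding that Shor's algorithm uses, with the quantum order-finding subroutine replaced by brute force — the one step a quantum computer performs efficiently via the quantum Fourier transform:

```python
from math import gcd
from random import randrange

def find_order(a: int, N: int) -> int:
    # Stand-in for the quantum subroutine: the least r > 0 with a^r = 1 (mod N).
    # Brute force here, so only tiny N are practical.
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_factor(N: int) -> int:
    # Classical reduction; assumes N is odd, composite, and not a prime power.
    while True:
        a = randrange(2, N)
        d = gcd(a, N)
        if d > 1:
            return d                  # lucky: a already shares a factor with N
        r = find_order(a, N)
        if r % 2 == 0:
            y = pow(a, r // 2, N)     # y^2 = 1 (mod N), and y != 1 since r is the order
            if y != N - 1:            # discard the trivial square root -1
                return gcd(y - 1, N)  # nontrivial factor since y != +-1 (mod N)

print(shor_factor(15))  # 3 or 5
```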
If a quantum computer with a sufficient number of qubits could operate without succumbing to quantum noise and other quantum-decoherence phenomena, then Shor's algorithm could be used to break public-key cryptography schemes, such as the widely used RSA scheme. RSA is based on the assumption that factoring large integers is computationally intractable. As far as is known, this assumption is valid for classical (non-quantum) computers; no classical algorithm is known that can factor integers in polynomial time. However, Shor's algorithm shows that factoring integers is efficient on an ideal quantum computer, so it may be feasible to defeat RSA by constructing a large quantum computer. It was also a powerful motivator for the design and construction of quantum computers, and for the study of new quantum-computer algorithms. It has also facilitated research on new cryptosystems that are secure from quantum computers, collectively called post-quantum cryptography.
In 2001, Shor's algorithm was demonstrated by a group at IBM, who factored 15 into 3 × 5, using an NMR implementation of a quantum computer with 7 qubits. After IBM's implementation, two independent groups implemented Shor's algorithm using photonic qubits, emphasizing that multi-qubit entanglement was observed when running the Shor's algorithm circuits. In 2012, the factorization of 15 was performed with solid-state qubits. Also in 2012, the factorization of 21 was achieved, setting the record for the largest integer factored with Shor's algorithm.
Discussed on
- "Shor's algorythm" | 2009-12-18 | 16 Upvotes 6 Comments
🔗 List of unsolved problems in mathematics
Since the Renaissance, every century has seen the solution of more mathematical problems than the century before, yet many mathematical problems, both major and minor, still remain unsolved. These unsolved problems occur in multiple domains, including physics, computer science, algebra, analysis, combinatorics, algebraic, differential, discrete and Euclidean geometries, graph, group, model, number, set and Ramsey theories, dynamical systems, partial differential equations, and more. Some problems may belong to more than one discipline of mathematics and be studied using techniques from different areas. Prizes are often awarded for the solution to a long-standing problem, and lists of unsolved problems (such as the list of Millennium Prize Problems) receive considerable attention.
🔗 Cistercian Numerals (base 10000 digit system)
The medieval Cistercian numerals, or "ciphers" in nineteenth-century parlance, were developed by the Cistercian monastic order in the early thirteenth century at about the time that Arabic numerals were introduced to northwestern Europe. They are more compact than Arabic or Roman numerals, with a single glyph able to indicate any integer from 1 to 9,999.
Digits are based on a horizontal or vertical stave, with the position of the digit on the stave indicating its place value (units, tens, hundreds or thousands). These digits are compounded on a single stave to indicate more complex numbers. The Cistercians eventually abandoned the system in favor of the Arabic numerals, but marginal use outside the order continued until the early twentieth century.
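Arithmetically, a glyph is just the base-10 digit decomposition of the number, rendered at four positions on one stave; a tiny sketch (hypothetical function name, ours) of that decomposition:

```python
def cistercian_components(n: int) -> dict:
    # A Cistercian glyph encodes the four decimal digits of 1..9999 at four
    # positions on a single stave (units, tens, hundreds, thousands).
    if not 1 <= n <= 9999:
        raise ValueError("a single Cistercian glyph covers 1..9999")
    return {
        "units": n % 10,
        "tens": n // 10 % 10,
        "hundreds": n // 100 % 10,
        "thousands": n // 1000,
    }

print(cistercian_components(1993))
# {'units': 3, 'tens': 9, 'hundreds': 9, 'thousands': 1}
```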