3-Statistics-Distribution

distribution statistics

Measurement values can have frequencies {distribution, population} {frequency distribution} {theoretical distribution}, such as frequencies at different ages.

binomial distribution

Distributions {binomial distribution} can give probabilities for series of events that each have two possible outcomes. Terms equal (n! / (r! * (n - r)!)) * p^r * (1 - p)^(n - r), where n is event number, r is favorable-outcome number, and p is favorable-outcome probability. Mean equals n*p. Variance equals n * p * (1 - p).
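
A minimal Python sketch (names and values illustrative, not from the source) checks these formulas:

  from math import comb

  def binomial_term(n, r, p):
      # Probability of exactly r favorable outcomes in n two-outcome events.
      return comb(n, r) * p**r * (1 - p)**(n - r)

  n, p = 10, 0.3
  terms = [binomial_term(n, r, p) for r in range(n + 1)]
  assert abs(sum(terms) - 1.0) < 1e-12              # terms sum to 1
  mean = sum(r * t for r, t in zip(range(n + 1), terms))
  var = sum((r - mean)**2 * t for r, t in zip(range(n + 1), terms))
  print(mean, n * p)                                # both 3.0
  print(var, n * p * (1 - p))                       # both 2.1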

ratio

Small p with caret (^) {p-hat} denotes ratio of subset X to sample size N: p-hat = X/N. p-hat standard deviation = (p * (1 - p) / N)^0.5 or (N * p * (1 - p))^0.5 / N, if p = X/N and 1 - p = (N - X) / N.
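
A short Python sketch (made-up values) confirms the two standard-deviation forms agree:

  from math import sqrt

  X, N = 30, 100          # subset size and sample size (illustrative values)
  p = X / N
  sd1 = sqrt(p * (1 - p) / N)
  sd2 = sqrt(N * p * (1 - p)) / N
  assert abs(sd1 - sd2) < 1e-15
  print(sd1)              # about 0.0458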

bivariate distribution

Distributions {bivariate distribution} can have two random variables. The average value of (x1 - u1) * (x2 - u2), where x1 and x2 are paired values and u1 and u2 are their means, measures covariance.
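
A Python sketch of that covariance average (data made up for illustration):

  def covariance(xs1, xs2):
      # Average of (x1 - u1) * (x2 - u2) over paired values.
      u1 = sum(xs1) / len(xs1)
      u2 = sum(xs2) / len(xs2)
      return sum((a - u1) * (b - u2) for a, b in zip(xs1, xs2)) / len(xs1)

  print(covariance([1, 2, 3, 4], [2, 4, 6, 8]))   # 2.5: variables rise together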

hypergeometric distribution

N things can have x of one kind and N - x of another kind {hypergeometric distribution}, with samples of R things drawn without replacement. Mean equals R*p, where p = x/N is favorable-outcome probability and R is sample size. Variance equals R * p * (1 - p) * ((N - R) / (N - 1)). Terms equal (x! / (x1! * (x - x1)!)) * ((N - x)! / ((R - x1)! * (N - x - R + x1)!)) / (N! / (R! * (N - R)!)), where x1 is the number of first-kind things in the sample.
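
A Python sketch (values illustrative) checks term sum, mean, and variance:

  from math import comb

  def hypergeometric_term(N, x, R, x1):
      # N things, x of one kind; draw R; probability that x1 are of that kind.
      return comb(x, x1) * comb(N - x, R - x1) / comb(N, R)

  N, x, R = 50, 20, 10
  p = x / N
  terms = [hypergeometric_term(N, x, R, x1) for x1 in range(R + 1)]
  mean = sum(x1 * t for x1, t in zip(range(R + 1), terms))
  var = sum((x1 - mean)**2 * t for x1, t in zip(range(R + 1), terms))
  print(mean, R * p)                                   # both 4.0
  print(var, R * p * (1 - p) * (N - R) / (N - 1))      # both about 1.959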

comparisons

Hypergeometric distributions approach binomial distributions, if number of things is large and sample fraction R/N is less than 0.1. Hypergeometric distributions approach Poisson distributions, if number of things is large and favorable-thing number divided by thing number is less than 0.1. Hypergeometric distributions approach normal distributions, if mean is greater than or equal to four.

normal distribution

Symmetrical distributions {normal distribution} {normal curve} {Gaussian curve} {Gaussian distribution} over continuous domain can have highest frequencies near mean and lowest frequencies farthest from mean. y = (1 / (s * (2 * pi)^0.5)) * (e^(-(x - u)^2 / (2 * s^2))), where x is domain value, y is frequency, u is mean, and s is standard deviation.
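
A Python sketch of the same density formula (values illustrative):

  from math import exp, pi, sqrt

  def normal_density(x, u, s):
      # Gaussian curve: frequency y at domain value x, mean u, standard deviation s.
      return (1 / (s * sqrt(2 * pi))) * exp(-(x - u)**2 / (2 * s**2))

  print(normal_density(0, 0, 1))   # peak of standard normal, about 0.3989
  print(normal_density(2, 0, 1))   # lower frequency two deviations out, about 0.0540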

median

In normal distributions, mean equals mode equals median.

approximations

Non-normal distributions can transform to approximately normal distributions using square root of x or logarithm of x.

purposes

Normal distribution models random errors {error curve}. Normal distributions result from measurements that have many factors or random errors. For example, height results from genetics, diet, exercise, and so on, and has normal distribution.

Passing inputs through sigmoidal functions that have different thresholds, and then taking differences between the outputs, yields bell-shaped, approximately Gaussian curves.

theorem

If many random same-size samples come from a large population, sample sums make an approximately normal distribution {central limit theorem, normal distribution}, as do sample means, even if the population distribution is not normal.
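
A Python simulation (illustrative) shows the effect with a deliberately non-normal, uniform population:

  import random

  # Draw many same-size samples from a uniform population; the sample means
  # cluster symmetrically around the population mean.
  random.seed(1)
  means = [sum(random.random() for _ in range(30)) / 30 for _ in range(10000)]
  grand_mean = sum(means) / len(means)
  print(round(grand_mean, 3))      # near 0.5, the population mean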

mean

The mean of the sample means is an unbiased estimate of the population mean.

Pascal triangle

If binomial coefficients are in a triangle {Pascal's triangle} {Pascal triangle}, so the nth row has the coefficients for n, each coefficient is the sum of the two nearest coefficients in the preceding row. Pascal's triangle is 1 / 1 1 / 1 2 1 / 1 3 3 1 / 1 4 6 4 1 / ...
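
A Python sketch builds rows by that sum rule (function name illustrative):

  def pascal_rows(count):
      # Each interior entry is the sum of the two nearest entries in the row above.
      row = [1]
      for _ in range(count):
          yield row
          row = [1] + [a + b for a, b in zip(row, row[1:])] + [1]

  for row in pascal_rows(5):
      print(row)    # [1], [1, 1], [1, 2, 1], [1, 3, 3, 1], [1, 4, 6, 4, 1]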

Poisson distribution

Asymmetrical distributions {Poisson distribution, statistics} can have most numbers near mean. Domain starts at zero and is discrete (counts 0, 1, 2, ...). Mean and variance equal n*p. Terms equal (u^r / r!) * e^-u, where r is favorable-outcome number, from zero to infinity, u equals n*p, p is favorable-outcome probability, and n is event number. Poisson distribution is binomial-distribution limit, as p goes to zero and n goes to infinity with n*p held constant.
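
A Python sketch (u chosen for illustration) checks that mean and variance both equal u:

  from math import exp, factorial

  def poisson_term(r, u):
      # Probability of r favorable outcomes when the mean count is u = n * p.
      return (u**r / factorial(r)) * exp(-u)

  u = 3.0
  terms = [poisson_term(r, u) for r in range(60)]   # tail beyond 60 is negligible
  mean = sum(r * t for r, t in zip(range(60), terms))
  var = sum((r - mean)**2 * t for r, t in zip(range(60), terms))
  print(round(mean, 6), round(var, 6))              # both approach u = 3.0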

standard score

z scores can convert to scores {T score} {standard score} typically between 0 and 100: T = 50 + z*10. T score has mean 50 and standard deviation 10.

z distribution

Normal-distribution variable can change from x to z = (x - u) / s {standard measure} {z distribution} {z score}, where u equals mean and s equals standard deviation. New mean equals zero, and new standard deviation equals one. z score measures number of standard deviations from mean.
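
A Python sketch (test-score values illustrative) converts a raw value to a z score and then to a T score:

  def z_score(x, u, s):
      # Number of standard deviations that x lies from the mean u.
      return (x - u) / s

  def t_score(z):
      # Rescale z to mean 50 and standard deviation 10.
      return 50 + z * 10

  z = z_score(130, 100, 15)     # e.g. a score of 130, mean 100, deviation 15
  print(z, t_score(z))          # 2.0 and 70.0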

3-Statistics-Distribution-Markov Process

Markov process

Next outcome or state probability can depend only on current outcome or state, not on the whole earlier sequence {Markov process}. Higher-order processes can also depend on previous-event order. Transition graphs show the transitions needed to reach all states.
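
A Python sketch (transition probabilities made up for illustration) simulates such a chain:

  import random

  # Two-state Markov chain: next-state probability depends only on current state.
  # Each row of the transition table gives probabilities of moving to each state.
  transition = {"A": [("A", 0.9), ("B", 0.1)],
                "B": [("A", 0.5), ("B", 0.5)]}

  def step(state):
      states, weights = zip(*transition[state])
      return random.choices(states, weights)[0]

  random.seed(0)
  state, counts = "A", {"A": 0, "B": 0}
  for _ in range(100000):
      state = step(state)
      counts[state] += 1
  print(counts)   # near the stationary split: A about 5/6, B about 1/6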

Markov chain Monte Carlo method

Taking samples can estimate transition matrices, or using transition matrices can generate data points {Markov chain Monte Carlo method} (MCMC). MCMC includes jump MCMC, reversible-jump MCMC, and birth-death sampling.

Markov-modulated Poisson process

Hidden Markov models {Markov-modulated Poisson process} can have continuous time.

autocorrelation function

Functions {autocorrelation function} (ACF) can measure correlation between process values at different time lags, indicating whether a process description needs extra dimensions or parameters.

autoregressive integrated moving average

An average {autoregressive integrated moving average} (ARIMA) can vary around means set by hidden Markov chains.

Bayesian information criterion

Minus two times the Schwarz criterion {Bayesian information criterion} (BIC) helps estimate hidden-Markov-model number of states, balancing likelihood against free-parameter number.
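
A Python sketch of the standard BIC formula, with made-up log-likelihoods:

  from math import log

  def bic(log_likelihood, k, n):
      # Penalize maximized log-likelihood by free-parameter number k and sample size n.
      return -2 * log_likelihood + k * log(n)

  # Lower BIC is better; compare candidate state numbers this way (illustrative values).
  print(bic(-120.0, 4, 200))   # about 261.2
  print(bic(-118.5, 9, 200))   # about 284.7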

Bayesian model

Models {Bayesian model} can estimate finite-state Markov chains.

direct Gibbs sampler

Methods {direct Gibbs sampler, sampling} can sample states in hidden Markov chains.

EM algorithm

Algorithms {EM algorithm} can alternate expectation (E) steps, which compute expected hidden values given current parameters, and maximization (M) steps, which re-estimate parameters to maximize likelihood.
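
As a sketch of the idea, a minimal Python EM for a two-Gaussian mixture, estimating only the component means with fixed spread and equal weights (all names and data illustrative):

  from math import exp, pi, sqrt

  def gauss(x, u, s):
      return exp(-(x - u)**2 / (2 * s**2)) / (s * sqrt(2 * pi))

  def em_two_gaussians(data, u1, u2, s=1.0, iterations=50):
      # E step: expected membership of each point in each component;
      # M step: re-estimate each component mean from those memberships.
      for _ in range(iterations):
          w1 = [gauss(x, u1, s) / (gauss(x, u1, s) + gauss(x, u2, s)) for x in data]
          u1 = sum(w * x for w, x in zip(w1, data)) / sum(w1)
          u2 = sum((1 - w) * x for w, x in zip(w1, data)) / sum(1 - w for w in w1)
      return u1, u2

  data = [0.1, -0.2, 0.3, 4.9, 5.2, 5.1]
  print(em_two_gaussians(data, 1.0, 4.0))   # means near 0.07 and 5.07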

finite mixture model

Hidden Markov models {finite mixture model} can have equal transition-matrix rows.

forward-backward Gibbs sampler

Stochastic forward recursions and stochastic and non-stochastic backward recursions {forward-backward Gibbs sampler, sampling} can sample distribution.

forward-backward recursion

Recursion {forward-backward recursion, sampling} can adjust distribution sampling. It is similar to Kalman-filter prediction and smoothing steps in state-space model.

hidden Markov model

Finite-state Markov chains with unobserved states {hidden Markov model} (HMM) generate observed samples through state-dependent distributions.

model

Hidden Markov models are graphical models. Bayesian models can estimate their finite-state Markov chains.

purposes

Hidden Markov chains model signal processing, biology, genetics, ecology, image analysis, economics, and network security. Such situations compare error-free or non-criminal distributions to error or criminal distributions.

transition

Hidden-distribution Markov chain has initial distribution and time-constant transition matrix.

calculations

Calculations include estimating parameters by recursion {forward-backward recursion, Markov} {forward-backward Gibbs sampler, Markov} {direct Gibbs sampler, Markov}, filling missing data, finding state-space size, preventing switching, assessing validity, and testing convergence by likelihood recursion.

hidden semi-Markov model

Models {hidden semi-Markov model} can maintain distribution states over variable time durations.

Langevin diffusion

In Monte Carlo simulations, if smaller particles surround a particle, where will the particle be at a later time? The answer uses white noise {Langevin equation} {Langevin diffusion}. At long times, mean squared distance equals 6 * D * t, where the diffusion coefficient D = (k * T) / (m * lambda), k is Boltzmann constant, T is absolute temperature, m is particle mass, lambda is friction coefficient, and t is time.
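
A Python sketch, assuming the overdamped white-noise picture above (all values illustrative), checks that mean squared distance grows as 6 * D * t:

  import random

  # Overdamped Langevin sketch: each coordinate step adds white noise with
  # variance 2 * D * dt, so mean squared distance grows as 6 * D * t in 3D.
  random.seed(2)
  D, dt, steps, walkers = 1.0, 0.01, 500, 500
  sd = (2 * D * dt) ** 0.5
  msd = 0.0
  for _ in range(walkers):
      x = y = z = 0.0
      for _ in range(steps):
          x += random.gauss(0, sd)
          y += random.gauss(0, sd)
          z += random.gauss(0, sd)
      msd += x * x + y * y + z * z
  print(msd / walkers, 6 * D * dt * steps)   # both near 30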

likelihood recursion

Recursion {likelihood recursion} can calculate joint likelihood averaged over time, typically using logarithms.

marginal estimation

Methods {marginal estimation} can estimate hidden Markov distribution and probability for each state at each time separately.

maximum a posteriori

Methods {maximum a posteriori estimation} (MAP) can estimate hidden Markov distribution and probability by finding the jointly most probable state sequence.

Metropolis-Hastings

Methods {Metropolis-Hastings sampler} can propose samples from a multivariate normal distribution centered on the current data point, accepting or rejecting each proposal with the Hastings probability.
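
A minimal Python sketch with a symmetric normal proposal, so the Hastings ratio reduces to a density ratio (target and values illustrative):

  import random
  from math import exp

  def metropolis_hastings(log_density, start, proposal_sd, n):
      # Propose from a normal centered on the current point; accept with the
      # Hastings probability (symmetric proposal, so only the density ratio matters).
      samples, x = [], start
      for _ in range(n):
          candidate = random.gauss(x, proposal_sd)
          delta = log_density(candidate) - log_density(x)
          if delta >= 0 or random.random() < exp(delta):
              x = candidate
          samples.append(x)
      return samples

  random.seed(3)
  draws = metropolis_hastings(lambda x: -x * x / 2, 0.0, 1.0, 50000)
  print(sum(draws) / len(draws))   # near 0, mean of the standard normal target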

predictive distribution

Distributions {predictive distribution} can measure how well models predict actual data.

Schwarz criterion

Asymptotic approximations {Schwarz criterion} to logarithm of state-number probability can depend on likelihood maximized over transition matrix, sample size, and free-parameter number.

state-space model

Models {state-space model} can use Gaussian distributions for hidden Markov models.

Viterbi algorithm

Algorithms {Viterbi algorithm} can find the most likely hidden-state trajectory, using forward-backward recursion with maximization rather than averaging, and can estimate hidden Markov distribution.
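
A minimal Python Viterbi sketch (states, probabilities, and observations made up for illustration):

  def viterbi(obs, states, start_p, trans_p, emit_p):
      # Forward pass with maximization: best probability of each state at each time,
      # stored with its best predecessor.
      best = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
      for o in obs[1:]:
          best.append({s: max((best[-1][r][0] * trans_p[r][s] * emit_p[s][o], r)
                              for r in states) for s in states})
      # Backward pass: trace the most likely state trajectory.
      state = max(states, key=lambda s: best[-1][s][0])
      path = [state]
      for layer in reversed(best[1:]):
          state = layer[state][1]
          path.append(state)
      return path[::-1]

  states = ("Rain", "Dry")
  start = {"Rain": 0.6, "Dry": 0.4}
  trans = {"Rain": {"Rain": 0.7, "Dry": 0.3}, "Dry": {"Rain": 0.4, "Dry": 0.6}}
  emit = {"Rain": {"umbrella": 0.9, "none": 0.1},
          "Dry": {"umbrella": 0.2, "none": 0.8}}
  print(viterbi(["umbrella", "umbrella", "none"], states, start, trans, emit))
  # ['Rain', 'Rain', 'Dry']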
