The chance of an event occurring, expressed as a number between 0 and 1.
The set of all possible outcomes of an experiment.
A subset of the sample space; a specific outcome or a collection of outcomes.
A variable whose value is a numerical outcome of a random phenomenon.
A random variable that can only take on a finite number of values or a countably infinite number of values.
A random variable that can take on any value within a given range.
An arrangement of all or part of a set of objects in a specific order. The order of arrangement is crucial. For example, the sequences ABC, ACB, and BAC are different permutations. Permutations of all n elements: nPn = n!
Permutations of x elements chosen from n: nPx = n!/(n-x)!
Permutations with some identical elements: n!/(a!·b!·c!·…), where a, b, c, … are the counts of each repeated element. For example, MISSISSIPPI has n = 11 letters with M = 1, I = 4, S = 4, P = 2, giving 11!/(1!·4!·4!·2!) distinct arrangements.
A selection of all or part of a set of objects without regard to the order in which they are selected. The order does not matter. For example, the set {A, B, C} is the same as {C, B, A}; they are considered the same combination. nCx = nPx/x! = n!/(x!(n-x)!)
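The counting formulas above can be checked directly with Python's `math` module (a minimal sketch; the numbers match the MISSISSIPPI example):

```python
from math import comb, factorial, perm

# Permutations of all n elements: nPn = n!
print(perm(3))       # 3! = 6 arrangements of {A, B, C}

# Permutations of x elements chosen from n: nPx = n!/(n-x)!
print(perm(5, 2))    # 5!/3! = 20

# Permutations with repeated elements: MISSISSIPPI
# n = 11 letters; counts M=1, I=4, S=4, P=2
counts = {"M": 1, "I": 4, "S": 4, "P": 2}
denominator = 1
for c in counts.values():
    denominator *= factorial(c)
print(factorial(11) // denominator)  # 34650 distinct arrangements

# Combinations: nCx = n!/(x!(n-x)!)
print(comb(5, 2))    # 10
```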
If at least one of these events must occur. That is, their union covers all possible outcomes of the experiment. Definition: two events A and B are collectively exhaustive if P(A∪B)=1. Example: When flipping a coin, the events "getting heads" and "getting tails" are collectively exhaustive.
The occurrence of one event prevents the occurrence of the other. Definition: P(A∩B)=0. Example: When flipping a single coin, the events "getting heads" and "getting tails" are mutually exclusive.
If the occurrence or non-occurrence of one event does not affect the probability of the occurrence or non-occurrence of the other event: P(A∩B) = P(A)·P(B), or equivalently P(A|B) = P(A) and P(B|A) = P(B). Example: The results of two successive coin flips are independent events.
A function that describes the probability of each possible value of a random variable.
A function that gives the probability of a discrete random variable taking on a specific value.
A function whose integral over a given range gives the probability that a continuous random variable falls within that range.
A function that gives the probability that a random variable is less than or equal to a given value.
The average value of a random variable over many repetitions of the experiment.
A measure of how spread out the values of a random variable are from its expected value.
The square root of the variance; another measure of the spread of a random variable's values.
A measure of the linear relationship between two random variables.
A standardized measure of the linear relationship between two random variables, ranging from -1 to +1.
The probability of an event occurring given that another event has already occurred.
A theorem that describes how to update the probability of an event based on new evidence.
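Bayes' theorem, P(A|B) = P(B|A)·P(A)/P(B), can be applied in a few lines. The diagnostic-test numbers below are assumed for illustration only:

```python
# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B)
# Illustrative numbers (assumed): a test with 99% sensitivity,
# a 5% false-positive rate, and 1% disease prevalence.
p_disease = 0.01
p_pos_given_disease = 0.99
p_pos_given_healthy = 0.05

# Total probability of a positive test (law of total probability)
p_pos = p_pos_given_disease * p_disease + p_pos_given_healthy * (1 - p_disease)

# Posterior probability of disease given a positive test
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(round(p_disease_given_pos, 3))  # 0.167 — far lower than the 99% sensitivity
```

Note how the low prevalence (the prior) pulls the posterior well below the test's sensitivity.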
Two events are independent if the occurrence of one does not affect the probability of the other.
Two events that cannot both occur at the same time.
A subset of a population selected in such a way that each member of the population has an equal chance of being selected.
A theorem stating that the distribution of the sample mean approaches a normal distribution as the sample size increases.
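The central limit theorem can be seen in a short simulation (a sketch using an assumed uniform population; any distribution with finite variance would do):

```python
import random
import statistics

random.seed(42)  # fixed seed so the run is reproducible

# Draw many sample means from a non-normal population (uniform on [0, 1]);
# by the CLT their distribution approaches a normal with mean 0.5 and
# standard deviation sigma/sqrt(n) = sqrt(1/12)/10 ≈ 0.029.
n, trials = 100, 2000
means = [statistics.fmean(random.random() for _ in range(n)) for _ in range(trials)]

print(round(statistics.fmean(means), 2))  # close to the population mean 0.5
print(round(statistics.stdev(means), 3))  # close to 0.029
```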
A bell-shaped probability distribution that is symmetric around its mean.
A normal distribution with a mean of 0 and a standard deviation of 1.
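The standard normal distribution is available in the standard library as `statistics.NormalDist` (a minimal sketch; the IQ-style numbers in the z-score example are assumed):

```python
from statistics import NormalDist

# NormalDist() defaults to mean 0, standard deviation 1 (the standard normal).
std_normal = NormalDist()
print(std_normal.cdf(0))               # 0.5 — half the mass lies below the mean
print(round(std_normal.cdf(1.96), 3))  # 0.975

# Standardizing an observation: z = (x - mu) / sigma
x, mu, sigma = 130, 100, 15  # illustrative values
print((x - mu) / sigma)      # 2.0 — x lies two standard deviations above the mean
```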
Methods for summarizing and presenting data, such as measures of central tendency and dispersion.
The statement being tested in hypothesis testing; often a statement of no effect or no difference.
The statement that is accepted if the null hypothesis is rejected.
Rejecting the null hypothesis when it is true.
Failing to reject the null hypothesis when it is false.
A statistical method used to make inferences about a population based on sample data.
A hypothesis test where the alternative hypothesis specifies a direction (e.g., greater than or less than).
A hypothesis test where the alternative hypothesis does not specify a direction (e.g., not equal to).
A data point that significantly deviates from other data points in a dataset. Can influence statistical analyses and should be carefully considered.
A range of values that is likely to contain the true value of a population parameter.
The probability of rejecting a true null hypothesis.
The probability of observing the data (or more extreme data) if the null hypothesis is true.
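A p-value can be computed exactly for a simple case. The sketch below runs a one-sided exact binomial test on an assumed coin-flip scenario:

```python
from math import comb

# One-sided exact binomial test: probability of observing 8 or more heads
# in 10 flips of a fair coin (null hypothesis: p = 0.5).
n, observed = 10, 8
p_value = sum(comb(n, k) for k in range(observed, n + 1)) / 2**n
print(p_value)  # 0.0546875 — not significant at alpha = 0.05
```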
A probability distribution used to estimate population parameters when the sample size is small or the population standard deviation is unknown.
A probability distribution used in hypothesis testing to determine if there is a significant difference between observed and expected frequencies. Often used in goodness-of-fit tests and tests of independence.
The number of independent pieces of information available to estimate a parameter. Influences the shape of t-distributions and chi-squared distributions.
A probability distribution used to compare the variances of two or more populations or to test the overall significance of a regression model.
The study of mathematical models of strategic interaction among rational agents.
A situation of strategic interaction between two or more players.
The decision-makers in a game.
The complete plan of action for a player in a game, specifying actions for all possible contingencies.
The outcomes or consequences of a game for each player, often represented numerically.
A representation of a game where players' strategies and payoffs are displayed in a matrix or table.
A representation of a game that shows the sequence of moves, information sets, and payoffs. Often depicted as a game tree.
A set of strategies, one for each player, such that no player has an incentive to deviate unilaterally, given the strategies of the other players.
A strategy where a player chooses a single action with certainty.
A strategy where a player chooses actions probabilistically.
A strategy that is always better for a player, regardless of the actions of the other players.
A strategy that is always worse for a player than another strategy, regardless of the actions of the other players.
A classic game theory example illustrating the conflict between individual rationality and collective rationality.
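The Nash-equilibrium condition can be checked directly on a normal-form game. The sketch below finds all pure-strategy equilibria of the Prisoner's Dilemma, using one commonly assumed payoff parameterization:

```python
# Payoff matrices for a 2x2 game: rows = (Cooperate, Defect) for the row
# player, columns likewise for the column player. Payoffs are assumed
# standard Prisoner's Dilemma values (years lost, as negative numbers).
row = [[-1, -3],
       [ 0, -2]]   # row player's payoffs
col = [[-1,  0],
       [-3, -2]]   # column player's payoffs

def pure_nash(row, col):
    """Return all strategy profiles with no profitable unilateral deviation."""
    equilibria = []
    for i in range(2):
        for j in range(2):
            row_best = all(row[i][j] >= row[k][j] for k in range(2))
            col_best = all(col[i][j] >= col[i][k] for k in range(2))
            if row_best and col_best:
                equilibria.append((i, j))
    return equilibria

print(pure_nash(row, col))  # [(1, 1)] — mutual defection
```

Mutual defection is the unique equilibrium even though mutual cooperation gives both players a higher payoff, which is exactly the conflict between individual and collective rationality.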
A game where the total payoffs to all players always sum to zero. One player's gain is always another player's loss.
A game where the sum of payoffs to all players is not always zero. Cooperation can lead to mutually beneficial outcomes.
A game where players can form binding agreements (a cooperative game).
A game where players cannot form binding agreements.
A graphical representation of an extensive-form game.
A set of nodes in a game tree that a player cannot distinguish between.
A game where all players know the history of the game at every decision point.
A game where some players lack complete information about the history of the game at some decision points.
A Nash equilibrium where every subgame is also a Nash equilibrium.
A game played multiple times.
A method of solving extensive-form games by working backward from the end of the game.
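Backward induction can be sketched recursively on a small game tree. The two-stage tree and payoffs below are hypothetical, chosen only to illustrate the procedure:

```python
# A node is either a payoff tuple (terminal) or (player, [children]).
# Hypothetical two-stage game: P1 moves first, then P2 responds.
tree = ("P1", [
    ("P2", [(3, 1), (0, 0)]),   # if P1 plays Left, P2 chooses between these
    ("P2", [(1, 2), (2, 1)]),   # if P1 plays Right, P2 chooses between these
])

def solve(node):
    """Return the payoff profile reached under backward induction."""
    if isinstance(node[0], str):          # internal node: a player moves
        player, children = node
        outcomes = [solve(child) for child in children]
        index = 0 if player == "P1" else 1  # which payoff this player maximizes
        return max(outcomes, key=lambda payoff: payoff[index])
    return node                           # terminal node: just the payoffs

print(solve(tree))  # (3, 1)
```

Solving each subgame from the leaves upward is what makes the resulting strategy profile subgame perfect.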
A game where players have incomplete information about the other players' types or payoffs.
A game where one player sends a signal to the other player, who then takes an action based on the signal.
A game where bidders compete to purchase an item.