
Statistics - Review

Statistics Final Review

Covariance

Definition 4.2. The covariance of \(X\) and \(Y\) is

\[\operatorname{Cov}(X, Y)=\mathbb{E}(X Y)-\mathbb{E} X \cdot \mathbb{E} Y \]

If \(X\) and \(Y\) are independent then \(\operatorname{Cov}(X, Y)=0\). But the converse is not true: we can have \(X, Y\) with \(\operatorname{Cov}(X, Y)=0\) and \(X\) and \(Y\) not independent.
Covariance can be positive, negative, or \(0\).
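
As a quick numerical illustration of the converse failing (a sketch using numpy; the distribution here is my own choice, not from the notes): take \(X\) uniform on \(\{-1,0,1\}\) and \(Y=X^{2}\). Then \(\operatorname{Cov}(X, Y)=0\) even though \(Y\) is a function of \(X\).

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.choice([-1, 0, 1], size=100_000)   # X uniform on {-1, 0, 1}
y = x**2                                   # Y is a deterministic function of X

# Cov(X, Y) = E[XY] - E[X] E[Y]; here E[XY] = E[X^3] = 0 and E[X] = 0
print(np.mean(x * y) - np.mean(x) * np.mean(y))   # ~ 0
# Dependence: P[X = 0] = 1/3, but P[X = 0 | Y = 1] = 0
print(np.mean(x == 0), np.mean(x[y == 1] == 0))
```
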
Lemma 4.3.

\[\operatorname{Var}(X+Y)=\operatorname{Var}(X)+\operatorname{Var}(Y)+2 \operatorname{Cov}(X, Y) . \]

The formula generalizes:
Lemma 4.4.

\[\operatorname{Var}\left(X_{1}+\cdots+X_{n}\right)=\sum_{i=1}^{n} \operatorname{Var}\left(X_{i}\right)+2 \sum_{i<j} \operatorname{Cov}\left(X_{i}, X_{j}\right) \]

(Remember that in this formula the sum runs over \(i<j\), so each pair is counted once: we add \(2 \operatorname{Cov}\left(X_{1}, X_{2}\right)\), for example, not \(1\) or \(4\) times that covariance.)

Properties of Covariance

If \(X, Y, W\), and \(V\) are real-valued random variables and \(a, b, c, d\) are real-valued constants, then the following facts are a consequence of the definition of covariance:

\[\begin{aligned} \operatorname{cov}(X, a) &=0 \\ \operatorname{cov}(X, X) &=\operatorname{var}(X) \\ \operatorname{cov}(X, Y) &=\operatorname{cov}(Y, X) \\ \operatorname{cov}(a X, b Y) &=a b \operatorname{cov}(X, Y) \\ \operatorname{cov}(X+a, Y+b) &=\operatorname{cov}(X, Y) \\ \operatorname{cov}(a X+b Y, c W+d V) &=a c \operatorname{cov}(X, W)+a d \operatorname{cov}(X, V)+b c \operatorname{cov}(Y, W)+b d \operatorname{cov}(Y, V) \end{aligned} \]

For a sequence \(X_{1}, \ldots, X_{n}\) of real-valued random variables and constants \(a_{1}, \ldots, a_{n}\), we have

\[\operatorname{var}\left(\sum_{i=1}^{n} a_{i} X_{i}\right)=\sum_{i=1}^{n} a_{i}^{2} \operatorname{var}\left(X_{i}\right)+2 \sum_{i, j: i<j} a_{i} a_{j} \operatorname{cov}\left(X_{i}, X_{j}\right)=\sum_{i, j} a_{i} a_{j} \operatorname{cov}\left(X_{i}, X_{j}\right) \]
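
A quick sanity check of this formula (a numpy sketch; the correlated samples and coefficients are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
# Three correlated random variables, built from independent normals
z = rng.standard_normal((100_000, 3))
X = z @ np.array([[1.0, 0.4, 0.0],
                  [0.0, 1.0, 0.7],
                  [0.2, 0.0, 1.0]])
a = np.array([2.0, -1.0, 0.5])

C = np.cov(X.T, bias=True)    # matrix of cov(X_i, X_j)
lhs = np.var(X @ a)           # var(sum_i a_i X_i)
rhs = a @ C @ a               # sum_{i,j} a_i a_j cov(X_i, X_j)
print(lhs, rhs)               # agree up to Monte Carlo error
```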

Hoeffding's covariance identity

A useful identity for computing the covariance between two random variables \(X, Y\) is Hoeffding's covariance identity:

\[\operatorname{cov}(X, Y)=\int_{\mathbb{R}} \int_{\mathbb{R}}\left(F_{(X, Y)}(x, y)-F_{X}(x) F_{Y}(y)\right) d x d y \]

where \(F_{(X, Y)}(x, y)\) is the joint cumulative distribution function of the random vector \((X, Y)\) and \(F_{X}(x), F_{Y}(y)\) are the marginals.
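
A Monte Carlo check of the identity (a sketch, assuming numpy; the correlated-normal example, grid range, and sample sizes are my own choices): the double integral is approximated on a finite grid using empirical CDFs.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
# Bivariate normal with Cov(X, Y) = 0.6 (a hypothetical example)
z = rng.standard_normal((n, 2))
x = z[:, 0]
y = 0.6 * z[:, 0] + np.sqrt(1 - 0.6**2) * z[:, 1]

grid = np.linspace(-5, 5, 101)
h = grid[1] - grid[0]

# Empirical CDFs evaluated on the grid
Fx = (x[:, None] <= grid).mean(axis=0)
Fy = (y[:, None] <= grid).mean(axis=0)
Fxy = ((x[:, None, None] <= grid[:, None]) &
       (y[:, None, None] <= grid[None, :])).mean(axis=0)

# Hoeffding: integrate F_{(X,Y)} - F_X F_Y over the plane
cov_hoeffding = (Fxy - np.outer(Fx, Fy)).sum() * h * h
print(cov_hoeffding, np.cov(x, y, bias=True)[0, 1])   # both ~ 0.6
```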

Covariance Matrix

Definition 4.5. The covariance matrix \(\Sigma\) of a random vector \(\vec{X}=\left(X_{1}, \ldots, X_{m}\right)^{T}\) is the \(m \times m\) matrix where

\[\Sigma_{i j}=\operatorname{Cov}\left(X_{i}, X_{j}\right) . \]

This means the diagonal entries \(\Sigma_{i i}, i=1, \ldots, m\), are the variances of the components: \(\Sigma_{i i}=\operatorname{Var}\left(X_{i}\right)\).
We can also write the covariance matrix in terms of vectors:

\[\Sigma=\mathbb{E}\left[(\vec{X}-\vec{\mu})(\vec{X}-\vec{\mu})^{T}\right] \]

This is the expectation of the outer product of a vector with itself. (Here we assume \(\vec{X}\) and \(\vec{\mu}\) are column vectors).
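
In code, the entrywise definition and the outer-product form agree (a numpy sketch with an arbitrary 3-dimensional example):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((100_000, 3)) @ np.array([[1.0, 0.0, 0.0],
                                                  [0.5, 1.0, 0.0],
                                                  [0.0, 0.3, 1.0]])
mu = X.mean(axis=0)
D = X - mu
# Sigma = E[(X - mu)(X - mu)^T]: average of outer products
Sigma = D.T @ D / len(X)
print(np.allclose(Sigma, np.cov(X.T, bias=True)))   # True: entries are Cov(X_i, X_j)
```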

Properties of covariance matrices

Let \(\mathbb{E} \vec{X}=\vec{\mu}\) and \(\Sigma=\mathbb{E}\left[(\vec{X}-\vec{\mu})(\vec{X}-\vec{\mu})^{T}\right]\). Then

  1. \(\Sigma=\mathbb{E}\left[\vec{X} \vec{X}^{T}\right]-\vec{\mu} \vec{\mu}^{T}\).
  2. \(\Sigma\) is a symmetric matrix. That is, \(\Sigma_{i j}=\Sigma_{j i}\).
  3. \(\Sigma\) is a positive semi-definite matrix. That is, for every vector \(\vec{y} \in \mathbb{R}^{m}\),

\[\vec{y}^{T} \Sigma \vec{y} \geq 0 \]

(Indeed, \(\vec{y}^{T} \Sigma \vec{y}=\operatorname{Var}\left(\vec{y}^{T} \vec{X}\right) \geq 0\).)

Sums of independent random variables

Suppose \(X\) and \(Y\) are independent, continuous random variables, and let \(Z=X+Y\). What is the distribution of \(Z\)?
We can calculate

\[\begin{aligned} F_{Z}(t) &=P(Z \leq t) \\ &=\int_{-\infty}^{\infty} \int_{-\infty}^{t-x} f_{X, Y}(x, y) d y d x \\ &=\int_{-\infty}^{\infty} \int_{-\infty}^{t-x} f_{X}(x) f_{Y}(y) d y d x \end{aligned} \]

Differentiating with respect to \(t\) gives the density as a convolution:

\[f_{Z}(t)=\int_{-\infty}^{\infty} f_{X}(x) f_{Y}(t-x) d x \]

We can do the same for discrete random variables. Suppose \(X\) and \(Y\) are independent, discrete random variables, and let \(Z=X+Y\). What is the distribution of \(Z\)?

\[F_{Z}(t)=\sum_{x} \sum_{y: x+y \leq t} f_{X}(x) f_{Y}(y) . \]

and the pmf is the discrete convolution:

\[f_{Z}(t)=\sum_{k} f_{X}(k) f_{Y}(t-k) . \]
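
For instance, the pmf of the sum of two fair dice is the discrete convolution of their pmfs (a short numpy sketch; the dice example is mine):

```python
import numpy as np

f = np.full(6, 1 / 6)             # pmf of one fair die on {1, ..., 6}
f_sum = np.convolve(f, f)         # pmf of X + Y, supported on {2, ..., 12}
for t, p in zip(range(2, 13), f_sum):
    print(t, round(float(p), 4))  # peaks at 7 with probability 6/36
```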

The first and second moment methods

Markov's Inequality & first moment method

Theorem \(5.1\) (Markov's inequality). Suppose \(X\) is a non-negative random variable. Then for every \(t>0\),

\[P[X \geq t] \leq \frac{\mathbb{E} X}{t} \]

Say we have a counting (or any non-negative) random variable \(X\) and its expectation is small. Then Markov's inequality tells us that \(X\) cannot be too big too often.
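
A Monte Carlo illustration (a sketch; the Exp(1) example is my own choice, picked because it is non-negative with \(\mathbb{E} X=1\)):

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.exponential(scale=1.0, size=100_000)   # non-negative, E[X] = 1
for t in [2.0, 5.0, 10.0]:
    # empirical P[X >= t] vs the Markov bound E[X] / t
    print(t, (x >= t).mean(), "<=", 1.0 / t)
```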

The First moment method

Lemma \(5.2\) (The first moment method for counting random variables). Let \(B_{1}(n), B_{2}(n)\), \(\ldots, B_{m}(n)\) be a sequence of (bad) events that may depend on some hidden parameter \(n\). If

\[\lim _{n \rightarrow \infty} \sum_{i=1}^{m} P\left(B_{i}(n)\right)=0 \]

then \(\lim _{n \rightarrow \infty} P[\) At least one bad event occurs \(]=0\).
This is the first moment method.
Example: Let \(M_{n}\) be an \(n \times n\) matrix in which each entry is 0 or 1 with probability \(1 / 2\) each, independently of all other entries. Let \(B\) be the event that there is a row of all 0's. Show that \(P(B) \rightarrow 0\) as \(n \rightarrow \infty\). (Each fixed row is all 0's with probability \(2^{-n}\), so by the union bound \(P(B) \leq n \cdot 2^{-n} \rightarrow 0\).)
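
The union bound can be checked by simulation (a sketch; matrix sizes and sample counts are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)
for n in [4, 8, 16]:
    m = rng.integers(0, 2, size=(20_000, n, n))      # 20,000 random 0/1 matrices
    has_zero_row = (m.sum(axis=2) == 0).any(axis=1)  # event B: some all-zero row
    print(n, has_zero_row.mean(), "<=", n * 2.0**-n)
```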

In words: the bad events do not occur, because the sum of the probabilities of all the bad events tends to \(0\).

Chebyshev's Inequality & second moment method

Theorem \(5.3\) (Chebyshev's Inequality). Let \(X\) be any random variable with finite mean and variance. Then for every \(t>0\),

\[P[|X-\mathbb{E} X| \geq t] \leq \frac{\operatorname{Var}(X)}{t^{2}} \]

In statistics we will be most interested in using Chebyshev's Inequality with a sequence of random variables and asking about the behavior in the limit.
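
As with Markov's inequality, the bound is easy to check numerically (a sketch; the \(N(5,4)\) example is my own):

```python
import numpy as np

rng = np.random.default_rng(6)
x = 5.0 + 2.0 * rng.standard_normal(100_000)   # mean 5, variance 4
for t in [2.0, 4.0, 8.0]:
    # empirical P[|X - EX| >= t] vs the Chebyshev bound Var(X) / t^2
    print(t, (np.abs(x - 5.0) >= t).mean(), "<=", 4.0 / t**2)
```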

The second moment method

Corollary 5.4. Suppose \(X\) is a non-negative, integer-valued random variable. Then

\[P(X \geq 1) \geq 1-\frac{\operatorname{Var}(X)}{(\mathbb{E} X)^{2}} \]

Or in other words, let \(B_{1}(n), B_{2}(n), \ldots, B_{m(n)}(n)\) be a sequence of (bad) events that may depend on some hidden parameter \(n\). Let \(X=\sum_{i=1}^{m(n)} 1_{B_{i}}\) be the number of these events that occur. If

\[\lim _{n \rightarrow \infty} \frac{\operatorname{Var}(X)}{(\mathbb{E} X)^{2}}=0 \]

then \(\lim _{n \rightarrow \infty} P[\) At least one bad event occurs \(]=1\).

In words: the bad events do occur, because the count of bad events is concentrated: its expectation grows faster than its spread.

Convergence of RV

3 types of convergence:

  • Convergence in probability
  • Convergence in distribution
  • Almost sure convergence \(^*\) (non-examinable)

\[\text{almost sure convergence} \Rightarrow \text{convergence in probability} \Rightarrow \text{convergence in distribution} \]

Convergence in Probability

Definition 6.2. Let \(X_{1}, X_{2}, \ldots\) be an infinite sequence of random variables, and let \(X\) be a single random variable, all defined on the same probability space.
Then we say \(X_{n}\) converges to \(X\) in probability if for every \(\epsilon>0\),

\[\lim _{n \rightarrow \infty} P\left[\left|X_{n}-X\right|>\epsilon\right]=0 . \]

Notice that this is a statement about the limit of probabilities.

Convergence in distribution

Example 6.3. Let \(X_{1}, X_{2}, \ldots\) be a sequence of random variables with \(X_{n} \sim \operatorname{Bin}(n, \lambda / n)\). Then \(\lim _{n \rightarrow \infty} P\left[X_{n}=k\right]=P[\operatorname{Pois}(\lambda)=k]\) for every fixed \(k\).
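
Numerically (a small sketch using only the standard library; \(\lambda=3\) and \(k=2\) are arbitrary choices):

```python
from math import comb, exp, factorial

lam, k = 3.0, 2
for n in [10, 100, 1000, 10_000]:
    p = lam / n
    print(n, comb(n, k) * p**k * (1 - p)**(n - k))    # P[Bin(n, lam/n) = k]
print("limit:", exp(-lam) * lam**k / factorial(k))    # P[Pois(lam) = k]
```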

Definition 6.4. We say a sequence of random variables \(X_{1}, X_{2}, \ldots\) converges in distribution to a random variable \(X\) if

\[\lim _{n \rightarrow \infty} F_{X_{n}}(t)=F_{X}(t) \]

for every \(t \in \mathbb{R}\) at which \(F_{X}\) is continuous (i.e. convergence is required only at the continuity points of \(F_{X}\)). We write this with a double arrow:

\[X_{n} \Rightarrow X \]

Lemma 6.6. Let \(X_{1}, X_{2}, \ldots\) be a sequence of discrete random variables and \(X\) another discrete random variable, all of them non-negative integer-valued. Then

\[X_{n} \Rightarrow X \]

if and only if

\[\lim _{n \rightarrow \infty} f_{X_{n}}(t)=f_{X}(t) \]

for all \(t \in \mathbb{R}\).

The Law of Large Numbers

Weak Law of Large Numbers (WLLN)

Theorem 6.12 (Weak Law of Large Numbers). Let \(X_{1}, X_{2}, \ldots\) be a sequence of i.i.d. random variables with \(\mathbb{E} X_{j}=\mu\) and \(\operatorname{Var}\left(X_{j}\right)=\sigma^{2}<\infty\). Let \(\bar{S}_{n}=\frac{X_{1}+\cdots+X_{n}}{n}\). Then for every \(\epsilon>0\),

\[\lim _{n \rightarrow \infty} P\left[\left|\bar{S}_{n}-\mu\right|>\epsilon\right]=0 \]

Strong Law of Large Numbers is non-examinable.
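
A simulation of the WLLN (a sketch; the Exp distribution with mean \(\mu=2\) is my own choice):

```python
import numpy as np

rng = np.random.default_rng(7)
for n in [10, 1_000, 100_000]:
    s_bar = rng.exponential(scale=2.0, size=n).mean()   # sample mean, mu = 2
    print(n, s_bar)                                     # approaches 2 as n grows
```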

Characteristic functions

Definition 6.14. The characteristic function \(\phi_{X}(t)\) of a random variable \(X\) is the function

\[\phi_{X}(t)=\mathbb{E}\left[e^{i t X}\right] \]

(This is related to the Fourier transform of the distribution of \(X\).)
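
For example, for \(X \sim N(0,1)\) the characteristic function is \(\phi_{X}(t)=e^{-t^{2} / 2}\) (a standard fact), which we can confirm by Monte Carlo (sketch):

```python
import numpy as np

rng = np.random.default_rng(8)
x = rng.standard_normal(100_000)
for t in [0.5, 1.0, 2.0]:
    phi_emp = np.mean(np.exp(1j * t * x))        # E[e^{itX}], estimated
    print(t, phi_emp.real, np.exp(-t**2 / 2))    # imaginary part ~ 0 by symmetry
```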

Theorem \(6.15\) (Lévy's Continuity Theorem). Let \(X_{1}, X_{2}, \ldots\) be a sequence of random variables and \(X\) another random variable. Then the following are equivalent:

  1. \(X_{n} \Rightarrow X\) (convergence in distribution).
  2. \(\lim _{n \rightarrow \infty} \phi_{X_{n}}(t)=\phi_{X}(t)\) for all \(t\).

Limit theorems via characteristic functions

A proof of the WLLN

Here is the Weak Law of Large Numbers, assuming only the existence of a finite mean.
Theorem 6.16. Let \(X_{1}, X_{2}, \ldots\) be i.i.d. random variables with \(\mathbb{E} X_{k}=\mu\). Let \(\bar{S}_{n}=\frac{X_{1}+\cdots+X_{n}}{n}\). Then \(\bar{S}_{n} \Rightarrow \mu\).
(Notice the conditions are less restrictive than those when we proved the WLLN using Chebyshev's inequality).

The Central Limit Theorem

Theorem 6.17. Let \(X_{1}, X_{2}, \ldots\) be i.i.d. random variables with \(\mathbb{E} X_{i}=\mu\) and \(\operatorname{Var}\left(X_{i}\right)=\sigma^{2}\). Let

\[\tilde{S}_{n}=\frac{X_{1}+\cdots+X_{n}-n \mu}{\sigma \sqrt{n}} \]

Then \(\tilde{S}_{n} \Rightarrow N(0,1)\).
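
A quick simulation of Theorem 6.17 (a sketch; Uniform(0,1) summands with \(\mu=1 / 2\) and \(\sigma^{2}=1 / 12\) are my own choice):

```python
import numpy as np

rng = np.random.default_rng(9)
n, reps = 1_000, 10_000
x = rng.uniform(size=(reps, n))                     # mu = 1/2, sigma^2 = 1/12
s = (x.sum(axis=1) - n * 0.5) / np.sqrt(n / 12.0)   # standardized sums
print(s.mean(), s.std())                            # ~ 0 and ~ 1
print((s <= 1.0).mean())                            # ~ Phi(1) ≈ 0.8413
```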

Moment generating function

Definition 6.18. The moment generating function of a random variable \(X\) is the function

\[M_{X}(t)=\mathbb{E}\left[e^{t X}\right] \]

Beware: the function may not be defined for all values of \(t\), i.e. the expectation might be \(\infty\).
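
For example (a standard computation, not from the notes): if \(X \sim \operatorname{Exp}(1)\), then

\[M_{X}(t)=\int_{0}^{\infty} e^{t x} e^{-x} d x=\frac{1}{1-t} \quad \text{for } t<1, \]

and \(M_{X}(t)=\infty\) for \(t \geq 1\), which illustrates the warning above.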

Probability generating function

Definition 6.19. The probability generating function of a counting random variable \(X\) is the function

\[G_{X}(t)=\sum_{k=0}^{\infty} t^{k} P[X=k]=\mathbb{E}\left[t^{X}\right] \]

Beware: the function may not be defined for all values of \(t\), i.e. the sum might be \(\infty\).
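
For example (a standard computation): if \(X \sim \operatorname{Pois}(\lambda)\), then

\[G_{X}(t)=\sum_{k=0}^{\infty} t^{k} e^{-\lambda} \frac{\lambda^{k}}{k !}=e^{-\lambda} e^{\lambda t}=e^{\lambda(t-1)}, \]

which in this case is finite for every \(t\).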

Random walks

The simple symmetric random walk

Definition 7.2. The simple symmetric random walk is the discrete-time, discrete-space stochastic process \(S_{0}, S_{1}, S_{2}, \ldots\) with \(S_{0}=0\) and \(S_{n}=X_{1}+\cdots+X_{n}\) where the \(X_{j}\) 's are i.i.d. \(\pm 1\) with probability \(1 / 2\) each.
'Simple' refers to the fact that the increments are just \(+1\) and \(-1\). 'Symmetric' refers to the fact that the probability of \(+1\) is \(1 / 2\).

We can define other random walks: given any distribution \(X\), let \(X_{1}, X_{2}, \ldots\) be i.i.d. copies of \(X\); then \(S_{0}=0\) and \(S_{n}=X_{1}+\cdots+X_{n}\) defines a random walk.
Exercise: prove that any random walk is a Markov chain.

Basic properties of the Simple Symmetric Random Walk (SSRW)

  • \(\mathbb{E} S_{n}=0\)
  • \(\operatorname{Var}\left(S_{n}\right)=n\)
  • \(S_{n} / n \rightarrow 0\) in probability and almost surely. (Law of Large Numbers)
  • \(S_{n} / \sqrt{n} \Rightarrow N(0,1)\). (Central Limit Theorem)
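
The first two properties above, and the CLT scaling, are easy to verify by simulation (a numpy sketch; the walk length and number of walks are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(10)
n, walks = 10_000, 1_000
steps = rng.choice([-1, 1], size=(walks, n))   # i.i.d. +/-1 increments
s_n = steps.sum(axis=1)                        # S_n for each walk
print(s_n.mean(), s_n.var())                   # ~ 0 and ~ n
print((s_n / np.sqrt(n)).std())                # ~ 1, the CLT scaling
```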

From: https://www.cnblogs.com/kion/p/16844812.html
