Given a set of measurements $X_{1},X_{2},\dots,X_{n}$ that are i.i.d., we can estimate their mean using the **sample mean** $\bar{X}_{n}$, which is given by

$$\bar{X}_{n}=\frac{1}{n}\sum_{i=1}^{n}X_{i}$$
This is just the definition of the average. There’s nothing special about it, except we’re adding up random variables instead of numbers.
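As a quick sketch in plain Python (the measurement values here are made up for illustration):

```python
# A sample mean is just the ordinary average of the observations.
# Hypothetical measurements, e.g. repeated readings of some quantity.
measurements = [4.0, 5.0, 5.5, 4.5, 6.0]

sample_mean = sum(measurements) / len(measurements)
print(sample_mean)  # 5.0
```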

It is important to note that we are *not* assuming any particular distribution for the $X_{i}$. This generality plays a large role in results like the Law of large numbers and the Central limit theorem.

## The sample mean is also a random variable

From the definition above, we can see that the sample mean is simply the sum of a bunch of r.v.s, divided by a constant. This means that $\bar{X}_{n}$ is also a random variable, with an Expectation and a Variance.
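One way to see this is to draw several independent samples and compute the mean of each: every sample gives a different value. A small sketch, assuming uniform(0, 1) measurements and a sample size of 30 purely for illustration:

```python
import random

random.seed(0)

def sample_mean(n):
    # Each call draws a fresh sample of n uniform(0, 1) measurements,
    # so the resulting average is itself a random quantity.
    return sum(random.random() for _ in range(n)) / n

# Five independent samples give five different sample means,
# all scattered around the true mean of 0.5.
means = [sample_mean(30) for _ in range(5)]
print(means)
```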

### Finding the expectation

By LOE, we have

$$E(\bar{X}_{n})=E\!\left(\frac{1}{n}\sum_{i=1}^{n}X_{i}\right)=\frac{1}{n}\sum_{i=1}^{n}E(X_{i})$$

For some $X_{i}$ where $i\in[1..n]$, let $E(X_{i})=\mu$. Then, it must be true that $E(X_{i})=\mu$ for *all* $X_{i}$. This is because they all have the same distribution.

Therefore $E(\bar{X}_{n})=\frac{1}{n}\cdot n\cdot\mu=\mu$. In practice, this means we can just find $E(X_{i})$ for any of the individual r.v.s to find $E(\bar{X}_{n})$.
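We can check $E(\bar{X}_{n})=\mu$ by simulation: averaging many independent sample means should land close to $\mu$. A minimal sketch, assuming exponential measurements with mean $\mu = 3$ and the sample size and trial count chosen arbitrarily:

```python
import random

random.seed(42)

mu = 3.0        # true mean of each X_i (exponential with mean 3, as an example)
n = 50          # sample size
trials = 10000  # number of independent sample means to average

total = 0.0
for _ in range(trials):
    # random.expovariate(lambd) has mean 1/lambd, so 1/mu gives mean mu.
    sample = [random.expovariate(1 / mu) for _ in range(n)]
    total += sum(sample) / n

# The average of many sample means should be close to mu = 3.0.
print(total / trials)
```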

### Finding the variance

Using Properties of variance, we have

$$\mathrm{Var}(\bar{X}_{n})=\mathrm{Var}\!\left(\frac{1}{n}\sum_{i=1}^{n}X_{i}\right)=\frac{1}{n^{2}}\cdot \mathrm{Var}\!\left(\sum_{i=1}^{n}X_{i}\right)$$

Since we know all the variables are Independent, it follows directly that

$$\mathrm{Var}(\bar{X}_{n})=\frac{1}{n^{2}}\cdot \mathrm{Var}\!\left(\sum_{i=1}^{n}X_{i}\right)=\frac{1}{n^{2}}\cdot\sum_{i=1}^{n}\mathrm{Var}(X_{i})=\frac{1}{n^{2}}\cdot n\cdot\sigma^{2}=\frac{\sigma^{2}}{n}$$

In practice, we can find $\sigma^{2}=\mathrm{Var}(X_{i})$ for any $X_{i}$, and then simply divide by $n$.
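The $\sigma^{2}/n$ result can also be checked by simulation: compute many independent sample means and measure their empirical variance. A sketch, assuming normal measurements with $\sigma^{2}=4$ and an arbitrary sample size of 25:

```python
import random

random.seed(1)

sigma2 = 4.0   # Var(X_i), here for normal(0, sd=2) measurements as an example
n = 25         # sample size; theory predicts Var(sample mean) = sigma2 / n
trials = 10000

means = []
for _ in range(trials):
    sample = [random.gauss(0, sigma2 ** 0.5) for _ in range(n)]
    means.append(sum(sample) / n)

# Empirical variance of the sample means, which should sit
# close to sigma2 / n = 4 / 25 = 0.16.
grand_mean = sum(means) / trials
empirical_var = sum((m - grand_mean) ** 2 for m in means) / trials
print(empirical_var)
```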