Normal (R Documentation)
The Normal distribution is ubiquitous in statistics, partially because of the central limit theorem, which states that (suitably scaled) sums of i.i.d. random variables converge to a Normal distribution. Linear transformations of Normal random variables are again Normal. If you are taking an intro stats course, you'll likely use the Normal distribution for Z-tests and in simple linear regression. Under regularity conditions, maximum likelihood estimators are asymptotically Normal. The Normal distribution is also called the Gaussian distribution.
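As a quick illustration of the central limit theorem (a minimal base-R sketch, not part of the original documentation), standardized sample means of skewed Exponential(1) draws already look close to a standard Normal at moderate sample sizes:

# illustration only: the CLT in action for a skewed, non-Normal distribution
set.seed(1)
n <- 1000
# Exponential(1) has mean 1 and standard deviation 1
z <- replicate(5000, (mean(rexp(n)) - 1) / (1 / sqrt(n)))
c(mean(z), sd(z))      # close to 0 and 1
qqnorm(z); qqline(z)   # points fall near the reference line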
Usage

Normal(mu = 0, sigma = 1)
Arguments

mu: The location parameter, written \mu in textbooks. Can be any real number. Defaults to 0.

sigma: The scale parameter, written \sigma in textbooks. Can be any positive number. Defaults to 1.
We recommend reading this documentation on https://alexpghayes.github.io/distributions3/, where the math will render with additional detail and much greater clarity.
Details

In the following, let X be a Normal random variable with mean mu = \mu and standard deviation sigma = \sigma.
Support: R, the set of all real numbers
Mean: \mu
Variance: \sigma^2
Probability density function (p.d.f.):

f(x) = \frac{1}{\sqrt{2 \pi \sigma^2}} e^{-(x - \mu)^2 / (2 \sigma^2)}
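As a quick numerical check (a sketch, not part of the original page), the density formula can be evaluated by hand and compared against base R's dnorm():

# evaluate the p.d.f. formula directly at x = 2 with mu = 5, sigma = 2
mu <- 5; sigma <- 2; x <- 2
1 / sqrt(2 * pi * sigma^2) * exp(-(x - mu)^2 / (2 * sigma^2))
dnorm(x, mean = mu, sd = sigma)  # same value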
Cumulative distribution function (c.d.f.):

The cumulative distribution function has the form

F(t) = \int_{-\infty}^t \frac{1}{\sqrt{2 \pi \sigma^2}} e^{-(x - \mu)^2 / (2 \sigma^2)} dx

but this integral does not have a closed-form solution and must be approximated numerically. The c.d.f. of a standard Normal is closely related to the "error function". The notation \Phi(t) also stands for the c.d.f. of a standard Normal evaluated at t. Z-tables list the value of \Phi(t) for various t.
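For example (a small illustration, assuming distributions3 is loaded), the familiar Z-table entry \Phi(1.96) \approx 0.975 can be recovered with either cdf() or base R's pnorm():

cdf(Normal(), 1.96)  # approximately 0.975
pnorm(1.96)          # the base R equivalent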
Moment generating function (m.g.f.):
E(e^{tX}) = e^{\mu t + \sigma^2 t^2 / 2}
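As a consistency check on the mean and variance listed above, differentiating the m.g.f. and evaluating at t = 0 recovers the first two moments:

M'(0) = (\mu + \sigma^2 t) e^{\mu t + \sigma^2 t^2 / 2} \big|_{t = 0} = \mu

M''(0) = \left[ \sigma^2 + (\mu + \sigma^2 t)^2 \right] e^{\mu t + \sigma^2 t^2 / 2} \big|_{t = 0} = \mu^2 + \sigma^2

so that Var(X) = M''(0) - M'(0)^2 = \sigma^2, in agreement with the variance given above.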
Value

A Normal object.
See also

Other continuous distributions: Beta(), Cauchy(), ChiSquare(), Erlang(), Exponential(), FisherF(), Frechet(), GEV(), GP(), Gamma(), Gumbel(), LogNormal(), Logistic(), RevWeibull(), StudentsT(), Tukey(), Uniform(), Weibull()
Examples

library(distributions3)

set.seed(27)
X <- Normal(5, 2)
X
mean(X)
variance(X)
skewness(X)
kurtosis(X)
random(X, 10)
pdf(X, 2)
log_pdf(X, 2)
cdf(X, 4)
quantile(X, 0.7)
### example: calculating p-values for two-sided Z-test
# here the null hypothesis is H_0: mu = 3
# and we assume sigma = 2
# exactly the same as: Z <- Normal(0, 1)
Z <- Normal()
# data to test
x <- c(3, 7, 11, 0, 7, 0, 4, 5, 6, 2)
nx <- length(x)
# calculate the z-statistic
z_stat <- (mean(x) - 3) / (2 / sqrt(nx))
z_stat
# calculate the two-sided p-value
1 - cdf(Z, abs(z_stat)) + cdf(Z, -abs(z_stat))
# exactly equivalent to the above
2 * cdf(Z, -abs(z_stat))
# p-value for one-sided test
# H_0: mu <= 3 vs H_A: mu > 3
1 - cdf(Z, z_stat)
# p-value for one-sided test
# H_0: mu >= 3 vs H_A: mu < 3
cdf(Z, z_stat)
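# cross-check against base R (illustration only; pnorm() is the stats
# equivalent of cdf(Normal(0, 1), ...))
2 * pnorm(-abs(z_stat))            # two-sided p-value
pnorm(z_stat, lower.tail = FALSE)  # one-sided, H_A: mu > 3
pnorm(z_stat)                      # one-sided, H_A: mu < 3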
### example: calculating an 88 percent Z CI for a mean
# same `x` as before, still assume `sigma = 2`
# lower-bound
mean(x) - quantile(Z, 1 - 0.12 / 2) * 2 / sqrt(nx)
# upper-bound
mean(x) + quantile(Z, 1 - 0.12 / 2) * 2 / sqrt(nx)
# equivalent to
mean(x) + c(-1, 1) * quantile(Z, 1 - 0.12 / 2) * 2 / sqrt(nx)
# also equivalent to
mean(x) + quantile(Z, 0.12 / 2) * 2 / sqrt(nx)
mean(x) + quantile(Z, 1 - 0.12 / 2) * 2 / sqrt(nx)
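# cross-check against base R's qnorm() (illustration only)
mean(x) + c(-1, 1) * qnorm(1 - 0.12 / 2) * 2 / sqrt(nx)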
### generating random samples and plugging in ks.test()
set.seed(27)
# generate a random sample
ns <- random(Normal(3, 7), 26)
# test if sample is Normal(3, 7)
ks.test(ns, pnorm, mean = 3, sd = 7)
# test if sample is Gamma(8, 3) using base R pgamma()
ks.test(ns, pgamma, shape = 8, rate = 3)
### MISC
# note that the cdf() and quantile() functions are inverses
cdf(X, quantile(X, 0.7))
quantile(X, cdf(X, 7))