# Probabilistic and Statistical Models for Machine Learning (with Code)

Contents

- Fundamentals of mathematical statistics
  - 2.1 Sampling distributions
    - 1) Chi-square distribution
    - 2) t distribution
    - 3) F distribution
  - 2.2 Law of large numbers
  - 2.3 Central limit theorem
- Summary

# Mathematical statistics

Mathematical statistics is a branch of mathematics divided into descriptive statistics and inferential statistics. Built on probability theory, it studies the statistical regularity of large numbers of random phenomena. The task of descriptive statistics is to collect, sort, and group data, prepare frequency distribution tables, draw frequency distribution curves, and compute the characteristic indexes that describe the central tendency, dispersion, and skewness of a frequency distribution. Inferential statistics builds on descriptive statistics, using the regularities summarized from sample data to make inferences and predictions about the whole population.

Because this article uses the matplotlib library many times, please review the relevant matplotlib knowledge first if necessary.

## 2.1 Sampling distributions

### 1) Chi-square (χ²) distribution

If n independent random variables ξ₁, ξ₂, ..., ξₙ all follow the standard normal distribution (that is, they are independent and identically distributed as N(0, 1)), then the sum of the squares of these n random variables defines a new random variable, and its distribution is called the chi-square (χ²) distribution with n degrees of freedom.

```python
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt


def diff_chi2_dis():
    '''
    Chi-square distribution under different parameters
    :return:
    '''
    chi2_dis_0_5 = stats.chi2(df=0.5)
    chi2_dis_1 = stats.chi2(df=1)
    chi2_dis_4 = stats.chi2(df=4)
    chi2_dis_10 = stats.chi2(df=10)
    chi2_dis_20 = stats.chi2(df=20)

    # x1 = np.linspace(chi2_dis_0_5.ppf(0.01), chi2_dis_0_5.ppf(0.99), 100)
    x2 = np.linspace(chi2_dis_1.ppf(0.65), chi2_dis_1.ppf(0.9999999), 100)
    x3 = np.linspace(chi2_dis_4.ppf(0.000001), chi2_dis_4.ppf(0.999999), 100)
    x4 = np.linspace(chi2_dis_10.ppf(0.000001), chi2_dis_10.ppf(0.99999), 100)
    x5 = np.linspace(chi2_dis_20.ppf(0.00000001), chi2_dis_20.ppf(0.9999), 100)

    fig, ax = plt.subplots(1, 1)
    # ax.plot(x1, chi2_dis_0_5.pdf(x1), 'k-', lw=2, label='df=0.5')
    ax.plot(x2, chi2_dis_1.pdf(x2), 'g-', lw=2, label='df=1')
    ax.plot(x3, chi2_dis_4.pdf(x3), 'r-', lw=2, label='df=4')
    ax.plot(x4, chi2_dis_10.pdf(x4), 'b-', lw=2, label='df=10')
    ax.plot(x5, chi2_dis_20.pdf(x5), 'y-', lw=2, label='df=20')

    plt.ylabel('Probability')
    plt.title(r'PDF of $\chi^2$ Distribution')
    ax.legend(loc='best', frameon=False)
    plt.show()


diff_chi2_dis()
```
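The definition above can be checked numerically. The sketch below (a sanity check, not part of the original article) draws n iid standard normal variables many times, sums their squares, and compares the sample mean and variance with the theoretical values n and 2n of a chi-square(n) distribution:

```python
import numpy as np

# Simulate the definition: the sum of squares of n iid N(0, 1)
# variables follows a chi-square distribution with n degrees of freedom.
rng = np.random.default_rng(0)
n, n_samples = 4, 100_000
normals = rng.standard_normal((n_samples, n))
chi2_samples = (normals ** 2).sum(axis=1)

# A chi-square(n) variable has mean n and variance 2n.
print(chi2_samples.mean())  # should be close to n = 4
print(chi2_samples.var())   # should be close to 2n = 8
```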

### 2) t distribution

The t distribution is used to estimate the mean of a normally distributed population with unknown variance from a small sample. If the population variance is known (or, in practice, when the sample is large enough), the normal distribution should be used to estimate the population mean instead.

```python
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt


def diff_t_dis():
    '''
    t distribution under different parameters
    :return:
    '''
    norm_dis = stats.norm()
    t_dis_1 = stats.t(df=1)
    t_dis_4 = stats.t(df=4)
    t_dis_10 = stats.t(df=10)
    t_dis_20 = stats.t(df=20)

    x1 = np.linspace(norm_dis.ppf(0.000001), norm_dis.ppf(0.999999), 1000)
    x2 = np.linspace(t_dis_1.ppf(0.04), t_dis_1.ppf(0.96), 1000)
    x3 = np.linspace(t_dis_4.ppf(0.001), t_dis_4.ppf(0.999), 1000)
    x4 = np.linspace(t_dis_10.ppf(0.001), t_dis_10.ppf(0.999), 1000)
    x5 = np.linspace(t_dis_20.ppf(0.0001), t_dis_20.ppf(0.999), 1000)

    fig, ax = plt.subplots(1, 1)
    ax.plot(x1, norm_dis.pdf(x1), 'k-', lw=2, label=r'N(0,1)')
    ax.plot(x2, t_dis_1.pdf(x2), 'g-', lw=2, label='df=1')
    ax.plot(x3, t_dis_4.pdf(x3), 'r-', lw=2, label='df=4')
    ax.plot(x4, t_dis_10.pdf(x4), 'b-', lw=2, label='df=10')
    ax.plot(x5, t_dis_20.pdf(x5), 'y-', lw=2, label='df=20')

    plt.ylabel('Probability')
    plt.title(r'PDF of t Distribution')
    ax.legend(loc='best', frameon=False)
    plt.show()


diff_t_dis()
```

### 3) F distribution

The F distribution is the sampling distribution of the ratio of two independent chi-square random variables, each divided by its degrees of freedom. It is an asymmetric distribution, and the two degrees of freedom are not interchangeable. The F distribution has a wide range of applications, such as analysis of variance and significance tests of regression equations.

```python
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt


def diff_F_dis():
    '''
    F distribution under different parameters
    :return:
    '''
    # F_dis_10_1 = stats.f(dfn=10, dfd=1)
    F_dis_1_30 = stats.f(dfn=1, dfd=30)
    F_dis_30_5 = stats.f(dfn=30, dfd=5)
    F_dis_30_30 = stats.f(dfn=30, dfd=30)
    F_dis_30_100 = stats.f(dfn=30, dfd=100)
    F_dis_100_100 = stats.f(dfn=100, dfd=100)

    x2 = np.linspace(F_dis_1_30.ppf(0.65), F_dis_1_30.ppf(0.99), 100)
    x3 = np.linspace(F_dis_30_5.ppf(0.00001), F_dis_30_5.ppf(0.999), 100)
    x4 = np.linspace(F_dis_30_30.ppf(0.00001), F_dis_30_30.ppf(0.999), 100)
    x5 = np.linspace(F_dis_30_100.ppf(0.0001), F_dis_30_100.ppf(0.999), 100)
    x6 = np.linspace(F_dis_100_100.ppf(0.0001), F_dis_100_100.ppf(0.9999), 100)

    fig, ax = plt.subplots(1, 1, figsize=(20, 10))
    ax.plot(x2, F_dis_1_30.pdf(x2), 'g-', lw=2, label='F(1,30)')
    ax.plot(x3, F_dis_30_5.pdf(x3), 'r-', lw=2, label='F(30,5)')
    ax.plot(x4, F_dis_30_30.pdf(x4), 'b-', lw=2, label='F(30,30)')
    ax.plot(x5, F_dis_30_100.pdf(x5), 'y-', lw=2, label='F(30,100)')
    ax.plot(x6, F_dis_100_100.pdf(x6), 'c-', lw=2, label='F(100,100)')

    plt.ylabel('Probability')
    plt.title(r'PDF of F Distribution')
    ax.legend(loc='best', frameon=False)
    plt.show()


diff_F_dis()
```
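The ratio definition of the F distribution can also be verified by simulation. The sketch below (a sanity check under the stated definition) draws two independent chi-square samples, forms the ratio of each divided by its degrees of freedom, and compares the sample mean with the theoretical mean d₂/(d₂ − 2) of F(d₁, d₂):

```python
import numpy as np
from scipy import stats

# Definition: if U ~ chi-square(d1) and V ~ chi-square(d2) are
# independent, then (U / d1) / (V / d2) follows F(d1, d2).
rng = np.random.default_rng(1)
d1, d2, n_samples = 5, 30, 200_000
u = rng.chisquare(d1, n_samples)
v = rng.chisquare(d2, n_samples)
f_samples = (u / d1) / (v / d2)

# For d2 > 2 the theoretical mean of F(d1, d2) is d2 / (d2 - 2).
print(f_samples.mean())           # should be close to 30/28
print(stats.f(d1, d2).mean())     # theoretical mean from scipy
```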

# Summary

This article reviewed the probability and statistics methods most relevant to artificial intelligence. Probability and statistics play a very important role in this field: deep learning theory and probabilistic graphical models, for example, both rely on probability distributions as their basic modeling language. Learning and understanding the probability and statistics behind machine learning will help you master it.