Also known as Euclidean distance, this is the straight-line distance between two points in m-dimensional space. It is the most widely used and general-purpose distance in ML, for example when clustering 2-D data points with k-Means, or in a cluster analysis of Shanghai housing prices (price/m² versus the average price).
The Euclidean distance between two n-dimensional vectors $a\,(x_{11}, x_{12}, \dots, x_{1n})$ and $b\,(x_{21}, x_{22}, \dots, x_{2n})$ is:

$$D_{12} = \sqrt{\sum_{k=1}^{n}(x_{1k}-x_{2k})^2}$$
Python implementation:
def EuclideanDistance(x, y):
    import numpy as np
    x = np.array(x)
    y = np.array(y)
    return np.sqrt(np.sum(np.square(x - y)))
The arguments passed in can be of any dimension; the formula covers the 2-D and 3-D cases above as well.
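As a quick sanity check, a minimal usage sketch of the function above (the sample points are made up for illustration):

# Hypothetical inputs; any dimensionality works.
print(EuclideanDistance([0, 0], [3, 4]))        # 5.0
print(EuclideanDistance([1, 2, 3], [4, 6, 3]))  # 5.0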
For the same two n-dimensional vectors, the Manhattan distance (city-block distance) is:

$$D_{12} = \sum_{k=1}^{n}|x_{1k}-x_{2k}|$$
Python implementation:
def ManhattanDistance(x, y):
    import numpy as np
    x = np.array(x)
    y = np.array(y)
    return np.sum(np.abs(x - y))
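A brief usage sketch with made-up points; in 2-D the result is simply |dx| + |dy|:

print(ManhattanDistance([0, 0], [3, 4]))  # 7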
The Chebyshev distance is defined as max(|x2 - x1|, |y2 - y1|, ...), i.e. the maximum coordinate-wise absolute difference; it is defined for any number of dimensions.
Python implementation:
def ChebyshevDistance(x, y):
    import numpy as np
    x = np.array(x)
    y = np.array(y)
    return np.max(np.abs(x - y))
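A short usage sketch (hypothetical points); the largest per-coordinate gap dominates:

print(ChebyshevDistance([1, 2], [4, 6]))  # 4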
Mahalanobis distance: given M sample vectors X1..Xm with covariance matrix S and mean vector μ, the Mahalanobis distance from a sample vector X to μ is:
$$D(X) = \sqrt{(X-\mu)^T S^{-1} (X-\mu)}$$
Python implementation:
def MahalanobisDistance(x, y):
    '''
    Note: (x, y) here differ from (x, y) in the Euclidean function.
    There, x and y were two samples whose length equals the number of
    dimensions. Here, x and y are two variables (dimensions), and their
    length equals the number of samples. To compute the Mahalanobis
    distance between n variables, the inputs must be extended, e.g.
    (x, y, z) for three variables.
    '''
    import numpy as np
    x = np.array(x)
    y = np.array(y)
    X = np.vstack([x, y])
    X_T = X.T
    sigma = np.cov(X)                     # covariance matrix of the variables
    sigma_inverse = np.linalg.inv(sigma)
    d1 = []
    for i in range(0, X_T.shape[0]):
        for j in range(i + 1, X_T.shape[0]):
            delta = X_T[i] - X_T[j]
            d = np.sqrt(np.dot(np.dot(delta, sigma_inverse), delta.T))
            d1.append(d)
    return d1                             # pairwise distances between samples
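A usage sketch with made-up observations. Per the input convention in the docstring, x and y are two variables each observed over four samples, and the function returns the pairwise Mahalanobis distances among those samples:

x = [3, 5, 2, 8]
y = [4, 6, 2, 4]
print(MahalanobisDistance(x, y))  # 6 pairwise distances among the 4 samples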
The cosine of the angle between two vectors measures how similar their directions are:

$$\cos\theta = \frac{a \cdot b}{|a|\,|b|}$$

Written out component-wise for n-dimensional vectors:

$$\cos\theta = \frac{\sum_{k=1}^{n} x_{1k} x_{2k}}{\sqrt{\sum_{k=1}^{n} x_{1k}^2}\,\sqrt{\sum_{k=1}^{n} x_{2k}^2}}$$
import math

def moreCos(a, b):
    sum_fenzi = 0.0                  # numerator: dot product
    sum_fenmu_1, sum_fenmu_2 = 0, 0  # denominators: squared norms
    for i in range(len(a)):
        sum_fenzi += a[i] * b[i]
        sum_fenmu_1 += a[i] ** 2
        sum_fenmu_2 += b[i] ** 2
    return sum_fenzi / (math.sqrt(sum_fenmu_1) * math.sqrt(sum_fenmu_2))
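A quick usage sketch (made-up vectors): orthogonal vectors score 0, vectors pointing the same way score 1:

print(moreCos([1, 0], [0, 1]))  # 0.0 (orthogonal)
print(moreCos([1, 2], [2, 4]))  # 1.0 (same direction)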
The Minkowski distance is a family of distances between two n-dimensional points:

$$D_{12} = \sqrt[p]{\sum_{k=1}^{n}|x_{1k}-x_{2k}|^p}$$
When p = 1, it is the Manhattan distance.
When p = 2, it is the Euclidean distance.
As p → ∞, it approaches the Chebyshev distance (see the sketch after the implementation below).
Python implementation:
import math
import numpy as np

def MinkowskiDistance(x, y, p):
    zipped_coordinate = zip(x, y)
    return math.pow(np.sum([math.pow(np.abs(i[0] - i[1]), p)
                            for i in zipped_coordinate]), 1 / p)
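To illustrate the special cases listed above, a sketch with made-up points: p = 1 and p = 2 reproduce the Manhattan and Euclidean results, and a large p approaches the Chebyshev distance:

x, y = [0, 0], [3, 4]
print(MinkowskiDistance(x, y, 1))    # 7.0, the Manhattan distance
print(MinkowskiDistance(x, y, 2))    # 5.0, the Euclidean distance
print(MinkowskiDistance(x, y, 100))  # ~4.0, approaching Chebyshev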
The Hamming distance between two equal-length strings s1 and s2 is defined as the minimum number of substitutions required to turn one into the other.
def hanmingDis(a, b):
    sumnum = 0
    for i in range(len(a)):
        if a[i] != b[i]:
            sumnum += 1
    return sumnum
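A brief usage sketch with made-up bit strings:

print(hanmingDis('10110', '11100'))  # 2 positions differ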
Jaccard distance measures how distinguishable two sets are: the proportion of elements in their union that are not shared by both sets.
$$J_{\delta}(A, B) = \frac{|A \cup B| - |A \cap B|}{|A \cup B|}$$
def jiekadeDis(a, b):
    set_a = set(a)
    set_b = set(b)
    dis = float(len((set_a | set_b) - (set_a & set_b))) / len(set_a | set_b)
    return dis
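A usage sketch with two made-up sets:

print(jiekadeDis([1, 2, 3], [2, 3, 4]))  # 0.5: union {1,2,3,4}, unshared {1,4}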
Jaccard similarity coefficient
The proportion of the intersection of two sets A and B within their union is called the Jaccard similarity coefficient, denoted J(A, B).
$$J(A, B) = \frac{|A \cap B|}{|A \cup B|}$$
def jiekadeXSDis(a, b):
    set_a = set(a)
    set_b = set(b)
    dis = float(len(set_a & set_b)) / len(set_a | set_b)
    return dis
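From the two formulas, the Jaccard distance and similarity of the same pair of sets sum to 1, which the sketch below (made-up sets) confirms:

a, b = [1, 2, 3], [2, 3, 4]
print(jiekadeXSDis(a, b))                     # 0.5
print(jiekadeXSDis(a, b) + jiekadeDis(a, b))  # 1.0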
Pearson correlation coefficient
$$\rho_{XY} = \frac{\operatorname{Cov}(X, Y)}{\sqrt{D(X)}\,\sqrt{D(Y)}} = \frac{E\big((X-EX)(Y-EY)\big)}{\sqrt{D(X)}\,\sqrt{D(Y)}}$$
import math

def c_Pearson(x, y):
    x_mean, y_mean = sum(x) / len(x), sum(y) / len(y)
    cov = 0.0
    x_pow = 0.0
    y_pow = 0.0
    for i in range(len(x)):
        cov += (x[i] - x_mean) * (y[i] - y_mean)
    for i in range(len(x)):
        x_pow += math.pow(x[i] - x_mean, 2)
    for i in range(len(x)):
        y_pow += math.pow(y[i] - y_mean, 2)
    sumBm = math.sqrt(x_pow * y_pow)
    p = cov / sumBm
    return p
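A usage sketch with made-up series: a perfect linear relationship yields ±1:

print(c_Pearson([1, 2, 3, 4], [2, 4, 6, 8]))  # 1.0  (perfect positive)
print(c_Pearson([1, 2, 3, 4], [8, 6, 4, 2]))  # -1.0 (perfect negative)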
Information entropy: a measure of the disorder or dispersion of a distribution.
$$H(X) = -\sum_{i=1}^{n} p_i \log_2 p_i$$
import numpy as np

data = ['a', 'b', 'c', 'a', 'a', 'b']
data1 = np.array(data)
# Method for computing information entropy
def calc_ent(x):
    """Calculate the Shannon entropy of x."""
    x_value_list = set([x[i] for i in range(x.shape[0])])
    ent = 0.0
    for x_value in x_value_list:
        p = float(x[x == x_value].shape[0]) / x.shape[0]
        logp = np.log2(p)
        ent -= p * logp
    return ent
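Applied to the array defined above, the sketch below prints the entropy of the six labels (counts a:3, b:2, c:1):

print(calc_ent(data1))  # ~1.459 bits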