python – Differences between PCA implementations in numpy and sklearn

import numpy as np
import scipy.linalg
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets('data/MNIST/', one_hot=True)

numpy implementation

# Entire Data set
Data=np.array(mnist.train.images)
#centering the data
mu_D=np.mean(Data, axis=0)
Data-=mu_D


COV_MA = np.cov(Data, rowvar=False)
eigenvalues, eigenvec = scipy.linalg.eigh(COV_MA, eigvals_only=False)
# eigh returns eigenvalues in ascending order, and the eigenvectors are the
# *columns* of eigenvec -- so reorder columns (zipping eigenvalues with
# eigenvec would pair them with rows, which is wrong)
order = np.argsort(eigenvalues)[::-1]
eigenvalues = eigenvalues[order]
eigenvec = eigenvec[:, order]


n=3
pca_components=eigenvec[:,:n]
print(pca_components.shape)
data_reduced = Data.dot(pca_components)
print(data_reduced.shape)
data_original = np.dot(data_reduced, pca_components.T) + mu_D # inverse_transform (add the mean back)
print(data_original.shape)


plt.imshow(data_original[10].reshape(28,28),cmap='Greys',interpolation='nearest')
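The center / eigendecompose / sort / project pipeline above can be sanity-checked end to end. The sketch below uses a small synthetic array in place of the MNIST images (an assumption, so it runs standalone) and verifies that projecting onto all components and back recovers the original data exactly:

```python
import numpy as np
import scipy.linalg

# hypothetical small dataset standing in for the MNIST images
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))

# center, covariance, eigendecomposition (eigh returns ascending eigenvalues)
mu = X.mean(axis=0)
Xc = X - mu
cov = np.cov(Xc, rowvar=False)
eigenvalues, eigenvec = scipy.linalg.eigh(cov)

# sort descending; eigenvectors are columns, so reorder columns
order = np.argsort(eigenvalues)[::-1]
eigenvalues = eigenvalues[order]
eigenvec = eigenvec[:, order]

n = 3
reduced = Xc @ eigenvec[:, :n]                 # transform -> (100, 3)
restored = reduced @ eigenvec[:, :n].T + mu    # inverse_transform -> (100, 5)
print(reduced.shape, restored.shape)
```

With n equal to the full dimensionality, the eigenvector matrix is orthogonal and the reconstruction is exact; with smaller n it is the best rank-n approximation.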

sklearn implementation

from sklearn.decomposition import PCA

pca = PCA(n_components=3)
pca.fit(Data)

data_reduced = np.dot(Data, pca.components_.T) # transform
data_original = np.dot(data_reduced, pca.components_) # inverse_transform
plt.imshow(data_original[10].reshape(28,28),cmap='Greys',interpolation='nearest')
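One subtlety worth checking here: sklearn centers the data internally using `pca.mean_`, so the manual `np.dot(Data, pca.components_.T)` only matches `pca.transform` because `Data` was already centered above. A minimal sketch on synthetic data (an assumption, in place of the MNIST arrays):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))  # NOT pre-centered

pca = PCA(n_components=3)
pca.fit(X)

# sklearn's transform subtracts pca.mean_ before projecting
manual = (X - pca.mean_) @ pca.components_.T
assert np.allclose(manual, pca.transform(X))

# inverse_transform projects back and adds the mean again
restored = manual @ pca.components_ + pca.mean_
assert np.allclose(restored, pca.inverse_transform(pca.transform(X)))
```

If you skip the `pca.mean_` subtraction (or forget to add it back), the numpy and sklearn results will disagree even when the components themselves match.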

I want to implement the PCA algorithm with numpy, but I don't know how to reconstruct the images, and I'm not even sure this code is correct.

In fact, when I use sklearn.decomposition.PCA, the results differ from my numpy implementation.

Can you explain these differences?

Best answer: I've already found a few differences.

First:

n=300
projections = only_2.dot(eigenvec[:,:n])
Xhat = np.dot(projections, eigenvec[:,:n].T)
Xhat += mu_D
plt.imshow(Xhat[5].reshape(28,28),cmap='Greys',interpolation='nearest')

If my understanding is correct, with n = 300 you are trying to fit the 300 eigenvectors whose eigenvalues are highest, sorted from high to low.
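As a quick sanity check on that reconstruction step (using a synthetic correlated dataset in place of `only_2`, which isn't shown in the post), the reconstruction error shrinks as n grows and vanishes once n reaches the full dimensionality:

```python
import numpy as np
import scipy.linalg

# synthetic data with correlated features, standing in for only_2
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20)) @ rng.normal(size=(20, 20))
mu = X.mean(axis=0)
Xc = X - mu

eigenvalues, eigenvec = scipy.linalg.eigh(np.cov(Xc, rowvar=False))
eigenvec = eigenvec[:, ::-1]  # columns reordered to descending eigenvalue

errors = []
for n in (1, 5, 20):
    # project onto the top-n eigenvectors, then reconstruct
    Xhat = Xc @ eigenvec[:, :n] @ eigenvec[:, :n].T + mu
    errors.append(np.linalg.norm(X - Xhat))
print(errors)  # monotonically decreasing; ~0 at n = 20
```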

But in sklearn:

from sklearn.decomposition import PCA

pca = PCA(n_components=1)
pca.fit(only_2)

data_reduced = np.dot(only_2, pca.components_.T) # transform
data_original = np.dot(data_reduced, pca.components_) # inverse_transform

As far as I can tell, you are fitting only the FIRST component (the one that maximizes the variance), not all 300.
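You can see the mismatch directly from the shape of `pca.components_`, which is always `(n_components, n_features)`. A minimal illustration on synthetic data (an assumption, not the poster's arrays):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))

# n_components controls how many principal directions are kept
print(PCA(n_components=1).fit(X).components_.shape)  # (1, 10)
print(PCA(n_components=3).fit(X).components_.shape)  # (3, 10)
```

Projecting onto a single component and reconstructing from it can only ever recover a rank-1 approximation, which is why the two images look so different.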

Furthermore:

One thing I can say clearly is that you seem to understand what is happening in PCA, but you are having trouble implementing it. Correct me if I'm wrong, but:

data_reduced = np.dot(only_2, pca.components_.T) # transform
data_original = np.dot(data_reduced, pca.components_) # inverse_transform

In this part you are projecting your data onto your eigenvectors, which is what you should be doing in PCA, but in sklearn what you should do instead is the following:

import numpy as np
from sklearn.decomposition import PCA

pca = PCA(n_components=300)
pca.fit_transform(only_2)

If you can show me how you created only_2, I can give you a more specific answer tomorrow.

Here is what sklearn says about PCA's fit_transform: http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html#sklearn.decomposition.PCA.fit_transform

fit_transform(X, y=None)
Fit the model with X and apply the dimensionality reduction on X.

Parameters: 
X : array-like, shape (n_samples, n_features)
Training data, where n_samples is the number of samples and n_features is the number of features.

y : Ignored
Returns:    
X_new : array-like, shape (n_samples, n_components)
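Per that documentation, `fit_transform` does the fit and the projection in one call and returns the reduced array directly. A minimal sketch on synthetic data (an assumption):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 8))

# fit and project in one step; result has shape (n_samples, n_components)
X_new = PCA(n_components=2).fit_transform(X)
print(X_new.shape)  # (40, 2)
```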