Initializing a GMM with sklearn in Python

I want to create an sklearn GMM object with a predefined set of means, weights and covariances (on a grid).

So far I managed to do this:

from functools import reduce  # reduce is no longer a builtin in Python 3

from sklearn.mixture import GaussianMixture
import numpy as np


def get_grid_gmm(subdivisions=[10, 10, 10], variance=0.05):
    n_gaussians = reduce(lambda x, y: x * y, subdivisions)
    step = [1.0 / (2 * subdivisions[0]), 1.0 / (2 * subdivisions[1]), 1.0 / (2 * subdivisions[2])]

    # grid of component centres over the unit cube
    means = np.mgrid[step[0]: 1.0 - step[0]: complex(0, subdivisions[0]),
                     step[1]: 1.0 - step[1]: complex(0, subdivisions[1]),
                     step[2]: 1.0 - step[2]: complex(0, subdivisions[2])]
    means = np.reshape(means, [-1, 3])
    covariances = variance * np.ones_like(means)
    weights = (1.0 / n_gaussians) * np.ones(n_gaussians)
    gmm = GaussianMixture(n_components=n_gaussians, covariance_type='spherical')
    gmm.weights_ = weights
    gmm.covariances_ = covariances
    gmm.means_ = means
    return gmm


def main():
    xx = np.random.rand(100, 3)
    gmm = get_grid_gmm()
    y = gmm.predict_proba(xx)  # fails here: the model was never fitted


if __name__ == "__main__":
    main()

The problem is that it lacks the gmm.predict_proba() method that I need to use later on.
How can I overcome this?

UPDATE: I updated the code to show a complete example of the error.

UPDATE2

I updated the code according to the comments and answers:

from functools import reduce  # reduce is no longer a builtin in Python 3

from sklearn.mixture import GaussianMixture
# note: in scikit-learn >= 0.22 this private module is sklearn.mixture._gaussian_mixture
from sklearn.mixture.gaussian_mixture import _compute_precision_cholesky
import numpy as np


def get_grid_gmm(subdivisions=[10, 10, 10], variance=0.05):
    n_gaussians = reduce(lambda x, y: x * y, subdivisions)
    step = [1.0 / (2 * subdivisions[0]), 1.0 / (2 * subdivisions[1]), 1.0 / (2 * subdivisions[2])]

    # grid of component centres over the unit cube
    means = np.mgrid[step[0]: 1.0 - step[0]: complex(0, subdivisions[0]),
                     step[1]: 1.0 - step[1]: complex(0, subdivisions[1]),
                     step[2]: 1.0 - step[2]: complex(0, subdivisions[2])]
    means = np.reshape(means, [3, -1]).T               # (n_components, n_features)
    covariances = variance * np.ones(n_gaussians)      # spherical: one variance per component
    cov_type = 'spherical'
    weights = (1.0 / n_gaussians) * np.ones(n_gaussians)
    gmm = GaussianMixture(n_components=n_gaussians, covariance_type=cov_type)
    gmm.weights_ = weights
    gmm.covariances_ = covariances
    gmm.means_ = means
    # predict_proba & co. rely on the Cholesky factors of the precisions,
    # which are normally computed during fit()
    gmm.precisions_cholesky_ = _compute_precision_cholesky(covariances, cov_type)
    gmm.precisions_ = gmm.precisions_cholesky_ ** 2
    return gmm


def main():
    xx = np.random.rand(100, 3)
    gmm = get_grid_gmm()
    y = gmm._estimate_log_prob(xx)  # per-component log densities, shape (n_samples, n_components)
    y = np.exp(y)


if __name__ == "__main__":
    main()

There are no more errors, but _estimate_log_prob and predict_proba do not produce the same results for a fitted GMM. Why is that?

Best answer: Since you are not training the model but only using it for estimation, you do not need the GaussianMixture object at all; you can use the same functions it uses under the hood. Try _estimate_log_gaussian_prob; that is what it does internally.
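
A minimal sketch of that suggestion, assuming scikit-learn 0.18 to 0.21 where these private helpers live in sklearn.mixture.gaussian_mixture (the module is sklearn.mixture._gaussian_mixture from 0.22 on); the small 3x3x3 grid here is just a stand-in for the grid built by get_grid_gmm above:

import numpy as np
# in scikit-learn >= 0.22: from sklearn.mixture._gaussian_mixture import ...
from sklearn.mixture.gaussian_mixture import (_compute_precision_cholesky,
                                              _estimate_log_gaussian_prob)

# stand-in grid: 3x3x3 component centres on the unit cube
grid = np.mgrid[1.0/6: 5.0/6: 3j, 1.0/6: 5.0/6: 3j, 1.0/6: 5.0/6: 3j]
means = grid.reshape(3, -1).T                      # (n_components, n_features)
covariances = 0.05 * np.ones(len(means))           # spherical variances
precisions_chol = _compute_precision_cholesky(covariances, 'spherical')

X = np.random.rand(100, 3)
# log N(X[i] | means[k], covariances[k] * I), shape (n_samples, n_components)
log_prob = _estimate_log_gaussian_prob(X, means, precisions_chol, 'spherical')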

Have a look at the sources:

in particular the base class:
https://github.com/scikit-learn/scikit-learn/blob/ab93d657eb4268ac20c4db01c48065b5a1bfe80d/sklearn/mixture/base.py#L342

which calls the model-specific method, which in turn calls a function:
https://github.com/scikit-learn/scikit-learn/blob/ab93d657eb4268ac20c4db01c48065b5a1bfe80d/sklearn/mixture/gaussian_mixture.py#L671
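
This chain also explains the discrepancy noted in UPDATE2: _estimate_log_prob returns the raw per-component log densities, while predict_proba goes through _estimate_weighted_log_prob, i.e. it first adds the log mixture weights and then normalizes across components. A rough sketch of that normalization, reusing get_grid_gmm from UPDATE2 (exact private method names can differ between scikit-learn versions):

import numpy as np
from scipy.special import logsumexp  # scipy.misc.logsumexp in very old SciPy

xx = np.random.rand(100, 3)
gmm = get_grid_gmm()

log_prob = gmm._estimate_log_prob(xx)        # raw per-component log densities
weighted = log_prob + np.log(gmm.weights_)   # what _estimate_weighted_log_prob adds
resp = np.exp(weighted - logsumexp(weighted, axis=1, keepdims=True))

# resp should match gmm.predict_proba(xx); np.exp(log_prob) alone does not
print(np.allclose(resp, gmm.predict_proba(xx)))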
