nlp – Creating a Spark schema for GloVe word vector files

The GloVe pre-trained word vectors, which can be downloaded here (https://nlp.stanford.edu/projects/glove/), have the following file format:

government 0.38797 -1.0825 0.45025 -0.23341 0.086307 -0.25721 -0.18281 -0.10037 -0.50099 -0.58361 -0.052635 -0.14224 0.0090217 -0.38308 0.18503 0.42444 0.10611 -0.1487 1.0801 0.065757 0.64552 0.1908 -0.14561 -0.87237 -0.35568 -2.435 0.28428 -0.33436 -0.56139 0.91404 4.0129 0.072234 -1.2478 -0.36592 -0.50236 0.011731 -0.27409 -0.50842 -0.2584 -0.096172 -0.67109 0.40226 0.27912 -0.37317 -0.45049 -0.30662 -1.6426 1.1936 0.65343 -0.76293

It is a space-separated file in which the first token on each line is the word and the remaining N columns are the floating-point values of that word's vector. N can be 50, 100, 200, or 300, depending on which file is used. The example above is N = 50 (i.e., 50-dimensional word vectors).

If I load the data file as a CSV with sep=" " and header=False (the file has no header), I get the following:

Row(_c0='the', _c1='0.418', _c2='0.24968', _c3='-0.41242', _c4='0.1217', _c5='0.34527', _c6='-0.044457', _c7='-0.49688', _c8='-0.17862', _c9='-0.00066023', _c10='-0.6566', _c11='0.27843', _c12='-0.14767', _c13='-0.55677', _c14='0.14658', _c15='-0.0095095', _c16='0.011658', _c17='0.10204', _c18='-0.12792', _c19='-0.8443', _c20='-0.12181', _c21='-0.016801', _c22='-0.33279', _c23='-0.1552', _c24='-0.23131', _c25='-0.19181', _c26='-1.8823', _c27='-0.76746', _c28='0.099051', _c29='-0.42125', _c30='-0.19526', _c31='4.0071', _c32='-0.18594', _c33='-0.52287', _c34='-0.31681', _c35='0.00059213', _c36='0.0074449', _c37='0.17778', _c38='-0.15897', _c39='0.012041', _c40='-0.054223', _c41='-0.29871', _c42='-0.15749', _c43='-0.34758', _c44='-0.045637', _c45='-0.44251', _c46='0.18785', _c47='0.0027849', _c48='-0.18411', _c49='-0.11514', _c50='-0.78581')
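For reference, the load that produces this might look as follows (a sketch only; the path is a placeholder, and a sqlContext is assumed to exist):

df_raw = sqlContext.read.format("csv").\
             option("delimiter", " ").\
             option("header", "false").\
             load("path_to_your_glove_data")
df_raw.first()  # every column is read back as a string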

My question is: is there a way to specify a schema so that the first column is read in as a StringType column and the remaining N columns are read as an ArrayType of N float values?

Best answer You can try the following pyspark approach to build the desired schema, which you can then pass to the schema() option when loading the GloVe data. The idea is to use a loop to define FloatType fields for the N-dimensional embedding. Admittedly it is a bit hacky, and there may be a more elegant solution.

from pyspark.sql.types import StructType, StructField
from pyspark.sql.types import StringType, FloatType

def make_glove_schema(keyword="word", N=50):
    """Make a GloVe schema of length N + 1, with columns 1 : N+1 as float types.

    Params
    ------
    keyword (str): name of the first, i.e. 0th-index, column
    N (int): dimension of the GloVe representation

    Returns
    -------
    S (pyspark.sql.types.StructType): schema to use when loading GloVe data.

    """
    # One string field for the word, followed by N float fields
    # named "0" .. str(N - 1) for the vector components.
    word_field = StructField(keyword, StringType())
    vector_fields = [StructField(str(x), FloatType()) for x in range(N)]
    return StructType([word_field] + vector_fields)
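
For example, a quick check of the field names generated for a small N:

s = make_glove_schema(N=3)
print(s.fieldNames())  # ['word', '0', '1', '2']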

You can then reference/load the (space-separated) file as follows, assuming you have specified or already have a sqlContext:

glove_schema = make_glove_schema()
f = "path_to_your_glove_data"
df_glove = sqlContext.read.format("csv").\
                option("delimiter", " ").\
                schema(glove_schema).\
                load(f)
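
If you specifically want the N dimensions as a single ArrayType column rather than N separate float columns, one possible follow-up is to collapse them after loading. This is a sketch using pyspark.sql.functions.array; the column name "vector" is just an illustration:

from pyspark.sql.functions import array

# Gather the N float columns ("0" .. "49") into one ArrayType(FloatType()) column.
dim_cols = [str(x) for x in range(50)]
df_vectors = df_glove.select("word", array(*dim_cols).alias("vector"))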