I have a 3000×300 matrix file (floats). When I read it in and convert the values to float, I get float64, which is Python's default. I tried converting to float32 using numpy and map(), but both approaches look very inefficient.
My code:
x = open(readFrom, 'r').readlines()
y = [[float(i) for i in s.split()] for s in x]
Time taken: 0:00:00.996000
numpy version:
x = open(readFrom, 'r').readlines()
y = [[np.float32(i) for i in s.split()] for s in x]
Time taken: 0:00:06.093000
map():
x = open(readFrom, 'r').readlines()
y = [map(np.float32, s.split()) for s in x]
Time taken: 0:00:05.474000
How can I convert to float32 really efficiently?
Thanks.
Update:
numpy.loadtxt() and numpy.genfromtxt() don't work for large files (they give memory errors). I posted a related question; the method I describe there works for huge matrix files (50,000×5000). here is the question
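For files that do fit in memory, passing dtype=np.float32 to numpy.loadtxt is still the simplest route, since it parses straight to float32 without a float64 intermediate result. A minimal sketch (the small temp file here is only a stand-in for the real matrix file):

```python
import os
import tempfile
import numpy as np

# Create a small stand-in for the real whitespace-separated matrix file.
fd, path = tempfile.mkstemp(suffix=".txt")
with os.fdopen(fd, "w") as f:
    f.write("1.5 2.5 3.5\n4.5 5.5 6.5\n")

# loadtxt returns an array of the requested dtype directly.
y = np.loadtxt(path, dtype=np.float32)
os.remove(path)
print(y.dtype, y.shape)  # float32 (2, 3)
```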
Best answer: If memory is an issue, and if you know the size of the fields ahead of time, you probably don't want to read the entire file in first. Something like this is probably more appropriate:
# Allocate memory (np.empty would work too and be marginally faster,
# but probably not worth mentioning).
a = np.zeros((3000, 300), dtype=np.float32)
with open(filename) as f:
    for i, line in enumerate(f):
        a[i, :] = map(np.float32, line.split())
After a few quick (and surprising) tests on my machine, it seems map isn't even needed:
a = np.zeros((3000, 300), dtype=np.float32)
with open(filename) as f:
    for i, line in enumerate(f):
        a[i, :] = line.split()
This may not be the fastest option, but it is certainly the most memory-efficient one.
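Note that the answer's code is Python 2; in Python 3, map() returns an iterator, which cannot be assigned into a row slice, so the split-only variant is the one that works in both versions. The same pre-allocation technique as a self-contained Python 3 sketch (using a tiny throwaway file in place of the real one):

```python
import os
import tempfile
import numpy as np

rows, cols = 4, 3  # small stand-in for the 3000x300 file

# Write a tiny whitespace-separated matrix file for demonstration.
fd, path = tempfile.mkstemp(suffix=".txt")
with os.fdopen(fd, "w") as f:
    for r in range(rows):
        f.write(" ".join(str(r * cols + c) for c in range(cols)) + "\n")

# Pre-allocate the float32 array and fill it row by row;
# NumPy converts the list of strings from split() on assignment.
a = np.zeros((rows, cols), dtype=np.float32)
with open(path) as f:
    for i, line in enumerate(f):
        a[i, :] = line.split()

os.remove(path)
print(a.dtype, a[0])  # float32 [0. 1. 2.]
```

Only one row of strings is alive at a time, so peak memory stays close to the size of the final float32 array itself.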
Some tests:
import numpy as np

def func1():  # No map -- and pretty speedy :-).
    a = np.zeros((3000, 300), dtype=np.float32)
    with open('junk.txt') as f:
        for i, line in enumerate(f):
            a[i, :] = line.split()

def func2():
    a = np.zeros((3000, 300), dtype=np.float32)
    with open('junk.txt') as f:
        for i, line in enumerate(f):
            a[i, :] = map(np.float32, line.split())

def func3():
    a = np.zeros((3000, 300), dtype=np.float32)
    with open('junk.txt') as f:
        for i, line in enumerate(f):
            a[i, :] = map(float, line.split())

import timeit
print timeit.timeit('func1()', setup='from __main__ import func1', number=3)  # 1.36s
print timeit.timeit('func2()', setup='from __main__ import func2', number=3)  # 11.53s
print timeit.timeit('func3()', setup='from __main__ import func3', number=3)  # 1.72s
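As one further point of comparison (not part of the original answer), numpy.fromfile in text mode can parse the whole file in a single call, at the cost of holding the full result in memory at once. A hedged sketch, assuming a purely whitespace-separated file of numbers:

```python
import os
import tempfile
import numpy as np

rows, cols = 4, 3

# Build a small demonstration file of whitespace-separated values.
fd, path = tempfile.mkstemp(suffix=".txt")
with os.fdopen(fd, "w") as f:
    for r in range(rows):
        f.write(" ".join(str(r * cols + c) for c in range(cols)) + "\n")

# sep=" " switches fromfile into text mode; whitespace (including
# newlines) separates items, so the result is a flat array to reshape.
a = np.fromfile(path, dtype=np.float32, sep=" ").reshape(rows, cols)
os.remove(path)
print(a.shape, a.dtype)
```

Because this reads everything in one pass, it trades the row-by-row memory profile of the loop above for raw parsing speed.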