I have a program running on an Intel Edison (32-bit Yocto Linux). It reads sensor data and writes it to a file. Each packet contains 1 int and 13 doubles, and 100 packets arrive per second. After a while I remove the files from the device and read them with a tool running on an x64 Windows machine.
Currently I am writing the data as raw text (because strings are nice and portable). However, given the amount of data that will be written, I am looking for ways to save space - without losing any data when it is interpreted on the other side.
My initial thought was to create a struct like this:
struct dataStruct {
    char front;
    int a;
    double b, c, d, e, f, g, h, i, j, l, m, n, o;
    char end;
};
and then a union like this:
union dataUnion {
    dataStruct d;
    char c[110];
};
// 110 was chosen because an int = 4 chars and a double = 8 chars,
// so 13*8 = 104, and therefore 1 + 4 + 13*8 + 1 = 110
and then write the char array out to the file. However, a little reading tells me that such an implementation may not be compatible across OSes (and worse... it might work some of the time and not others...).
So I am wondering - is there a portable way to save this data other than as raw text?
Best answer: As others have said, serialization is probably the best solution to your problem.
Since you are in a resource-constrained environment, I would suggest something like MsgPack. It is header-only (given a C++11 compiler), quite lightweight, the format is simple, and the C++ interface is nice. It even lets you serialize user-defined types (i.e. classes/structs) very easily:
// adapted from https://github.com/msgpack/msgpack-c/blob/master/QUICKSTART-CPP.md
#include <msgpack.hpp>
#include <vector>
#include <string>

struct dataStruct {
    int a;
    double b, c, d, e, f, g, h, i, j, l, m, n, oo; // yes "oo", because "o" clashes with msgpack :/
    MSGPACK_DEFINE(a, b, c, d, e, f, g, h, i, j, l, m, n, oo);
};

int main(void) {
    std::vector<dataStruct> vec;
    // add some elements into vec...

    // you can serialize dataStruct directly
    msgpack::sbuffer sbuf;
    msgpack::pack(sbuf, vec);

    msgpack::unpacked msg;
    msgpack::unpack(&msg, sbuf.data(), sbuf.size());
    msgpack::object obj = msg.get();

    // you can convert the object back to std::vector<dataStruct> directly
    std::vector<dataStruct> rvec;
    obj.convert(&rvec);
}
As an alternative, you could have a look at Google's FlatBuffers. It looks very resource-efficient, but I have not tried it yet.
Edit: here is a complete example illustrating the whole serialization - file I/O - deserialization cycle:
// adapted from:
// https://github.com/msgpack/msgpack-c/blob/master/QUICKSTART-CPP.md
// https://github.com/msgpack/msgpack-c/wiki/v1_1_cpp_unpacker#msgpack-controls-a-buffer
#include <msgpack.hpp>
#include <fstream>
#include <iostream>

using std::cout;
using std::endl;

struct dataStruct {
    int a;
    double b, c, d, e, f, g, h, i, j, l, m, n, oo; // yes "oo", because "o" clashes with msgpack :/
    MSGPACK_DEFINE(a, b, c, d, e, f, g, h, i, j, l, m, n, oo);
};

std::ostream& operator<<(std::ostream& out, const dataStruct& ds)
{
    out << "[a:" << ds.a << " b:" << ds.b << " ... oo:" << ds.oo << "]";
    return out;
}

int main(void) {
    // serialize
    {
        // prepare the (buffered) output file; open it in binary mode so the
        // raw bytes are not mangled by newline translation on Windows
        std::ofstream ofs("log.bin", std::ios::binary);

        // prepare a data structure and fill in sample data
        dataStruct ds;
        ds.a = 1;
        ds.b = 1.11;
        ds.oo = 101;
        msgpack::pack(ofs, ds);
        cout << "serialized: " << ds << endl;

        ds.a = 2;
        ds.b = 2.22;
        ds.oo = 202;
        msgpack::pack(ofs, ds);
        cout << "serialized: " << ds << endl;

        // continuously receiving data
        //while ( /* data is being received... */ ) {
        //
        //    // initialize ds...
        //
        //    // serialize ds
        //    // You can use any class that has the following member function:
        //    // https://github.com/msgpack/msgpack-c/wiki/v1_1_cpp_packer#buffer
        //    msgpack::pack(ofs, ds);
        //}
    }

    // deserialize
    {
        // prepare the input file (binary mode again)
        std::ifstream ifs("log.bin", std::ios::binary);
        std::streambuf* pbuf = ifs.rdbuf();

        // The chunk size may be decided by receive performance,
        // the transport layer's protocol and so on.
        const std::size_t try_read_size = 100; // arbitrary number...
        msgpack::unpacker unp;
        dataStruct ds;

        // read data while there are still unprocessed bytes...
        while (pbuf->in_avail() > 0) {
            unp.reserve_buffer(try_read_size);
            // unp has at least try_read_size bytes of buffer at this point.
            // read the message into msgpack::unpacker's internal buffer directly.
            std::size_t actual_read_size = ifs.readsome(unp.buffer(), try_read_size);

            // tell msgpack::unpacker the actually consumed size.
            unp.buffer_consumed(actual_read_size);

            msgpack::unpacked result;
            // MessagePack data loop
            while (unp.next(result)) {
                msgpack::object obj(result.get());
                obj.convert(&ds);
                // use ds
                cout << "deserialized: " << ds << endl;
            }
            // All complete msgpack messages have been processed at this point;
            // continue to read additional messages.
        }
    }
}
Output:
serialized: [a:1 b:1.11 ... oo:101]
serialized: [a:2 b:2.22 ... oo:202]
deserialized: [a:1 b:1.11 ... oo:101]
deserialized: [a:2 b:2.22 ... oo:202]