I have a 3×3 homography matrix that works fine with OpenCV's warpPerspective, but for performance reasons I need to do the warp on the GPU. What is the best approach? I tried multiplying by the homography in the vertex shader to get texture coordinates and then rendering a quad, but I get strange distortions. I'm not sure the interpolation is working as I expect. Outputs attached for comparison (they involve two different but close-enough shots).
Absolute difference of the GPU warp and the other image:
Composite of the OpenCV warp and the other image:
Edit:
Here are my shaders. The task is image rectification (making the epipolar lines the scanlines) followed by an absolute difference.
// Vertex Shader
static const char* warpVS = STRINGIFY
(
    uniform highp mat3 homography1;
    uniform highp mat3 homography2;
    uniform highp int width;
    uniform highp int height;
    attribute highp vec2 position;
    varying highp vec2 refTexCoords;
    varying highp vec2 curTexCoords;

    highp vec2 convertToTexture(highp vec3 pixelCoords) {
        pixelCoords /= pixelCoords.z; // need to project
        pixelCoords /= vec3(float(width), float(height), 1.0);
        pixelCoords.y = 1.0 - pixelCoords.y; // origin is in the bottom-left corner for textures
        return pixelCoords.xy;
    }

    void main(void)
    {
        // map pixel coordinates to normalized device coordinates [-1, 1]
        gl_Position = vec4(position / vec2(float(width) / 2.0, float(height) / 2.0) - vec2(1.0), 0.0, 1.0);
        gl_Position.y = -gl_Position.y;
        highp vec3 initialCoords = vec3(position, 1.0);
        refTexCoords = convertToTexture(homography1 * initialCoords);
        curTexCoords = convertToTexture(homography2 * initialCoords);
    }
);

// Fragment Shader
static const char* warpFS = STRINGIFY
(
    varying highp vec2 refTexCoords;
    varying highp vec2 curTexCoords;
    uniform mediump sampler2D refTex;
    uniform mediump sampler2D curTex;
    uniform mediump sampler2D maskTex;

    void main(void)
    {
        if (texture2D(maskTex, refTexCoords).r == 0.0) {
            discard;
        }
        // skip fragments that map outside the current image
        if (any(bvec4(curTexCoords.x < 0.0, curTexCoords.y < 0.0, curTexCoords.x > 1.0, curTexCoords.y > 1.0))) {
            discard;
        }
        mediump vec4 referenceColor = texture2D(refTex, refTexCoords);
        mediump vec4 currentColor = texture2D(curTex, curTexCoords);
        gl_FragColor = vec4(abs(referenceColor.r - currentColor.r), 1.0, 0.0, 1.0);
    }
);
Best answer: I think you just need to do the projection per pixel. Make refTexCoords and curTexCoords at least vec3, and do the divide by z in the pixel shader right before the texture lookup. Or, even better, use the texture2DProj GLSL function.
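A minimal sketch of that change (only the relevant lines; it assumes the width/height uniforms are also declared in the fragment shader, since the scale to [0, 1] and the y-flip must happen after the divide):

```glsl
// Vertex shader: output the homogeneous coordinates, do NOT divide by z here
varying highp vec3 refTexCoords;
// ...
refTexCoords = homography1 * vec3(position, 1.0);

// Fragment shader: the rasterizer interpolates the vec3 linearly (exact,
// because H * p is linear in p); the non-linear divide happens per fragment
highp vec2 uv = refTexCoords.xy / refTexCoords.z;
uv /= vec2(float(width), float(height));
uv.y = 1.0 - uv.y;
mediump vec4 referenceColor = texture2D(refTex, uv);
```

Alternatively, since the scale and y-flip are themselves linear, you can fold them into the homography on the CPU side and reduce the lookup to a single texture2DProj(refTex, refTexCoords).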
You want to do everything that is linear in the vertex shader, but an operation like projection (the divide by z) has to be done per pixel in the fragment shader.
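To see numerically why dividing at the vertices breaks, here is a small check with a made-up homography H (hypothetical values, not the question's matrix). Because H * p is linear in p, linearly interpolating the homogeneous vec3 and dividing afterwards is exact, while linearly interpolating the already-divided 2D coordinates is not:

```python
import numpy as np

# Hypothetical 3x3 homography with a non-trivial bottom row, so the
# projective divide varies non-linearly across the quad.
H = np.array([[1.0,  0.2,  5.0],
              [0.1,  1.0,  3.0],
              [1e-3, 2e-3, 1.0]])

corners = np.array([[0.0, 0.0], [100.0, 0.0], [100.0, 100.0], [0.0, 100.0]])

def project(p):
    """Apply H to a pixel and do the projective divide."""
    v = H @ np.array([p[0], p[1], 1.0])
    return v[:2] / v[2]

center = corners.mean(axis=0)  # midpoint of the quad, (50, 50)
exact = project(center)

# What the buggy shader does: divide at each vertex, then let the
# rasterizer interpolate the resulting 2D coordinates linearly.
interp_wrong = np.mean([project(c) for c in corners], axis=0)

# The fix: interpolate the homogeneous vec3 linearly, divide per fragment.
v3 = np.mean([H @ np.array([c[0], c[1], 1.0]) for c in corners], axis=0)
interp_right = v3[:2] / v3[2]

print(np.allclose(interp_right, exact))  # True
print(np.allclose(interp_wrong, exact))  # False
```

The error of the per-vertex divide grows with the magnitude of the homography's bottom-row terms, which is why the artifact looks like a smooth "bending" of the warped image rather than random noise.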
This link may be helpful for background: http://www.reedbeta.com/blog/2012/05/26/quadrilateral-interpolation-part-1/