I made a little Python script that produces 3D point coordinates, derived from stereo vision (two IR cameras). The 3D coordinates are certainly correct.
Now I have a third RGB camera and I was given its calibration matrices (which I can't verify): K is the intrinsic matrix and R, t are the components of the extrinsic parameters. The RGB image is 1280x800, but I can't establish the right correspondence.
I thought it would be easy to use the projection formula, deriving the projection matrix "Pcolor" as Pcolor = K[R|t] and re-projecting the XYZ 3D coordinates (named "Pworld") as follows.
I expected to obtain a homogeneous vector (u, v, w), so I normalized the result by dividing it by w.
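Written out, this is the projection I am trying to compute:

$$
\begin{pmatrix} u \\ v \\ w \end{pmatrix} = K \,[R \mid t]\, \begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix},
\qquad \text{pixel coordinates} = \left(\tfrac{u}{w},\ \tfrac{v}{w}\right)
$$

Here is the relevant part of the script: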
import numpy as np

# Rcolor is the 3x3 rotation matrix, Tcolor the 3x1 column translation vector
ExtrColor = np.concatenate((Rcolor, Tcolor), axis=1)   # 3x4 extrinsic matrix [R|t]

# ... after the calculation of X, Y, Z ...
Pworld = np.matrix([[X], [Y], [Z], [1]])               # homogeneous 3D point (4x1)
Pcolor = np.dot(np.dot(Kcolor, ExtrColor), Pworld)     # 3x1 homogeneous pixel vector (u, v, w)
u = round(Pcolor[0, 0] / Pcolor[2, 0])
v = round(Pcolor[1, 0] / Pcolor[2, 0])
With this code I obtain u and v values greater than 12000, instead of values inside the image range (u < 1280 and v < 800).
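To make the failure concrete, this is the simple check that keeps failing (sizes taken from the 1280x800 RGB image):

width, height = 1280, 800                        # RGB image resolution
inside = (0 <= u < width) and (0 <= v < height)
print(u, v, inside)                              # for my points u and v are above 12000, so inside is False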
I can't figure out what the problem is. Has anyone run into something similar? I don't think it is a scale-factor problem in the XYZ coordinates; it should not matter in this formulation. Is it a problem with my use of np.dot? I'm quite sure it is a small error, but I can't see it.
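To try to rule out np.dot itself, here is a minimal self-contained test with made-up calibration values (round-number intrinsics, identity rotation, zero translation, so these are not my real matrices):

import numpy as np

# Made-up calibration, only to test the multiplication chain (not my real values)
Kcolor = np.array([[800.0,   0.0, 640.0],
                   [  0.0, 800.0, 400.0],
                   [  0.0,   0.0,   1.0]])       # intrinsics for a 1280x800 image
Rcolor = np.eye(3)                               # identity rotation
Tcolor = np.zeros((3, 1))                        # zero translation

ExtrColor = np.concatenate((Rcolor, Tcolor), axis=1)   # 3x4 [R|t]
Pworld = np.matrix([[0.1], [0.05], [2.0], [1.0]])      # a point 2 units in front of the camera

Pcolor = np.dot(np.dot(Kcolor, ExtrColor), Pworld)
u = Pcolor[0, 0] / Pcolor[2, 0]
v = Pcolor[1, 0] / Pcolor[2, 0]
print(u, v)   # 680.0 420.0 -> well inside 1280x800

Since this gives sensible pixel coordinates, I suspect the error is in my real K, R, t (or in how I combine them) rather than in the multiplication itself.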
Thanks for answering (and sorry for my poor English)!
I checked similar questions on Stack Overflow; the problem there seems similar, but the method is different.
PS: This time I would like to obtain the result without using OpenCV or other function libraries, if possible.