#35, 09-12-2012, 06:47 AM
Registered User
Join Date: Aug 2011
Location: Sliema Malta
Posts: 497
Did you write the 3D scanner from scratch, as a C++ app? Just curious. For the interface, did you use a GUI library, or does it run solely as a console app?

I have made my own game engine and know a fair amount about the properties of shaders and how you mathematically calculate and derive vectors, matrices, normals, specular terms, raycasts, etc. But I was curious about the reverse-engineering process: how an image is captured and the reconstruction is calculated. You are not using a given eye location and raycasting into 3D space to create the 2D image-plane pixels, but the process must be similar in order to create the point cloud. Does it still have many issues with BRDFs that aren't Lambertian?

Are you setting up a grid and calibrating the camera off reference points? With my motion capture system I would have a rig of 12 cameras, "wand" the area to calibrate, and then be able to triangulate the reflectors on my suits. Just interesting, since it is kind of the opposite side of the programming spectrum from what I am doing.


Last edited by Chavfister; 09-12-2012 at 06:58 AM.