
# Performance analysis of 3-D shape measurement algorithm with a short baseline projector-camera system

*Robotics and Biomimetics*
**volume 1**, Article number: 1 (2014)

## Abstract

3-D shape measurement based on structured light has been studied extensively in recent decades. A common way to model such a system is the binocular stereovision-like model, in which the projector is treated as a camera, so that a projector-camera system is unified with the well-established traditional binocular stereovision framework. After calibrating the projector and camera, 3-D shape information is obtained by conventional triangulation. However, such a stereovision-like system suffers from the short baseline problem, which limits measurement accuracy. Hence, in this work, we present a new projecting-imaging model based on fringe projection profilometry (FPP). We first derive a rigorous mathematical relationship between the height of an object’s surface, the phase difference distribution map, and the parameters of the setup. Based on this model, we then study how the uncertainty of the relevant parameters, particularly the baseline’s length, affects the 3-D shape measurement accuracy. We provide an extensive uncertainty analysis of the proposed model through partial derivative analysis, relative error analysis, and sensitivity analysis. Moreover, a Monte Carlo simulation experiment is conducted, which characterizes the measurement performance of a projector-camera system with a short baseline.

## Introduction

Noncontact optical measurement methodology has been widely used in many industrial applications, such as industrial inspection and 3-D printing manufacturing. Among mature optical 3-D measurement techniques, the structured light technique has become popular in recent years due to its high precision, flexibility, and robustness on texture-less object surfaces. Numerous works have been presented on this topic [1]–[3]. Depending on the underlying model, the methods for obtaining 3-D shape information with a structured light system can be divided into two main categories: one common way is to use the conventional binocular stereovision (CSV) model, and the other is to adopt the fringe projection profilometry (FPP) technique. In the first category, the projector is treated as a camera, and both projector and camera are required to be pre-calibrated before the 3-D shape measurement task. Many camera calibration methods can be utilized directly [4],[5]. However, even with the latest accurate projector calibration methods using an active target and the phase-shifting technique [17],[18], the accuracy of projector calibration can hardly reach the same level as that of the camera. One intuitive reason is that the parameters of a projector cannot be calibrated individually without the help of a camera. Thus, error propagation from the camera calibration process is unavoidable, and the overall system calibration accuracy is limited. Biased calibrated parameters of the camera and projector decrease the measurement accuracy. In particular, in a short baseline arrangement, the bias from feature point localization on the image is also magnified by biased parameters. Therefore, accurately calibrating a short baseline system is more critical than calibrating a general configuration, which usually has a much larger baseline.
In the second category, the projector is regarded as a grating optical device. The height information is obtained from the phase-to-height mapping between the phase distribution and the geometric parameters of the system. Hence, the projector no longer needs to be pre-calibrated. In some phase-to-height models, even the camera is not required to be pre-calibrated [7],[8].

In this paper, our method falls into the second category. We propose a generic FPP-based projecting-imaging model and explore the relationship between the phase distribution, the height information, and the geometric parameters of the system. Based on the proposed model, we then study how the uncertainty of the relevant parameters, the length of the baseline in particular, affects the 3-D shape measurement accuracy. In other words, we focus on the performance analysis of 3-D shape measurement under our proposed model, particularly when the system has a short baseline.

## Background and literature review

### Conventional stereovision model

The schematic diagram of a projector-camera (Pro-Cam) structured light system is illustrated in Figure 1. The key problem in the 3-D shape measurement process is how to determine the correspondence between a point on the camera’s image plane, a point on the projector’s image plane, and a point on the object’s surface. It is worth noticing that these three points are the three vertices of a triangle. Generally, the structured light pattern projected by the projector plays the role of a bridge. Once the relationship between the projector image plane and the camera image plane is established, the short baseline 3-D shape measurement system illustrated in Figure 1 can be unified with a classic binocular stereovision system.

Given a point on the camera’s image plane and its corresponding point on the projector’s image plane, the coordinates of the matching point on the object’s surface can be determined by the conventional triangulation algorithm [16],[19]. The relationship between a nondistorted point *m*_{u}(*u*_{c}, *v*_{c}) on the camera’s image plane and its corresponding point *P*_{W} = [*X*^{W}, *Y*^{W}, *Z*^{W}]^{T} on the object’s surface can be described as follows:

$$ s_{c} \begin{bmatrix} u_{c} \\ v_{c} \\ 1 \end{bmatrix} = \begin{bmatrix} f_{cx} & \lambda_{c} & u_{c0} \\ 0 & f_{cy} & v_{c0} \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} R_{c} & t_{c} \end{bmatrix} \begin{bmatrix} X^{W} \\ Y^{W} \\ Z^{W} \\ 1 \end{bmatrix} $$

where (*u*_{c0}, *v*_{c0}) is the coordinate of the principal point, *f*_{cx} and *f*_{cy} are the focal lengths in pixels of the camera image plane along the *u* and *v* axes, *λ*_{c} denotes the skewness of the two image axes on the camera’s image plane, *s*_{c} is a scale factor, and [*R*_{c} *t*_{c}] is the extrinsic transformation. The perspective transformation of the camera imaging process can then be simplified from Equation 1 and denoted as

Similarly, for the projector, using the phase-shifting technique [17], we can obtain an analogous relationship:

Rearranging the variables, we can get the following relationship from Equations 2 and 3:

$$ \mathbf{K} P = \mathbf{Q} $$

where

$$ \mathbf{K} = \begin{bmatrix} A_{11}-A_{31}u_{2}^{c} & A_{12}-A_{32}u_{2}^{c} & A_{13}-A_{33}u_{2}^{c} \\ A_{21}-A_{31}v_{2}^{c} & A_{22}-A_{32}v_{2}^{c} & A_{23}-A_{33}v_{2}^{c} \\ B_{11}-B_{31}u_{1}^{p} & B_{12}-B_{32}u_{1}^{p} & B_{13}-B_{33}u_{1}^{p} \\ B_{21}-B_{31}v_{1}^{p} & B_{22}-B_{32}v_{1}^{p} & B_{23}-B_{33}v_{1}^{p} \end{bmatrix} $$

is a 4 × 3 matrix,

$$ \mathbf{Q} = \begin{bmatrix} A_{34}u_{2}^{c}-A_{14} \\ A_{34}v_{2}^{c}-A_{24} \\ B_{34}u_{1}^{p}-B_{14} \\ B_{34}v_{1}^{p}-B_{24} \end{bmatrix} $$

is a vector, and *P* = [*X*^{W}, *Y*^{W}, *Z*^{W}]^{T}. It is worth noting that if we are given a real distorted image point *m*_{d}, it must first be transformed to the nondistorted point *m*_{u}. However, it is difficult to obtain the analytical inversion from a distorted image point to a nondistorted one directly. In this work, the iterative method [12] is adopted, and the iteration relationship is given as follows:

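The iterative relationship depends on the adopted distortion model. As a sketch of the idea, assuming a standard polynomial radial distortion model (our assumption for illustration; [12] may include additional terms), the fixed-point iteration can be written as:

```python
import numpy as np

def undistort_point(u_d, v_d, fx, fy, u0, v0, k1, k2, n_iter=10):
    """Map a distorted pixel (u_d, v_d) to its nondistorted location by
    fixed-point iteration, assuming x_d = x_u * (1 + k1*r^2 + k2*r^4)."""
    # Normalize the distorted pixel with the intrinsics.
    x_d = (u_d - u0) / fx
    y_d = (v_d - v0) / fy
    x_u, y_u = x_d, y_d  # initial guess: distortion ignored
    for _ in range(n_iter):
        r2 = x_u * x_u + y_u * y_u
        factor = 1.0 + k1 * r2 + k2 * r2 * r2
        # Invert the forward model at the current undistorted estimate.
        x_u, y_u = x_d / factor, y_d / factor
    return u0 + fx * x_u, v0 + fy * y_u
```

Each iteration re-evaluates the radial factor at the current undistorted estimate; for moderate distortion the iteration converges in a few steps.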
Therefore, if we know the real pixel point pair *m*_{d}^{c}(*u*_{d}^{c}, *v*_{d}^{c}) and *m*_{d}^{p}(*u*_{d}^{p}, *v*_{d}^{p}), the remaining vertex of the triangle, which is the corresponding point in 3-D space *P*_{W} = (*X*_{W}, *Y*_{W}, *Z*_{W})^{T}, can be obtained by the least-squares solution

$$ P_{W} = \left( \mathbf{K}^{T} \mathbf{K} \right)^{-1} \mathbf{K}^{T} \mathbf{Q}. $$

It is well known that 3-D shape measurement accuracy can be improved by appropriately enlarging the baseline between the two optical devices [6],[15]. One intuitive reason is that enlarging the baseline alleviates the ambiguity of the correspondence problem between pixels on the right and left cameras’ image planes. A brief schematic of a short baseline stereovision system is shown in Figure 2. Because a pixel on the image plane has a finite physical size, feature points *A* and *B* lie in the same uncertainty area (UA), denoted as a blue rhombus. All points lying in this area correspond to the same pixel on the right and left cameras’ image planes, respectively; in other words, all points in the same UA yield the same depth information. Hence, the short baseline problem can be defined as follows: the depth measurement error, represented by the UA in Figure 2, is enlarged in a short baseline system.
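The least-squares triangulation of Equation 6 can be sketched numerically. Here `A` and `B` denote the calibrated 3 × 4 camera and projector projection matrices, and (`uc`, `vc`), (`up`, `vp`) a matched nondistorted pixel pair; the function name is ours:

```python
import numpy as np

def triangulate(A, B, uc, vc, up, vp):
    """Solve K*P = Q (Equation 4) in the least-squares sense,
    P = (K^T K)^{-1} K^T Q, for the 3-D point P."""
    # Each matched pixel contributes two rows A_ij - A_3j*u (and v);
    # the fourth column moves to the right-hand side as Q.
    rows = np.array([
        A[0] - A[2] * uc,
        A[1] - A[2] * vc,
        B[0] - B[2] * up,
        B[1] - B[2] * vp,
    ])
    K, Q = rows[:, :3], -rows[:, 3]
    # lstsq computes the same normal-equations solution, but stably.
    P, *_ = np.linalg.lstsq(K, Q, rcond=None)
    return P
```

With noise-free projections of a known 3-D point, the recovered *P* matches the ground truth.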

Equation 6 shows that an accurate measurement result depends on four factors: (1) accurate determination of the pixel point *m*_{cu}(*u*_{c}, *v*_{c})^{T} on the camera’s image plane, (2) unbiased camera parameters, (3) unbiased projector parameters, and (4) accurate determination of the pixel point on the projector’s image plane that corresponds to the camera pixel. The first two conditions are easily satisfied with available camera calibration algorithms [4],[18] and well-developed image processing techniques [19],[20]. However, the latter two are much more difficult to achieve. Furthermore, since the projector cannot ‘capture’ like a camera, determining the corresponding pixel point on the projector’s image plane is a challenge. If the pixel point on the projector’s image plane is biased, as Figure 2 shows, the measurement error is enlarged in a short baseline Pro-Cam system.
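The projector-side correspondence is commonly established by projecting phase-shifted sinusoidal fringes and decoding the phase at each camera pixel. A minimal four-step sketch of the standard algorithm (an illustration, not the exact procedure of [17]):

```python
import numpy as np

def four_step_phase(I1, I2, I3, I4):
    """Wrapped phase from four fringe images shifted by pi/2:
    I_k = a + b*cos(phi + (k-1)*pi/2), k = 1..4."""
    return np.arctan2(I4 - I2, I1 - I3)

def projector_column(unwrapped_phase, period_px):
    """Convert unwrapped phase to a sub-pixel projector column,
    given the fringe period in projector pixels."""
    return unwrapped_phase * period_px / (2.0 * np.pi)
```

After phase unwrapping, each camera pixel maps to a projector column, supplying the corresponding point on the projector’s image plane needed by the triangulation.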

### FPP-based phase-to-height mapping model

Fringe projection techniques have been used for 3-D object surface measurement for years because of their flexibility and good performance. In these techniques, the projector is commonly regarded as a grating optical device: a series of fringe patterns (commonly sinusoidal) is projected onto the object’s surface and then captured from another direction by a camera. The captured fringe patterns are deformed according to the geometry of the object’s surface. Hence, the phase of the deformed pattern on the image plane can be retrieved through phase-measuring techniques. One classic technique is Fourier-transform analysis [7]; the other widely used technique is the phase-shifting algorithm [2]. Whichever technique is adopted, the critical final step is to create a mapping between a pixel point on the image plane, its corresponding phase, and the height information. A basic geometric setup of the measuring system is shown in Figure 3. In this setup, the optical axis *I*_{c}*O* of the camera system is perpendicular to the reference plane. The optical axis *I*_{p}*O* of the projection system intersects *I*_{c}*O* at point *O* and makes an angle *π*/2 − *θ* with the reference plane. The line joining the two optical centers is the baseline *b*, and the projector and camera have equal heights *L*_{p} = *L*_{c} = *L* with respect to the reference plane. In the work in [8], the phase-height relationship is simply derived using the triangulation method, which is

However, two hypotheses are assumed in that work. First, the distance between the camera’s optical center and the reference plane is much larger than the height of the object, i.e., *L* ≫ *h*. The second assumption is that the periodicity of the projected fringe pattern on the reference plane, denoted *p*, is evenly distributed along the *X*_{w} axis. In practice, these are only ideal conditions. A thorough analysis in [8] indicates that the periodicity of the projected fringe pattern on the reference plane is in fact a function of the lateral coordinate *x*, the periodicity *p*_{0} of the projected pattern on the LCD image plane, and the angle *θ* between the projector and camera. Hence, in the work in [9], based on the same geometry shown in Figure 3, a more practical expression for the phase-height relationship is given as follows:

More accurate results were reported in that work. However, the requirements of parallelism and orthogonality in [8],[10] still limit the generality and flexibility of actual measurement. Therefore, an improved structure of the measurement system was presented in [3]. In this structure, shown in Figure 4, the line joining the optical centers of the projector and camera systems is not parallel to the reference plane but makes an angle with it. In addition, neither optical axis is required to be orthogonal to the reference plane. Compared with the basic setup shown in Figure 3, it is more general and closer to the practical situation.

After transforming the world coordinate system to the charge-coupled device (CCD) imaging coordinate system, the final phase-to-height relationship is given as follows [3]:

where coefficients *C*_{1} to *C*_{7} are related to the geometric parameters of the measuring system and the intrinsic and extrinsic parameters of the imaging system. It is worth noticing that there is another work, presented by Du and Wang [1], in which the two optical devices (camera and projector) are arbitrarily arranged; that is, the geometric structure is also generic and has no special restriction. This implies that their model fits the case where the baseline between the two optical devices can be arbitrarily small. The phase-to-height mapping relationship is given similarly to Equation 10, as follows:

where coefficients *C*_{0} to *C*_{5} and *D*_{0} to *D*_{5} are related to the geometric parameters of the measuring system and the intrinsic and extrinsic parameters of the imaging system. As can be seen from Equations 10 and 11, in both works [1] and [3], the physically meaningful parameters (e.g., the length of the baseline) in the phase-to-height mapping model are hardly analyzed, due to the difficulty of isolating these parameters from the calibrated coefficients. Moreover, those works calibrate the coefficients by a least-squares method, not the related geometric parameters themselves. Therefore, it is necessary to present a practical and analyzable model for conveniently analyzing the geometric parameters. In particular, due to the specificity of our proposed short baseline arrangement, the influence of the baseline’s length is given priority in the analysis. Based on the generic setup in [3], we derive a new and different model for accurate phase-to-height mapping determination and parameter analysis.

## Research design and methodology

In the following, a phase-to-height mapping model is presented for analyzing the influence of the parameters. In this model, a rigorous mathematical relationship between the height of an object’s surface, the phase difference distribution map, and the parameters of the setup is first derived. Based on this model, we then study how the uncertainty of the relevant parameters, particularly the baseline’s length, affects the 3-D shape measurement accuracy. Uncertainty analyses of the proposed model, including partial derivative analysis, relative error analysis, and sensitivity analysis, are performed. Moreover, a Monte Carlo simulation experiment is conducted.

## Methods

### Our proposed projecting and imaging model

The optical geometry of our setup is shown in Figure 4. *I*_{p} and *I*_{c} are the exit pupil of the projector and the entrance pupil of the camera, respectively. The optical axes *I*_{p}*O* and *I*_{c}*O* cross the reference plane at point *O* and make angles *θ*_{1} and *θ*_{2} with the *Z*_{W} axis (i.e., the normal direction of the reference plane), respectively. The baseline between these two optical centers is *I*_{p}*I*_{c} = *b*, which is not parallel to the reference plane. *M* is the perpendicular projection of *I*_{p} on the reference plane, and the distance between them is *I*_{p}*M* = *L*_{p}. *N* is the perpendicular projection of *I*_{c} on the reference plane, and the distance between them is *I*_{c}*N* = *L*_{c}. Point *A* on the reference plane and point *P* on the object surface correspond to the same image pixel location on the CCD plane. Point *C* on the reference plane and point *P* on the object surface lie on the same pixel ray projected from the projector. We add several dashed lines in the figure as guidelines for the analysis. The dashed line *I*_{p}*F* is parallel to the reference plane and crosses the extension line of *BP* (*BP* = *h*) at point *D*; it intersects lines *I*_{c}*P* and *I*_{c}*N* at points *E* and *F*, respectively. In this work, we mainly focus on the measurement performance with respect to the influence of one parameter, the baseline *b*. Hence, similarly to the work in [10], we assume that the fringe patterns formed by the projector are parallel to *Y*_{W}. From the geometry in Figure 4, since triangle *APB* is similar to triangle *EI*_{c}*F*,

Similarly, from the fact that triangle *APB* is similar to triangle *ANI*_{c} and triangle *ACP* is similar to triangle *I*_{p}*PE*, we can get

Substituting Equation 12 into Equation 14, we get

Note that *AN* = *AC* + *OC* + *ON* = *AC* + *x* + *L*_{c} tan *θ*_{2}. Substituting this relationship and Equation 16 into Equation 18, we obtain

where *p* denotes the periodicity of the fringe patterns on the reference plane under divergent illumination. According to the work in [3], we can get

Substituting Equation 17 into Equation 16, we obtain the final relationship between the phase distribution *φ*(*x*, *y*) and the height information *h*(*x*, *y*), which is expressed as

It can also be written in a concise form as

where the parameters *c*_{1}, *c*_{2}, *c*_{3}, *c*_{4} are related to the geometric parameters *L*_{p}, *L*_{c}, *p*, *b*, *α*, *θ*_{1}, *θ*_{2} and can be denoted as

### Performance analysis

#### Influence of the length of baseline *b*

The geometric parameters of the system setup include the angle between the optical axes of the projector and the camera, the distance between the optical center of the camera system and the reference plane, the focal length of the camera system, the periodicity of the projected fringe patterns, etc. In this paper, we regard the baseline’s length as the priority factor and focus on its influence on the final measurement result. Equation 19 can be transformed into another form that takes the baseline *b* as an input variable:

where

$$ \begin{cases} K_{1}(x,y) = L_{\mathrm{p}} L_{\mathrm{c}}\, p \left| \Delta\phi_{PA}(x,y) \right| \\ K_{2}(x,y) = p \left| \Delta\phi_{PA}(x,y) \right| + 2\pi L_{\mathrm{p}} L_{\mathrm{c}} \tan\theta_{1} + 2\pi L_{\mathrm{c}}^{2} \tan\theta_{2} \\ K_{3}(x,y) = p \sin\alpha \left| \Delta\phi_{PA}(x,y) \right| + 2\pi x \sin\alpha + 2\pi L_{\mathrm{c}} \tan\theta_{2} \sin\alpha. \end{cases} $$

From Equation 19, we get the relationship between the phase difference and the height information:

Following the derivation method in [10], the partial derivative of Equation 23 with respect to the baseline *b* is calculated, and Equations 18 and 19 are substituted into the result. We get

where

$$ \begin{cases} Q_{1} = 2\pi p \sin\alpha \left( L_{\mathrm{p}} L_{\mathrm{c}} \tan\theta_{1} + L_{\mathrm{c}}^{2} \tan\theta_{2} + x - x b \sin\alpha + L_{\mathrm{c}} \tan\theta_{2} - L_{\mathrm{c}} \tan\theta_{2}\, b \sin\alpha \right) \\ Q_{2} = 2\pi p \sin^{2}\alpha \left( L_{\mathrm{c}} \tan\theta_{2} + x b \right) \\ Q_{3} = 2\pi p L_{\mathrm{p}} L_{\mathrm{c}} \left( x \sin\alpha + L_{\mathrm{c}} \tan\theta_{2} \sin\alpha \right) \\ Q_{4} = 2\pi p L_{\mathrm{p}} L_{\mathrm{c}}^{2} \left( L_{\mathrm{p}} \tan\theta_{1} + L_{\mathrm{c}} \tan\theta_{2} \right) \\ Q_{5} = 2\pi p L_{\mathrm{p}} L_{\mathrm{c}} \left( L_{\mathrm{c}} \tan\theta_{2} \sin\alpha + x \sin\alpha \right). \end{cases} $$

Equation 23 shows that the height error ∂*h*(*x*, *y*)/∂*b* is a function of the parameters *Q*_{1}, …, *Q*_{5}, *b*, and *h*. The dependence of ∂*h*(*x*, *y*)/∂*b* on *h* is shown in Figure 5 with respect to the variation of the parameters *Q*_{1}, …, *Q*_{5}. The red and blue curves in Figure 5 correspond to baseline lengths *b* = 30 mm and *b* = 120 mm, respectively. A real experimental setup consists of a pico-projector (Optoma PK301; Optoma USA, Fremont, CA, USA) and a mini-camera (Point Grey FL3-U3-13S2M-CS; Point Grey Research KK, Chiyoda-ku, Tokyo, Japan) with a 6-mm focal length. Hence, the parameter variations are in the following ranges: *L*_{p} from 390 to 420 mm, *L*_{c} from 400 to 450 mm, *p* from 10 to 20 mm, *θ*_{1} from 0° to 15°, *θ*_{2} from 0° to 10°, *α* from 0° to 30°, and *x* from −150 to 150 mm.

Note that the parameters in the following analysis also fall within these ranges. The results in Figure 5 indicate that in both cases (*b* = 30 mm and *b* = 120 mm), the measurement error becomes larger as the height of the target object increases. However, for the red curve, which indicates the shorter baseline case (*b* = 30 mm), the measurement error is smaller (less than 0.1), and the relationship between ∂*h*(*x*, *y*)/∂*b* and *h* is almost linear. On the other hand, when the baseline takes a larger value (*b* = 120 mm), the relationship between the measurement error ∂*h*(*x*, *y*)/∂*b* and *h* is nonlinear, and as the height of the object increases, the measurement error increases faster than in the shorter baseline case. In particular, when the height of the target object equals 100 mm, the maximum measurement error is larger than 0.5.

### Relative measurement error analysis

A relative measurement error analysis is also conducted with respect to the parameters *K*_{1}, *K*_{2}, *K*_{3} while the baseline is assumed fixed. Suppose small errors *δK*_{1}, *δK*_{2}, *δK*_{3} exist in the parameters *K*_{1}, *K*_{2}, *K*_{3}, respectively. The relative measurement error of height *δh*/*h* can be expressed by the following generic approximation:

Substituting Equation 21 into Equation 24, we obtain

Equation 25 indicates that the relative measurement error *δh*/*h* is a function of the baseline length *b*, the parameters *K*_{1}, *K*_{2}, *K*_{3}, and their variations *δK*_{1}, *δK*_{2}, *δK*_{3}.

The results are shown in Figure 6. Without loss of generality, we set the parameter variations *δK*_{1} = *δK*_{2} = *δK*_{3} = 0.01. The results show that when the height of the target object increases to 100 mm, the yellow curve, which represents the largest baseline configuration (*b* = 120 mm), yields the biggest relative measurement error: 25% and 0.0005% for the minimized and maximized parameters *K*_{1}, *K*_{2}, *K*_{3}, respectively. Meanwhile, the red curve (*b* = 30 mm) presents the smallest relative measurement error: 5% and 0.0001% for the minimized and maximized parameters, respectively. Hence, we reach the same conclusion: when the same errors *δK*_{1}, *δK*_{2}, *δK*_{3} are introduced into the system, the relative error of the measured height grows as the length of the baseline increases.

### Sensitivity analysis

Furthermore, we carry out a sensitivity analysis with respect to the baseline *b*. In the following, *b*_{e} represents the estimate of *b*, and Δ*b*/*b* = (*b*_{e} − *b*)/*b* indicates the relative discrepancy with respect to the nominal value. The error ∆*h* can be expressed as the difference between the depth values calculated by substituting *b*_{e} and *b* into Equation 21:

Combining Equations 22 and 26 yields the relative error ∆*h*/*h*:

Equation 27 expresses ∆*h*/*h* as a hyperbolic function of ∆*b*/*b*, but for small values of ∆*b*/*b* the function is almost linear, as shown in Figure 7. The yellow curve, which represents the largest baseline configuration (*b* = 120 mm), yields the smallest relative variation of height for the same relative discrepancy of the baseline, while the red curve, which represents the shortest baseline arrangement (*b* = 30 mm), presents the biggest relative variation of height. This means that for a system with a shorter baseline, the proposed model is more sensitive to small variations of the other parameters. In other words, the larger the baseline, the less sensitive the system is to the same bias in the calibrated parameters. This conclusion agrees with the discussion in [11].
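The construction of Equation 26, evaluating the same phase-to-height mapping at the nominal *b* and at the biased *b*_{e} with the measured phase held fixed, can be sketched generically. The mapping used below is the classical parallel-axis approximation *h* = *LpΔφ*/(*pΔφ* + 2*πb*), valid only for *L* ≫ *h*; it is a stand-in for Equation 22, so it is not expected to reproduce the exact curves of Figure 7:

```python
import numpy as np

def baseline_sensitivity(height_fn, b, rel_biases):
    """Relative height error (h(b_e) - h(b)) / h(b) caused by using a
    biased baseline b_e = b*(1 + db/b) with the measured phase fixed."""
    h_true = height_fn(b)
    return np.array([(height_fn(b * (1.0 + e)) - h_true) / h_true
                     for e in rel_biases])

# Illustrative numbers within the paper's parameter ranges (mm, mm, rad).
L, p, dphi = 420.0, 15.0, 4.0 * np.pi
classic = lambda b: L * p * dphi / (p * dphi + 2.0 * np.pi * b)
errors = baseline_sensitivity(classic, 30.0, np.linspace(-0.05, 0.05, 11))
```

For small ∆*b*/*b* the returned curve is almost linear, matching the near-linear regime of Equation 27.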

## Results and discussion

A measurement error analysis with respect to the variation of the baseline has been performed using partial derivative analysis, relative measurement error analysis, and sensitivity analysis. These analyses are limited to the parameters *L*_{p}, *L*_{c}, *θ*_{1}, *θ*_{2}, *α*, *p*, *x*, under the assumption that they can be accurately calibrated. The parameters *p* and *x* can easily be evaluated with small uncertainty during the measurement by exploiting the scale factor from pixels to millimeters in the reference frame. On the contrary, accurate determination of the parameters *L*_{p}, *L*_{c}, *θ*_{1}, *θ*_{2}, *α* is hard due to the difficulty of precisely measuring the positions of the pupils of the projector and of the camera. It is also hard to determine the accurate relative orientation of the DMD image plane in a DLP-based projector. Hence, in order to eliminate the effect of other unknown uncertainty factors introduced into the analysis process, Monte Carlo analysis is used as a way of estimating uncertainty propagation for nonlinear systems [13],[14]. The real experimental setup (shown in Figure 8) consists of a pico-projector (Optoma PK301) and a mini-camera (Point Grey FL3-U3-13S2M-CS) with a 6-mm focal length. Hence, the parameter variations are defined as in the previous section.

We present a global sensitivity analysis that permits evaluation of the uncertainty distribution from the input parameters (i.e., *L*_{p}, *L*_{c}, *θ*_{1}, *θ*_{2}, *α*) to the output (the height information) with respect to different baseline lengths. In this method, we use four objects with different heights for the experiment. Then, in order to obtain the distribution of the height value, we change the length of the baseline and its relevant parameters (i.e., *θ*_{1}, *θ*_{2}, *α*) but keep the other parameters unchanged. It is worth noticing that the unchanged parameters in the evaluation process are randomly selected from the given ranges.
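The sampling loop can be sketched as follows. The stand-in mapping is again the classical *L* ≫ *h* approximation rather than the paper’s full model, with ranges taken from those listed earlier; it illustrates the Monte Carlo procedure only, so the resulting spreads need not match Figure 9:

```python
import numpy as np

def mc_height_spread(b, n_runs=30, seed=0):
    """Draw geometric parameters uniformly from their calibrated ranges
    and record the spread (max - min) of the recovered height at one
    fixed sampled phase value."""
    rng = np.random.default_rng(seed)
    heights = np.empty(n_runs)
    for i in range(n_runs):
        L_c = rng.uniform(400.0, 450.0)   # mm, camera height range
        p = rng.uniform(10.0, 20.0)       # mm, fringe period range
        dphi = 4.0 * np.pi                # fixed measured phase (illustrative)
        # Classical L >> h phase-to-height mapping as a stand-in model.
        heights[i] = L_c * p * dphi / (p * dphi + 2.0 * np.pi * b)
    return heights.max() - heights.min()
```

Repeating the call for *b* = 30, 60, 90, and 120 mm reproduces the structure of the experiment (four baselines, 30 runs each).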

Figure 9 illustrates the variation of the measured height (compared with the selected ground truth value) corresponding to the random variation of the other parameters when the baseline changes from 30 to 120 mm. The initial parameters are randomly selected in the pre-measured ranges for these four tests. We run each experiment set 30 times; in each set, we obtain the height information from the same sampled point on the object’s surface (i.e., *x* = 188.63 mm). As can be seen from the figure, the biggest variations of height are 0.6404, 0.8835, 2.1207, and 4.1729 mm for baselines of 30, 60, 90, and 120 mm, respectively. Moreover, in each experiment set, as the baseline becomes larger, the variation of height also increases.

## Conclusions

In this paper, a 3-D shape measurement error analysis is performed for a short baseline Pro-Cam system. The first model considered is based on the conventional stereovision technique. Through analysis, we find that as the baseline becomes shorter, the two main factors, namely the inherently biased parameters of the projector and the unavoidable biased pixel point localization on the projector’s image plane, introduce more uncertainty, so the measurement accuracy is further degraded. For the second model, we propose an FPP-based projecting-imaging model. After deriving a new phase-to-height mapping relationship, the measurement error, which mainly refers to the height error, is analyzed with respect to the length of the baseline through partial derivative analysis, relative measurement error analysis, and sensitivity analysis. From the analysis results, we conclude that the smaller the baseline, the more sensitive the system is, while the relative measurement error is smaller when the same biases are introduced into the calibrated parameters. The Monte Carlo simulation results also confirm these findings for the proposed model under the short baseline configuration.

## Authors’ information

JL received B.S. and M.S. degrees from the Department of Applied Physics, Sichuan University, Chengdu, China, in 2008 and 2011, respectively. Currently, he is a Ph.D. student in the Department of Mechanical and Biomedical Engineering at City University of Hong Kong, Hong Kong. His research interests include photoelectric information processing, 3-D measurement, 3-D reconstruction, and robot vision. YFL received the Ph.D. degree in robotics from the Department of Engineering Science, University of Oxford, Oxford, U.K., in 1993. From 1993 to 1995, he was a Postdoctoral Research Associate in the Department of Computer Science, University of Wales, Aberystwyth, U.K. He joined City University of Hong Kong, Hong Kong, in 1995, where he is currently a Professor in the Department of Mechanical and Biomedical Engineering. His research interests include robot vision, sensing, and sensor-based control for robotics.

## References

1. Du H, Wang Z: Three-dimensional shape measurement with an arbitrarily arranged fringe projection profilometry system. *Opt Lett* 2007, 32(16):2438–2440. doi:10.1364/OL.32.002438
2. Zhang S: Phase unwrapping error reduction framework for a multiple-wavelength phase-shifting algorithm. *Opt Eng* 2009, 48(10):105601–105608. doi:10.1117/1.3251280
3. Xiao Y, Cao Y, Wu Y: Improved algorithm for phase-to-height mapping in phase measuring profilometry. *Appl Optics* 2012, 51(8):1149–1155. doi:10.1364/AO.51.001149
4. Zhang Z: A flexible new technique for camera calibration. *IEEE Trans Pattern Anal Mach Intell* 2000, 22(11):1330–1334. doi:10.1109/34.888718
5. Janne H: Geometric camera calibration using circular control points. *IEEE Trans Pattern Anal Mach Intell* 1992, 14(10):965–980. doi:10.1109/34.159901
6. Kytö M, Nuutinen M, Oittinen P: Method for measuring stereo camera depth accuracy based on stereoscopic vision. *IS&T/SPIE Electronic Imaging* 2011, 7864:78640I.
7. Mao X, Chen W, Su X: Fourier transform profilometry based on a projecting-imaging model. *JOSA A* 2007, 24(12):3735–3740. doi:10.1364/JOSAA.24.003735
8. Quan C, He XY, Wang CF: Shape measurement of small objects using LCD fringe projection with phase shifting. *Opt Commun* 2001, 189(1):21–29. doi:10.1016/S0030-4018(01)01038-0
9. Spagnolo GS, Guattari G, Sapia C: Contouring of artwork surface by fringe projection and FFT analysis. *Opt Lasers Eng* 2000, 33(2):141–156. doi:10.1016/S0143-8166(00)00023-3
10. Zhang Z, Zhang D, Peng X: Performance analysis of a 3D full-field sensor based on fringe projection. *Opt Lasers Eng* 2004, 42(3):341–353. doi:10.1016/j.optlaseng.2003.11.004
11. Zappa E: Sensitivity analysis applied to an improved Fourier-transform profilometry. *Opt Lasers Eng* 2011, 49(2):210–221. doi:10.1016/j.optlaseng.2010.09.016
12. Hammersley JM, Handscomb DC, Weiss G: Monte Carlo methods. *Phys Today* 1965, 18:55. doi:10.1063/1.3047186
13. Fishman GS: *Monte Carlo: concepts, algorithms, and applications*. Springer, New York; 1996.
14. Saltelli A, Ratto M, Andres T: *Global sensitivity analysis: the primer*. Wiley; 2008.
15. Delon J, Rougé B: Small baseline stereovision. *J Math Imaging Vis* 2007, 28(3):209–223. doi:10.1007/s10851-007-0001-1
16. Hong BJ, Park CO, Seo NS: A real-time compact structured-light based range sensing system. *Semicond Sci Tech* 2012, 12(2):193–202.
17. Li Z, Shi Y, Wang C: Accurate calibration method for a structured light system. *Opt Eng* 2008, 47(5):053604. doi:10.1117/1.2931517
18. Huang L, Zhang Q, Asundi A: Camera calibration with active phase target: improvement on feature detection and optimization. *Opt Lett* 2013, 38:1446–1448. doi:10.1364/OL.38.001446
19. Hartley RI, Sturm P: Triangulation. *Comput Vis Image Underst* 1997, 68(2):146–157. doi:10.1006/cviu.1997.0547
20. Schalkoff RJ: *Digital image processing and computer vision*. Wiley, New York; 1989.

## Acknowledgements

This work was supported by City University of Hong Kong (Project No. 7002829) and the National Natural Science Foundation of China (No. 61273286).

## Author information

### Authors and Affiliations

### Corresponding author

## Additional information

### Competing interests

The authors declare that they have no competing interests.

### Authors’ contributions

JL carried out the numerical simulation, participated in the model derivation and drafted the manuscript. YFL proposed the short-baseline problem, participated in the model derivation and revised the draft. Both authors read and approved the final manuscript.

## Rights and permissions

**Open Access** This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

## About this article

### Cite this article

Liu, J., Li, Y. Performance analysis of 3-D shape measurement algorithm with a short baseline projector-camera system.
*Robot. Biomim.* **1**, 1 (2014). https://doi.org/10.1186/s40638-014-0001-8

### Keywords

- 3-D shape measurement
- Projector-camera
- Short baseline
- Binocular stereovision
- FPP