Open Access

Performance analysis of 3-D shape measurement algorithm with a short baseline projector-camera system

Robotics and Biomimetics 2014, 1:1

DOI: 10.1186/s40638-014-0001-8

Received: 9 December 2013

Accepted: 24 March 2014

Published: 28 August 2014

Abstract

Three-dimensional shape measurement based on structured light has been studied extensively over the past decades. A common way to model such a system is the binocular stereovision-like model, in which the projector is treated as a camera, so that a projector-camera system can be unified with a well-established traditional binocular stereovision system. After calibrating the projector and camera, 3-D shape information is obtained by conventional triangulation. However, such a stereovision-like system suffers from the short baseline problem, which limits the measurement accuracy. Hence, in this work, we present a new projecting-imaging model based on fringe projection profilometry (FPP). We first derive a rigorous mathematical relationship between the height of an object’s surface, the phase difference distribution map, and the parameters of the setup. Based on this model, we then study how the uncertainty of the relevant parameters, particularly the baseline’s length, affects the 3-D shape measurement accuracy. We provide an extensive uncertainty analysis of the proposed model through partial derivative analysis, relative error analysis, and sensitivity analysis. Moreover, a Monte Carlo simulation experiment is conducted, which characterizes the measurement performance of the projector-camera system when it has a short baseline.

Keywords

3-D shape measurement; Projector-camera; Short baseline; Binocular stereovision; FPP

Introduction

Noncontact optical measurement has been widely used in many industrial applications, such as industrial inspection and 3-D printing manufacturing. Among the mature optical 3-D measurement techniques, structured light has been widely used in recent years because of its high precision, flexibility, and robustness on texture-less object surfaces. Numerous works have been presented on this topic [1]–[3]. Depending on the underlying model, the methods for obtaining 3-D shape information with a structured light system can be divided into two main categories: one common way is to use the conventional stereovision (‘CSV’) model, and the other is to adopt the fringe projection profilometry (FPP) technique. In the first category, the projector is treated as a camera, and both the projector and the camera must be pre-calibrated before the 3-D shape measurement task. Many camera calibration methods can be utilized directly [4],[5]. For projector calibration, however, even with the latest accurate calibration methods using an active target and the phase-shifting technique [17],[18], the accuracy can hardly reach the same level as that of the camera. One intuitive reason is that the parameters of a projector cannot be calibrated individually without the help of a camera; thus, error propagation from the camera calibration process is unavoidable and the overall system calibration accuracy is limited. Biased calibrated parameters of the camera and projector decrease the measurement accuracy. In particular, in a short baseline arrangement, the bias from feature point localization on the image is magnified by biased parameters. Therefore, accurately calibrating a short baseline system is more critical than calibrating a general configuration, which usually has a much larger baseline.
In the second category, the projector is regarded as a grating optical device. The height information is obtained from the phase-to-height mapping between the phase distribution and the geometric parameters of the system. Hence, the projector no longer needs to be pre-calibrated. In some phase-to-height models, even the camera is not required to be pre-calibrated [7],[8].

In this paper, our method falls into the second category. We propose a generic FPP-based projecting-imaging model and explore the relationship between the phase distribution, the height information, and the geometric parameters of the system. Based on the proposed model, we then study how the uncertainty of the relevant parameters, the length of the baseline in particular, affects the 3-D shape measurement accuracy. In other words, we focus on the performance analysis of 3-D shape measurement with the proposed model, particularly when the system has a short baseline.

Background and literature review

Conventional stereovision model

The schematic diagram of a projector-camera (Pro-Cam) structured light system is illustrated in Figure 1. The key problem in the 3-D shape measurement process is how to determine the correspondence between a point on the camera’s image plane, a point on the projector’s image plane, and a point on the object’s surface. It is worth noticing that these three points are the three vertices of a triangle. Generally, the structured light pattern projected by the projector acts as a bridge. Once the relationship between the projector image plane and the camera image plane is established, the short baseline 3-D shape measurement system illustrated in Figure 1 can be unified with a classic binocular stereovision system.
Figure 1

Schematic diagram of a Pro-Cam structured light system.

Given a point on the camera’s image plane and its corresponding point on the projector’s image plane, the coordinate of the corresponding point on the object’s surface can be determined by a conventional triangulation algorithm [16],[19]. The relationship between a nondistorted point $m_u(u_c, v_c)$ on the camera’s image plane and its corresponding point $P_W = [X_W, Y_W, Z_W]^T$ on the object’s surface can be described as follows:
$$ s_c \begin{bmatrix} u_c \\ v_c \\ 1 \end{bmatrix} = \begin{bmatrix} f_{cx} & \lambda_c & u_{c0} \\ 0 & f_{cy} & v_{c0} \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} R^c_{11} & R^c_{12} & R^c_{13} & t^c_1 \\ R^c_{21} & R^c_{22} & R^c_{23} & t^c_2 \\ R^c_{31} & R^c_{32} & R^c_{33} & t^c_3 \end{bmatrix} \begin{bmatrix} X_W \\ Y_W \\ Z_W \\ 1 \end{bmatrix} $$
(1)
where $(u_{c0}, v_{c0})$ is the coordinate of the principal point, $f_{cx}$ and $f_{cy}$ are the focal lengths in pixels of the camera image plane along the $u$ and $v$ axes, and $\lambda_c$ denotes the skewness of the two image axes on the camera’s image plane. The perspective transformation of the camera imaging process can then be simplified from Equation 1 and denoted as
$$ \begin{bmatrix} s_c u_{2c} \\ s_c v_{2c} \\ s_c \end{bmatrix} = \begin{bmatrix} A_{11} & A_{12} & A_{13} & A_{14} \\ A_{21} & A_{22} & A_{23} & A_{24} \\ A_{31} & A_{32} & A_{33} & A_{34} \end{bmatrix} \begin{bmatrix} X_W \\ Y_W \\ Z_W \\ 1 \end{bmatrix} . $$
(2)
Similarly, for the projector, using the phase-shifting technique [17], we can obtain:
$$ \begin{bmatrix} s_p u_{1p} \\ s_p v_{1p} \\ s_p \end{bmatrix} = \begin{bmatrix} B_{11} & B_{12} & B_{13} & B_{14} \\ B_{21} & B_{22} & B_{23} & B_{24} \\ B_{31} & B_{32} & B_{33} & B_{34} \end{bmatrix} \begin{bmatrix} X_W \\ Y_W \\ Z_W \\ 1 \end{bmatrix} . $$
(3)
Rearranging the variables, we can obtain the following relationship from Equations 2 and 3:
K P = Q ,
(4)
where
$$ K = \begin{bmatrix} A_{11} - A_{31} u_{2c} & A_{12} - A_{32} u_{2c} & A_{13} - A_{33} u_{2c} \\ A_{21} - A_{31} v_{2c} & A_{22} - A_{32} v_{2c} & A_{23} - A_{33} v_{2c} \\ B_{11} - B_{31} u_{1p} & B_{12} - B_{32} u_{1p} & B_{13} - B_{33} u_{1p} \\ B_{21} - B_{31} v_{1p} & B_{22} - B_{32} v_{1p} & B_{23} - B_{33} v_{1p} \end{bmatrix} $$
is a 4 × 3 matrix,
$$ Q = \begin{bmatrix} A_{34} u_{2c} - A_{14} \\ A_{34} v_{2c} - A_{24} \\ B_{34} u_{1p} - B_{14} \\ B_{34} v_{1p} - B_{24} \end{bmatrix} $$
is a vector, and $P = [X_W, Y_W, Z_W]^T$. It is worth noting that if we are given the real distorted image point $m_d$, it has to be transformed to the nondistorted point $m_u$ first. However, it is difficult to obtain the analytical inversion from the distorted image point to the nondistorted image point directly. In this work, the iterative method [12] is adopted, and the iteration is given as follows:
$$ m_u = m_d - \left[ \left( k_1 r^2 + k_2 r^4 \right) m_u + \begin{bmatrix} 2 k_3 x_u y_u + k_4 \left( r^2 + 2 x_u^2 \right) \\ k_3 \left( r^2 + 2 y_u^2 \right) + 2 k_4 x_u y_u \end{bmatrix} \right] , \qquad m_u^{(0)} = m_d , $$
(5)
where $r^2 = x_u^2 + y_u^2$ and the iteration is initialized with $m_u = m_d$.
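A minimal sketch of this fixed-point iteration (our illustration, not the authors’ implementation), assuming normalized image coordinates, radial coefficients $k_1, k_2$, and tangential coefficients $k_3, k_4$; a forward distortion model is included only to check the inversion:

```python
import numpy as np

def undistort_point(m_d, k1, k2, k3, k4, n_iter=20):
    """Fixed-point iteration of Equation 5: recover the nondistorted
    point m_u from the distorted point m_d, starting from m_u = m_d."""
    x_d, y_d = m_d
    x_u, y_u = x_d, y_d                          # initialization m_u = m_d
    for _ in range(n_iter):
        r2 = x_u**2 + y_u**2
        radial = k1 * r2 + k2 * r2**2            # radial distortion factor
        dx = 2*k3*x_u*y_u + k4*(r2 + 2*x_u**2)   # tangential terms
        dy = k3*(r2 + 2*y_u**2) + 2*k4*x_u*y_u
        x_u = x_d - (radial * x_u + dx)
        y_u = y_d - (radial * y_u + dy)
    return np.array([x_u, y_u])

def distort_point(m_u, k1, k2, k3, k4):
    """Forward distortion model (used here only to verify the inversion)."""
    x_u, y_u = m_u
    r2 = x_u**2 + y_u**2
    radial = k1 * r2 + k2 * r2**2
    x_d = x_u + radial * x_u + 2*k3*x_u*y_u + k4*(r2 + 2*x_u**2)
    y_d = y_u + radial * y_u + k3*(r2 + 2*y_u**2) + 2*k4*x_u*y_u
    return np.array([x_d, y_d])
```

For the small distortion coefficients typical of calibrated lenses, the iteration contracts quickly, so a fixed iteration count suffices.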
Therefore, given the real pixel point pair $m_d^c(u_d^c, v_d^c)$ and $m_d^p(u_d^p, v_d^p)$, the remaining vertex of the triangle, which is the corresponding point in 3-D space $P_W = (X_W, Y_W, Z_W)^T$, can be obtained by
$$ P = \left( K^T K \right)^{-1} K^T Q . $$
(6)
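Equation 6 is an ordinary linear least-squares solve. The sketch below (function and variable names are ours) builds $K$ and $Q$ from the camera and projector matrices $A$ and $B$ of Equations 2 and 3 and a corresponding pixel pair:

```python
import numpy as np

def triangulate(A, B, cam_px, proj_px):
    """Least-squares triangulation of Equation 6.
    A, B: 3x4 perspective matrices of camera and projector;
    cam_px, proj_px: corresponding (undistorted) pixel points.
    Returns P = (X_W, Y_W, Z_W)."""
    rows, rhs = [], []
    for M, (u, v) in ((A, cam_px), (B, proj_px)):
        # From s*u = row0 . [P;1], s = row2 . [P;1]:
        rows.append(M[0, :3] - u * M[2, :3]); rhs.append(u * M[2, 3] - M[0, 3])
        rows.append(M[1, :3] - v * M[2, :3]); rhs.append(v * M[2, 3] - M[1, 3])
    K = np.array(rows)                 # the 4x3 matrix K of Equation 4
    Q = np.array(rhs)                  # the vector Q
    return np.linalg.lstsq(K, Q, rcond=None)[0]   # (K^T K)^-1 K^T Q
```

For example, with two ideal pinhole matrices separated by a pure translation, the recovered point matches the one used to generate the pixel pair.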
It is well known that 3-D shape measurement accuracy can be improved by appropriately enlarging the baseline between the two optical devices [6],[15]. One intuitive reason is that enlarging the baseline alleviates the ambiguity of the correspondence problem between pixels on the right and left cameras’ image planes. A brief schematic of a short baseline stereovision setup is shown in Figure 2. Because a pixel on the image plane has a finite physical size, feature points A and B lie in the same uncertainty area (UA), denoted by the blue rhombus. All the points lying in this area correspond to the same pixel on the right camera’s and left camera’s image planes, respectively; in other words, all the points lying in the same UA yield the same depth information. Hence, the short baseline problem can be defined as follows: the depth measurement error, represented by the UA in Figure 2, is enlarged in a short baseline system.
Figure 2

A brief structure of a short baseline stereovision system setup.

Equation 6 shows that an accurate measurement result depends on four factors: (1) accurate determination of the pixel point $m_u(u_c, v_c)^T$ on the camera’s image plane, (2) unbiased camera parameters, (3) unbiased projector parameters, and (4) accurate determination of the corresponding pixel point on the projector’s image plane. The first two conditions are easily satisfied with available camera calibration algorithms [4],[18] and well-developed image processing techniques [19],[20]. However, the latter two are much more difficult to achieve. Furthermore, since the projector cannot ‘capture’ like a camera, determining the corresponding pixel point on the projector’s image plane is a challenge. If the pixel point on the projector’s image plane is biased, as Figure 2 shows, the measurement error will be enlarged in a short baseline Pro-Cam system.

FPP-based phase-to-height mapping model

Fringe projection techniques have been used for 3-D object surface measurement for years because of their flexibility and good performance. In these techniques, the projector is regarded as a grating optical device, and a series of fringe patterns (commonly sinusoidal) are projected onto an object’s surface and then captured from another direction by a camera. The captured fringe patterns are deformed according to the geometry of the object’s surface. Hence, the phase of the deformed pattern on the image plane can be retrieved through phase-measuring techniques. One classic technique is Fourier-transform analysis [7]; another widely used technique is the phase-shifting algorithm [2]. Whichever technique is adopted, the critical final step is to create a mapping relationship between the pixel point on the image plane, its corresponding phase, and the height information. One basic geometry setup of the measuring system is shown in Figure 3. In this setup, the optical axis $I_cO$ of the camera imaging system is perpendicular to the reference plane. The optical axis $I_pO$ of the projection system intersects the optical axis $I_cO$ at point O and makes an angle π/2 − θ with the reference plane. The line joining the two optical centers is the baseline b, and the projector and the camera have equal height $L_p = L_c = L$ with respect to the reference plane. In the work in [8], the phase-height relationship is simply derived using the triangulation method, which is
$$ h(x, y) = \lambda \, \Delta\varphi_{AD} , $$
(7)
$$ \lambda = \frac{L p}{2 \pi b} . $$
(8)
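To make the pipeline concrete, here is a minimal sketch (our illustration, not the setup of [8]): a four-step phase-shifting sequence recovers the wrapped phase with an arctangent, and the simple mapping of Equations 7 and 8 converts a phase difference to height. The four-step variant and the function names are our assumptions:

```python
import numpy as np

def four_step_phase(I):
    """Wrapped phase from four phase-shifted fringe images
    I[k] = a + m*cos(phi + k*pi/2), k = 0..3."""
    I0, I1, I2, I3 = I
    # I3 - I1 = 2m*sin(phi), I0 - I2 = 2m*cos(phi)
    return np.arctan2(I3 - I1, I0 - I2)

def height_simple(dphi, L, p, b):
    """Simple phase-to-height mapping of Equations 7-8:
    h = L*p/(2*pi*b) * dphi."""
    return L * p / (2 * np.pi * b) * dphi
```

For instance, with L = 400 mm, p = 12 mm, and b = 30 mm, a phase difference of 0.5 rad maps to roughly 12.7 mm of height.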
Figure 3

The basic geometry setup of a Pro-Cam system.

However, two hypotheses are assumed in this work. First, the distance between the camera’s optical center and the reference plane is much larger than the height of the object, i.e., $L \gg h$ in the general case. The other assumption is that the period of the projected fringe pattern on the reference plane, denoted as p, is evenly distributed along the $X_W$ axis. In practice, these are merely two idealized conditions. A thorough analysis in [8] indicates that the period of the projected fringe pattern on the reference plane is in fact a function of the lateral coordinate x, the period $p_0$ of the projected pattern on the LCD image plane, and the angle θ between the projector and the camera. Hence, in the work in [9], based on the same geometry setup shown in Figure 3, a more practical expression for the phase-height relationship is given as follows:
$$ h(x, y) = \frac{L}{\dfrac{2 \pi L^2 b \cos\theta}{p_0 \, \Delta\varphi_{AD} \left( L + x \cos\theta \sin\theta \right)^2} - \dfrac{b \cos\theta \sin\theta}{L + x \cos\theta \sin\theta} + 1} . $$
(9)
More accurate results were reported in that work. However, the requirements of parallelism and orthogonality in the works in [8],[10] still limit the generality and flexibility of actual measurement. Therefore, an improved structure of the measurement system was presented in the work in [3]. In this structure, shown in Figure 4, the line joining the optical centers of the projector and camera systems is not parallel to the reference plane but makes an angle with it. In addition, neither optical axis is required to be orthogonal to the reference plane. Compared with the basic setup shown in Figure 3, it is more general and closer to the practical situation.
Figure 4

A generic geometry setup of a Pro-Cam system.

After transforming the World Coordinate System to the charge-coupled device (CCD) imaging coordinate system, the final phase-to-height relationship is given as follows [3]:
$$ h(x, y) = \frac{C_1 \Delta\varphi_{AD} + C_2 u \, \Delta\varphi_{AD}}{1 + C_3 u + C_4 \Delta\varphi_D + C_5 \Delta\varphi_{AD} + C_6 u \, \Delta\varphi_D + C_7 u \, \Delta\varphi_{AD}} $$
(10)
where coefficients $C_1$ to $C_7$ are related to the geometric parameters of the measuring system and the intrinsic and extrinsic parameters of the imaging system. It is worth noticing that there is another work, presented by Du and Wang [1], in which the two optical devices (camera and projector) are arbitrarily arranged; in other words, the geometry is also generic and has no special restriction. This implies that their model fits the case where the baseline between the two optical devices is arbitrarily small. The phase-to-height mapping relationship was given similarly to Equation 10, as follows:
$$ h(x, y) = \frac{\left( C_0 + C_1 \varphi_D \right) + \left( C_2 + C_3 \varphi_D \right) I_D + \left( C_4 + C_5 \varphi_D \right) J_D}{\left( D_0 + D_1 \varphi_D \right) + \left( D_2 + D_3 \varphi_D \right) I_D + \left( D_4 + D_5 \varphi_D \right) J_D} , $$
(11)
where coefficients $C_0$ to $C_5$ and $D_0$ to $D_5$ are related to the geometric parameters of the measuring system and the intrinsic and extrinsic parameters of the imaging system. As can be seen from Equations 10 and 11, in both works [1] and [3], the physically meaningful parameters (e.g., the length of the baseline) in the presented phase-to-height mapping models are hardly analyzable because of the difficulty of isolating them from the calibrated coefficients. Moreover, those works used a least-squares method to calibrate the coefficients rather than the related geometric parameters. Therefore, it is necessary to present a practical and analyzable model for conveniently analyzing the geometric parameters. In particular, because of the specificity of our proposed short baseline arrangement, the influence of the baseline’s length is given priority in the analysis. Based on the generic setup in the work [3], we derive a new and different model for accurate phase-to-height mapping determination and parameter analysis.

Research design and methodology

In the following, a phase-to-height mapping model is presented for analyzing the influence of the parameters. In this model, a rigorous mathematical relationship between the height of an object’s surface, the phase difference distribution map, and the parameters of the setup is first derived. Based on this model, we then study how the uncertainty of the relevant parameters, particularly the baseline’s length, affects the 3-D shape measurement accuracy. Uncertainty analyses of the proposed model, including partial derivative analysis, relative error analysis, and sensitivity analysis, are performed. Moreover, a Monte Carlo simulation experiment is also conducted.

Methods

Our proposed projecting and imaging model

The optical geometry of our setup is shown in Figure 4. $I_p$ and $I_c$ are the exit pupil of the projector and the entrance pupil of the camera, respectively. The optical axes $I_pO$ and $I_cO$ cross the reference plane at point O and make angles $\theta_1$ and $\theta_2$ with the $Z_W$ axis (i.e., the normal direction of the reference plane), respectively. The baseline between the two optical centers is $I_pI_c = b$, which is not parallel to the reference plane. M is the perpendicular projection of $I_p$ on the reference plane, with $I_pM = L_p$; N is the perpendicular projection of $I_c$ on the reference plane, with $I_cN = L_c$. Point A on the reference plane and point P on the object surface correspond to the same image pixel location on the CCD plane. Point C on the reference plane and point P on the object surface lie on the same pixel ray projected from the projector. We add several dashed lines in the figure as guidelines for analysis. The dashed line $I_pF$ is parallel to the reference plane, crosses the extension line of BP (PB = h) at point D, and intersects lines $I_cP$ and $I_cN$ at points E and F, respectively. In this work, we mainly focus on the measurement performance with respect to the influence of one parameter, the baseline b. Hence, similarly to the work in [10], we can assume that the fringe patterns formed by the projector are parallel to $Y_W$. From the geometry setup in Figure 4, triangle APB is similar to triangle $EI_cF$, so
$$ \frac{AB}{EF} = \frac{PB}{I_c F} = \frac{PB}{b \sin\alpha} . $$
(12)
Similarly, from the fact that triangle APB is similar to triangle $ANI_c$ and triangle ACP is similar to triangle $I_pPE$, we can get
$$ \frac{AB}{AN} = \frac{PB}{I_c N} = \frac{PB}{L_c} , $$
(13)
$$ \frac{AC}{PB} = \frac{I_p E}{PD} = \frac{I_p F - EF}{PD} = \frac{L_p \tan\theta_1 + L_c \tan\theta_2 - EF}{L_p - PB} . $$
(14)
Substituting Equation 12 into Equation 14, we can get
$$ \frac{AC}{PB} = \frac{L_p \tan\theta_1 + L_c \tan\theta_2 - AB \, b \sin\alpha / PB}{L_p - PB} . $$
(15)
Note that $AN = AC + OC + ON = AC + x + L_c \tan\theta_2$. Substituting this relationship and Equation 13 into Equation 15, we obtain
$$ PB = \frac{L_p L_c \, AC}{AC \, L_c + L_p L_c \tan\theta_1 + L_c^2 \tan\theta_2 - \left( AC + x + L_c \tan\theta_2 \right) b \sin\alpha} . $$
(16)
Here, p denotes the period of the fringe patterns on the reference plane under divergent illumination. According to the work in [3], we can get
$$ AC = \frac{p \left( \varphi_C - \varphi_A \right)}{2 \pi} = \frac{p \, \Delta\varphi_{PA}}{2 \pi} . $$
(17)
Substituting Equation 17 into Equation 16, we obtain the final relationship between the phase distribution $\Delta\varphi_{PA}(x, y)$ and the height information $h(x, y)$, which is expressed as
$$ h(x, y) = \frac{L_p L_c \, p \, \Delta\varphi_{PA}(x, y)}{\left( p L_c - p b \sin\alpha \right) \Delta\varphi_{PA}(x, y) - 2 \pi b x \sin\alpha + 2 \pi L_p L_c \tan\theta_1 + 2 \pi L_c^2 \tan\theta_2 - 2 \pi L_c \tan\theta_2 \, b \sin\alpha} . $$
(18)
It can also be written in a concise form as
$$ h(x, y) = \frac{c_1 \, \Delta\varphi_{PA}(x, y)}{c_2 \, \Delta\varphi_{PA}(x, y) + c_3 x + c_4} , $$
(19)
where the parameters $c_1, c_2, c_3, c_4$ are related to the geometric parameters $L_p, L_c, p, b, \alpha, \theta_1, \theta_2$ and can be denoted as
$$ c_1 = L_p L_c p , \qquad c_2 = p \left( L_c - b \sin\alpha \right) , \qquad c_3 = -2 \pi b \sin\alpha , \qquad c_4 = 2 \pi L_p L_c \tan\theta_1 + 2 \pi L_c^2 \tan\theta_2 - 2 \pi L_c \tan\theta_2 \, b \sin\alpha . $$
(20)
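A minimal numerical sketch of the proposed mapping (our illustration; it uses the coefficient expressions of Equation 20 as reconstructed here, with angles in radians and illustrative function names):

```python
import numpy as np

def model_coeffs(Lp, Lc, p, b, alpha, theta1, theta2):
    """Coefficients c1..c4 of Equation 20 (angles in radians)."""
    c1 = Lp * Lc * p
    c2 = p * (Lc - b * np.sin(alpha))
    c3 = -2 * np.pi * b * np.sin(alpha)
    c4 = 2 * np.pi * (Lp * Lc * np.tan(theta1)
                      + Lc**2 * np.tan(theta2)
                      - Lc * np.tan(theta2) * b * np.sin(alpha))
    return c1, c2, c3, c4

def height(dphi, x, coeffs):
    """Phase-to-height mapping of Equation 19."""
    c1, c2, c3, c4 = coeffs
    return c1 * dphi / (c2 * dphi + c3 * x + c4)
```

A useful self-consistency check: inverting Equation 19 for the phase (the role of Equation 22 below) and feeding it back into `height` returns the original height.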

Performance analysis

Influence of the length of baseline b

The geometric parameters of the system setup include the angle between the optical axes of the projector and the camera, the distance between the optical center of the camera system and the reference plane, the focal length of the camera system, the period of the projected fringe patterns, etc. In this paper, we regard the baseline’s length as the priority factor and focus on its influence on the final measurement result. Equation 19 can be rewritten in a form that takes the baseline b as an input variable:
$$ h(x, y) = \frac{K_1(x, y)}{K_2(x, y) - K_3(x, y) \, b} $$
(21)
where
$$ K_1(x, y) = L_p L_c \, p \, \Delta\varphi_{PA}(x, y) , \qquad K_2(x, y) = p L_c \, \Delta\varphi_{PA}(x, y) + 2 \pi L_p L_c \tan\theta_1 + 2 \pi L_c^2 \tan\theta_2 , \qquad K_3(x, y) = p \sin\alpha \, \Delta\varphi_{PA}(x, y) + 2 \pi x \sin\alpha + 2 \pi L_c \tan\theta_2 \sin\alpha . $$
From Equation 19, we get the relationship between the phase difference and the height information:
$$ \Delta\varphi_{PA}(x, y) = \frac{h(x, y) \left( c_3 x + c_4 \right)}{c_1 - c_2 \, h(x, y)} . $$
(22)
Similarly to the derivation in the work in [10], the partial derivative of Equation 21 with respect to the baseline b is calculated, and Equation 22 is substituted into the result. We get
$$ \frac{\partial h(x, y)}{\partial b} = \frac{Q_1 h^2(x, y) - Q_2 h^2(x, y) \, b - Q_3 h(x, y)}{Q_4 - Q_5 b} $$
(23)
where
$$ Q_1 = 2 \pi p \sin\alpha \left[ L_p L_c \tan\theta_1 + L_c^2 \tan\theta_2 + x \left( x - b \sin\alpha \right) + L_c \tan\theta_2 \left( L_c \tan\theta_2 - b \sin\alpha \right) \right] , \qquad Q_2 = 2 \pi p \sin^2\alpha \left( L_c \tan\theta_2 + x \right) , \qquad Q_3 = 2 \pi p L_p L_c \left( x \sin\alpha + L_c \tan\theta_2 \sin\alpha \right) , \qquad Q_4 = 2 \pi p L_p L_c^2 \left( L_p \tan\theta_1 + L_c \tan\theta_2 \right) , \qquad Q_5 = 2 \pi p L_p L_c \left( L_c \tan\theta_2 \sin\alpha + x \sin\alpha \right) . $$
Equation 23 shows that the height error ∂h(x, y)/∂b is a function of the parameters $Q_1, Q_2, Q_3, Q_4, Q_5$, b, and h. The dependence of ∂h(x, y)/∂b on h is shown in Figure 5 with respect to the variation of the other parameters $Q_1$ to $Q_5$. The red and blue curves in Figure 5 correspond to baseline lengths b = 30 mm and b = 120 mm, respectively. A real experimental system setup consists of a pico-projector (Optoma PK301; Optoma USA, Fremont, CA, USA) and a mini-camera (Point Grey FL3-U3-13S2M-CS; Point Grey Research KK, Chiyoda-ku, Tokyo, Japan) with a 6-mm focal length. Hence, the parameters vary in the following ranges: $L_p$ from 390 to 420 mm, $L_c$ from 400 to 450 mm, p from 10 to 20 mm, $\theta_1$ from 0° to 15°, $\theta_2$ from 0° to 10°, α from 0° to 30°, and x from −150 to 150 mm.
Figure 5

Plot of the measurement error ∂h/∂b versus the height for different baseline lengths.

It is important to note that the parameters in the following analyses also fall into these ranges. The results in Figure 5 indicate that in both cases (b = 30 mm and b = 120 mm), the measurement error becomes larger as the height of the target object increases. However, for the red curve, which indicates the shorter baseline case (b = 30 mm), the measurement error is smaller (less than 0.1), and the relationship between ∂h(x, y)/∂b and h is almost linear. On the other hand, when the baseline takes a larger value (b = 120 mm), the relationship between ∂h(x, y)/∂b and h is nonlinear, and as the height of the object increases, the measurement error grows faster than in the shorter baseline case. In particular, when the height of the target object equals 100 mm, the maximum measurement error is larger than 0.5.
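The same sensitivity can also be checked numerically, without the closed form of Equation 23. The sketch below (our illustration, using the coefficient expressions of Equation 20 as reconstructed here) differentiates the height of Equation 19 with respect to b by central differences, holding the measured phase fixed:

```python
import numpy as np

def dh_db(dphi, x, Lp, Lc, p, b, alpha, theta1, theta2, eps=1e-4):
    """Central finite difference of the height of Equation 19 with
    respect to the baseline b, holding the measured phase fixed.
    A numerical stand-in for the closed form of Equation 23."""
    def h(bb):
        sa = np.sin(alpha)
        c1 = Lp * Lc * p
        c2 = p * (Lc - bb * sa)
        c3 = -2 * np.pi * bb * sa
        c4 = 2 * np.pi * (Lp * Lc * np.tan(theta1)
                          + Lc**2 * np.tan(theta2)
                          - Lc * np.tan(theta2) * bb * sa)
        return c1 * dphi / (c2 * dphi + c3 * x + c4)
    return (h(b + eps) - h(b - eps)) / (2 * eps)
```

Because every coefficient of Equation 20 is linear in b, the finite-difference estimate is very stable with respect to the step size.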

Relative measurement error analysis

A relative measurement error analysis is also conducted with respect to the parameters $K_1, K_2, K_3$ while the baseline is assumed fixed. Suppose small errors $\delta K_1, \delta K_2, \delta K_3$ exist in the parameters $K_1, K_2, K_3$, respectively. The relative measurement error of height, δh/h, can be expressed by the following generic approximation:
$$ \frac{\delta h}{h} = \frac{1}{h} \sqrt{ \left( \frac{\partial h}{\partial K_1} \delta K_1 \right)^2 + \left( \frac{\partial h}{\partial K_2} \delta K_2 \right)^2 + \left( \frac{\partial h}{\partial K_3} \delta K_3 \right)^2 } . $$
(24)
Substituting Equation 21 into Equation 24, we can obtain
$$ \frac{\delta h}{h} = \frac{ \sqrt{ \delta K_1^2 + h^2 \, \delta K_2^2 + b^2 h^2 \, \delta K_3^2 } }{K_1} . $$
(25)

Equation 25 indicates that the relative measurement error δh/h is a function of the baseline length b, the parameters $K_1, K_2, K_3$, and their variations $\delta K_1, \delta K_2, \delta K_3$.

The results are shown in Figure 6. Without loss of generality, we set the parameter variations $\delta K_1 = \delta K_2 = \delta K_3 = 0.01$. The results show that when the height of the target object increases to 100 mm, the yellow curve, which represents the largest baseline configuration (b = 120 mm), yields the biggest relative measurement error: 25% and 0.0005% when the parameters $K_1, K_2, K_3$ take their minimum and maximum values, respectively. Meanwhile, the red curve (b = 30 mm) presents the smallest relative measurement error: 5% and 0.0001% for the minimum and maximum parameter values, respectively. Hence, we reach the same conclusion: when the same errors $\delta K_1, \delta K_2, \delta K_3$ are introduced into the system, the relative error of the measured height becomes bigger as the length of the baseline increases.
Figure 6

Plot of the relative measurement error δh / h versus the baseline b. (A) For parameters K 1,K 2,K 3 go to the minimum value. (B) For parameters K 1,K 2,K 3 go to the maximum value.
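Equation 25 is straightforward to evaluate directly; the small sketch below (function name ours) makes explicit that, for fixed errors, the $b^2 h^2 \delta K_3^2$ term makes the relative error grow with the baseline:

```python
import numpy as np

def rel_height_error(h, b, K1, dK1, dK2, dK3):
    """Relative measurement error of Equation 25:
    dh/h = sqrt(dK1^2 + h^2*dK2^2 + b^2*h^2*dK3^2) / K1."""
    return np.sqrt(dK1**2 + h**2 * dK2**2 + b**2 * h**2 * dK3**2) / K1
```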

Sensitivity analysis

Furthermore, we perform a sensitivity analysis with respect to the baseline b. In the following, $b_e$ represents the estimate of b, and $\Delta b / b = (b_e - b)/b$ indicates the relative discrepancy with respect to the nominal value. The error Δh can be expressed as the difference between the depth values calculated by substituting $b_e$ and b into Equation 21:
$$ \Delta h = \frac{K_1}{K_2 - K_3 b_e} - \frac{K_1}{K_2 - K_3 b} . $$
(26)
Combining Equations 21 and 26, the relative error Δh/h results:
$$ \frac{\Delta h}{h} = \frac{K_1 / \left( K_2 - K_3 b_e \right)}{K_1 / \left( K_2 - K_3 b \right)} - 1 = - \frac{\Delta b}{b} \cdot \frac{1}{1 + \dfrac{\Delta b}{b} - \dfrac{K_2}{K_3 b}} . $$
(27)
Equation 27 expresses Δh/h as a hyperbolic function of Δb/b, but for small values of Δb/b, the function is almost linear, as shown in Figure 7. The yellow curve, which represents the largest baseline configuration (b = 120 mm), yields the smallest relative variation of height with respect to the same relative discrepancy of the baseline, while the red curve, which represents the shortest baseline arrangement (b = 30 mm), presents the biggest relative variation of height. This means that for a system with a shorter baseline, the proposed model is more sensitive to small variations of the other parameters. In other words, the larger the baseline, the less sensitive the system is to the same bias in the calibrated parameters. This conclusion is the same as that presented in the discussion of the work [11].
Figure 7

Plot of relative variation of height Δh/h as a function of relative discrepancy of baseline Δb/b. (A) With respect to the minimum value of parameters $K_2$, $K_3$. (B) With respect to the maximum value of parameters $K_2$, $K_3$.
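The hyperbolic form of Equation 27 can be verified against a direct evaluation of Equation 21 with the biased baseline $b_e = b(1 + \Delta b / b)$; the sketch below (function name ours) implements it with the sign convention used here:

```python
def rel_height_variation(db_over_b, b, K2, K3):
    """Relative height variation of Equation 27 caused by using a
    biased baseline estimate b_e = b * (1 + db_over_b)."""
    return -db_over_b / (1.0 + db_over_b - K2 / (K3 * b))
```

Algebraically, this equals (K2 − K3·b)/(K2 − K3·b_e) − 1, i.e., the ratio of the two heights of Equation 21 minus one.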

Results and discussion

A measurement error analysis with respect to the variation of the baseline has been performed using partial derivative analysis, relative measurement error analysis, and sensitivity analysis. These analyses assume that the parameters $L_p$, $L_c$, $\theta_1$, $\theta_2$, α, p, x can be accurately calibrated. Parameters p and x can easily be evaluated with small uncertainty during the measurement by exploiting the scale factor from pixels to millimeters in the reference frame. On the contrary, accurate determination of the parameters $L_p$, $L_c$, $\theta_1$, $\theta_2$, α is hard because of the difficulty of precisely measuring the positions of the pupils of the projector and the camera. Meanwhile, it is also hard to determine the accurate relative orientation of the DMD image plane in a DLP-based projector. Hence, in order to eliminate the effect of other unknown uncertainty factors introduced into the analysis process, one way of estimating the uncertainty propagation for nonlinear systems is Monte Carlo analysis [13],[14]. The real experimental system setup (shown in Figure 8) consists of a pico-projector (Optoma PK301) and a mini-camera (Point Grey FL3-U3-13S2M-CS) with a 6-mm focal length. Hence, the variation of the parameters is defined the same as in the previous part.
Figure 8

The experiment setup.

We present a global sensitivity analysis that permits the evaluation of the uncertainty propagated from the other input parameters (i.e., $L_p$, $L_c$, $\theta_1$, $\theta_2$, α) to the output (the height information) with respect to different baseline lengths. In this method, we use four objects with different heights for the experiment. Then, in order to obtain the distribution of the height value, we change the length of the baseline and its relevant parameters (i.e., $\theta_1$, $\theta_2$, α) but keep the other parameters unchanged. It is worth noticing that the unchanged parameters in the evaluation process are randomly selected from the given ranges.

Figure 9 illustrates the measured height variation (compared with a selected ground truth value) corresponding to the random variation of the other parameters as the baseline changes from 30 to 120 mm. The initial parameters are randomly selected in the pre-measured ranges for these four tests. We run each set 30 times; in each set, we get the height information from the same sampled point on the object’s surface (i.e., x = 188.63 mm). As can be seen from the figure, the biggest variations of height are 0.6404, 0.8835, 2.1207, and 4.1729 mm for given baselines of 30, 60, 90, and 120 mm, respectively. Moreover, in each experiment set, as the baseline becomes longer, the variation of height also increases.
Figure 9

Plot of the variation of measured height with respect to different baseline lengths.
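The experiment can be sketched as follows (an illustrative Monte Carlo run, not the authors’ code): the geometric parameters are sampled uniformly in the ranges stated earlier, the height of Equation 19 is evaluated at a fixed phase difference and sample point (both arbitrary choices here), and the spread of the results is reported for a given baseline. The coefficient expressions follow Equation 20 as reconstructed in this paper:

```python
import numpy as np

rng = np.random.default_rng(0)   # fixed seed for reproducibility

def mc_height_spread(b, n=1000):
    """Monte Carlo sketch of the global sensitivity experiment:
    sample the geometric parameters uniformly, evaluate the height of
    Equation 19 at a fixed phase difference, and return the spread
    (max - min) of the resulting heights."""
    dphi, x = 2.0, 50.0          # fixed sample point (illustrative)
    heights = []
    for _ in range(n):
        Lp = rng.uniform(390, 420)
        Lc = rng.uniform(400, 450)
        p = rng.uniform(10, 20)
        th1 = np.radians(rng.uniform(0, 15))
        th2 = np.radians(rng.uniform(0, 10))
        sa = np.sin(np.radians(rng.uniform(0, 30)))
        c1 = Lp * Lc * p
        c2 = p * (Lc - b * sa)
        c3 = -2 * np.pi * b * sa
        c4 = 2 * np.pi * (Lp * Lc * np.tan(th1) + Lc**2 * np.tan(th2)
                          - Lc * np.tan(th2) * b * sa)
        heights.append(c1 * dphi / (c2 * dphi + c3 * x + c4))
    return max(heights) - min(heights)
```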

Conclusions

In this paper, a 3-D shape measurement error analysis is performed for a short baseline Pro-Cam system. The first model is based on the conventional stereovision technique. Through analysis, we find that as the baseline becomes shorter, the two main factors, the inherently biased parameters of the projector and the unavoidable biased pixel point localization on the projector’s image plane, carry more uncertainty; therefore, the measurement accuracy is further degraded. For the second model, we propose an FPP-based projecting-imaging model. After deriving a new phase-to-height mapping relationship, the measurement error, which mainly refers to the height error, is analyzed with respect to the length of the baseline through partial derivative analysis, relative measurement error analysis, and sensitivity analysis. From the analysis results, we conclude that the smaller the baseline, the more sensitive the system is, while the relative measurement error is smaller when the same biases are introduced into the calibrated parameters. The Monte Carlo simulation results also demonstrate the same measurement behavior of the proposed model under the short baseline configuration.

Authors’ information

JL received B.S. and M.S. degrees from the Department of Applied Physics, Sichuan University, Chengdu, China, in 2008 and 2011, respectively. Currently, he is a Ph.D. student in the Department of Mechanical and Biomedical Engineering at City University of Hong Kong, Hong Kong. His research interests include Photoelectric Information processing, 3-D measurement, 3-D reconstruction, and robot vision. YFL received the Ph.D. degree in robotics from the Department of Engineering Science, University of Oxford, Oxford, U.K., in 1993. From 1993 to 1995, he was a Postdoctoral Research Associate in the Department of Computer Science, University of Wales, Aberystwyth, U.K. He joined City University of Hong Kong, Hong Kong, in 1995 where he is currently a Professor in the Department of Mechanical and Biomedical Engineering. His research interests include robot vision, sensing, and sensor-based control for robotics.

Declarations

Acknowledgements

This work was supported by City University of Hong Kong (Project No. 7002829) and the National Natural Science Foundation of China (No. 61273286).

Authors’ Affiliations

(1)
Department of Mechanical and Biomedical Engineering, City University of Hong Kong

References

  1. Du H, Wang Z: Three-dimensional shape measurement with an arbitrarily arranged fringe projection profilometry system. Opt Lett 2007, 32(16):2438–2440. doi:10.1364/OL.32.002438
  2. Zhang S: Phase unwrapping error reduction framework for a multiple-wavelength phase-shifting algorithm. Opt Eng 2009, 48(10):105601–105608. doi:10.1117/1.3251280
  3. Xiao Y, Cao Y, Wu Y: Improved algorithm for phase-to-height mapping in phase measuring profilometry. Appl Optics 2012, 51(8):1149–1155. doi:10.1364/AO.51.001149
  4. Zhang Z: A flexible new technique for camera calibration. IEEE Trans Pattern Anal Mach Intell 2000, 22(11):1330–1334. doi:10.1109/34.888718
  5. Janne H: Geometric camera calibration using circular control points. IEEE Trans Pattern Anal Mach Intell 1992, 14(10):965–980. doi:10.1109/34.159901
  6. Kytö M, Nuutinen M, Oittinen P: Method for measuring stereo camera depth accuracy based on stereoscopic vision. IS&T/SPIE Electronic Imaging 2011, 7864:78640I.
  7. Mao X, Chen W, Su X: Fourier transform profilometry based on a projecting-imaging model. JOSA A 2007, 24(12):3735–3740. doi:10.1364/JOSAA.24.003735
  8. Quan C, He XY, Wang CF: Shape measurement of small objects using LCD fringe projection with phase shifting. Opt Commun 2001, 189(1):21–29. doi:10.1016/S0030-4018(01)01038-0
  9. Spagnolo GS, Guattari G, Sapia C: Contouring of artwork surface by fringe projection and FFT analysis. Opt Lasers Eng 2000, 33(2):141–156. doi:10.1016/S0143-8166(00)00023-3
  10. Zhang Z, Zhang D, Peng X: Performance analysis of a 3D full-field sensor based on fringe projection. Opt Lasers Eng 2004, 42(3):341–353. doi:10.1016/j.optlaseng.2003.11.004
  11. Zappa E: Sensitivity analysis applied to an improved Fourier-transform profilometry. Opt Lasers Eng 2011, 49(2):210–221. doi:10.1016/j.optlaseng.2010.09.016
  12. Hammersley JM, Handscomb DC, Weiss G: Monte Carlo methods. Phys Today 1965, 18:55. doi:10.1063/1.3047186
  13. Fishman GS: Monte Carlo: concepts, algorithms, and applications. Springer, New York; 1996.
  14. Saltelli A, Ratto M, Andres T: Global sensitivity analysis: the primer. Wiley; 2008.
  15. Delon J, Rougé B: Small baseline stereovision. J Math Imaging Vis 2007, 28(3):209–223. doi:10.1007/s10851-007-0001-1
  16. Hong BJ, Park CO, Seo NS: A real-time compact structured-light based range sensing system. Semicond Sci Tech 2012, 12(2):193–202.
  17. Li Z, Shi Y, Wang C: Accurate calibration method for a structured light system. Opt Eng 2008, 47(5):053604. doi:10.1117/1.2931517
  18. Huang L, Zhang Q, Asundi A: Camera calibration with active phase target: improvement on feature detection and optimization. Opt Lett 2013, 38:1446–1448. doi:10.1364/OL.38.001446
  19. Hartley RI, Sturm P: Triangulation. Comput Vis Image Underst 1997, 68(2):146–157. doi:10.1006/cviu.1997.0547
  20. Schalkoff RJ: Digital image processing and computer vision. Wiley, New York; 1989.

Copyright

© Liu and Li; licensee Springer. 2014

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.