AprilTag array-aided extrinsic calibration of camera–laser multi-sensor system
Robotics and Biomimetics volume 3, Article number: 13 (2016)
Abstract
This paper presents a new algorithm for extrinsically calibrating a multi-sensor system comprising multiple cameras and a 2D laser scanner. Building on camera pose estimation with AprilTag, we design an AprilTag array as the calibration target and employ a nonlinear optimization to compute the single-camera extrinsic parameters when multiple tags are in the camera's field of view. The extrinsic parameters of camera–camera and laser–camera pairs are then calibrated, respectively. A global optimization is finally used to refine all the extrinsic parameters by minimizing a reprojection error. The algorithm also handles the extrinsic calibration of multiple cameras with non-overlapping fields of view. For validation, we built a micro-aerial vehicle platform carrying the multi-sensor system to collect real data, and the experimental results confirm that the proposed algorithm performs well.
Background
Nowadays, multiple sensors are widely used in various robot systems such as unmanned ground and aerial vehicles. These sensors provide abundant information, such as visual images and range measurements of the environment. Fusing the information from these sensors is essential for a thorough understanding of the environment. However, whenever multiple sensors are combined, additional calibration issues arise, and these are frequently overlooked. Quantities are seldom measured at the same position and in the same coordinate frame, implying that the alignment, i.e., the relative position and/or orientation of the sensors, has to be determined. A good calibration is a prerequisite for sensor fusion.
In many challenging tasks for robot systems, such as 3D environment mapping [1] and self-localization [2], cameras and 2D laser scanners supply intensity information and depth information, respectively. At the same time, a large field of view is usually required in these tasks. Capturing a large field of view is often not possible with a single camera, so multiple cameras have to be used. Hence, multiple cameras and 2D laser scanners will play increasingly important roles in robot systems.
In this paper, we propose a convenient calibration method for a multi-sensor system including multiple cameras and a 2D laser scanner without assuming overlapping fields of view. The essential contributions of this work are the following:

We propose an extrinsic calibration algorithm for multiple cameras, even if there is no overlapping field of view among them.

We combine multiple AprilTag [3] tags into an AprilTag array that serves as the calibration target (see Fig. 1b) and optimize the estimated camera poses when multiple complete tags are in the field of view of a single camera.

We propose an extrinsic calibration algorithm between a camera and a 2D laser scanner using the AprilTag array as the calibration target, and we integrate it with the multi-camera extrinsic calibration into a joint multi-sensor extrinsic calibration with a final global optimization.
This paper is organized as follows: “Related work” section provides a review of related approaches. A description of the extrinsic calibration pattern of the multi-sensor system is given in “Calibration pattern” section. In “Extrinsic calibration of multi-sensor system” section, the joint extrinsic calibration algorithm is introduced. Experiments and discussions are presented in “Experiments” section. “Conclusion and future work” section provides the conclusion and future work.
Related work
As multi-camera systems are increasingly preferred over single cameras, calibration of the extrinsic parameters among multiple cameras becomes a basic requirement. To improve accuracy without worrying about calibration time, offline estimation of the extrinsic parameters is desirable. In contrast to online calibration methods such as [4], offline calibration relies on calibration patterns with known geometry and appearance and need not consider the real-time capability of the calibration algorithm. Conventional offline calibration uses artificial, two-dimensional calibration targets such as checkerboards. The checkerboard is popular because its corners can be detected accurately and reliably, although other patterns have also been demonstrated [5, 6]. Svoboda et al. [7] make use of the overlapping fields of view of the cameras; their method can calibrate a stereo camera or a circular camera rig with all cameras pointing inwards. However, systems with cameras pointing outwards are increasingly popular, and these usually have minimal or no overlapping fields of view. Li et al. [5] presented a multi-camera system calibration toolbox adapted to minimal overlapping fields of view using a feature descriptor. In [8, 9], hand-eye calibration methods are used to calibrate such systems, but they are often inaccurate due to visual odometry drift. In addition to camera models, the production convenience and expansibility of the calibration pattern also deserve attention. In early research, Bouguet [10] made use of cubes with a chessboard or circular dots on their surfaces, yet this pattern is not convenient to build with high precision. Strauß et al. [11] use checkerboard targets and combine many of them into a rigid, three-dimensional target; the checkerboards carry graphical 2D codes for uniquely identifying every single corner. However, this calibration target also has a complex structure and is not easy to produce.
Our multi-camera calibration work focuses on both adaptability to camera models and production convenience of the calibration pattern. We group many AprilTag [3] tags into an array and print it on a board with high planarity. The tags are identified by their different appearances, and each of them carries a unique ID. The extrinsic parameters of multiple cameras can be estimated as long as there is at least one complete tag in each camera's field of view.
To extrinsically calibrate a laser range finder and a camera, different calibration targets and geometric constraints have been presented. Kwak et al. [12] and Li et al. [13] propose a v-shaped calibration target and a right-angled triangular board, respectively, to generate constraints on the extrinsic parameters by establishing line-point correspondences. Their drawback is that it is difficult to guarantee that the selected laser points lie exactly on the calibration target boundary. Zhang et al. [14] use the checkerboard as the calibration target. As the state of the art, this method establishes constraints on the extrinsic parameters with plane parameters and has been extended to extrinsically calibrate other kinds of range sensors and cameras [15–17]. In our research, the checkerboard is replaced with a planar AprilTag array as the calibration target, and the plane-line correspondences [14] are employed to build the constraints.
Calibration pattern
AprilTag-based pose estimation
AprilTag is a robust and flexible visual fiducial system proposed by Olson [3] in 2011. It uses a 2D bar code style “tag” as Fig. 1a shows, allowing full 6DOF localization of features from a single image.
This system is composed of two major components: the tag detector and the coding system. The tag detector estimates the positions of possible tags in an image using a graph-based image segmentation algorithm and then estimates the camera pose relative to each tag, that is, it computes the Euclidean transformation. The transformation between the camera coordinate system and the tag coordinate system is given by the transformation matrix T:

$$T = \begin{bmatrix} R_{3\times 3} & t_{3\times 1} \\ 0_{1\times 3} & 1 \end{bmatrix} \tag{1}$$

where R_{3×3} is a rotation matrix with R^T R = R R^T = I and t_{3×1} is a translation vector. The Euler angles are represented by the vector r_{3×1} = [r_x, r_y, r_z]^T. The tag coordinate system is presented in Fig. 2b. The coding system determines whether detected tags are valid. It uses a lexicode system that can generate codes for any arbitrary tag size; several useful code families are computed and distributed with Olson's [3] software.
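As a concrete illustration (not part of the paper's own code), the homogeneous transform in formula (1) and its inverse can be sketched in Python with NumPy; `make_transform` and the example pose are our own illustrative names and values:

```python
import numpy as np

def make_transform(R, t):
    """Assemble the 4x4 homogeneous transform T = [[R, t], [0, 1]]."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Illustrative pose: rotation of 90 degrees about z plus a small translation.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.array([0.1, 0.2, 0.0])
T = make_transform(R, t)

# Because R is orthonormal (R^T R = I), the inverse transform is [R^T, -R^T t].
T_inv = make_transform(R.T, -R.T @ t)
```

Composing T with T_inv recovers the identity, which is the property exploited later when chaining camera–tag transforms.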
Design of calibration target
We use a subset of tags from the code family 36h11 distributed by Olson [3] to generate a planar AprilTag array. The AprilTag array can easily be printed on paper with high planarity as our calibration target. The distribution of the tags in the AprilTag array is designed manually, so the transformations among all the tags are known accurately. As Fig. 1b shows, there is no rotation and no translation along the z-axis between tags, so the transformation matrix between two arbitrary tags (ID = i and ID = j) simplifies to:

$$T_i^j = \begin{bmatrix} I_{3\times 3} & t_i^j \\ 0_{1\times 3} & 1 \end{bmatrix}$$

where t_i^j = [t_x, t_y, 0]^T.
The coordinate system of the first tag (ID = 0) in the AprilTag array is treated as the global coordinate system. Thus, the transformation matrix T _{ n } ^{0} between an arbitrary tag (ID = n) coordinate system and the global coordinate system can be computed precisely without effort.
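Since the tags share one plane and orientation, computing T_n^0 reduces to looking up the designed in-plane offsets. The sketch below assumes a hypothetical row-major grid with 10 tags per row, 0.2 m tags and 0.2 m gaps (a 0.4 m center-to-center pitch); the function name and layout are illustrative, not the paper's implementation:

```python
import numpy as np

def tag_to_global(n, cols=10, pitch=0.4):
    """Transform T_n^0 from the tag (ID = n) frame to the global (tag 0) frame.

    Assumes a hypothetical row-major grid of `cols` tags per row with a
    uniform center-to-center pitch (0.2 m tag + 0.2 m gap = 0.4 m here).
    All tags share the same orientation, so the rotation block is the
    identity and the translation has no z component.
    """
    T = np.eye(4)
    T[0, 3] = (n % cols) * pitch   # t_x
    T[1, 3] = (n // cols) * pitch  # t_y
    return T

T_5_0 = tag_to_global(5)    # tag 5 sits 2.0 m along x from tag 0
T_13_0 = tag_to_global(13)  # second row: 1.2 m along x, 0.4 m along y
```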
As the multisensor system calibration target, the AprilTag array has several advantages compared with other conventional calibration targets like checkerboards:

1.
Accurate localization In [3], the localization accuracy of the AprilTag has been evaluated using a ray-tracing simulator. Two experiments measuring orientation accuracy and distance accuracy validated its high precision and reliability in localization. Naturally, real-world performance will be lower than in synthetic experiments because of noise, lighting variation, etc., but it remains good.

2.
Great adaptability to camera models The camera pose in the tag coordinate system can be estimated as long as there is one complete tag in the field of view. For a multi-camera system with non-overlapping views, we can decouple the cameras into several pairs of neighboring cameras to guarantee that at least one tag appears in each camera's field of view. Hence, the extrinsic calibration of multiple cameras can be realized through coordinate system transformations, which we discuss in “Extrinsic calibration of multi-sensor system” section.

3.
Reliable tag identification The estimated camera pose is expressed in a tag coordinate system. Therefore, identifying the different tags in the AprilTag array is important for localizing the camera in the global coordinate system. AprilTag provides users with a large number of distinguishable tag codes and a reliable tag identification algorithm.

4.
Convenient production and expansibility The AprilTag array can be generated easily with the expected size and distribution by a CDE package available at https://github.com/ethzasl/kalibr/wiki/downloads.
Furthermore, we can conveniently extend the AprilTag array by printing an additional part and joining it to the original target.
Extrinsic calibration of multi-sensor system
The extrinsic calibration of multiple sensors identifies the rigid transformations among the sensor coordinate systems. This paper focuses on the extrinsic calibration of a multi-sensor system including multiple cameras and a 2D laser scanner. The calibration process can be carried out in both static and dynamic environments. A static environment is advised, since a dynamic environment may lead to lower accuracy due to unsynchronized multi-sensor data. In general, calibration is performed before the final task, and a static environment can easily be established; therefore, most calibrations are operated in a static environment.
This calibration is composed of two components: camera to camera and camera to 2D laser scanner. These two calibration processes are combined as a joint procedure.
Multiple-camera extrinsic calibration
We divide all the cameras into neighboring camera pairs (see Fig. 3) and then estimate all the camera extrinsic parameters by extrinsically calibrating every neighboring pair. Hence, the main problem is how to estimate the extrinsic parameters within each camera pair. According to the diagram of the dual-camera extrinsic calibration (Fig. 4), the closer the angle θ between the optical axes is to 180°, the larger the AprilTag array needs to be so that at least one complete AprilTag lies in the field of view of each camera. More importantly, the angle Φ between the AprilTag's normal vector and the vector to the camera affects the localization accuracy [3], and Φ depends mainly on θ. Therefore, θ should be the primary criterion for grouping the cameras into pairs.
Dual-camera extrinsic calibration
Figure 4 presents the diagram of the dual-camera extrinsic calibration. The two cameras, a forward camera and a downward camera, can be located in different tag coordinate systems, which means that (R_D^p, t_D^p) and (R_k^F, t_k^F) are provided by AprilTag [3] directly. R_D^p is the rotation matrix from the downward camera coordinate system to the tag (ID = p) coordinate system, and R_k^F is the rotation matrix from the tag (ID = k) coordinate system to the forward camera coordinate system; t_D^p and t_k^F denote the corresponding translation vectors. The purpose of this camera extrinsic calibration is to estimate (R_D^F, t_D^F). According to the Euclidean transformation [18] and formula (1), the extrinsic parameters of the dual cameras can be obtained by multiplying the Euclidean transformation matrices:

$$T_D^F = T_k^F \, T_p^k \, T_D^p$$

where

$$T_D^F = \begin{bmatrix} R_D^F & t_D^F \\ 0_{1\times 3} & 1 \end{bmatrix}$$

and T_D^p, T_p^k, T_k^F denote the corresponding Euclidean transformations. It should be noted that T_p^k can be computed conveniently because all the tag coordinate system transformations are modeled in advance.
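The chaining above can be sketched numerically; the transforms below are made-up illustrative values, not real calibration data, and serve only to check that the composed T_D^F agrees with mapping a point through each frame in turn:

```python
import numpy as np

def make_T(R, t):
    """Assemble a 4x4 homogeneous transform from R (3x3) and t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def rot_z(a):
    """Rotation by angle a (radians) about the z-axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Illustrative values, not real calibration data:
T_D_p = make_T(rot_z(0.3), [0.1, 0.0, 1.0])   # downward camera -> tag p (AprilTag)
T_p_k = make_T(np.eye(3), [0.4, 0.4, 0.0])    # tag p -> tag k (known array layout)
T_k_F = make_T(rot_z(-1.2), [0.0, 0.2, 0.8])  # tag k -> forward camera (AprilTag)

# T_D^F = T_k^F  T_p^k  T_D^p
T_D_F = T_k_F @ T_p_k @ T_D_p

# Sanity check: mapping a homogeneous point stepwise equals using T_D_F.
q = np.array([0.5, -0.3, 2.0, 1.0])
assert np.allclose(T_D_F @ q, T_k_F @ (T_p_k @ (T_D_p @ q)))
```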
Single-camera localization optimization
It is possible that several complete tags appear in the field of view of a single camera at the same time. AprilTag then provides several poses, one relative to each detected tag. After transformation into the global coordinate system, these poses may not coincide because the pose estimation error differs among the tag coordinate systems. Obviously, these measurement errors affect the performance of the extrinsic calibration.
The origin Q_o^G = [0, 0, 0]^T of the global coordinate system can be represented in the tag (ID = i) coordinate system as Q_o^i. Then, we can transform it into the camera coordinate system:

$$Q_{o,i}^C = R_i^C Q_o^i + t_i^C, \qquad i = 1, \ldots, n$$

and we can also transform the origin Q_o^G into the camera coordinate system directly:

$$Q_o^C = (R_C^G)^{T} \left( Q_o^G - t_C^G \right)$$

where R_C^G and t_C^G are the extrinsic parameters of the single camera. We refine them by minimizing a Euclidean distance function F(R_C^G, t_C^G):

$$F(R_C^G, t_C^G) = \sum_{i=1}^{n} \left\| Q_{o,i}^C - Q_o^C \right\|^2$$

The function above assumes that n tags are detected in the image. R_i^C and t_i^C denote the rotation matrix and translation vector of tag (ID = i) relative to the camera, which are given by AprilTag. We model the minimization of F(R_C^G, t_C^G) as a nonlinear optimization problem and solve it using the Levenberg–Marquardt method [19, 20].
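A much-simplified sketch of this refinement, using SciPy's Levenberg–Marquardt solver: we refine only the translational part (the camera-frame position of the global origin) from several noisy per-tag estimates, whereas the paper optimizes the full (R_C^G, t_C^G). All names and values are illustrative:

```python
import numpy as np
from scipy.optimize import least_squares

# Each detected tag i yields an estimate of the global origin in the camera
# frame: q_i = R_i^C Q_o^i + t_i^C. With noise these n estimates disagree.
# Here we fuse 8 synthetic estimates into one refined position.
rng = np.random.default_rng(0)
true_origin = np.array([0.4, -0.1, 2.5])  # made-up ground truth
estimates = true_origin + 0.01 * rng.standard_normal((8, 3))

def residuals(x):
    """Stacked differences between each per-tag estimate and the candidate x."""
    return (estimates - x).ravel()

sol = least_squares(residuals, x0=np.zeros(3), method="lm")  # Levenberg-Marquardt
refined = sol.x
```

For this linear least-squares cost the LM solution coincides with the mean of the estimates; the paper's actual cost additionally optimizes the rotation R_C^G.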
Camera and laser extrinsic calibration
It is assumed that at least one camera shares a common field of view with the 2D laser scanner, meaning that the laser points lie in the field of view of that camera. We extrinsically calibrate the 2D laser scanner and the camera using plane-line correspondences. Unlike [14, 21], our method employs the AprilTag array as the calibration target.
Denote the coordinates of a point with respect to the 2D laser scanner coordinate system and the camera coordinate system by Q^L and Q^D, respectively. The coordinates are related as follows:

$$Q^D = R_L^D Q^L + t_L^D$$
where R _{ L } ^{D} and t _{ L } ^{D} represent rotation matrix and translation vector from laser scanner coordinate system to camera coordinate system. The estimation of (R _{ L } ^{D} , t _{ L } ^{D} ) is the purpose of this calibration.
As Fig. 5 shows, the AprilTag array is placed in the common field of view of the camera and the laser scanner. We can compute the normal vector n_i^D of the AprilTag array in the camera coordinate system through the single-camera extrinsic calibration of the previous section. There is an intersection line L_i (a 3 × 1 vector in the laser scanner coordinate system) between the AprilTag array and the laser scan plane, on which the laser scanner provides many discrete points {Q_{ij}^L}. According to the plane-line correspondence, two geometric constraints on (R_L^D, t_L^D) are given:

$$n_i^D \cdot Q_i^D = \left\| n_i^D \right\|^2$$

$$n_i^D \cdot \left( R_L^D Q_{ij}^L + t_L^D \right) = \left\| n_i^D \right\|^2$$

Q_i^D is the 3D coordinate of a point on the AprilTag array plane with respect to the downward camera coordinate system. R_L^D and t_L^D together have 6 degrees of freedom; hence, at least three correspondences are required to solve the equations above. The whole process consists of a linear solution followed by a nonlinear optimization. The nonlinear optimization minimizes the Euclidean distances from the laser points to the AprilTag array plane. The error function is defined as:

$$F(R_L^D, t_L^D) = \sum_i \sum_j \left( \frac{n_i^D \cdot \left( R_L^D Q_{ij}^L + t_L^D \right)}{\left\| n_i^D \right\|} - \left\| n_i^D \right\| \right)^2$$

where ||n_i^D|| equals the distance from the camera to the calibration plane. This error function is minimized by the Levenberg–Marquardt method [19, 20]. The details of the linear solution, nonlinear optimization and global optimization are described in [14].
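The point-to-plane error can be sketched as a residual routine suitable for a Levenberg–Marquardt solver; this is our own illustrative implementation of the cost, not the authors' code:

```python
import numpy as np

def point_to_plane_residuals(R_L_D, t_L_D, planes, scans):
    """Residuals of the plane-line cost: for each pose i, the signed distance
    of every transformed laser point to the calibration plane.

    planes: list of n_i^D vectors (plane normal scaled so that its norm is
            the camera-to-plane distance).
    scans:  list of (m_i, 3) arrays of laser points in the laser frame.
    """
    res = []
    for n, pts in zip(planes, scans):
        d = np.linalg.norm(n)        # ||n_i^D||: camera-to-plane distance
        n_hat = n / d                # unit normal of the calibration plane
        q = pts @ R_L_D.T + t_L_D    # laser points mapped into the camera frame
        res.append(q @ n_hat - d)    # signed point-to-plane distances
    return np.concatenate(res)

# Demo with frames made coincident: a perfect extrinsic (identity) and points
# lying exactly on the plane give zero residuals.
n = np.array([0.0, 0.0, 2.0])  # plane z = 2 m in the camera frame
pts = np.array([[0.1, 0.2, 2.0], [-0.3, 0.0, 2.0]])
r = point_to_plane_residuals(np.eye(3), np.zeros(3), [n], [pts])
```

Feeding these residuals to a Levenberg–Marquardt solver (with R_L^D suitably parameterized) reproduces the minimization described above.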
Global optimization
As mentioned in the previous section, we divide the multi-sensor extrinsic calibration into two independent processes, but they can be finished in a common procedure. Meanwhile, a closed-loop optimization can be used to refine the extrinsic parameters obtained by the two processes. Assuming that the downward camera and the laser scanner have a common field of view and that T_D^F and T_L^D have been calibrated, T_L^F can be computed as:

$$T_L^F = T_D^F \, T_L^D$$
Although the forward camera and the laser scanner may have no common field of view, we can still refine T _{ L } ^{F} by the nonlinear minimization process described in previous section.
Figure 6 shows the whole calibration algorithm schematic, and the details are described as the following workflow:
Algorithm: Multi-sensor system extrinsic calibration 

Inputs: 
1. Simultaneous images and laser scan points’ coordinates. 
2. Intrinsic parameters of each camera. 
3. The transformations between two arbitrary tags in the calibration target. 
Procedure: 
1. Dual-camera extrinsic calibration 
(a) Single-camera extrinsic calibration using AprilTag. 
(b) Optimization of single-camera extrinsic parameters by a nonlinear minimization process. 
(c) Computing the extrinsic parameters (R _{ D } ^{F} , t _{ D } ^{F} ) by the matrix transformations. 
2. Camera and laser extrinsic calibration 
(a) Using planeline correspondence for the geometry constraints. 
(b) Solving these equations for the linear solution. 
(c) Optimizing the linear solution to refine the camera and laser extrinsic parameters. 
3. Global optimization to refine the multi-sensor system extrinsic parameters. 
(a) Computing the extrinsic parameters between other cameras and the laser. 
(b) Global optimization of all the sensors extrinsic parameters. 
Output: Extrinsic parameters of the multi-sensor system 
Experiments
Single-camera pose estimation
As described for the camera–laser extrinsic calibration, the normal vector n_i^D is obtained by single-camera pose estimation. Therefore, both the dual-camera extrinsic calibration and the camera–laser extrinsic calibration depend significantly on the single-camera pose estimation. To estimate the camera pose with higher accuracy, a nonlinear optimization process is employed when multiple tags are in the field of view of the single camera (see Fig. 7). The calibration pattern is pasted on a wall and consists of 10 × 4 tags, each 0.2 m × 0.2 m, with 0.2 m spacing between tags. We mounted a camera (Point Grey FireFly MV) and a PIXHAWK on a tripod. The camera attitude can be measured by the IMU module in the PIXHAWK. This layout allows us to conveniently locate the camera in the calibration target coordinate system using measuring devices, and these measurements are regarded as the ground truth.
Figure 7 shows the experimental scene. In one of the experiments, about 8 complete tags are in the field of view of the camera. We record all the poses provided by each tag and use the nonlinear optimization algorithm to compute the final camera pose in the global coordinate system. Four sets of data are collected, and we compute the position estimation errors along the X, Y and Z axes for each tag. Figure 8 presents the error distributions and the final pose after the nonlinear optimization. The root-mean-square errors (RMSEs) are computed for comparison with the optimized result errors, and the results are presented in Table 1. This optimization clearly yields better position estimation accuracy when multiple tags are in the field of view.
We use the IMU to measure the yaw angles as the ground truth because the camera, mounted on the tripod, is rotated in the yaw direction, and the experimental results show that the pitch and roll angles are close to 0. From Table 2, we can see that the yaw angle estimation errors are decreased by the optimization process. In summary, these experiments show higher pose estimation accuracy with the nonlinear optimization when multiple tags are in the field of view.
Multi-sensor joint extrinsic calibration
The proposed algorithm has been implemented in ROS (Robot Operating System) and tested on real data from a rotary-wing UAV platform (see Fig. 9). An onboard processing computer (Intel NUC D54250WYKH), a flight controller (PIXHAWK), a camera–laser sensor module (two Point Grey FireFly MV cameras and a Hokuyo UTM-30LX laser scanner) and a GPS are integrated as an onboard multi-sensor system.
This experiment extrinsically calibrates the onboard multi-sensor system and globally refines all the extrinsic parameters. Given the accurate pose estimation presented in the previous section, the dual-camera extrinsic calibration should be as accurate as the single-camera extrinsic calibration. Here, we run two sets of independent trials. Figure 10a shows the dual-camera calibration results after global refinement; the two end points of each red dotted line stand for the forward and downward camera positions, respectively, at the same time instant. It should be noted that one result in Trial 1 seems to have a larger error. Actually, an inappropriate perspective leads to this misunderstanding: this result corresponds to the first blue point in Fig. 10b, and the distance error is evidently not so large. The distances between the dual cameras, i.e., the lengths of the dotted lines in Fig. 10a, are computed for comparison with the ground truth (see Fig. 10b). The RMSEs (0.0230 and 0.0055 m) are acceptable.
Meanwhile, the downward camera and the laser scanner are extrinsically calibrated in the two sets of trials. To demonstrate the better performance of our method over the method in [14], the laser points are projected onto the images of the downward camera using the extrinsic parameters provided by both methods. Figure 11 shows the mapping results. We do not have the ground truth of the camera–laser extrinsic parameters, but the mapping results of our method (red dotted line) are more reasonable. Hence, we conclude that our method calibrates the extrinsic parameters more accurately.
Conclusion and future work
In this paper, we proposed a new algorithm for extrinsically calibrating a multi-sensor system including multiple cameras and a 2D laser scanner using an AprilTag array as the calibration target. The algorithm uses AprilTag to estimate the cameras' poses and employs a nonlinear optimization method to refine these poses when multiple tags are in the field of view. The camera–camera and laser–camera extrinsic parameters are then estimated on the basis of the single-camera poses. Finally, a global optimization refines all the extrinsic parameters. The algorithm handles multi-camera extrinsic calibration with non-overlapping fields of view, is simple to use, and performs well, as validated on real data.
In future work, more experiments and analyses should be carried out on how factors such as the number of tags in the field of view and the camera module affect the calibration accuracy. In addition, it should be possible to extend the multi-sensor system to other sensor configurations, and accurate and stable dynamic calibration can be taken into account.
References
 1.
Biber P, Andreasson H, Duckett T, Schilling A. 3D modeling of indoor environments by a mobile robot with a laser scanner and panoramic camera. In: IEEE/RSJ international conference on intelligent robots and system (IROS), 2004, pp. 3430–35.
 2.
Huh S, Shim D, Kim J. Integrated navigation system using camera and gimbaled laser scanner for indoor and outdoor autonomous flight of UAVs. In: IEEE/RSJ international conference on intelligent robots and system (IROS), 2013, pp. 3158–63.
 3.
Olson E. AprilTag: a robust and flexible visual fiducial system. In: IEEE international conference on robotics and automation (ICRA), 2011, pp. 3400–07.
 4.
Knorr M, Niehsen W, Stiller C. Online extrinsic multicamera calibration using ground plane induced homographies. In: IEEE intelligent vehicles symposium, 2013, pp. 236–241.
 5.
Li B, Heng L, Koser K, Pollefeys M. A multiplecamera system calibration toolbox using a feature descriptorbased calibration pattern. In: IEEE/RSJ international conference on intelligent robots and systems (IROS), 2013, pp. 1301–07.
 6.
Schmalz C, Forster F, Angelopoulou E. Camera calibration: active versus passive targets. Opt Eng. 2011;50(11):113601.
 7.
Svoboda T, Martinec D, Pajdla T. A convenient multicamera selfcalibration for virtual environments. Presence Teleoperators Virtual Environ. 2005;14(4):407–22.
 8.
Tsai R, Lenz R. A new technique for fully autonomous and efficient 3D robotics hand/eye calibration. IEEE Trans Robot Autom. 1989;5(3):345–58.
 9.
Shiu Y, Ahmad S. Calibration of wrist-mounted robotic sensors by solving homogeneous transform equations of the form AX = XB. IEEE Trans Robot Autom. 1989;5(1):16–29.
 10.
Bouguet J. Camera calibration toolbox for Matlab. 2004.
 11.
Strauß T, Ziegler J, Beck J. Calibrating multiple cameras with non-overlapping views using coded checkerboard targets. In: International conference on intelligent transportation systems (ITSC), 2014, pp. 2623–28.
 12.
Kwak K, Huber D, Badino H, Kanade T. Extrinsic calibration of a single line scanning lidar and a camera. In: IEEE/RSJ international conference on intelligent robots and systems (IROS), 2011, pp. 25–30.
 13.
Li G, Liu Y, Dong L, Cai X, Zhou D. An algorithm for extrinsic parameters calibration of a camera and laser range finder using line features. In: IEEE/RSJ international conference on intelligent robots and systems (IROS), 2007, pp. 3854–59.
 14.
Zhang Q, Pless R. Extrinsic calibration of a camera and laser range finder (improves camera calibration). In: IEEE/RSJ international conference on intelligent robots and systems (IROS), 2004, pp. 2301–06.
 15.
Geiger A, Moosmann F, Car O, Schuster B. Automatic camera and range sensor calibration using a single shot. In: IEEE international conference on robotics and automation (ICRA), 2012, pp. 3926–43.
 16.
Zhou L, Deng Z. Extrinsic calibration of a camera and a LIDAR based on decoupling the rotation from the translation. In: IEEE intelligent vehicles symposium (IV), 2012, pp. 642–8.
 17.
Mirzaei F, Kottas D, Roumeliotis S. 3D LIDARcamera intrinsic and extrinsic calibration: identifiability and analytical leastsquaresbased initialization. Int J Robot Res. 2012;31(4):452–67.
 18.
Corke P. Robotics, vision and control. Stanford: Stanford University; 2014.
 19.
Levenberg K. A method for the solution of certain problems in least squares. Q Appl Math. 1944;2:164–8.
 20.
Marquardt D. An algorithm for least squares estimation of nonlinear parameters. SIAM J Appl Math. 1963;11(2):431–41.
 21.
Zhou L. A new minimal solution for the extrinsic calibration of a 2D LIDAR and a camera using three plane-line correspondences. IEEE Sens J. 2014;14(2):442–54.
Authors’ contributions
DT proposed the calibration algorithm, designed the validation process, and wrote the manuscript. TH, LS, ZM and CP built the experiment system for the validation, analyzed the data, and revised the manuscript. All authors read and approved the final manuscript.
Acknowledgements
This work was supported in part by the Scientific Research Foundation of Graduate School of National University of Defense Technology, in part by the National Science Foundation of China under Grant 61005077 and Grant 61075072 and in part by the Foundation for the Author of Excellent Doctoral Dissertation of Hunan Province under Grant YB2011B0001. The authors would like to thank Boxin Zhao, Zhaowei Ma, Shulong Zhao, Zhiwei Zhong, Dou Hu, etc. for their contribution in building the multisensor system.
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Tang, D., Hu, T., Shen, L. et al. AprilTag array-aided extrinsic calibration of camera–laser multi-sensor system. Robot. Biomim. 3, 13 (2016). https://doi.org/10.1186/s40638-016-0044-0
Keywords
 Multi-sensor
 AprilTag array
 Extrinsic calibration
 Non-overlapping