Video stitching using interacting multiple model based feature tracking
, S.I. Gandhi
Published in Multimedia Tools and Applications, Springer New York LLC
2019
Volume: 78
Issue: 2
Pages: 1375–1397
Abstract
In this paper, we propose a novel video stitching algorithm for videos from multiple cameras that uses interacting multiple model feature tracking to maintain spatio-temporal consistency. Beyond the image alignment challenges of stitching a single frame, video stitching must also address inter-frame consistency and jitter caused by moving objects and camera motion. To address these challenges, feature points detected in the initial frame are tracked in subsequent frames, which maintains spatio-temporal consistency and reduces the computational cost of feature point detection. First, feature points are detected using the Features from Accelerated Segment Test (FAST) algorithm. Second, Binary Robust Invariant Scalable Keypoints (BRISK) descriptors are computed at the detected feature points and matched using the Hamming distance. Outliers are removed with the Random Sample Consensus (RANSAC) algorithm. Once the first frame is stitched, the feature points detected in the first frame are tracked using a Kalman filter with an interacting multiple model. Descriptors are computed for the tracked feature points and the homography between the frames is estimated. This maintains spatio-temporal consistency by reducing the jitter effect between stitched frames, and since subsequent frames skip feature point detection, computational complexity is reduced. Experimental results show that the proposed method has a lower execution time and better structural similarity performance than existing methods. © 2018, Springer Science+Business Media, LLC, part of Springer Nature.
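The first-frame stage of the pipeline described above (FAST detection, BRISK description, Hamming-distance matching, RANSAC homography estimation) can be illustrated with off-the-shelf OpenCV components. The sketch below is not the authors' implementation: the specific OpenCV calls, thresholds, and the simple overlay blending are assumptions, and the interacting multiple model Kalman tracking stage for subsequent frames is omitted.

```python
# Minimal sketch of a first-frame stitching step, assuming OpenCV equivalents
# of the stages named in the abstract. Illustrative only; the IMM Kalman
# tracking used for later frames in the paper is not shown here.
import cv2
import numpy as np

def stitch_first_frame(frame_left, frame_right):
    """Estimate a homography between two overlapping frames and warp one onto the other."""
    gray_l = cv2.cvtColor(frame_left, cv2.COLOR_BGR2GRAY)
    gray_r = cv2.cvtColor(frame_right, cv2.COLOR_BGR2GRAY)

    # 1. Detect feature points with FAST.
    fast = cv2.FastFeatureDetector_create()
    kp_l = fast.detect(gray_l, None)
    kp_r = fast.detect(gray_r, None)

    # 2. Compute BRISK descriptors at the detected keypoints.
    brisk = cv2.BRISK_create()
    kp_l, des_l = brisk.compute(gray_l, kp_l)
    kp_r, des_r = brisk.compute(gray_r, kp_r)

    # 3. Match the binary descriptors using Hamming distance.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_l, des_r), key=lambda m: m.distance)

    # 4. Estimate the homography with RANSAC, which rejects outlier matches.
    src = np.float32([kp_r[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_l[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # 5. Warp the right frame into the left frame's coordinates and overlay
    #    the left frame (a placeholder for proper seam blending).
    h, w = frame_left.shape[:2]
    panorama = cv2.warpPerspective(frame_right, H, (w * 2, h))
    panorama[0:h, 0:w] = frame_left
    return panorama, H
```

In the paper's scheme, the keypoints found here would then be tracked across subsequent frames (rather than re-detected), with descriptors recomputed only at the tracked locations before re-estimating the homography, which is where the reported savings in computation and the reduction in inter-frame jitter come from.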
About the journal
Journal: Multimedia Tools and Applications
Publisher: Springer New York LLC
ISSN: 1380-7501