A center-driven image set partition algorithm for efficient structure from motion

Persian title of the article: A center-driven image set partition algorithm for efficient structure from motion
English title of the article: A center-driven image set partition algorithm for efficient structure from motion
Journal/Conference: Information Sciences
Related fields of study: Computer Engineering
Related specializations: Algorithms and Computation, Software Engineering, Artificial Intelligence
Persian keywords: Center-driven, image set partitioning, 3D model, structure from motion
English keywords: Center-driven, Image set partitioning, 3D reconstruction, Structure from Motion
Article type: Research Article
DOI: https://doi.org/10.1016/j.ins.2018.11.055
University: Hubei Key Laboratory of Intelligent Geo-Information Processing, School of Computer Science, China University of Geosciences, Wuhan 430074, China
Number of pages (English article): 41
Publisher: Elsevier
Presentation type: Journal
Article category: ISI
Publication year: 2019
Impact factor: 6.774 in 2018
H-index: 154 in 2019
SJR index: 1.620 in 2018
ISSN: 0020-0255
Quartile: Q1 in 2017
English article format: PDF
Translation status: Not translated
Price of the English article: Free
Is this a base article: No
Product code: E11244
Table of Contents (English)

Abstract

1- Introduction

2- The proposed center driven data partitioning method

3- Model reconstruction and merging

4- Experiment results

5- Conclusion

References

Excerpt from the Article (English)

Abstract

This paper proposes a novel center-driven image set partitioning method dedicated to efficient Structure from Motion (SfM) on unevenly distributed images. First, multiple base clusters are found at places with high image density. Instead of building a small initial model from two images, we build multiple initial base models from these base clusters. This ensures that the scene is reconstructed from dense places toward sparse areas, which reduces error accumulation when images overlap only weakly. Second, the whole image set is divided into several region clusters to decide which images should be reconstructed from the same base model. In this step, the base models are treated as centers, and the affinity between an image and each of them is measured by the reconstruction path length. For further speedup, the images in each region cluster are divided into several sub-region clusters so that they can be added to the same base model simultaneously. Based on these partitioning results, the partial 3D models are reconstructed in parallel and then merged. Experiments show that the proposed method achieves a remarkable speedup and better completeness than state-of-the-art methods, without significant loss of accuracy.
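The partitioning steps outlined in the abstract can be illustrated with a short sketch. The following is a minimal, hypothetical illustration only, not the authors' implementation: the scene graph is modeled as a networkx graph whose nodes are images and whose edges are verified matches, image density is approximated by node connectivity, the reconstruction path length is approximated by hop count, and all names (partition_image_set, k_centers, k_sub) are assumptions introduced here.

```python
# Minimal sketch of the center-driven partitioning idea, under the assumptions
# stated above. Not the authors' implementation.
from collections import defaultdict
import networkx as nx

def partition_image_set(scene_graph: nx.Graph, k_centers: int = 3, k_sub: int = 2):
    # 1) Base clusters: seeded at the densest parts of the scene graph,
    #    approximated here by the most connected images.
    seeds = sorted(scene_graph.nodes, key=scene_graph.degree, reverse=True)[:k_centers]

    # 2) Region clusters: each image is assigned to the seed (base model center)
    #    with the shortest reconstruction path, approximated here by hop count.
    hops = {s: nx.single_source_shortest_path_length(scene_graph, s) for s in seeds}
    regions = defaultdict(list)
    for img in scene_graph.nodes:
        nearest = min(seeds, key=lambda s: hops[s].get(img, float("inf")))
        regions[nearest].append(img)

    # 3) Sub-region clusters: split each region so that several images can be
    #    added to the same base model simultaneously, enabling parallelism.
    return {s: [imgs[i::k_sub] for i in range(k_sub)] for s, imgs in regions.items()}
```

Only the grouping step is shown here; in the paper, the partial 3D models seeded at each base cluster are then reconstructed in parallel and merged.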

Introduction

Recovering 3D information assists many applications in computer vision [41, 40, 48, 37, 20, 11, 18]. Structure from Motion (SfM) is widely used to reconstruct 3D camera poses and a sparse point cloud from unordered images. With the rapid development of the Internet, such images can easily be searched and downloaded by keyword. However, due to the large-scale nature of such problems, accuracy and efficiency remain the two most challenging issues. Existing SfM methods can be divided into three classes: incremental [34, 32], global [42, 6, 7] and hybrid [5, 50]. This paper mainly focuses on the first type.

A typical incremental SfM pipeline consists of three steps: 1) constructing a scene graph via image matching and geometric verification; 2) selecting two starting images and building an initial model for the incremental process; 3) adding new images to the existing model and running Bundle Adjustment (BA) [39] to refine the parameters. The last step is repeated until no more images can be added.

Some of these methods work in a top-down manner: a coarse model that spans the whole scene is reconstructed as quickly as possible in the first stage and is then enriched in the second stage. Snavely et al. extracted a skeletal graph [33] that covers the full scene with the minimum number of interior nodes; leaf nodes can be added after the skeletal set is reconstructed. This method is further used in [2], which designed a system running on a distributed cluster to efficiently reconstruct a city in one day. The concept of using iconic scene graphs to capture the major aspects of the scene was proposed in [19] and [27]: after clustering images in the GIST [25] feature space, they selected an iconic image for each cluster, and the viewing graph formed by the iconic images is computed via vocabulary tree [24] indexing. Frahm et al. [13] improved the work of [2] by reconstructing a city on a single machine with multi-core CPUs and GPUs; when selecting iconic images, the image descriptors were compressed into short binary codes so that the GPU computation remains memory efficient. Heinly et al. [16] advanced the state of the art from city-scale to world-scale modeling on a single computer, also leveraging the idea of iconic images.
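As a companion to the three-step pipeline described above, the following is a schematic, simplified skeleton of the incremental loop (steps 2 and 3). It is a sketch under strong assumptions, not any library's API: geometry is omitted, the "model" is reduced to the set of registered images, the starting pair is the edge with the most verified matches, and the next image is the one most strongly connected to the current model; triangulation and Bundle Adjustment are indicated only in comments, since they require a full reconstruction backend.

```python
# Schematic skeleton of the incremental SfM registration order (steps 2-3 above).
# Edges of the scene graph are assumed to carry a "matches" attribute.
import networkx as nx

def incremental_registration_order(scene_graph: nx.Graph):
    # Step 2: select the best-connected image pair and build the initial model.
    u, v = max(scene_graph.edges,
               key=lambda e: scene_graph.edges[e].get("matches", 0))
    registered, order = {u, v}, [u, v]

    # Step 3: repeatedly add the image with the strongest link to the model;
    # a real pipeline would triangulate new points and run Bundle Adjustment here.
    while True:
        candidates = [n for n in scene_graph.nodes
                      if n not in registered
                      and any(nb in registered for nb in scene_graph[n])]
        if not candidates:
            break  # no more images can be added
        nxt = max(candidates,
                  key=lambda n: sum(scene_graph.edges[n, nb].get("matches", 0)
                                    for nb in scene_graph[n] if nb in registered))
        registered.add(nxt)
        order.append(nxt)
        # triangulate(...) and bundle_adjust(...) would be called here.
    return order
```

This skeleton only determines the order in which images would be registered; the paper's contribution replaces the single two-image initialization with multiple base models and partitions the image set around them.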