
Source: Sensors (Basel, Switzerland)

Robust Cylindrical Panorama Stitching for Low-Texture Scenes Based on Image Alignment Using Deep Learning and Iterative Optimization



Abstract

Cylindrical panorama stitching can generate high-resolution images of a scene with a wide field of view (FOV), making it a useful scene representation for applications such as environmental sensing and robot localization. Traditional image stitching methods based on hand-crafted features are effective for constructing a cylindrical panorama from a sequence of images when the scene contains enough reliable features. However, these methods cannot handle low-texture environments in which no reliable feature correspondences can be established. This paper proposes a novel two-step image alignment method based on deep learning and iterative optimization to address this issue. In particular, a lightweight, end-to-end trainable convolutional neural network (CNN) architecture called ShiftNet is proposed to estimate the initial shifts between images, which are further optimized in a sub-pixel refinement procedure based on a specified camera motion model. Extensive experiments on a synthetic dataset, rendered photo-realistic images, and real images were carried out to evaluate the performance of our proposed method. Both qualitative and quantitative experimental results demonstrate that cylindrical panorama stitching based on our proposed image alignment method yields significant improvements over traditional feature-based methods and recent deep-learning-based methods in challenging low-texture environments.
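The coarse-to-fine idea in the abstract can be illustrated with a minimal 1-D sketch: a coarse integer shift estimate (here a brute-force SSD search stands in for ShiftNet's learned prediction, which is not reproduced), followed by Gauss-Newton sub-pixel refinement under a pure horizontal-translation camera motion model. The function names, the wrap-around sampling, and the periodic test texture are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

def warp_x(img, s):
    """Sample img at horizontal position x + s with linear interpolation.
    Columns wrap around, as on a cylindrical panorama (illustrative choice)."""
    h, w = img.shape
    xs = np.arange(w) + s
    x0 = np.floor(xs).astype(int)
    t = xs - x0
    return (1 - t) * img[:, x0 % w] + t * img[:, (x0 + 1) % w]

def initial_shift(ref, tgt, max_shift=10):
    """Coarse integer shift via brute-force SSD search
    (a stand-in for ShiftNet's initial estimate)."""
    errs = [((warp_x(tgt, float(s)) - ref) ** 2).mean()
            for s in range(-max_shift, max_shift + 1)]
    return int(np.argmin(errs)) - max_shift

def refine_shift(ref, tgt, s0, iters=20):
    """Sub-pixel refinement: Gauss-Newton iterations on a 1-D translation."""
    s = float(s0)
    for _ in range(iters):
        warped = warp_x(tgt, s)            # tgt re-sampled at x + s
        r = warped - ref                   # photometric residual
        g = np.gradient(warped, axis=1)    # d(warped)/ds
        denom = (g * g).sum()
        if denom < 1e-12:                  # no gradient signal: stop
            break
        s -= (g * r).sum() / denom         # Gauss-Newton step
    return s

# Demo: a synthetic periodic texture shifted right by 3.4 px.
w = 128
x = np.arange(w, dtype=float)
def make(shift):
    row = (np.sin(2 * np.pi * (x - shift) / 32)
           + 0.5 * np.sin(2 * np.pi * (x - shift) / 16))
    return np.tile(row, (16, 1))

ref, tgt = make(0.0), make(3.4)
s0 = initial_shift(ref, tgt)    # coarse integer estimate: 3
s = refine_shift(ref, tgt, s0)  # refined estimate: ~3.4
```

The refinement step mirrors the general structure of iterative photometric alignment: linearize the warped image around the current shift, solve a tiny least-squares problem for the update, and repeat until convergence.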
機(jī)譯:圓柱全景拼接可以生成具有寬視場(chǎng)(FOV)的高分辨率場(chǎng)景圖像,使其成為環(huán)境感應(yīng)和機(jī)器人定位等應(yīng)用的有用場(chǎng)景表示。在場(chǎng)景中有足夠可靠的特征的情況下,基于手工特征的傳統(tǒng)圖像拼接方法對(duì)于從一系列圖像構(gòu)建圓柱全景圖非常有效。但是,這些方法無(wú)法處理無(wú)法建立可靠特征對(duì)應(yīng)關(guān)系的低紋理環(huán)境。針對(duì)上述問題,本文提出了一種基于深度學(xué)習(xí)和迭代優(yōu)化的新型兩步圖像對(duì)齊方法。特別是,提出了一種名為ShiftNet的輕量級(jí)端到端可訓(xùn)練卷積神經(jīng)網(wǎng)絡(luò)(CNN)體系結(jié)構(gòu),以估計(jì)圖像之間的初始偏移,并根據(jù)指定的相機(jī)運(yùn)動(dòng)模型在亞像素細(xì)化過(guò)程中對(duì)其進(jìn)行了進(jìn)一步優(yōu)化。在合成數(shù)據(jù)集,渲染的逼真圖像和真實(shí)圖像上進(jìn)行了廣泛的實(shí)驗(yàn),以評(píng)估我們提出的方法的性能。定性和定量實(shí)驗(yàn)結(jié)果均表明,基于我們提出的圖像對(duì)齊方法的圓柱全景拼接可顯著改善傳統(tǒng)的基于特征的方法以及針對(duì)挑戰(zhàn)性低紋理環(huán)境的基于深度學(xué)習(xí)的新方法。
