National Chengchi University Institutional Repository (NCCUR): Item 140.119/122133


    Please use this permanent URL to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/122133


    Title: 無人機前方視野精準輪廓線條跟隨方法之研究
    Accurate contour line-following methods for UAV forward view
    Authors: 李恭儀
    Lee, Gong-Yi
    Contributors: 劉吉軒
    Liu, Jyi-Shane
    李恭儀
    Lee, Gong-Yi
    Keywords: 無人機
    自主飛行控制
    輪廓線條跟隨
    二維方向向量機率模型
    UAV
    Autonomous flight control
    Contour line following
    Two-Dimensional direction vector probability model
    Date: 2018
    Date Uploaded: 2019-01-23 14:55:20 (UTC+8)
    Abstract: 現有的線條跟隨技術主要讓無人機跟隨下方視野的線條,以等高定速向前跟隨,並調整轉向角改變跟隨方向。然而當無人機跟隨前方視野的線條時,調整轉向角會使無人機前方視野的線條消失,而且現有的線條跟隨不會再偵測到跟隨過的線條,但前方視野依然能偵測到跟隨過的線條,需要固定前方視野來決定跟隨方向。而且當無人機改變方向飛行時,定速所產生的移動慣性會使無人機偏離原本的飛行路徑。除此之外,現有的研究亦存在一些問題,如線條的錯誤偵測、沒有固定的方式評估線條跟隨的表現,以及未要求跟隨的精準度。
    因此本研究提出二維方向向量機率模型,解決無人機跟隨前方視野的線條時所產生的方向性問題以及避免線條的錯誤偵測影響跟隨。本研究以雙層二維方向向量機率模型,搭配慣性速度抑制方法,能夠抑制無人機改變方向飛行時所產生的移動慣性,使無人機能夠快速且精準的進行線條跟隨。本研究提出兩種評估指標1. 無人機視覺中心位於目標路徑寬度以內之程度以及2. 無人機視覺中心偏移目標路徑寬度以外的位移誤差,評估無人機進行精準的線條跟隨時的表現。
    最後本研究透過在真實世界的實驗,驗證提出跟隨方法的可行性、穩定性以及精準性。以先前研究中最能精準跟隨線條的基於向量域的線條跟隨作為基準,本研究所提出的方法經過兩種評估指標進行評估後,皆比基準表現得更好。其中,雙層二維方向向量機率模型搭配慣性速度抑制方法的表現最為突出,該方法具備選擇飛行方向、校正自身位置以及慣性速度抑制的功能,能跟隨複雜的線條。經過真實世界的考驗,本研究提出的方法能實際應用在真實世界上。未來能針對前方視野的線條跟隨研究進行更進一步的改進與延伸,包含了移動時的穩定性、改進位移誤差以及戶外實際應用,如高壓電塔檢測、摩天樓設備安檢等任務。
    Most existing line-following techniques have the drone follow lines in the downward view beneath it, flying forward at a constant altitude and speed and changing the following direction by adjusting the steering angle. However, when the drone needs to follow a line that runs vertically through the center of its front-view camera image, adjusting the steering angle makes the line disappear from the drone's view. Although a previously followed line can still be seen in the front view, existing techniques do not detect it again, so the front view must be held fixed to determine the following direction. Moreover, when the drone changes direction, the moving inertia produced by constant-speed flight causes it to deviate from the original flight path. In addition, existing research still leaves room for improvement, such as erroneous line detection, the lack of a common way to evaluate line-following performance, and the absence of following accuracy as a performance measure.
    Therefore, this study proposes a two-dimensional direction vector probability model to resolve the directionality problem that arises when the drone follows a line in its front view, and to keep erroneous line detection from disrupting the following. A two-layer two-dimensional direction vector probability model, combined with an inertial speed suppression method, suppresses the moving inertia generated when the UAV changes direction, allowing the drone to follow lines quickly and accurately. This study also proposes two evaluation indicators for the performance of precise line following: 1) the degree to which the drone's vision center stays within the width of the target path, and 2) the displacement error of the drone's vision center beyond the width of the target path.
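    To make the two evaluation indicators concrete, the sketch below computes them from a recorded flight, assuming each frame yields the pixel offset of the drone's vision center from the target path centerline and that the path has a known pixel width. The function names and the per-frame offset representation are illustrative assumptions, not the thesis's actual implementation.

```python
from typing import Sequence


def within_path_ratio(offsets: Sequence[float], path_width: float) -> float:
    """Indicator 1 (illustrative): fraction of frames in which the vision
    center lies within the target path width (|offset| <= half the width)."""
    half = path_width / 2.0
    inside = sum(1 for d in offsets if abs(d) <= half)
    return inside / len(offsets)


def outside_displacement_error(offsets: Sequence[float], path_width: float) -> float:
    """Indicator 2 (illustrative): mean displacement of the vision center
    beyond the target path width, over the frames that fall outside it."""
    half = path_width / 2.0
    excess = [abs(d) - half for d in offsets if abs(d) > half]
    return sum(excess) / len(excess) if excess else 0.0


# Example: per-frame horizontal offsets (pixels) of the vision center
# from the path centerline, for a target path 40 px wide.
offsets = [3.0, -8.5, 21.0, 15.0, -25.5, 2.0]
print(within_path_ratio(offsets, path_width=40.0))           # 4 of 6 frames inside
print(outside_displacement_error(offsets, path_width=40.0))  # mean of 1.0 and 5.5 -> 3.25
```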
    Finally, this study verifies the feasibility, stability, and accuracy of the proposed methods through real-world experiments. Taking the vector-field-based line following from previous research, the most accurate prior approach, as the benchmark, the proposed methods perform better than the benchmark on both evaluation indicators. Among them, the two-layer two-dimensional direction vector probability model combined with the inertial speed suppression method stands out: it can select the flight direction, correct the drone's own position, and suppress inertial speed, and it can follow complex lines. Having passed these real-world tests, the proposed methods can be applied in practice. Future work can further improve and extend front-view line following, including stability during movement, reduction of displacement error, and practical outdoor applications such as high-voltage tower inspection and safety inspection of skyscraper facilities.
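    The abstract above describes selecting a flight direction from two-dimensional direction vectors weighted by probability, and suppressing inertial speed when the chosen direction changes. The sketch below is a minimal, hypothetical reading of that loop, assuming the detected line is given as edge points in image coordinates; the eight-way discretization of candidate directions, the prior weights, and the damping factor are illustrative choices, not the thesis's two-layer model or controller.

```python
import math
from typing import List, Tuple

# Eight candidate 2D unit direction vectors: an illustrative discretization
# of the possible following directions in the image plane.
CANDIDATES = [(math.cos(a), math.sin(a)) for a in (k * math.pi / 4 for k in range(8))]


def select_direction(edge_points: List[Tuple[float, float]],
                     center: Tuple[float, float],
                     prior: List[float]) -> Tuple[float, float]:
    """Pick the candidate with the highest score, combining a prior weight
    per direction with how well the detected edge points align with it
    (a simple stand-in for a direction-vector probability model)."""
    scores = []
    for (dx, dy), p in zip(CANDIDATES, prior):
        align = 0.0
        for x, y in edge_points:
            vx, vy = x - center[0], y - center[1]
            norm = math.hypot(vx, vy) or 1.0
            align += max(0.0, (vx * dx + vy * dy) / norm)
        scores.append(p * align)
    return CANDIDATES[scores.index(max(scores))]


def damp_velocity(cmd: Tuple[float, float],
                  prev_dir: Tuple[float, float],
                  new_dir: Tuple[float, float],
                  damping: float = 0.5) -> Tuple[float, float]:
    """Illustrative inertia suppression: when the selected direction turns
    sharply, scale the commanded velocity down so momentum from the previous
    heading does not carry the drone off the line."""
    turn = 1.0 - (prev_dir[0] * new_dir[0] + prev_dir[1] * new_dir[1])  # 0 same, 2 reversal
    scale = max(0.0, 1.0 - damping * turn)
    return cmd[0] * scale, cmd[1] * scale
```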
    Description: Master's
    國立政治大學
    資訊科學系
    105753006
    Source: http://thesis.lib.nccu.edu.tw/record/#G0105753006
    Data Type: thesis
    DOI: 10.6814/THE.NCCU.CS.001.2019.B02
    Appears in Collections: [資訊科學系] 學位論文

    Files in This Item:

    File            Size       Format
    300601.pdf      2983 KB    Adobe PDF

