Volume 42, Issue 4, April 2020
Citation: TAO Lei, HONG Tao, CHAO Xu. Drone identification and location tracking based on YOLOv3[J]. Chinese Journal of Engineering, 2020, 42(4): 463-468. doi: 10.13374/j.issn2095-9389.2019.09.10.002

Drone identification and location tracking based on YOLOv3

doi: 10.13374/j.issn2095-9389.2019.09.10.002
More Information
  • Corresponding author: E-mail: taolei@buaa.edu.cn
  • Received Date: 2019-09-10
  • Publish Date: 2020-04-01
Abstract: In recent years, drone intrusion incidents have increased and drone collisions have become common, which can lead to accidents in densely populated areas. Drone monitoring is therefore an important research topic in the field of security. Although many drone monitoring schemes exist, most are costly and difficult to deploy. To address this problem, in the 5G context, this study proposed a method that uses a city's existing surveillance camera network to acquire video data and applies a deep learning target-detection algorithm to recognize, track, and locate drones. The method uses an improved YOLOv3 (You Only Look Once, version 3) model to detect the presence of drones in video frames. YOLOv3 is the third generation of the YOLO series and belongs to the one-stage class of target-detection algorithms, which gives it a significant speed advantage over two-stage algorithms. YOLOv3 outputs the position of the drone in the video frame, and a PID (proportional-integral-derivative) controller uses this position to adjust the camera so that the drone remains at the center of the view, thereby tracking it. The parameters of multiple cameras are then used to calculate the drone's actual coordinates, realizing localization. The dataset was built by photographing drones in flight, collecting drone images from the Internet, and labeling the drones in each image with the labelImg tool; the images were classified by the number of rotors, and the detection model was trained on this classified dataset. The trained model achieves 83.24% accuracy and 88.15% recall on the test set and runs at 20 frames per second on a computer equipped with an NVIDIA GTX 1060, which is sufficient for real-time tracking.
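The camera-steering step lends itself to a brief illustration. The following is a minimal sketch, not the authors' code: it assumes a detector callable that returns YOLOv3-style boxes as (x, y, w, h, confidence) tuples for a video frame held in a NumPy array, and it converts the offset between the detected box center and the image center into pan/tilt commands with a PID controller. The gains, the command units, and the 20 frames-per-second loop interval are illustrative assumptions.

    class PID:
        """Textbook proportional-integral-derivative controller (illustrative gains)."""
        def __init__(self, kp, ki, kd):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.integral = 0.0
            self.prev_error = 0.0

        def step(self, error, dt):
            self.integral += error * dt
            derivative = (error - self.prev_error) / dt if dt > 0 else 0.0
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    def center_camera_on_drone(frame, detector, pan_pid, tilt_pid, dt=1 / 20):
        """Return (pan_cmd, tilt_cmd) that drive the best drone box toward the image center."""
        height, width = frame.shape[:2]
        boxes = detector(frame)                  # hypothetical: list of (x, y, w, h, conf)
        if not boxes:
            return 0.0, 0.0                      # no drone this frame: hold the camera still
        x, y, w, h, _ = max(boxes, key=lambda b: b[4])
        err_x = (x + w / 2) - width / 2          # horizontal pixel offset from the image center
        err_y = (y + h / 2) - height / 2         # vertical pixel offset from the image center
        return pan_pid.step(err_x, dt), tilt_pid.step(err_y, dt)

Similarly, once two cameras have each centered the same drone, their known positions and pointing directions define two rays in space, and the drone's coordinates can be estimated from where those rays (nearly) intersect. The sketch below is one possible formulation under the assumption of calibrated camera centers and unit viewing directions; the paper's exact multi-camera computation may differ.

    import numpy as np

    def locate_drone(c1, d1, c2, d2):
        """Least-squares intersection of two camera rays: centers c1, c2, unit directions d1, d2."""
        c1, d1, c2, d2 = map(np.asarray, (c1, d1, c2, d2))
        w0 = c1 - c2
        a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
        d, e = d1 @ w0, d2 @ w0
        denom = a * c - b * b                    # near zero when the rays are almost parallel
        if abs(denom) < 1e-9:
            raise ValueError("camera rays are parallel; a third view is needed")
        s = (b * e - c * d) / denom              # parameter along the first ray
        t = (a * e - b * d) / denom              # parameter along the second ray
        return (c1 + s * d1 + c2 + t * d2) / 2   # midpoint of the closest approach

    # Example: two cameras 50 m apart, both pointed at a drone roughly at (25, 40, 30) m.
    # locate_drone([0, 0, 2], [0.456, 0.729, 0.510], [50, 0, 2], [-0.456, 0.729, 0.510])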

     
