• 中文核心期刊
  • CSCD来源期刊
  • 中国科技核心期刊
  • CA、CABI、ZR收录期刊

基于深度学习和无人机遥感技术的玉米雄穗检测研究

梁胤豪, 陈全, 董彩霞, 杨长才

梁胤豪,陈全,董彩霞,等. 基于深度学习和无人机遥感技术的玉米雄穗检测研究 [J]. 福建农业学报,2020,35(4):456−464. DOI: 10.19303/j.issn.1008-0384.2020.04.014
LIANG Y H, CHEN Q, DONG C X, et al. Application of Deep-learning and UAV for Field Surveying Corn Tassel [J]. Fujian Journal of Agricultural Sciences,2020,35(4):456−464. DOI: 10.19303/j.issn.1008-0384.2020.04.014

基金项目: 国家自然科学基金项目(61802064、61701117);福建省自然科学基金项目(2019J01402)
    作者简介:

    梁胤豪(1998−),男,研究方向:计算机应用(E-mail:lyhmath@163.com)

    通讯作者:

    杨长才(1981−),男,博士,副教授,研究方向:计算机视觉与作物表型识别(E-mail:changcaiyang@gmail.com)

  • 中图分类号: S 513;S 127

Application of Deep-learning and UAV for Field Surveying Corn Tassel

  • 摘要:
      目的  玉米雄穗在玉米的生长过程和最终产量中起关键作用,使用无人机采集玉米抽穗期的RGB图像,研究不同的目标检测算法,构建适用于无人机智能检测玉米雄穗的模型,自动计算图像中雄穗的个数。
      方法  使用无人飞行器(UAV)在25 m飞行高度下获得大量玉米抽穗时期的RGB图像,裁剪并标注出图像中玉米雄穗的位置和大小,训练数据和测试数据按照3:1的比例划分数据集;在深度学习框架MXNet下,利用这些数据集,分别训练基于ResNet50的Faster R-CNN、基于ResNet50的SSD、基于mobilenet的SSD和YOLOv3等4种模型,对比4种模型的准确率、检测速度和模型大小。
      结果  使用无人机采集了236张图像,裁剪成1024×1024大小的图片,去除成像质量差的图像,利用标注软件LabelImg获得100张标注的玉米雄穗数据集;最终得到4个模型的mAP值分别为0.73、0.49、0.58和0.72。在测试数据集上进行测试,Faster R-CNN模型的计数准确率最高,为93.79%,YOLOv3的准确率最低,仅有20.04%,基于ResNet50的SSD和基于mobilenet的SSD分别为87.6%和89.9%。在识别速度上,SSD_mobilenet最快(8.9 samples·s−1),SSD_ResNet50次之(7.4 samples·s−1),YOLOv3为3.47 samples·s−1,Faster R-CNN最慢(2.6 samples·s−1)。在模型大小上,YOLOv3的模型最大,为241 MB,SSD_mobilenet的模型最小,为55.519 MB。
      结论  由于无人机的机载平台计算资源稀缺,综合模型的速度、准确率和模型大小考虑,SSD_mobilenet最适于部署在无人机机载系统上用于玉米雄穗的检测。
    Abstract:
      Objective  Deep-learning and computation were applied to analyze the images collected by drones on the status of tassel on corn plants in the field for estimating crop growth and forecasting production.
      Method  Drones, or unmanned aerial vehicles (UAV), flying at a height of 25 m above a corn field were used to collect RGB images showing the position and size of tassels on the plants at heading stage. Under the deep-learning framework MXNet, the dataset, split on a 3-to-1 training-to-testing ratio, was fed into 4 models: the ResNet50-based Faster R-CNN, the ResNet50-based SSD, the mobilenet-based SSD, and YOLOv3. The models were tested and compared on accuracy, detection speed, and model size for an accurate and rapid report on the status of corn tassel.
      Results  The 236 UAV-collected images were cropped into 1024×1024 tiles. Those of poor quality were discarded, yielding a dataset of 100 images labeled with the LabelImg software. The mAPs of the 4 models were 0.73, 0.49, 0.58 and 0.72, respectively. The highest counting accuracy on the test set, 93.79%, was obtained by the Faster R-CNN model, followed by 89.9% from SSD_mobilenet, 87.6% from SSD_ResNet50, and 20.04% from YOLOv3. On processing speed, SSD_mobilenet was the fastest at 8.9 samples·s−1, followed by SSD_ResNet50 at 7.4 samples·s−1, YOLOv3 at 3.47 samples·s−1, and Faster R-CNN at 2.6 samples·s−1. Among the 4 models, YOLOv3 was the largest at 241 MB, while SSD_mobilenet was the smallest at 55.519 MB.
      Conclusion  Considering the scarcity of available resources on the airborne UAV platform, as well as the detection accuracy, processing speed, and size of the programs, the SSD_mobilenet model was selected as the choice for the field surveying of corn tassel by UAV.
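The 3:1 train/test split described in the abstract can be sketched in a few lines of Python; the tile file names and the fixed random seed are illustrative assumptions, not taken from the paper:

```python
import random

def split_dataset(samples, train_ratio=0.75, seed=42):
    """Shuffle and split a list of annotated images on a 3:1 train/test ratio."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    shuffled = samples[:]
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train_ratio)
    return shuffled[:n_train], shuffled[n_train:]

# 100 labeled tiles, matching the size of the paper's dataset
tiles = [f"tassel_{i:03d}.jpg" for i in range(100)]
train, test = split_dataset(tiles)
print(len(train), len(test))  # 75 25
```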
  • 图  1   无人机航线规划

    Figure  1.   Planning of UAV flight routes

    图  2   试验田、航线设置、DJI无人机与LabelImg数据标注

    Figure  2.   Experimentation field, flight routes, DJI drone, and LabelImg data annotation
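LabelImg, shown in Figure 2, writes each image's annotations to a Pascal VOC-style XML file. A minimal sketch of reading the tassel bounding boxes back with the standard library; the sample XML is a hypothetical miniature of a real annotation file:

```python
import xml.etree.ElementTree as ET

# Illustrative stand-in for a LabelImg-generated Pascal VOC annotation file
SAMPLE = """<annotation>
  <filename>tassel_001.jpg</filename>
  <object>
    <name>tassel</name>
    <bndbox><xmin>120</xmin><ymin>80</ymin><xmax>180</xmax><ymax>160</ymax></bndbox>
  </object>
</annotation>"""

def read_boxes(xml_text, label="tassel"):
    """Return [(xmin, ymin, xmax, ymax), ...] for every object with the given label."""
    root = ET.fromstring(xml_text)
    boxes = []
    for obj in root.iter("object"):
        if obj.findtext("name") == label:
            b = obj.find("bndbox")
            boxes.append(tuple(int(b.findtext(k)) for k in ("xmin", "ymin", "xmax", "ymax")))
    return boxes

print(read_boxes(SAMPLE))  # [(120, 80, 180, 160)]
```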

    图  3   模型在训练过程中的损失函数曲线

    Figure  3.   Loss function of model during training

    图  4   SSD_mobilenet的预测结果

    Figure  4.   Prediction by SSD-mobilenet

    图  5   SSD_ResNet50的预测结果

    Figure  5.   Prediction by SSD_ResNet50

    图  6   Faster R-CNN的预测结果

    Figure  6.   Prediction by Faster R-CNN

    图  7   YOLOv3的预测结果

    Figure  7.   Prediction by YOLOv3

    图  8   模型的预测结果对比

    注:Faster R-CNN、YOLOv3、SSD_ResNet50和SSD_mobilenet的计数准确率分别为93.79%、20.04%、87.6%和89.9%。

    Figure  8.   Comparison of predictions by various models

    Note: Counting accuracies were 93.79% for Faster R-CNN, 20.04% for YOLOv3, 87.6% for SSD_ResNet50, and 89.9% for SSD_mobilenet.

    图  9   模型训练过程的mAP曲线

    Figure  9.   mAP curves of models during training

    表  1   试验硬件与软件信息

    Table  1   Information on hardware and software for testing

    硬件信息 Hardware information                      软件信息 Software information
    平台 Platform   型号 Model    参数 Parameters      平台 Platform   版本 Version
    CPU            E5-2680v2     2.8 GHz             CUDA           10.0
    RAM            DDR3          128 G               CUDNN          7.6.5
    GPU            1080ti        11 G                MXNet          1.5.0

    表  2   模型训练的超参数

    Table  2   Hyperparameters for model training

    参数 Parameters   Faster R-CNN   YOLOv3      SSD        SSD
    base-network     ResNet50       darknet53   ResNet50   mobilenet
    batch-size       4              8           16         16
    epochs           400            300         300        400
    learning rate    0.001          0.0001      0.0001     0.0001

    表  3   模型的mAP

    Table  3   mAPs of models

    模型 Model   Faster R-CNN   SSD_ResNet50   SSD_mobilenet   YOLOv3
    mAP         0.7306         0.4905         0.5780          0.7265
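With tassel as the only category, the mAP in Table 3 reduces to the AP of a single class. A simplified sketch of the underlying computation: detections are greedily matched to ground-truth boxes by IoU in descending score order, and precision is accumulated at each true positive. The 0.5 IoU threshold and the toy boxes are assumptions; the paper does not state its matching threshold.

```python
def iou(a, b):
    """Intersection-over-union of two (xmin, ymin, xmax, ymax) boxes."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter) if inter else 0.0

def average_precision(dets, gts, thr=0.5):
    """AP for one class. dets: [(score, box)]; gts: [box]."""
    dets = sorted(dets, key=lambda d: -d[0])
    matched, hits = set(), []
    for score, box in dets:
        j = next((i for i, g in enumerate(gts)
                  if i not in matched and iou(box, g) >= thr), None)
        if j is None:
            hits.append(0)        # false positive
        else:
            matched.add(j)
            hits.append(1)        # true positive
    ap = tp = 0
    for k, h in enumerate(hits, 1):
        if h:
            tp += 1
            ap += tp / k          # precision at each recall increment
    return ap / len(gts) if gts else 0.0

# toy example: two tassels, three detections
gts = [(0, 0, 10, 10), (20, 20, 30, 30)]
dets = [(0.9, (0, 0, 10, 10)), (0.8, (21, 21, 30, 30)), (0.3, (50, 50, 60, 60))]
print(average_precision(dets, gts))  # 1.0
```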

    表  4   模型的测试误差和计数准确率比较

    Table  4   Comparison of test errors and detection accuracies by models

    模型 Model                        Faster R-CNN   SSD_ResNet50   SSD_mobilenet   YOLOv3
    误差均值 Mean error                4.7308         9.2692         7.5385          62.1154
    均方差 Mean square error           5.3649         11.5175        8.8915          14.0311
    计数准确率 Counting accuracy/%     93.79          87.60          89.90           20.04
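The paper does not give the exact formula behind Table 4's counting accuracy; a plausible reading, consistent with reporting a mean error alongside an accuracy percentage, is one minus the aggregate relative counting error. A sketch under that assumption, with hypothetical per-image counts:

```python
def counting_metrics(pred_counts, true_counts):
    """Mean absolute counting error and counting accuracy over test images.

    The accuracy definition (1 - aggregate relative error) is an assumption
    made to mirror Table 4; the paper does not spell out its formula.
    """
    errs = [abs(p - t) for p, t in zip(pred_counts, true_counts)]
    mean_err = sum(errs) / len(errs)
    accuracy = 100.0 * (1 - sum(errs) / sum(true_counts))
    return mean_err, accuracy

# hypothetical predicted vs. ground-truth tassel counts on three test tiles
me, acc = counting_metrics([48, 52, 61], [50, 50, 60])
print(round(me, 2), round(acc, 2))  # 1.67 96.88
```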

    表  5   模型的处理速度和参数大小比较

    Table  5   Comparison of processing speeds and parameters of models

    模型 Model                                 Faster R-CNN   SSD_ResNet50   SSD_mobilenet   YOLOv3
    处理速度 Processing speed/(samples·s−1)     2.6            7.4            8.9             3.47
    参数大小 Parameter size/MB                  133.873        144.277        55.519          241.343
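Processing speed in samples·s−1 (Table 5) can be measured by timing a model's forward pass over the test set. The sketch below uses a toy stand-in for the detector (the real measurement would call the trained network) and keeps the best of several runs to reduce timer noise:

```python
import time

def throughput(process, samples, repeats=3):
    """Measure processing speed in samples per second, best of several runs."""
    best = 0.0
    for _ in range(repeats):
        t0 = time.perf_counter()
        for s in samples:
            process(s)           # one "forward pass" per sample
        dt = time.perf_counter() - t0
        best = max(best, len(samples) / dt)
    return best

# toy stand-in for a detector's per-image inference
speed = throughput(lambda s: sum(range(1000)), list(range(50)))
print(f"{speed:.1f} samples/s")
```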
  • [1] HUANG J X, GÓMEZ-DANS J L, HUANG H, et al. Assimilation of remote sensing into crop growth models: Current status and perspectives [J]. Agricultural and Forest Meteorology, 2019, 276/277: 107609. DOI: 10.1016/j.agrformet.2019.06.008
    [2] MADEC S, JIN X L, LU H, et al. Ear density estimation from high resolution RGB imagery using deep learning technique [J]. Agricultural and Forest Meteorology, 2019, 264: 225−234. DOI: 10.1016/j.agrformet.2018.10.013
    [3] QUAN L Z, FENG H Q, LV Y, et al. Maize seedling detection under different growth stages and complex field environments based on an improved Faster R-CNN [J]. Biosystems Engineering, 2019, 184: 1−23. DOI: 10.1016/j.biosystemseng.2019.05.002
    [4] ZHOU C Q, LIANG D, YANG X D, et al. Recognition of wheat spike from field based phenotype platform using multi-sensor fusion and improved maximum entropy segmentation algorithms [J]. Remote Sensing, 2018, 10(2): 246. DOI: 10.3390/rs10020246
    [5] KURTULMUŞ F, KAVDIR İ. Detecting corn tassels using computer vision and support vector machines [J]. Expert Systems with Applications, 2014, 41(16): 7390−7397. DOI: 10.1016/j.eswa.2014.06.013
    [6] LU H, CAO Z G, XIAO Y, et al. Fine-grained maize tassel trait characterization with multi-view representations [J]. Computers and Electronics in Agriculture, 2015, 118: 143−158. DOI: 10.1016/j.compag.2015.08.027
    [7] LU H, CAO Z G, XIAO Y, et al. TasselNet: counting maize tassels in the wild via local counts regression network [J]. Plant Methods, 2017, 13: 79. DOI: 10.1186/s13007-017-0224-0
    [8] LIU Y L, CEN C J, CHE Y P, et al. Detection of maize tassels from UAV RGB imagery with faster R-CNN [J]. Remote Sensing, 2020, 12(2): 338. DOI: 10.3390/rs12020338
    [9] LabelImg [DB/OL]. https://www.github.com/tzutalin/labelImg (accessed on 25 December 2015).
    [10] LIU L, OUYANG W, WANG X G, et al. Deep learning for generic object detection: a survey [J]. International Journal of Computer Vision, 2020, 128(2): 261−318. DOI: 10.1007/s11263-019-01247-4
    [11] LIU W, ANGUELOV D, ERHAN D, et al. SSD: single shot MultiBox detector[C]//Computer Vision – ECCV 2016. Cham: Springer International Publishing, 2016: 21−37.
    [12] REDMON J, FARHADI A. YOLO9000: better, faster, stronger[C]//IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 21-26, 2017, Honolulu, HI. IEEE, 2017.
    [13] REN S Q, HE K M, GIRSHICK R, et al. Faster R-CNN: towards real-time object detection with region proposal networks [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(6): 1137−1149. DOI: 10.1109/TPAMI.2016.2577031
    [14] CHEN T Q, LI M, LI Y T, et al. MXNet: a flexible and efficient machine learning library for heterogeneous distributed systems[EB/OL]. 2015-12-03. https://arxiv.org/abs/1512.01274.


出版历程
  • 收稿日期:  2020-03-11
  • 修回日期:  2020-04-14
  • 刊出日期:  2020-03-31
