    • Indexed in the Engineering Index (EI)
    • Chinese Core Journal
    • Source journal for Chinese Science and Technology Paper Statistics
    • Source journal of the Chinese Science Citation Database


DS-YOLOv5: A real-time detection and recognition model for helmet wearing

BAI Peirui, WANG Rui, LIU Qingyi, HAN Chao, DU Hongxuan, XUANYUAN Mengyu, FU Yingxia

Citation: BAI Peirui, WANG Rui, LIU Qingyi, HAN Chao, DU Hongxuan, XUANYUAN Mengyu, FU Yingxia. DS-YOLOv5: A real-time detection and recognition model for helmet wearing[J]. Chinese Journal of Engineering. doi: 10.13374/j.issn2095-9389.2022.11.11.006


    doi: 10.13374/j.issn2095-9389.2022.11.11.006
    Funding: National Natural Science Foundation of China (61471225)
    Corresponding author: E-mail: fyx22@163.com

    • CLC number: TP391.4; TU714

    DS-YOLOv5: A real-time detection and recognition model for helmet wearing

    • Abstract: Automatic detection and recognition of helmet wearing on production sites based on video analysis is an important means of ensuring safe production. However, complex site environments and variable external factors challenge the accuracy of helmet detection and recognition. Based on the YOLOv5 framework, this paper proposes DS-YOLOv5, a helmet detection and recognition model. First, an improved Deep SORT multi-object tracker is exploited to raise the fault tolerance of multi-object and occluded detection in video and to reduce missed detections. Second, a simplified Transformer module is fused into the backbone to strengthen the capture of global image information and thereby the feature learning of small targets. Finally, a bidirectional feature pyramid network (BiFPN) is applied in the neck to fuse multi-scale features, adapting to target-scale changes caused by shooting distance. The proposed model was validated on the GDUT-HWD and MOT multi-object tracking datasets. The results show that DS-YOLOv5 adapts better to occlusion and target-scale changes, reaching a mean average precision (mAP) of 95.5% and outperforming other common helmet detection and recognition methods.
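The BiFPN named in the abstract fuses feature maps from different pyramid levels with learnable, normalized, non-negative weights. As an illustrative sketch only, here is the "fast normalized fusion" rule from the BiFPN paper [25] in plain Python (this is not the authors' code; the function name and toy inputs are ours):

```python
# Sketch of BiFPN "fast normalized fusion" (Tan et al. [25]).
# Each input feature map gets a learnable weight; weights are clamped
# to be non-negative and the weighted sum is normalized.
def fast_normalized_fusion(features, weights, eps=1e-4):
    """Fuse same-shaped feature maps (here: flat lists of floats)."""
    w = [max(0.0, wi) for wi in weights]      # ReLU keeps weights >= 0
    total = sum(w) + eps                      # normalization denominator
    fused = []
    for values in zip(*features):             # element-wise across maps
        fused.append(sum(wi * v for wi, v in zip(w, values)) / total)
    return fused

# Two toy "feature maps" from different pyramid levels, already resized
p_top_down = [1.0, 2.0, 3.0]
p_same_level = [3.0, 2.0, 1.0]
out = fast_normalized_fusion([p_top_down, p_same_level], weights=[1.0, 1.0])
```

Compared with softmax-based fusion, this normalization needs no exponentials, which is part of why BiFPN adds little latency to the neck.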

       

    • Figure 1. Flow chart of the improved Deep SORT algorithm

      Figure 2. Improved network architecture of YOLOv5

      Figure 3. Transformer block

      Figure 4. Schematic diagram of the Path Aggregation Network and bidirectional feature pyramid network structures: (a) PANet; (b) BiFPN

      Figure 5. Comparison of Deep SORT and the improved Deep SORT on MOT16-09: (a)–(c) detection results of Deep SORT in scenarios 1–3; (d)–(f) detection results of the improved Deep SORT in scenarios 1–3

      Figure 6. Detection and recognition results of the DS-YOLOv5 model for safety-helmet wearing in complex environments: (a) low-light conditions; (b) occlusion; (c) different target scales

      Table 1. Results of the ablation experiments based on the GDUT-HWD dataset

      | Model                                     | mAP/% | P/%  | R/%  | F1/% | Param/10^6 | Time/min |
      | YOLOv5                                    | 92.5  | 92.1 | 90.6 | 91.4 | 7.1        | 6.3      |
      | YOLOv5 + BiFPN                            | 93.9  | 92.5 | 91.6 | 92.0 | 7.5        | 6.1      |
      | YOLOv5 + Transformer                      | 93.7  | 92.6 | 97.0 | 94.7 | 8.1        | 9.1      |
      | YOLOv5 + Deep SORT                        | 93.3  | 89.3 | 91.9 | 90.6 | 7.1        | 6.3      |
      | YOLOv5 + BiFPN + Transformer              | 94.4  | 92.1 | 99.0 | 95.3 | 8.2        | 8.8      |
      | YOLOv5 + BiFPN + Transformer + Deep SORT  | 95.5  | 89.6 | 98.0 | 93.6 | 8.2        | 8.8      |
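The F1 column in Table 1 is the harmonic mean of precision P and recall R, so it can be recomputed from the other columns. A one-line sanity check against the baseline YOLOv5 row (the helper name is ours, not from the paper):

```python
# F1 as the harmonic mean of precision and recall.
def f1_score(precision, recall):
    return 2 * precision * recall / (precision + recall)

# Baseline YOLOv5 row of Table 1: P = 92.1 %, R = 90.6 %
f1 = f1_score(92.1, 90.6)   # ~91.3; the table reports 91.4, the small gap
                            # coming from P and R being rounded to one decimal
```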

      Table 2. Comparison results for improved Deep SORT

      | Model              | MOTA↑ | MOTP↑ | IDs↓ |
      | Deep SORT          | 48.5  | 77.1  | 48   |
      | Improved Deep SORT | 51.2  | 77.6  | 40   |
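MOTA in Table 2 penalizes misses, false positives, and identity switches relative to the total number of ground-truth boxes (the CLEAR-MOT definition used by the MOT16 benchmark [27]). A minimal sketch with hypothetical counts, not the paper's evaluation code:

```python
# MOTA = 1 - (FN + FP + IDSW) / GT, with counts summed over all frames
# of a sequence (CLEAR-MOT definition).
def mota(false_negatives, false_positives, id_switches, num_gt_boxes):
    return 1.0 - (false_negatives + false_positives + id_switches) / num_gt_boxes

# Hypothetical counts for a short sequence
score = mota(false_negatives=120, false_positives=80, id_switches=10,
             num_gt_boxes=1000)
# score = 1 - 210/1000 = 0.79
```

Because identity switches enter the numerator directly, the drop from 48 to 40 IDs in Table 2 contributes to the improved tracker's higher MOTA.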

      Table 3. Results of comparison experiments based on the GDUT dataset

      | Model           | P_None/% | P_Red/% | P_White/% | P_Yellow/% | P_Blue/% | P/%  | R/%  | mAP/% | Time/ms | Weight/MB | fps |
      | SSD512[6]       | 74.8     | 78.8    | 79.5      | 86.3       | 80.8     | 83.5 | 79.2 | 81.6  | 36.8    | 34.6      | –   |
      | YOLOv3[10]      | 82.4     | 92.4    | 75.7      | 81.9       | 94.4     | 86.9 | 80.4 | 86.2  | 14.5    | 84.3      | 2   |
      | YOLOv3-tiny[10] | 74.1     | 82.8    | 75.3      | 80.8       | 85.0     | 84.5 | 76.3 | 79.6  | 6.4     | 28.2      | 12  |
      | YOLOv4[11]      | 83.4     | 94.2    | 86.1      | 92.0       | 95.7     | 92.4 | 81.1 | 90.3  | 14.2    | 237.4     | –   |
      | YOLOv5[12]      | 90.2     | 93.6    | 91.4      | 92.6      | 93.4     | 92.1 | 90.6 | 92.5  | 2.0     | 14.2      | 25  |
      | DS-YOLOv5       | 92.5     | 95.3    | 96.2      | 98.6       | 95.0     | 89.6 | 98.0 | 95.5  | 2.2     | 15.7      | 18  |

      Table 4. Results of the comparison of different safety helmet detection models

      | Detection model          | P/%  | R/%  | mAP/% | Weights/MB | Param/10^6 | Time/ms |
      | SSD-RPA512[8]            | 73.3 | 74.4 | 84.7  | 158.9      | 52.6       | 42.4    |
      | MobileNet-SSD[9]         | 84.3 | 90.5 | 86.1  | 24.3       | 8.1        | 19.3    |
      | Improved YOLOX[28]       | 94.2 | 90.3 | 93.7  | 34.5       | 8.9        | 17.7    |
      | Improved YOLOv4-tiny[29] | 93.2 | 88.4 | 92.9  | 36.5       | 13.2       | 14.9    |
      | DS-YOLOv5                | 89.6 | 98.0 | 95.5  | 15.7       | 8.2        | 2.2     |
  • [1] Liu X H, Ye X N. Skin color detection and Hu moments in helmet recognition research. J East China Univ Sci Technol (Nat Sci Ed), 2014, 40(3): 365
    [2] Li K, Zhao X G, Bian J, et al. Automatic safety helmet wearing detection // 2017 IEEE 7th Annual International Conference on CYBER Technology in Automation, Control, and Intelligent Systems (CYBER). Honolulu, 2017: 617
    [3] Zhang L Y, Wu W H, Niu H M, et al. Summary of application research on helmet detection algorithm based on deep learning. Comput Eng Appl, 2022, 58(16): 1
    [4] Yogameena B, Menaka K, Saravana Perumaal S. Deep learning-based helmet wear analysis of a motorcycle rider for intelligent surveillance system. IET Intell Transp Syst, 2019, 13(7): 1190 doi: 10.1049/iet-its.2018.5241
    [5] Ferdous M, Masudul Ahsan S M. Multi-scale safety hardhat wearing detection using deep learning: A top-down and bottom-up module // 2021 International Conference on Electrical, Communication, and Computer Engineering (ICECCE). Kuala Lumpur, 2021: 1
    [6] Liu W, Anguelov D, Erhan D, et al. SSD: Single shot multibox detector // European Conference on Computer Vision. Amsterdam, 2016: 21
    [7] Redmon J, Divvala S, Girshick R, et al. You only look once: Unified, real-time object detection // 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas, 2016: 779
    [8] Wu J X, Cai N, Chen W J, et al. Automatic detection of hardhats worn by construction personnel: A deep learning approach and benchmark dataset. Autom Constr, 2019, 106: 102894 doi: 10.1016/j.autcon.2019.102894
    [9] Xu X F, Zhao W F, Zou H Q, et al. Detection algorithm of safety helmet wear based on MobileNet-SSD. Comput Eng, 2021, 47(10): 298
    [10] Redmon J, Farhadi A. YOLOv3: An incremental improvement [J/OL]. arXiv (2018-4-8) [2022-11-11]. https://arxiv.org/abs/1804.02767
    [11] Bochkovskiy A, Wang C Y, Liao H Y M. YOLOv4: Optimal speed and accuracy of object detection [J/OL]. arXiv (2020-4-23) [2022-11-11]. https://arxiv.org/abs/2004.10934
    [12] Ultralytics. yolov5 [DB/OL]. Ultralytics (2023) [2022-11-11]. https://github.com/ultralytics/yolov5
    [13] Zhao R, Liu H, Liu P L, et al. Research on safety helmet detection algorithm based on improved YOLOv5s. J B Univ Aeronaut Astronaut, 2021: 1
    [14] Yue H, Huang X M, Lin M H, et al. Helmet-wearing detection based on improved YOLOv5. Comput Mod, 2022(6): 104
    [15] Huang L Q, Jiang L W, Gao X F. Improved algorithm of YOLOv3 for real-time helmet wear detection in videos. Mod Comput, 2020(30): 32
    [16] Vaswani A, Shazeer N, Parmar N, et al. Attention is all you need // Proceedings of the 31st International Conference on Neural Information Processing Systems. Long Beach, 2017: 6000
    [17] Zhu X K, Lyu S C, Wang X, et al. TPH-YOLOv5: Improved YOLOv5 based on transformer prediction head for object detection on drone-captured scenarios [J/OL]. arXiv (2021-8-26) [2022-11-11]. https://arxiv.org/abs/2108.11539
    [18] He K M, Zhang X Y, Ren S Q, et al. Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE Trans Pattern Anal Mach Intell, 2015, 37(9): 1904 doi: 10.1109/TPAMI.2015.2389824
    [19] Liu S, Qi L, Qin H F, et al. Path aggregation network for instance segmentation // 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, 2018: 8759
    [20] Wojke N, Bewley A, Paulus D. Simple online and realtime tracking with a deep association metric // 2017 IEEE International Conference on Image Processing (ICIP). Beijing, 2017: 3645
    [21] Kalman R E. A new approach to linear filtering and prediction problems. J Basic Eng, 1960, 82(1): 35 doi: 10.1115/1.3662552
    [22] Zheng Z H, Wang P, Liu W, et al. Distance-IoU loss: Faster and better learning for bounding box regression. Proc AAAI Conf Artif Intell, 2020, 34(7): 12993
    [23] Carion N, Massa F, Synnaeve G, et al. End-to-end object detection with transformers // European Conference on Computer Vision. Glasgow, 2020: 213
    [24] Dosovitskiy A, Beyer L, Kolesnikov A, et al. An image is worth 16x16 words: Transformers for image recognition at scale [J/OL]. arXiv (2021-6-3) [2022-11-11]. https://arxiv.org/abs/2010.11929
    [25] Tan M X, Pang R M, Le Q V. EfficientDet: scalable and efficient object detection // 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle, 2020: 10778
    [26] Zheng L, Shen L Y, Tian L, et al. Scalable person re-identification: A benchmark // 2015 IEEE International Conference on Computer Vision (ICCV). Santiago, 2015: 1116
    [27] Milan A, Leal-Taixé L, Reid I, et al. MOT16: A benchmark for multi-object tracking [J/OL]. arXiv (2016-5-3) [2022-11-11]. https://ui.adsabs.harvard.edu/abs/2016arXiv160300831M
    [28] Lv Z X, Wei X, Ma Z G. Improve the lightweight safety helmet detection method of YOLOX. Comput Eng Appl, 2022, 59(1): 61
    [29] Wang J B, Wu Y X. Helmet wearing detection algorithm of improved YOLOv4-tiny. Comput Eng Appl, 2023, 59(4): 183
  • Figures (6) / Tables (4)
    Publication history
    • Received: 2022-11-11
    • Published online: 2023-03-03
