§ Browse Thesis Bibliographic Record
  
System ID  U0002-2307202513302200
DOI  10.6846/tku202500607
Thesis Title (Chinese)  個人防護裝備(PPE)特徵分析與異常檢測
Thesis Title (English)  Feature Analysis and Anomaly Detection of Personal Protective Equipment
Thesis Title (Third Language)
University  Tamkang University (淡江大學)
Department (Chinese)  機械與機電工程學系碩士班
Department (English)  Department of Mechanical and Electro-Mechanical Engineering
Foreign Degree University
Foreign Degree College
Foreign Degree Institute
Academic Year  113
Semester  2
Publication Year  114
Author (Chinese)  許瑞丞
Author (English)  RUI-CHENG XU
Student ID  612370337
Degree  Master's
Language  Traditional Chinese
Second Language
Oral Defense Date  2025-07-03
Number of Pages  59
Committee  Advisor - 王銀添 (ytwang@mail.tku.edu.tw)
Committee Member - 邱銘杰 (mcchiu@gm.ttu.edu.tw)
Committee Member - 朱政安 (168576@mail.tku.edu.tw)
Keywords (Chinese)  個人防護裝備; 物件偵測; 異常偵測
Keywords (English)  Personal Protective Equipment; Object Detection; Anomaly Detection
Keywords (Third Language)
Discipline Classification
Chinese Abstract
In high-risk industrial manufacturing environments, personal protective equipment (PPE) is essential for protecting worker safety. To address the low efficiency, high misjudgment rate, and high cost of traditional manual inspection, this study proposes an automated PPE detection system that combines deep learning with image processing techniques and deploys it on an actual PCB production line. The system uses YOLOv7 as its core model, supplemented by PCA for feature dimensionality reduction and noise removal to improve recognition accuracy. For data preprocessing, class-ratio adjustment and data augmentation (such as scaling and rotation) are used to strengthen the model's learning of minority classes and its generalization ability. In addition, this study designs a complete IoU and confidence comparison mechanism and establishes precise false positive and false negative determination logic. Experimental results show that, after adjustment, the model reaches an mAP@0.5 of up to 0.996 for the anti-static helmet and protective gown classes, with rotation augmentation yielding the most pronounced performance improvement, demonstrating the system's efficiency, stability, and error-control capability in practical applications.
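The abstract above describes flattening image features and using PCA to reduce dimensionality and reject noisy or outlier samples before training. As a minimal sketch only, assuming 64x64 grayscale patches, a 3-component projection, and a reconstruction-error outlier rule (none of which are specified in this record), that step could look like:

```python
import numpy as np
from sklearn.decomposition import PCA

# Assumed input: cropped PPE patches resized to 64x64 grayscale and flattened
# into row vectors; the actual patch size and sample count are not given here.
rng = np.random.default_rng(0)
patches = rng.random((500, 64 * 64))

# Project onto 3 principal components, matching the 3-D PCA visualization
# listed in the figure list (Figure 3.7).
pca = PCA(n_components=3)
coords = pca.fit_transform(patches)

# One possible noise-rejection rule: flag the samples with the largest
# reconstruction error as blurred or outlier features for manual review.
reconstructed = pca.inverse_transform(coords)
errors = np.linalg.norm(patches - reconstructed, axis=1)
outlier_idx = np.argsort(errors)[-10:]  # the 10 worst-reconstructed samples
print("candidate outliers:", outlier_idx)
```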
English Abstract
In high-risk industrial manufacturing environments, personal protective equipment (PPE) 
plays a critical role in ensuring worker safety. To address the inefficiencies, high error rates, and 
high costs associated with traditional manual inspections, this study proposes an automated PPE 
detection system that integrates deep learning and image processing techniques. The system is 
implemented and validated on a PCB production line, utilizing YOLOv7 as the core detection 
model and incorporating PCA for feature dimensionality reduction and noise elimination to 
enhance recognition accuracy. Data preprocessing involves sample ratio adjustment and data 
augmentation methods such as scaling and rotation to improve the model’s ability to learn from 
minority classes and enhance generalization. Additionally, the study establishes a 
comprehensive IoU and confidence comparison mechanism, along with precise false positive 
and false negative determination logic. Experimental results show that the adjusted model 
achieves a maximum mAP@0.5 of 0.996 for detecting anti-static helmets and protective gowns. 
Among the augmentation methods, rotation has the most significant impact on performance 
improvement, demonstrating the system’s robustness and effectiveness in real-world 
applications.
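As a concrete illustration of the IoU and confidence comparison mechanism and the false positive / false negative determination logic mentioned above, the following Python sketch uses assumed thresholds (IoU 0.5, confidence 0.25) and a simple greedy matching rule; the thesis's exact settings and matching procedure are not given in this record.

```python
def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2) in pixels."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


def count_errors(predictions, ground_truths, iou_thr=0.5, conf_thr=0.25):
    """Count false positives and false negatives for one image.

    predictions: list of ((x1, y1, x2, y2), confidence) tuples.
    ground_truths: list of (x1, y1, x2, y2) boxes.
    A ground-truth box with no unmatched confident prediction above the IoU
    threshold counts as a false negative; each leftover confident prediction
    counts as a false positive.
    """
    confident = [p for p in predictions if p[1] >= conf_thr]
    matched = set()
    fn = 0
    for gt in ground_truths:
        candidates = [i for i in range(len(confident)) if i not in matched]
        best = max(candidates, key=lambda i: iou(confident[i][0], gt), default=None)
        if best is None or iou(confident[best][0], gt) < iou_thr:
            fn += 1
        else:
            matched.add(best)
    fp = len(confident) - len(matched)
    return fp, fn
```

For example, count_errors([((10, 10, 50, 50), 0.9)], [(12, 12, 48, 48)]) returns (0, 0) under these assumed thresholds, since the single prediction overlaps its ground-truth box with IoU of about 0.81.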
Third Language Abstract
Table of Contents
Acknowledgements  I
Table of Contents  IV
List of Figures  VII
List of Tables  IX
Chapter 1 Introduction  1
1.1 Research Motivation  1
1.2 Research Objectives  2
1.3 Research Scope  2
1.4 Research Results  3
1.5 Thesis Organization  4
Chapter 2 Review of Existing PPE Detection Techniques and YOLOv7-Related Research  5
2.1 Traditional Image Processing Methods  5
2.1.1 Edge Detection  5
2.1.2 Morphological Processing  6
2.2 Evolution of the YOLO Family of Algorithms  7
2.3 Advantages of YOLOv7 for PPE Detection  8
Chapter 3 Research Method and Implementation of the PPE Detection System for a PCB Production Line  9
3.1 Data Preprocessing of the PCB Production Line Dataset  10
3.2 PCA Feature Analysis  11
3.3 YOLOv7 Dataset Adjustment  14
3.4 YOLOv7 Model Training  15
3.4.1 YOLOv7 Parameter Tuning  15
3.5 YOLOv7 Data Augmentation  16
3.5.1 Zoom In/Out  17
3.5.2 Rotation  19
3.6 Differences between OpenCV Rotation and imgaug Scaling  20
3.6.1 Differences in Augmentation Method  20
3.6.2 Bounding Box Handling  21
3.7 Intersection over Union (IoU) and Confidence Comparison  23
3.7.1 Preprocessing for IoU Comparison  24
3.7.2 False Negative and False Positive Determination Logic  25
Chapter 4 Experimental Results and Abnormal PPE-Wearing Recognition Performance  28
4.1 YOLOv7 Model Training Results  28
4.2 Distribution of Classification Errors and Confusion Matrix Analysis  32
4.3 YOLOv7 Training Results after Data Augmentation  37
4.4 Confusion Matrix Analysis after Data Augmentation  42
4.5 YOLOv7 Detection Results and Misdetection Scenario Analysis  44
4.6 Detection Results and Misdetection Scenario Analysis after Data Augmentation  47
4.7 Comparison of Multi-Class Training and Detection Results  51
4.8 Multi-Class Model Error Distribution and Confusion Matrix Analysis  52
4.9 Multi-Class YOLOv7 Detection Results and Misdetection Scenario Analysis  53
4.10 Detection Results and Misdetection Scenario Analysis of YOLOv7 with Different Weights  54
Chapter 5 Analysis of PPE Anomaly Detection Results and Application Reflections  55
5.1 Summary of Research Results  55
5.2 Practical Application Value and Potential Challenges  55
5.3 Reflections on Model Performance and Misdetection Characteristics  56
5.4 Conclusion  56
References  57

 
List of Figures
Figure 1.1 PPE inspection workflow  1
Figure 2.1 Edge detection result image  6
Figure 2.2 Morphological processing image  6
Figure 3.1 Complete PPE detection flowchart  9
Figure 3.2 PPE data preprocessing flowchart  10
Figure 3.3 PPE detection flowchart  10
Figure 3.4 Production line scene  11
Figure 3.5 Features without and with helmet  11
Figure 3.6 Flattening schematic  12
Figure 3.7 3-D PCA results  12
Figure 3.8 Blurred or outlier features  13
Figure 3.9 Original image before zoom in/out  18
Figure 3.10 Image after zoom in/out  18
Figure 3.11 Original image before rotation  19
Figure 3.12 Image after rotation  20
Figure 3.13 Predicted bounding box coordinate information  24
Figure 3.14 Ground-truth bounding box coordinate information  24
Figure 3.15 Processed predicted bounding box coordinate information  24
Figure 4.2 Confusion matrix at a 0.5:1 ratio  33
Figure 4.3 Monitoring scene with manually annotated classes  34
Figure 4.4 Monitoring scene with YOLOv7-annotated classes  35
Figure 4.8 Confusion matrix at a 0.1:1 ratio with zoom in/out augmentation  42
Figure 4.9 Confusion matrix at a 0.1:1 ratio with rotation augmentation  43

 
List of Tables
Table 3.1 Example ratio ranges  14
Table 3.2 Hardware specifications and hyperparameters  15
Table 3.3 Comparison and error determination logic  26
Table 4.1 Analysis of YOLOv7 training results under different Isolation Gown_on/off ratios  29
Table 4.2 Zoom in/out training results  37
Table 4.3 Rotation training results  37
Table 4.4 Detailed analysis of each metric  40
Table 4.5 FN and FP counts for Helmet at ratios from 0.5 to 0.1  44
Table 4.6 FN and FP counts for Isolation Gown at ratios from 0.5 to 0.1  45
Table 4.7 FN and FP counts for Helmet at ratios from 0.5 to 0.1 after scaling augmentation  47
Table 4.8 FN and FP counts for Helmet at ratios from 0.5 to 0.1 after rotation augmentation  48
Table 4.9 FN and FP counts for Isolation Gown at ratios from 0.5 to 0.1 after scaling augmentation  49
Table 4.10 FN and FP counts for Isolation Gown at ratios from 0.5 to 0.1 after rotation augmentation  50
Table 4.11 YOLOv7 training results for Helmet and Isolation Gown at different ratios  51
Table 4.12 FN and FP counts for Helmet and Isolation Gown at ratios from 0.5 to 0.1  53
Table 4.13 FN and FP counts for Helmet and Isolation Gown at ratios 0.1:1 and 0.3:1  54





References
[1]	Occupational Safety and Health Administration, "Personal Protective Equipment," OSHA 3151-12R, U.S. Dept. of Labor, Washington, DC, USA, 2004.
[2]	R. S. Khandpur, Printed Circuit Boards: Design, Fabrication, and Assembly. McGraw-Hill, 2006.
[3]	International Organization for Standardization, "Occupational health and safety management systems – Requirements with guidance for use," ISO 45001:2018, Geneva, Switzerland, 2018.
[4]	I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning. Cambridge, MA, USA: MIT Press, 2016.
[5]	C.-Y. Wang, A. Bochkovskiy, and H.-Y. M. Liao, "YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors," arXiv preprint arXiv:2207.02696, 2022.
[6]	C. Shorten and T. M. Khoshgoftaar, "A survey on image data augmentation for deep learning," Journal of Big Data, vol. 6, no. 1, pp. 1-48, 2019.
[7]	N. V. Chawla, K. W. Bowyer, L. O. Hall, and W. P. Kegelmeyer, "SMOTE: Synthetic minority over-sampling technique," Journal of Artificial Intelligence Research, vol. 16, pp. 321-357, 2002.
[8]	K. Pearson, "LIII. On lines and planes of closest fit to systems of points in space," The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, vol. 2, no. 11, pp. 559-572, 1901.
[9]	M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman, "The PASCAL Visual Object Classes (VOC) challenge," International Journal of Computer Vision, vol. 88, no. 2, pp. 303-338, 2010.
[10]	T. Fawcett, "An introduction to ROC analysis," Pattern Recognition Letters, vol. 27, no. 8, pp. 861-874, 2006.
[11]	J. Canny, "A computational approach to edge detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PAMI-8, no. 6, pp. 679-698, 1986.
[12]	R. Adams and L. Bischof, "Seeded region growing," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 16, no. 6, pp. 641-647, 1994.
[13]	A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Advances in Neural Information Processing Systems, vol. 25, 2012.
[14]	R. C. Gonzalez, Digital Image Processing. Pearson Education India, 2009.
[15]	X. Ma, "CS131: Edge Detection," Xiaoma's Blog, 2017. [Online]. Available: https://xmfbit.github.io/2017/01/24/cs131-edge-detection/
[16]	J. Serra, "Mathematical morphology," in Encyclopedia of Mathematical Geosciences. Springer, 2023, pp. 820-835.
[17]	S. Fang, B. Zhang, and J. Hu, "Improved Mask R-CNN multi-target detection and segmentation for autonomous driving in complex scenes," Sensors, vol. 23, no. 8, p. 3853, 2023.
[18]	D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91-110, 2004.
[19]	A. Rosebrock, "OpenCV morphological operations," PyImageSearch, 2021. [Online]. Available: https://pyimagesearch.com/2021/04/28/opencv-morphological-operations/
[20]	J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, "You only look once: Unified, real-time object detection," in Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR), 2016, pp. 779-788.
[21]	R. Girshick, J. Donahue, T. Darrell, and J. Malik, "Rich feature hierarchies for accurate object detection and semantic segmentation," in Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR), 2014, pp. 580-587.
[22]	C. Li et al., "YOLOv6: A single-stage object detection framework for industrial applications," arXiv preprint arXiv:2209.02976, 2022.
[23]	A. Bochkovskiy, C.-Y. Wang, and H.-Y. M. Liao, "YOLOv4: Optimal speed and accuracy of object detection," arXiv preprint arXiv:2004.10934, 2020.
[24]	G. Jocher, A. Chaurasia, and J. Qiu, "YOLOv8: Cutting-edge object detection, segmentation, and classification model," Ultralytics, 2023. [Online]. Available: https://docs.ultralytics.com
[25]	P.-C. Kuo et al., "Recalibration of deep learning models for abnormality detection in smartphone-captured chest radiograph," npj Digital Medicine, vol. 4, no. 1, p. 25, 2021.
Full-Text Usage Authorization
National Central Library
The author grants the National Central Library a royalty-free license; the bibliographic record and electronic full text are made publicly available on the Internet immediately after the authorization form is submitted.
On campus
The printed thesis is made publicly available on campus immediately.
The author agrees to make the electronic full text publicly available worldwide.
The electronic thesis is made publicly available on campus immediately.
Off campus
The author agrees to grant authorization to database vendors.
The electronic thesis is made publicly available off campus immediately.
