§ Browse Thesis Bibliographic Record
  
System ID	U0002-2308202414071600
DOI	10.6846/tku202400708
Title (Chinese)	基於FPGA車牌辨識AI模型之開發
Title (English)	Development of an FPGA-based AI Model for License Plate Recognition
Title (third language)
University	淡江大學 (Tamkang University)
Department (Chinese)	機械與機電工程學系碩士班
Department (English)	Department of Mechanical and Electro-Mechanical Engineering
Foreign degree school
Foreign degree college
Foreign degree institute
Academic year	112
Semester	2
Publication year	113
Author (Chinese)	李厚誼
Author (English)	HO-I LEE
Student ID	611370056
Degree	Master's
Language	Traditional Chinese
Second language
Oral defense date	2024-07-04
Pages	90
Committee	Advisor - 王銀添 (ytwang@mail.tku.edu.tw)
Committee member - 許閔傑
Committee member - 吳志清
Keywords (Chinese)	自動車牌辨識 (Automatic License Plate Recognition)
邊緣運算 (Edge Computing)
電腦視覺 (Computer Vision)
Keywords (English)	Automatic Number Plate Recognition
Edge Computing
Computer Vision
Third-language keywords
Subject classification
Chinese Abstract
This thesis develops an AI model for license plate number recognition on an FPGA-based edge computing device. The recognition task is divided into three stages — license plate detection, plate orientation alignment, and plate character recognition — with a suitable AI model designed for each. In the first stage, the plate's position in the image is detected and the plate region is cropped; this study uses the YOLOv4-tiny model, which suits edge computing devices. The second stage aligns the skewed plate so that it appears frontally in the image: the four corner points of the plate are located first, and a perspective projection algorithm then rectifies the skewed orientation to a frontal one. This thesis proposes a convolutional pose machine (CPM) model for detecting the four plate corners. The third stage recognizes the plate characters; the existing LPRNet model from the literature is modified for this task, chiefly by using operators supported by the edge computing device and adjusting the model's complexity. Finally, the three stage models are integrated and implemented on a Xilinx FPGA edge computing device, achieving the goal of lightweight computation. All of the above items were tested on an actual system, and the test results were analyzed.
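The second-stage rectification described above can be sketched as follows. This is a minimal illustration, not the thesis's implementation: the corner coordinates are invented, the 94×24 target size is an assumption (it matches the plate input size commonly used with LPRNet), and the perspective (homography) matrix is solved directly with NumPy via the standard DLT linear system.

```python
import numpy as np

def homography_from_corners(src, dst):
    """Solve the 3x3 perspective matrix H mapping each src corner to the
    matching dst corner: 8 linear equations in 8 unknowns, with the
    bottom-right entry of H fixed to 1 (direct linear transform)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        b.extend([u, v])
    h = np.linalg.solve(np.asarray(A, float), np.asarray(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_point(H, pt):
    """Apply homography H to a single (x, y) point (homogeneous divide)."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)

# Hypothetical skewed plate corners (as a corner detector might return),
# mapped to a frontal 94x24 rectangle: TL, TR, BR, BL order.
skewed = [(12, 8), (130, 20), (126, 52), (8, 44)]
frontal = [(0, 0), (94, 0), (94, 24), (0, 24)]
H = homography_from_corners(skewed, frontal)
print(warp_point(H, skewed[0]))  # top-left corner maps to ~(0, 0)
```

An image-warping library (e.g. OpenCV's `warpPerspective`) would then resample the whole plate region with this H; only the point mapping is shown here.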
English Abstract
This study developed an AI model for license plate recognition on FPGA-based edge computing devices. The license plate recognition task was divided into three stages — license plate detection, license plate orientation alignment, and license plate number recognition — with an appropriate AI model designed for each stage. In the first stage, to detect the position of the license plate in the image and crop the plate region, this study used the YOLOv4-tiny model, which is suitable for edge computing devices. The second stage aligned the skewed orientation of the license plate to present it frontally in the image. The correction method first found the four corner points of the license plate, then used a perspective projection algorithm to rectify the skewed orientation to a frontal one. This thesis proposed a convolutional pose machine model to detect the four corner points of the license plate. The third stage recognized the license plate numbers. This research modified the existing LPRNet model from the literature to perform this task; the modifications focused on using operators supported by edge computing devices and adjusting the model complexity. Finally, the AI models from the three stages were integrated and implemented on a Xilinx FPGA edge computing device, achieving the goal of lightweight computation. All of the above work items were tested for functionality on an actual system, and the test results were analyzed.
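The third-stage recognizer's output decoding can be illustrated with a minimal CTC-style greedy decoder of the kind used with LPRNet-style models: collapse the per-timestep argmax sequence by merging consecutive duplicates and dropping the blank class. The alphabet and logits below are a small invented example, not the thesis's full Taiwanese-plate character set.

```python
import numpy as np

CHARS = ["-", "0", "1", "2", "7", "A", "B", "C"]  # index 0 acts as the CTC blank
BLANK = 0

def greedy_decode(logits: np.ndarray) -> str:
    """Greedy CTC decoding: take the argmax class at each timestep,
    merge runs of the same class, then remove blanks."""
    best = logits.argmax(axis=1)
    out, prev = [], BLANK
    for idx in best:
        if idx != prev and idx != BLANK:
            out.append(CHARS[idx])
        prev = idx
    return "".join(out)

# Toy logits: 6 timesteps over the 8-class alphabet, peaking at
# the sequence A, A, blank, 7, 7, 0.
T = np.full((6, len(CHARS)), -1.0)
for t, c in enumerate([5, 5, 0, 4, 4, 1]):
    T[t, c] = 1.0
print(greedy_decode(T))  # "A70"
```

Real LPRNet implementations run this over a (batch, classes, timesteps) logit tensor per plate; the collapse-and-drop-blank logic is the same.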
Third-language abstract
Thesis Table of Contents
Contents
Abstract	I
Acknowledgments	III
Contents	IV
List of Figures	VII
List of Tables	XI
Chapter 1 Introduction	1
1.1 Research motivation	1
1.2 Research objectives	1
1.3 Edge-computing license plate recognition	2
1.4 Literature review	3
1.4.1 Literature on license plate recognition	3
1.4.2 Literature on license plate skew correction	3
1.4.3 Literature on edge computing devices	4
1.5 Research scope	4
1.6 Thesis organization	4
Chapter 2 License Plate Dataset Collection and Analysis	6
2.1 The CCPD license plate dataset	6
2.2 The Taiwanese license plate dataset	8
2.2.1 Dataset collection	8
2.2.2 License plate annotation	9
2.2.3 Dataset augmentation	10
2.2.4 Test dataset	13
2.2.5 Taiwanese license plate recognition test	14
2.3 Principal component analysis of the data	15
2.3.1 Re-augmenting the data	17
2.3.2 Diversity of the augmented data	20
Chapter 3 License Plate Detection and Skew Correction	21
3.1 License plate localization algorithm	21
3.2 License plate skew correction algorithm	22
3.2.1 Hourglass network	23
3.2.2 Convolutional pose machine network	24
3.3 License plate corner detection	26
3.3.1 CPM network model design	26
3.3.2 Input image size	28
3.3.3 Confidence map and loss function design	28
3.4 CPM model training	29
3.4.1 CPM training procedure	29
3.4.2 Test accuracy metrics	32
3.4.3 Inspecting feature maps	32
3.5 CPM test results and analysis	33
3.5.1 Results on the test dataset	34
3.5.2 Principal component analysis of the test data	36
3.5.3 KNN analysis of the augmented data	44
3.5.4 Analysis of outlier plate data	44
3.5.5 Analysis of plates that fail the PCK test	45
3.6 CPM model compression and ablation	48
3.6.1 Reducing the number of pooling and convolution layers	48
3.6.2 Reducing the number of CPM stages and removing further convolution layers	49
3.6.3 Reducing convolution kernel size	50
3.6.4 Reducing the number of CPM output channels	53
3.7 Testing the rotation angle range	57
3.8 PCK threshold test	60
3.9 Optimizer comparison	62
Chapter 4 License Plate Character Recognition Module	63
4.1 Recognition model dataset	63
4.2 LPRNet network architecture	65
4.2 LPRNet architecture planning	66
4.2.1 Modifying the input image size	66
4.2.2 Replacing unsupported network layers	67
4.2.3 Feature extraction and merging operations	68
4.3 Searching character classes	70
4.4 LPRNet architecture modifications	72
4.5 LPRNet model training and testing	72
4.6 LPRNet model cross-validation	74
4.7 LPRNet inference speed test	75
Chapter 5 Model Compression and Edge Computing	76
5.1 The Xilinx edge computing device	76
5.2 The Xilinx Vitis-AI framework	77
5.3 Model compression for edge computing	78
5.4 Vitis-AI quantization	78
5.5 Vitis AI NumPy/PyTorch input format	79
5.6 Quantized model speed and accuracy tests	80
Chapter 6 Results and Discussion	82
6.1 Research results	82
6.2 Future research directions	82
References	84

 
List of Figures
Figure 1.1 License plate recognition pipeline	3
Figure 2.1 Testing the CCPD dataset directly with the LPRNet pretrained model	7
Figure 2.2 Test results of the LPRNet model trained on 320,000 CCPD plate images	7
Figure 2.3 Test of the LPRNet model trained on 180,000 selected CCPD plate images	8
Figure 2.4 Red dots marking the four annotated corner points of the plate	11
Figure 2.5 Blue simulation points generated randomly between the two green boxes	12
Figure 2.6 Slight skew distortion of the plate characters after transformation	12
Figure 2.7 Implausible image produced by the perspective projection transformation	12
Figure 2.8 Plate images cropped for augmentation	13
Figure 2.9 Gaussian noise code	13
Figure 2.10 Augmented plate images with Gaussian noise added	13
Figure 2.11 Poor recognition caused by blurred and heavily skewed plates	14
Figure 2.12 Distribution of the original and test data feature vectors on the 2D principal component plane	16
Figure 2.13 Distribution of the original, augmented, and test data feature vectors on the 2D principal component plane	16
Figure 2.14 Feature vector distribution on the 2D principal component plane, with test-data plate numbers labeled	17
Figure 2.15 Five plate feature vectors appearing as outliers on the 2D principal component plane	17
Figure 2.16 Example of adjusting image brightness and contrast	18
Figure 2.17 Additional brightness and contrast adjustments	19
Figure 3.1 Plate region framed by YOLO (left); enlarged framed plate region (right)	22
Figure 3.2 Skewed plate image (left); image rectified by perspective projection (right)	22
Figure 3.3 Hourglass network architecture [39]	23
Figure 3.4 Example of plate alignment with an hourglass network [39]	24
Figure 3.5 CPM network architecture [1]	26
Figure 3.6 CPM architecture modified in this thesis	28
Figure 3.7 Convergence of total training and validation loss	30
Figure 3.8 Convergence of per-stage training loss	31
Figure 3.9 Convergence of total training and validation loss (from epoch 20)	31
Figure 3.10 Convergence of per-stage training loss (from epoch 20)	32
Figure 3.11 Hook code example	33
Figure 3.12 Per-stage feature maps stacked on the original image (Stage 1)	34
Figure 3.13 Per-stage feature maps stacked on the original image (Stage 2)	35
Figure 3.14 Per-stage feature maps stacked on the original image (Stage 3)	35
Figure 3.15 Per-stage feature maps stacked on the original image (Stage 4)	35
Figure 3.16 Per-stage feature maps stacked on the original image (Stage 5)	35
Figure 3.17 Per-stage feature maps stacked on the original image (Stage 6)	36
Figure 3.18 Per-stage feature maps stacked on the original image (Stage 7)	36
Figure 3.19 Feature vectors projected on the 2D principal component plane (channel 1)	37
Figure 3.20 Feature vectors projected on the 2D principal component plane, with plate numbers (channel 1)	38
Figure 3.21 Plate images of 2D principal component outliers (channel 1)	38
Figure 3.22 Feature vectors projected on the 2D principal component plane (channel 2)	39
Figure 3.23 Feature vectors projected on the 2D principal component plane, with plate numbers (channel 2)	39
Figure 3.24 2D principal component outliers (channel 2)	40
Figure 3.25 Feature vectors projected on the 2D principal component plane (channel 3)	40
Figure 3.26 Feature vectors projected on the 2D principal component plane, with plate numbers (channel 3)	41
Figure 3.27 Plate images of 2D principal component outliers (channel 3)	41
Figure 3.28 Feature vectors projected on the 2D principal component plane (channel 4)	42
Figure 3.29 Feature vectors projected on the 2D principal component plane, with plate numbers (channel 4)	42
Figure 3.30 Plate images of 2D principal component outliers (channel 4)	43
Figure 3.31 Plates appearing as outliers in most channels	44
Figure 3.32 Four plate corners of outlier MFE-2767 detected by the CPM model	45
Figure 3.33 Four plate corners of outlier 752-HYD detected by the CPM model	45
Figure 3.34 Plates exceeding the PCK threshold	46
Figure 3.35 Four plate corners of outlier MMZ-9966 detected by the CPM model	46
Figure 3.36 MMZ-9966 plate after perspective projection using the detected corners	47
Figure 3.37 Four plate corners of outlier RDL-7527 detected by the CPM model	47
Figure 3.38 RDL-7527 plate after perspective projection using the detected corners	47
Figure 3.39 Four plate corners of outlier G3P-698 detected by the CPM model	47
Figure 3.40 G3P-698 plate after perspective projection using the detected corners	48
Figure 3.41 Reducing the stages from 7 to 2 and further removing the first 9×9 convolution layer of CPMStage_x	50
Figure 3.42 Receptive field calculation formula	51
Figure 3.43 Feature map for a 30×30 receptive field	51
Figure 3.44 Network architecture with the kernel size changed to 7×7	52
Figure 3.45 kernel_size = 7×7; feature map for a 24×24 receptive field	52
Figure 3.46 Network architecture with the kernel size changed to 5×5	53
Figure 3.47 kernel_size = 5×5; feature map for an 18×18 receptive field	53
Figure 3.48 CPMStage_x output distribution	55
Figure 3.49 Computing APoZ positions	56
Figure 3.50 Computing APoZ positions	56
Figure 3.51 APoZ analysis after pruning	57
Figure 3.52 Final model test examples	59
Figure 3.53 Model tests at different angles	60
Figure 3.54 PCK statistics of the test images	61
Figure 3.55 Statistics relating misrecognized images to their PCK values	61
Figure 4.1 Finding the center point	64
Figure 4.2 Converting to offsets relative to the center point	65
Figure 4.3 Code for adding Gaussian noise and brightness changes	65
Figure 4.4 LPRNet backbone architecture and small basic block (edge finding)	67
Figure 4.5 Replacing MaxPool3D with MaxPool2D	68
Figure 4.6 License plate character encoding	71
Figure 4.7 Greedy search (Greedy_Decode_Eval)	72
Figure 5.1 Innodisk EXMU-X261 [61]	77
Figure 5.2 Xilinx Vitis-AI framework [63][64]	78
Figure 6.1 Multi-angle tracking and recognition [67]	83

List of Tables
Table 2.1 Accuracy comparison across different test sets	8
Table 2.2 Classification tests for different values of N	20
Table 3.1 Computer hardware specifications	29
Table 3.2 Software environment versions	30
Table 3.3 Model training parameters	30
Table 3.4 PCK values of the original architecture	34
Table 3.5 Classification tests for different values of N	44
Table 3.6 Corner detection and plate recognition for outlier plates	46
Table 3.7 PCK values by number of retained stages	49
Table 3.8 kernel_size = 9×9; PCK values	50
Table 3.9 kernel_size = 7×7; PCK values	52
Table 3.10 kernel_size = 5×5; PCK values	53
Table 3.11 PCK values after APoZ-based pruning	57
Table 3.12 Comparison of PCK values of images mispredicted by LPR	61
Table 4.1 LPRNet model [11]	66
Table 4.2 Operators supported by the X261 and their constraints, per the Vitis AI user guide [57]	68
Table 4.3 LPRNet model training and test results	74
Table 4.4 Cross-validation results	75
Table 4.5 Comparison of average inference speed, average accuracy, and FLOPs of LPR models	75
Table 5.1 Effect of adding the CPM model on LPR recognition accuracy	81
Table 5.2 Runtime of each stage's model (unit: sec)	81
Table 5.3 PCK accuracy comparison	81
Table 5.4 LPR accuracy comparison	81
References
[1]	S.-E. Wei, V. Ramakrishna, T. Kanade, and Y. Sheikh, "Convolutional pose machines," in Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, 2016, pp. 4724-4732.
[2]	Field programmable gate array (FPGA), https://en.wikipedia.org/wiki/Field-programmable_gate_array. (Accessed on June 1, 2024)
[3]	Xilinx KV260, https://xilinx.github.io/kria-apps-docs/kv260/2022.1/build/html/docs/nlp-smartvision/docs/hw_arch_accel_nlp.html. (Accessed on June 1, 2024)
[4]	The Xilinx Deep Learning Processing Unit (DPU), https://docs.xilinx.com/r/3.2-English/pg338-dpu/Introduction?tocId=iBBrgQ7pinvaWB_KbQH6hQ. (Accessed on June 1, 2024)
[5]	林宏軒, "DNN車牌辨識於智慧城市的應用 (Application of DNN license plate recognition in smart cities)," 電腦與通訊, no. 180, pp. 41-44, 2020.
[6]	J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, "You only look once: Unified, real-time object detection," in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 779-788.
[7]	X. Li, J. Pan, F. Xie, J. Zeng, Q. Li, X. Huang, D. Liu, and X. Wang, “Fast and accurate green pepper detection in complex backgrounds via an improved Yolov4-tiny model,” Computers and Electronics in Agriculture, vol. 191, pp. 106503, 2021.
[8]	E. R. Lee, P. K. Kim, and H. J. Kim, "Automatic recognition of a car license plate using color image processing," in IEEE Proceedings of 1st international conference on image processing, 1994, vol. 2, pp.301-305.
[9]	T. Naito, T. Tsukada, K. Yamada, K. Kozuka, and S. Yamamoto, "Robust license-plate recognition method for passing vehicles under outside environment," IEEE transactions on vehicular technology, vol. 49, no. 6, pp. 2309-2319, 2000.
[10]	S. Du, M. Ibrahim, M. Shehata, and W. Badawy, "Automatic license plate recognition (ALPR): A state-of-the-art review," IEEE Transactions on circuits and systems for video technology, vol. 23, no. 2, pp. 311-325, 2012.
[11]	S. Zherzdev and A. Gruzdev, "LPRNet: License plate recognition via deep neural networks," arXiv preprint arXiv:1806.10447, 2018.
[12]	D. Wang, Y. Tian, W. Geng, L. Zhao, and C. Gong, "LPR-Net: Recognizing Chinese license plate in complex environments," Pattern Recognition Letters, vol. 130, pp. 148-156, 2020.
[13]	J. Shashirangana, H. Padmasiri, D. Meedeniya, and C. Perera, "Automated license plate recognition: a survey on methods and techniques," IEEE Access, vol. 9, pp. 11203-11225, 2020
[14]	M. M. Khan, M. U. Ilyas, I. R. Khan, S. M. Alshomrani, and S. Rahardja, "A review of license plate recognition methods employing neural networks," IEEE Access, 2023.
[15]	Recurrent Neural Network, website https://en.wikipedia.org/wiki/Recurrent_neural_network. (accessed on July 1, 2023)
[16]	J. Špaňhel, J. Sochor, R. Juránek, and A. Herout, "Geometric alignment by deep learning for recognition of challenging license plates," in 2018 21st International Conference on Intelligent Transportation Systems (ITSC), 2018: IEEE, pp. 3524-3529.
[17]	A. Newell, K. Yang, and J. Deng, "Stacked hourglass networks for human pose estimation," in Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part VIII 14, 2016: Springer, pp. 483-499.
[18]	D. Osokin, “Global context for convolutional pose machines,” arXiv preprint arXiv:1906.04104, 2019.
[19]	X. Nie, J. Feng, J. Zhang, and S. Yan, "Single-stage multi-person pose machines," in Proceedings of the IEEE/CVF international conference on computer vision, 2019, pp. 6951-6960.
[20]	K. Wang, L. Lin, C. Jiang, C. Qian, and P. Wei, "3D human pose machines with self-supervised learning," IEEE transactions on pattern analysis and machine intelligence, vol. 42, no. 5, pp. 1069-1082, 2019.
[21]	F. Gong, Y. Ma, P. Zheng, and T. Song, “A deep model method for recognizing activities of workers on offshore drilling platform by multistage convolutional pose machine,” Journal of Loss Prevention in the Process Industries, vol. 64, pp. 104043, 2020.
[22]	T. Zhang, H. Lin, Z. Ju, and C. Yang, “Hand Gesture recognition in complex background based on convolutional pose machine and fuzzy Gaussian mixture models,” International Journal of Fuzzy Systems, vol. 22, no. 4, pp. 1330-1341, 2020.
[23]	N. Santavas, I. Kansizoglou, L. Bampis, E. Karakasis, and A. Gasteratos, “Attention! a lightweight 2d hand pose estimation approach,” IEEE Sensors Journal, vol. 21, no. 10, pp. 11488-11496, 2020.
[24]	T. Pan, Z. Wang, and Y. Fan, “Optimized convolutional pose machine for 2D hand pose estimation,” Journal of Visual Communication and Image Representation, vol. 83, pp. 103461, 2022.
[25]	Z. Cao, T. Simon, S.-E. Wei, and Y. Sheikh, "Realtime multi-person 2d pose estimation using part affinity fields," in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 7291-7299.
[26]	Xilinx Vitis-AI, website https://www.xilinx.com/products/design-tools/vitis/Vitis-AI.html. (accessed on March 1, 2023)
[27]	CCPD, website https://github.com/detectRecog/CCPD (accessed on 6/6/2024)
[28]	Z. Xu et al., "Towards end-to-end license plate detection and recognition: A large dataset and baseline," in Proceedings of the European conference on computer vision (ECCV), 2018, pp. 255-271.
[29]	LPRNet_Pytorch github, https://github.com/sirius-ai/LPRNet_Pytorch.
[30]	https://blog.csdn.net/qq_38253797/article/details/125042833.
[31]	ImgAug, website https://github.com/aleju/imgaug
[32]	LabelMe, website https://github.com/labelmeai/labelme (accessed on May 28, 2024)
[33]	C. Li, W. Liu, R. Guo, X. Yin, K. Jiang, Y. Du, Y. Du, L. Zhu, B. Lai, and X. Hu, “PP-OCRv3: More attempts for the improvement of ultra lightweight OCR system,” arXiv preprint arXiv:2206.03001, 2022.
[34]	PaddleOCR, website https://github.com/PaddlePaddle/PaddleOCR (accessed on 6/6/2024)
[35]	S. Wold, K. Esbensen, and P. Geladi, “Principal component analysis,” Chemometrics and intelligent laboratory systems, vol. 2, no. 1-3, pp. 37-52, 1987.
[36]	O. Rodionova, S. Kucheryavskiy, and A. Pomerantsev, “Efficient tools for principal component analysis of complex data—A tutorial,” Chemometrics and Intelligent Laboratory Systems, vol. 213, pp. 104304, 2021.
[37]	Z. Jiang, L. Zhao, S. Li, and Y. Jia, "Real-time object detection method based on improved YOLOv4-tiny," arXiv preprint arXiv:2011.04244, 2020.
[38]	K. Wang, B. Fang, J. Qian, S. Yang, X. Zhou, and J. Zhou, "Perspective transformation data augmentation for object detection," IEEE Access, vol. 8, pp. 4935-4943, 2019.
[39]	J. Špaňhel, J. Sochor, R. Juránek, and A. Herout, "Geometric alignment by deep learning for recognition of challenging license plates," in 2018 21st International Conference on Intelligent Transportation Systems (ITSC), 2018: IEEE, pp. 3524-3529.
[40]	A. Newell, K. Yang, and J. Deng, "Stacked hourglass networks for human pose estimation," in Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part VIII 14, 2016: Springer, pp. 483-499.
[41]	Y. Ying, W. Shichuan, and Z. Wei, "Detection of the Bolt Loosening Angle through Semantic Key Point Extraction Detection by Using an Hourglass Network," Structural Control and Health Monitoring, vol. 2023, 2023.
[42]	A. Hrovatič, P. Peer, V. Štruc, and Ž. Emeršič, "Efficient ear alignment using a two‐stack hourglass network," IET Biometrics, vol. 12, no. 2, pp. 77-90, 2023.
[43]	D. Osokin, “Global context for convolutional pose machines,” arXiv preprint arXiv:1906.04104, 2019.
[44]	X. Nie, J. Feng, J. Zhang, and S. Yan, "Single-stage multi-person pose machines," in Proceedings of the IEEE/CVF international conference on computer vision, 2019, pp. 6951-6960.
[45]	K. Wang, L. Lin, C. Jiang, C. Qian, and P. Wei, "3D human pose machines with self-supervised learning," IEEE transactions on pattern analysis and machine intelligence, vol. 42, no. 5, pp. 1069-1082, 2019.
[46]	F. Gong, Y. Ma, P. Zheng, and T. Song, “A deep model method for recognizing activities of workers on offshore drilling platform by multistage convolutional pose machine,” Journal of Loss Prevention in the Process Industries, vol. 64, pp. 104043, 2020.
[47]	T. Zhang, H. Lin, Z. Ju, and C. Yang, “Hand Gesture recognition in complex background based on convolutional pose machine and fuzzy Gaussian mixture models,” International Journal of Fuzzy Systems, vol. 22, no. 4, pp. 1330-1341, 2020.
[48]	N. Santavas, I. Kansizoglou, L. Bampis, E. Karakasis, and A. Gasteratos, “Attention! a lightweight 2d hand pose estimation approach,” IEEE Sensors Journal, vol. 21, no. 10, pp. 11488-11496, 2020.
[49]	T. Pan, Z. Wang, and Y. Fan, “Optimized convolutional pose machine for 2D hand pose estimation,” Journal of Visual Communication and Image Representation, vol. 83, pp. 103461, 2022.
[50]	Z. Cao, T. Simon, S.-E. Wei, and Y. Sheikh, "Realtime multi-person 2d pose estimation using part affinity fields," in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 7291-7299.
[51]	V. Ramakrishna, D. Munoz, M. Hebert, J. Andrew Bagnell, and Y. Sheikh, "Pose machines: Articulated pose estimation via inference machines," in Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part II 13, 2014: Springer, pp. 33-47.
[52]	Frank Odom, How to Use PyTorch Hooks, website https://medium.com/the-dl/how-to-use-pytorch-hooks-5041d777f904. (accessed on May 28, 2024)
[53]	PyTorch hooks, website https://pytorch.org/docs/stable/generated/torch.Tensor.register_hook.html (accessed on May 28, 2024)
[54]	Recurrent Neural Network, website https://en.wikipedia.org/wiki/Recurrent_neural_network. (accessed on July 1, 2023)
[55]	sirius-ai/LPRNet_Pytorch: PyTorch implementation of LPRNet, a high-performance and lightweight license plate recognition framework, website https://github.com/sirius-ai/LPRNet_Pytorch?tab=readme-ov-file (Accessed on June 1, 2023)
[56]	License plate detection and recognition with YOLOv5 and LPRNet (CCPD dataset), website https://github.com/HuKai97/YOLOv5-LPRNet-Licence-Recognition (accessed on June 1, 2023)
[57]	Vitis AI User Guide - https://docs.amd.com/r/1.4.1-English/ug1414-vitis-ai/Currently-Supported-Operators (Accessed on June 1, 2023)
[58]	The Xilinx Deep Learning Processing Unit (DPU), https://docs.xilinx.com/r/3.2-English/pg338-dpu/Introduction?tocId=iBBrgQ7pinvaWB_KbQH6hQ. (Accessed on June 1, 2023)
[59]	Kria KV260 Vision AI Starter Kit, https://www.xilinx.com/products/som/kria/kv260-vision-starter-kit.html. (accessed on March 1, 2023)
[60]	InnoDisk EXMU-X261, website https://www.innodisk.com/en/products/embedded-peripheral/fpga/exmu-x261 (accessed on March 1, 2023)
[61]	Innodisk EV2U Camera Module, website https://www.innodisk.com/epaper/innonews/202211-camera-module/cht/index.html. (accessed on July 1, 2023)
[62]	PyTorch, website https://github.com/pytorch/pytorch (accessed on July 9, 2023)
[63]	Xilinx Vitis-AI, website https://www.xilinx.com/products/design-tools/vitis/ (accessed on July 9, 2023)
[64]	翁瑞宏, Application of Edge Computing Technology to a Lightweight Automatic License Plate Number Recognition System, Master's thesis, Department of Mechanical and Electro-Mechanical Engineering, Tamkang University, 2023.
[65]	vaiGO GitHub, website https://github.com/InnoIPA/vaiGO/tree/master (accessed on July 9, 2023)
[66]	dpu-sc GitHub, website https://github.com/InnoIPA/dpu-sc (accessed on July 9, 2023)
[67]	F. Alim, E. Kavakli, S. B. Okcu, E. Dogan, and C. Cigla, "Simultaneous license plate recognition and face detection application at the edge," in AI and Optical Data Sciences IV, 2023, vol. 12438: SPIE, pp. 232-243.
[68]	YOLOv4-tiny, website https://github.com/yss9701/Ultra96-Yolov4-tiny-and-Yolo-Fastest (accessed on July 1, 2024)
Thesis Full-Text Access Permissions
National Central Library
Does not consent to royalty-free licensing to the National Central Library
On campus
Print copy available on campus immediately
Consents to worldwide public release of the electronic full text
Electronic thesis available on campus immediately
Off campus
Consents to licensing
Electronic thesis available off campus immediately
