§ Browse Thesis Bibliographic Record
System ID	U0002-0809202017135000
DOI	10.6846/TKU.2020.00186
Title (Chinese)	基於MobileNet之輕量化即時多物件追蹤設計
Title (English)	MobileNet-Based Lightweight Real-Time Multi-object Tracking Design
Title (third language)
Institution	Tamkang University (淡江大學)
Department (Chinese)	電機工程學系機器人工程碩士班
Department (English)	Master's Program in Robotics Engineering, Department of Electrical and Computer Engineering
Foreign degree: school name
Foreign degree: college name
Foreign degree: institute name
Academic year	108
Semester	2
Publication year	109 (2020)
Student (Chinese)	蘇煜凱
Student (English)	Yu-Kai Su
Student ID	607470035
Degree	Master's
Language	Traditional Chinese
Second language
Defense date	2020-07-17
Thesis pages	53
Committee	Advisor - 蔡奇謚
	Co-advisor - 許駿飛
	Member - 許陳鑑
	Member - 李世安
	Member - 蔡奇謚
Keywords (Chinese)	深度學習 (Deep Learning)
	多物件追蹤 (Multi-object Tracking)
	Mobilenet-SSDv2
	One-Shot Tracking By Detection
Keywords (English)	Deep Learning
	Multi-Object Tracking
	Mobilenet-SSDv2
	One-Shot Tracking By Detection
Keywords (third language)
Subject classification
Abstract (Chinese)
Multi-object tracking is one of the most challenging and important research topics in computer vision. Although many multi-object tracking methods have been proposed in the literature, most of them cannot achieve real-time computation, especially on embedded platforms with limited computing resources. To solve this problem, this thesis proposes a lightweight real-time multi-object tracking system designed around MobileNet, which effectively increases the processing speed of multi-object tracking. The proposed system adopts the One-Shot Tracking By Detection architecture and is composed of a multi-object tracking model and a post-processing module. We originally used HarDNet as the multi-object tracking model but found its computing speed on the embedded platform unsatisfactory, so we adopted the Mobilenet-SSDv2 previously proposed by our laboratory as the detection network of the tracking model to output the detection information of the targets in the image, and further improved its anchor box settings to raise the tracking accuracy. In the post-processing module, we propose a simple filtering method to replace the Kalman filter used in existing methods. Although this method slightly lowers the tracking accuracy, it greatly increases the computing speed. Finally, the post-processing module composed of the simple filtering and Hungarian matching associates the detection information with the tracking information to complete the multi-object tracking task. Experimental results show that the improvements to the anchor boxes and the Mobile FPN in the tracking model raise the overall tracking accuracy by 9.8% MOTA and 9.4% IDF1, yielding 59.1% MOTA and 52.6% IDF1 on MOT16. In the computing-speed tests on a desktop computer and the embedded platform, the proposed system reaches 41.4 FPS and 10.7 FPS, respectively. After the Kalman filter is replaced by the simple filtering, the tracking accuracy drops to 58.1% MOTA and 47.7% IDF1, while the processing speeds rise to 54.2 FPS and 12.1 FPS, respectively. Using HarDNet as the multi-object tracking model together with the anchor box and Mobile FPN improvements yields 61.3% MOTA and 53.3% IDF1 on MOT16, with processing speeds of 33.5 FPS and 3.9 FPS, respectively; after the Kalman filter is replaced by the simple filtering, the tracking accuracy is 60.7% MOTA and 52.1% IDF1, with processing speeds of 41.5 FPS and 4.2 FPS, respectively.
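The accuracy figures above are reported in MOTA and IDF1, the standard CLEAR MOT and identity-F1 tracking metrics (entries [55] and [22] in the reference list below). For reference, a sketch of their usual definitions in LaTeX:

```latex
% Standard multi-object tracking metrics, as commonly defined in the
% literature (CLEAR MOT [55]; identity measures [22]).
% MOTA accumulates false negatives (FN), false positives (FP), and identity
% switches (IDSW) over all frames t, normalized by the ground-truth count GT:
\[
  \mathrm{MOTA} = 1 - \frac{\sum_t \left(\mathrm{FN}_t + \mathrm{FP}_t + \mathrm{IDSW}_t\right)}{\sum_t \mathrm{GT}_t}
\]
% IDF1 is the F1 score over identity-consistent matches, where IDTP, IDFP,
% and IDFN are identity-level true positives, false positives, and false
% negatives:
\[
  \mathrm{IDF1} = \frac{2\,\mathrm{IDTP}}{2\,\mathrm{IDTP} + \mathrm{IDFP} + \mathrm{IDFN}}
\]
```

Because MOTA counts identity switches only as a raw error term, a faster but less stable association step, such as the simple filtering above, can cost relatively little MOTA while IDF1 drops more visibly, which matches the reported numbers.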
Abstract (English)
Multi-object tracking is one of the most challenging and important research topics in the field of computer vision. Although many multi-object tracking methods have been proposed in the literature, most of them cannot achieve real-time processing performance, especially on embedded platforms with limited computing resources. To solve this problem, this thesis proposes a lightweight real-time multi-object tracking system based on MobileNet, which effectively improves the processing speed of the multi-object tracking process. The system is designed using the One-Shot Tracking By Detection architecture and consists of a multi-object tracking model and a post-processing module. We originally used HarDNet as the multi-object tracking model, but found that its computing speed on the embedded platform was not ideal. Therefore, we use the Mobilenet-SSDv2 proposed by our laboratory as the detection network in the tracking model to produce the detection information of multiple targets in the image, and improve the anchor box design and Mobile FPN to raise the tracking accuracy. In the post-processing module, we propose a simple filtering method to replace the Kalman filter used in existing methods. Although this method slightly reduces the tracking accuracy, it greatly improves the computing speed. Finally, a post-processing module consisting of the simple filtering and the Hungarian algorithm matches the detection information with the tracking information to complete the multi-object tracking task. Experimental results show that the improvements to the anchor boxes and Mobile FPN in the tracking model raise the tracking accuracy of the entire system by 9.8% MOTA and 9.4% IDF1, reaching 59.1% MOTA and 52.6% IDF1 on MOT16. In the computing-speed evaluation on a desktop computer and the embedded platform, the proposed system reaches 41.4 FPS and 10.7 FPS, respectively. After replacing the Kalman filter with the proposed simple filtering method, the tracking accuracy is reduced to 58.1% MOTA and 47.7% IDF1, but the processing speed is increased to 54.2 FPS and 12.1 FPS, respectively. Using HarDNet as the multi-object tracking model together with the anchor box and Mobile FPN improvements yields 61.3% MOTA and 53.3% IDF1 on MOT16, with processing speeds of 33.5 FPS and 3.9 FPS, respectively; after replacing the Kalman filter with the simple filtering, the tracking accuracy is 60.7% MOTA and 52.1% IDF1, and the processing speed is 41.5 FPS and 4.2 FPS, respectively.
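To make the post-processing described above concrete, here is a minimal sketch of detection-to-track association with the Hungarian algorithm, assuming an IoU-based cost matrix. The helper names (iou, associate) and the iou_gate threshold are illustrative assumptions; the thesis's actual simple-filtering rules and cost terms are given in Chapter 4 and are not reproduced here.

```python
# Hypothetical sketch of the association step in a tracking-by-detection
# post-processing module: Hungarian matching over an IoU cost matrix,
# followed by a simple gate that rejects weak matches.
import numpy as np
from scipy.optimize import linear_sum_assignment


def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)


def associate(track_boxes, det_boxes, iou_gate=0.3):
    """Match existing track boxes to new detections.

    Returns (matches, unmatched_track_ids, unmatched_det_ids), where a
    match is a (track_index, detection_index) pair.
    """
    if not track_boxes or not det_boxes:
        return [], list(range(len(track_boxes))), list(range(len(det_boxes)))
    # Cost = 1 - IoU: the Hungarian algorithm minimizes total cost.
    cost = np.array([[1.0 - iou(t, d) for d in det_boxes] for t in track_boxes])
    rows, cols = linear_sum_assignment(cost)  # Hungarian algorithm [40]
    matches = []
    un_t, un_d = set(range(len(track_boxes))), set(range(len(det_boxes)))
    for r, c in zip(rows, cols):
        if 1.0 - cost[r, c] >= iou_gate:  # keep only sufficiently overlapping pairs
            matches.append((r, c))
            un_t.discard(r)
            un_d.discard(c)
    return matches, sorted(un_t), sorted(un_d)
```

In a full tracker, the caller keeps per-track state and runs this once per frame; unmatched detections typically spawn new tracks, while tracks that remain unmatched for several frames are terminated.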
Abstract (third language)
Thesis Table of Contents
Table of Contents
Chinese Abstract	I
English Abstract	III
Table of Contents	V
List of Figures	VIII
List of Tables	X
Chapter 1 Introduction	1
1.1 Research Background	1
1.2 Research Motivation and Objectives	2
1.3 Thesis Organization	5
Chapter 2 Related Work and the Proposed Pipeline Architecture	6
2.1 Object Detection	6
2.1.1 The R-CNN Family	6
2.1.2 YOLOv3	8
2.1.3 SSD	9
2.2 Multi-object Tracking Methods Based on the TBD Architecture	11
2.2.1 Two-Step TBD	12
2.2.1.1 SORT	12
2.2.1.2 DeepSORT	13
2.2.2 One-Shot TBD	14
2.2.2.1 JDE	15
2.3 Lightweight Networks	15
2.3.1 MobileNetV1	16
2.3.2 MobileNetV2	17
2.3.3 HarDNet	18
2.4 Architecture of the Proposed Method	18
Chapter 3 Lightweight Multi-object Tracking Model	21
3.1 Mobilenet-SSDv2	22
3.2 Lightweight Predictor	25
3.3 Anchor Boxes	27
3.4 Training Method	29
3.4.1 Training Datasets	29
3.4.2 Training Procedure and Parameters	30
Chapter 4 Post-processing Module	33
4.1 Non-maximum Suppression	33
4.2 Data Association	34
Chapter 5 Experimental Results and Analysis	38
5.1 Experimental Platform	38
5.2 Evaluation Metrics	39
5.3 Experimental Data	39
5.3.1 Anchor Box Improvement	40
5.3.2 Mobile FPN Improvement	41
5.3.3 Data Association Improvement	42
5.4 Comparison with Existing Methods	44
Chapter 6 Conclusions and Future Work	47
References	48

List of Figures
Figure 1.1: Illustration of TBD	2
Figure 1.2: Illustration of DFT	2
Figure 1.3: Illustration of One-Shot TBD	3
Figure 2.1: Illustration of Faster R-CNN	7
Figure 2.2: Illustration of FPN	9
Figure 2.3: YOLOv3 architecture	9
Figure 2.4: SSD architecture	11
Figure 2.5: Illustration of the IOU distance	12
Figure 2.6: Illustration of DeepSORT	14
Figure 2.7: JDE architecture	15
Figure 2.8: (a) Convolution layer and (b) depth-wise convolution layer with 1x1 convolution layer	17
Figure 2.9: Illustration of HarDNet	18
Figure 2.10: Architecture of the multi-object tracking system	20
Figure 3.1: Illustration of the multi-object tracking model	21
Figure 3.2: (a) Mobile Block, (b) IR Block1, (c) IR Block2	22
Figure 3.3: Mobilenet-SSDv2 backbone	23
Figure 3.4: (a) Conv FPN in JDE and (b) the proposed Mobile FPNv2	24
Figure 3.5: (a) The JDE predictor and (b) the lightweight prediction layer used in this thesis	26
Figure 3.6: (a) Anchor boxes on a real image, (b) 8x8 feature map, and (c) 4x4 feature map	28
Figure 3.7: (a) The proposed anchor boxes versus the original ones and (b) detailed anchor box configuration	29
Figure 4.1: Post-processing module architecture	33
Figure 4.2: Illustration of NMS results	34
Figure 4.3: Data association flowchart of JDE	34
Figure 4.4: The proposed data association flowchart	36
Figure 5.1: Jetson AGX Xavier	38

List of Tables
Table 2.1: DarkNet53 network architecture	8
Table 2.2: VGG-SSD network architecture	10
Table 2.3: DeepSORT appearance model network architecture	14
Table 3.1: Anchor box scaling ratios	28
Table 3.2: Training datasets	30
Table 5.1: Experimental hardware specifications	38
Table 5.2: Experimental results of the anchor box improvement	41
Table 5.3: Experimental results of the Mobile FPN improvement	41
Table 5.4: Comparison of data association improvements	43
Table 5.5: Comparison of multi-object tracking model sizes and speeds on Xavier	43
Table 5.6: Results of the proposed and existing methods on the MOT16 test set	46
References
[1]	R. Girdhar, G. Gkioxari, L. Torresani, M. Paluri, D. Tran, “Detect-and-Track: Efficient Pose Estimation in Videos,” IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, Dec, 2018, pp. 350-359.
[2]	D. Tran, H. Wang, L. Torresani, J. Ray, Y. LeCun and M. Paluri, “A Closer Look at Spatiotemporal Convolutions for Action Recognition,” 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, Dec, 2018, pp. 6450-6459.
[3]	S. Ahmed, M. N. Huda, S. Rajbhandari, C. Saha, M. Elshaw, S. Kanarachos, “Pedestrian and Cyclist Detection and Intent Estimation for Autonomous Vehicles: A Survey,” Appl. Sci., June, 2019.
[4]	B. Yang and R. Nevatia, “Online learned discriminative part-based appearance models for multi-human tracking,” in 12th European Conference on Computer Vision, Oct, 2012, pp. 484–498.
[5]	W. Hu, X. Li, W. Luo, X. Zhang, S. Maybank and Z. Zhang, “Single and Multiple Object Tracking Using Log-Euclidean Riemannian Subspace and Block-Division Appearance Model,” in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 12, pp. 2420-2440, Dec. 2012.
[6]	L. Zhang and L. van der Maaten, “Structure Preserving Object Tracking,” 2013 IEEE Conference on Computer Vision and Pattern Recognition, Portland, OR, Oct, 2013, pp. 1838-1845.
[7]	L. Zhang and L. van der Maaten, “Preserving Structure in Model-Free Tracking,” in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 4, pp. 756-769, April 2014.
[8]	M. Yang, T. Yu and Y. Wu, “Game-Theoretic Multiple Target Tracking,” 2007 IEEE 11th International Conference on Computer Vision, Rio de Janeiro, Dec, 2007, pp. 1-8.
[9]	Q. Wang, Z. Teng, J. Xing, J. Gao, W. Hu, S. Maybank, “Multiple Object Tracking: A Literature Review,” Computer Vision and Pattern Recognition, May, 2018, arXiv:1409.7618v4.
[10]	C. Dicle, O. I. Camps and M. Sznaier, “The Way They Move: Tracking Multiple Targets with Similar Appearance,” 2013 IEEE International Conference on Computer Vision, Sydney, NSW, March, 2013, pp. 2304-2311.
[11]	Z. Wang, Z. Wang, Y. Liu, S. Wang, “Towards Real-Time Multi-Object Tracking,” Computer Vision and Pattern Recognition, Sep, 2019, arXiv:1909.12605v1.
[12]	L. Chen, H. Ai, Z. Zhuang and C. Shang, “Real-Time Multiple People Tracking with Deeply Learned Candidate Selection and Person Re-Identification,” 2018 IEEE International Conference on Multimedia and Expo (ICME), San Diego, CA, July, 2018, pp. 1-6.
[13]	A. Bewley, Z. Ge, L. Ott, F. Ramos and B. Upcroft, “Simple online and realtime tracking,” 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, Aug, 2016, pp. 3464-3468.
[14]	N. Wojke, A. Bewley and D. Paulus, “Simple online and realtime tracking with a deep association metric,” 2017 IEEE International Conference on Image Processing (ICIP), Beijing, Feb, 2017, pp. 3645-3649.
[15]	S. Ren, K. He, R. Girshick and J. Sun, “Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 6, pp. 1137-1149, 1 June 2017.
[16]	P. Voigtlaender et al., “MOTS: Multi-Object Tracking and Segmentation,” 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, Jan, 2019, pp. 7934-7943. 
[17]	T. Xiao, S. Li, B. Wang, L. Lin and X. Wang, “Joint Detection and Identification Feature Learning for Person Search,” 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, July, 2017, pp. 3376-3385. 
[18]	J. Redmon, A. Farhadi, “YOLOv3: An Incremental Improvement,” Computer Vision and Pattern Recognition, April, 2018, arXiv:1804.02767v1.
[19]	Y.-C. Chiu, C.-Y. Tsai, M.-D. Ruan, G.-Y. Shen and T.-T. Lee, “Mobilenet-SSDv2: An Improved Object Detection Model for Embedded Systems,” International Conference on System Science and Engineering (ICSSE), Kagawa, Japan, August, 2020, accepted.
[20]	R. E. Kalman, “A New Approach to Linear Filtering and Prediction Problems,” reprinted in T. Basar (Ed.), Control Theory: Twenty-Five Seminal Papers (1932-1981), IEEE, 2001, pp. 167-179.
[21]	A. Milan, L. Leal-Taixé, I. Reid, S. Roth, K. Schindler, “MOT16: A Benchmark for Multi-Object Tracking,” Computer Vision and Pattern Recognition, May, 2016, arXiv:1603.00831v2.
[22]	E. Ristani, F. Solera, R. S. Zou, R. Cucchiara, C. Tomasi, “Performance Measures and a Data Set for Multi-Target, Multi-Camera Tracking,” European Conference on Computer Vision, Sep, 2016, pp. 17-35.
[23]	Q. Zhao, T. Sheng, Y. Wang, Z. Tang, Y. Chen, L. Cai, and H. Ling, “M2det: A single-shot object detector based on multi-level feature pyramid network,” Thirty-Third AAAI Conference on Artificial Intelligence, Honolulu, Hawaii, USA, July, 2019.
[24]	M. Tan, R. Pang, Q. V. Le, “EfficientDet: Scalable and Efficient Object Detection,” In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June, 2020, pp. 10781-10790.
[25]	R. Girshick, J. Donahue, T. Darrell, J. Malik, “Rich feature hierarchies for accurate object detection and semantic segmentation,” IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, Sep, 2014.
[26]	R. Girshick, “Fast R-CNN,” 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Feb, 2015, pp. 1440-1448.
[27]	J. R. R. Uijlings, K. E. A. van de Sande, T. Gevers, A. W. M. Smeulders, “Selective Search for Object Recognition,” International journal of computer vision, 2013.
[28]	A. Krizhevsky, I. Sutskever, and G. Hinton, “Imagenet classification with deep convolutional neural networks,” in Neural Information Processing Systems (NIPS), Jan, 2012.
[29]	J. Redmon, S. Divvala, R. Girshick and A. Farhadi, “You Only Look Once: Unified, Real-Time Object Detection,” 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, Dec, 2016, pp. 779-788. 
[30]	J. Redmon and A. Farhadi, “YOLO9000: Better, Faster, Stronger,” 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, 2017, pp. 6517-6525.
[31]	Q. Zhao, T. Sheng, Y. Wang, F. Ni, L. Cai, “CFENet: An Accurate and Efficient Single-Shot Object Detector for Autonomous Driving,” Computer Vision and Pattern Recognition, 2018, arXiv:1806.09790v2.
[32]	K. He, X. Zhang, S. Ren and J. Sun, “Deep Residual Learning for Image Recognition,” 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, 2016, pp. 770-778. 
[33]	T. Y. Lin, P. Dollar, R. Girshick, K. He, B. Hariharan, S. Belongie, “Feature Pyramid Networks for Object Detection,” IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 2017.
[34]	W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg, “SSD: Single shot multibox detector,” European Conference on Computer Vision, Amsterdam, Netherlands, Dec, 2016, pp. 21-37.
[35]	K. Simonyan, A. Zisserman, “Very Deep Convolutional Networks for Large-Scale Image Recognition,” Computer Vision and Pattern Recognition, 2015, arXiv:1409.1556v6.
[36]	J. Hosang, R. Benenson, B. Schiele, “Learning non-maximum suppression,” Computer Vision and Pattern Recognition, April, 2017, arXiv:1705.02950v2.
[37]	J. H. Yoon, M. Yang, J. Lim and K. Yoon, “Bayesian Multi-object Tracking Using Motion Context from Multiple Objects,” 2015 IEEE Winter Conference on Applications of Computer Vision, Waikoloa, HI, Feb, 2015, pp. 33-40. 
[38]	C. Kim, F. Li, A. Ciptadi and J. M. Rehg, “Multiple Hypothesis Tracking Revisited,” 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Dec, 2015, pp. 4696-4704. 
[39]	A. Bewley, L. Ott, F. Ramos, B. Upcroft, “ALExTRAC: Affinity Learning by Exploring Temporal Reinforcement within Association Chains,” IEEE International Conference on Robotics and Automation, Stockholm, Sweden, June, 2016.
[40]	H.W. Kuhn, “The Hungarian Method for the Assignment Problem,” Naval Research Logistics Quarterly, 1955, pp. 83–97.
[41]	Y. Zhang, C. Wang, X. Wang, W. Zeng, W. Liu, “A Simple Baseline for Multi-Object Tracking,” Computer Vision and Pattern Recognition, May, 2020, arXiv:2004.01888v4.
[42]	M. Everingham, S. M. A. Eslami, L. V. Gool, C. K. I. Williams, J. Winn, A. Zisserman, “The PASCAL Visual Object Classes Challenge: A Retrospective,” International Journal of Computer Vision, June, 2014.
[43]	F. N. Iandola, S. Han, M. W. Moskewicz, K. Ashraf, W. J. Dally, K. Keutzer, “SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size,” International Conference on Learning Representations, Nov, 2016.
[44]	X. Zhang, X. Zhou, M. Lin and J. Sun, “ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices,” 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, Dec, 2018, pp. 6848-6856. 
[45]	N. Ma, X. Zhang, H. T. Zheng, J. Sun, “ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design,” European Conference on Computer Vision, July, 2018.
[46]	A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, H. Adam, “MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications,” Computer Vision and Pattern Recognition, April, 2017, arXiv:1704.04861v1.
[47]	M. Sandler, A. Howard, M. Zhu, A. Zhmoginov and L. Chen, “MobileNetV2: Inverted Residuals and Linear Bottlenecks,” 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, Dec, 2018, pp. 4510-4520. 
[48]	P. Chao, C. Kao, Y. Ruan, C. Huang and Y. Lin, “HarDNet: A Low Memory Traffic Network,” 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Korea (South), Nov, 2019, pp. 3551-3560.
[49]	G. Huang, Z. Liu, L. Van Der Maaten and K. Q. Weinberger, “Densely Connected Convolutional Networks,” 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, July, 2017, pp. 2261-2269.
[50]	J. Hosang, R. Benenson and B. Schiele, “Learning Non-maximum Suppression,” 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, Nov, 2017, pp. 6469-6477.
[51]	A. Ess, B. Leibe, K. Schindler, L. V. Gool, “A Mobile Vision System for Robust Multi-Person Tracking,” IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 2008.
[52]	S. Zhang, R. Benenson, B. Schiele, “CityPersons: A Diverse Dataset for Pedestrian Detection,” Computer Vision and Pattern Recognition, Feb, 2017, arXiv:1702.05693v1.
[53]	P. Dollar, C. Wojek, B. Schiele and P. Perona, “Pedestrian detection: A benchmark,” 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, Aug, 2009, pp. 304-311.
[54]	L. Zheng, H. Zhang, S. Sun, M. Chandraker, Y. Yang and Q. Tian, “Person Re-identification in the Wild,” 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, Nov, 2017, pp. 3346-3355.
[55]	K. Bernardin, R. Stiefelhagen, “Evaluating Multiple Object Tracking Performance: The CLEAR MOT Metric,” EURASIP Journal on Image and Video Processing, Jan, 2008.
[56]	Y. Li, C. Huang and R. Nevatia, “Learning to associate: HybridBoosted multi-target tracker for crowded scene,” 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, Aug, 2009, pp. 2953-2960.
[57]	R. Sanchez-Matilla, F. Poiesi, A. Cavallaro, “Online multi-target tracking with strong and weak detections,” European Conference on Computer Vision, Oct, 2016, pp. 84-99.
[58]	K. Fang, Y. Xiang, X. Li and S. Savarese, “Recurrent Autoregressive Networks for Online Multi-object Tracking,” 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), Lake Tahoe, NV, May, 2018, pp. 466-475.
[59]	X. Wan, J. Wang, Z. Kong, Q. Zhao and S. Deng, “Multi-Object Tracking Using Online Metric Learning with Long Short-Term Memory,” 2018 25th IEEE International Conference on Image Processing (ICIP), Athens, Sep, 2018, pp. 788-792.
[60]	Z. Zhou, J. Xing, M. Zhang and W. Hu, “Online Multi-Target Tracking with Tensor-Based High-Order Graph Matching,” 2018 24th International Conference on Pattern Recognition (ICPR), Beijing, Nov 2018, pp. 1809-1814.
[61]	Q. Chu, W. Ouyang, H. Li, X. Wang, B. Liu and N. Yu, “Online Multi-object Tracking Using CNN-Based Single Object Tracker with Spatial-Temporal Attention Mechanism,” 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Aug, 2017, pp. 4846-4855.
[62]	F. Yu, W. Li, Q. Li, Y. Liu, X. Shi, J. Yan, “POI: Multiple Object Tracking with High Performance Detection and Appearance Feature,” European Conference on Computer Vision, Nov, 2016, pp. 36-42.
Full-text Access Rights
On campus
Print copy on campus: public release deferred until 2025-09-09
Electronic full text authorized for on-campus public access
On-campus electronic thesis: public release deferred until 2025-09-09
On-campus bibliographic record: publicly available immediately
Off campus
Authorization granted
Off-campus electronic thesis: public release deferred until 2025-09-09
