Tamkang University Chueh Sheng Memorial Library (TKU Library)
System ID U0002-1906201909580900
Chinese Title 以人工智慧及物聯網技術設計及實作兒童學習之穿戴式設備
English Title Design and Implementation of a Wearable Learning Device for Kids using AI and IoTs Techniques
Institution Tamkang University
Department (Chinese) 資訊工程學系全英語碩士班
Department (English) Master's Program, Department of Computer Science and Information Engineering (English-taught program)
Academic Year 107
Semester 2
Publication Year 108 (2019)
Student Name (Chinese) 鄧百佳
Student Name (English) Dande Venkata Naga Sai Bhargavi
Student ID 606785037
Degree Master's
Language English
Defense Date 2019-06-14
Pages 41
Committee Advisor: 張志勇
Member: 武士戎
Member: 蘇民揚
Member: 張志勇
Chinese Keywords Data collection mechanism; Disconnected wireless sensor networks; Effective path; Segmentation; Workload
English Keywords Artificial Intelligence (AI); Convolutional Neural Networks (CNN); Internet of Things (IoTs); OpenCV
Subject Classification Applied Sciences / Computer Science and Information Engineering
Chinese Abstract (translated):
The Internet of Things (IoT) and Artificial Intelligence (AI) have attracted much attention in recent years. IoT devices, embedded with sensors and connected to the Internet, can collect massive amounts of data and interact with people. The collected data can be further analyzed by applying AI mechanisms to uncover the information behind it, which in turn shapes the interaction between people and things. This thesis designs and implements the Smart Hat, a wearable device that applies IoT and AI technologies to help children explore knowledge in an easy, active, and proactive way. The Smart Hat recognizes objects in the surrounding environment and presents the results as audio output. It is intended to help children complete the basic learning task of identifying objects without supervision by a third party (parents, teachers, or others) in real life, offering children an advanced technology for easy, active, and proactive learning in daily life. Performance studies show that the obtained results are promising and highly satisfactory.
English Abstract The Internet of Things (IoT) and Artificial Intelligence (AI) have received much attention in recent years. Embedded with sensors and connected to the Internet, IoT devices can collect massive amounts of data and interact with humans. The data collected by IoT devices can be further analyzed by applying AI mechanisms to explore the information behind the data, which in turn shapes the interactions between humans and things. This thesis aims to design and implement the Smart Hat, a wearable device that applies IoT and AI technologies to help children explore knowledge in an easy, active, and proactive manner. The Smart Hat identifies objects in the surrounding environment and announces the results as audio output. It is intended to help children accomplish the primary learning task of identifying objects without supervision by a third party (parents, teachers, or others) in real life. The Smart Hat thus offers children a sophisticated technology for easy, active, and proactive learning in daily life. Performance studies show that the obtained results are promising and very satisfactory.
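The recognition flow the abstract describes (camera capture, CNN classification, spoken result) can be sketched as a single control loop. The sketch below is illustrative only and is not the thesis code: the helper names `capture_frame`, `classify`, and `speak` are hypothetical stand-ins for the Pi Camera/OpenCV capture, the TensorFlow CNN, and the hat's text-to-speech output, injected as callables so the flow can be exercised without the hardware.

```python
from typing import Any, Callable

def recognize_and_announce(capture_frame: Callable[[], Any],
                           classify: Callable[[Any], str],
                           speak: Callable[[str], None]) -> str:
    """One button-triggered recognition cycle: capture, classify, announce."""
    frame = capture_frame()        # e.g. a Pi Camera frame read via OpenCV
    label = classify(frame)        # e.g. a TensorFlow CNN such as Inception
    speak(f"This is a {label}.")   # e.g. a TTS engine on the hat's speaker
    return label

# Exercising the flow with stand-in callables (no hardware required):
announced = []
result = recognize_and_announce(
    capture_frame=lambda: "dummy-frame",
    classify=lambda frame: "dog",
    speak=announced.append,
)
# result == "dog"; announced == ["This is a dog."]
```

Injecting the three stages as callables mirrors the thesis's split between the hardware platform (capture, audio) and the AI software platform (classification), and makes each stage replaceable independently.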
Table of Contents
List of Figures VII
List of Tables VIII
Chapter 1. Introduction 1
Chapter 2. Related Works 6
Chapter 3. Smart Hat System Overview 10
3.1 IoT Controller 10
3.2 IoT Hardware Platform of Learning Hat 12
3.3 AI Software Platform of Learning Hat 13
3.4 Google Cloud Server 14
Chapter 4. Design and Implementation 15
4.1 Features of Hardware Components 15
4.1.1 Arduino 15
4.1.2 Raspberry Pi 15
4.1.3 WiFi Wireless Module 17
4.1.4 Bluetooth Wireless Module 18
4.2 Design and Implementation of Hardware Platform 19
4.2.1 Task I: Handheld Button Device 19
4.2.2 Task II: Hardware Platform of Smart Hat 20
4.3 Design and Implementation of Software Platform 21
4.3.1 OpenCV 21
4.3.2 TensorFlow 21
4.3.3 Google Cloud Server 22
Chapter 5. Programming Design for Deep Learning System 23
Chapter 6. Physical Prototype 25
Chapter 7. Performance Evaluation 26
Chapter 8. Conclusion 39
References 40
List of Figures
Fig. 1. The four-year-old kid using our Smart Hat. 3
Fig. 2. The proposed architecture for Smart Hat. 11
Fig. 3. Arduino hardware component. 15
Fig. 4. Raspberry Pi 3 hardware component. 17
Fig. 5. WiFi wireless module. 18
Fig. 6. Bluetooth module. 18
Fig. 7. Bluetooth button wiring diagram. 20
Fig. 8. Raspberry Pi Camera module. 20
Fig. 9. Smart Hat prototype. 25
Fig. 10. Three scenarios considered in the experiments. 29
Fig. 11. Comparison of time delay for seven stages in terms of pixel density. 30
Fig. 12. Comparison of accuracy for three scenarios in terms of pixel density. 30
Fig. 13. Comparison of accumulated time delay by varying pixel density. 31
Fig. 14. Comparison of accuracy for three scenarios in terms of time delay by varying pixel density. 32
Fig. 15. Comparison of efficiency index for different pixel densities by varying the allowed delay. 34
Fig. 16. Comparison of accuracy in terms of different ages of kids by varying pixel density for three scenarios. 35
Fig. 17. Comparison of accuracy by varying the distance between AP and Smart Hat. 36
Fig. 18. Examples of various object recognition on the system. 37

List of Tables
TABLE I 9
TABLE II 27
TABLE III 28



References 1. Zanella, A.; Bui, N.; Castellani, A.; Vangelista, L.; Zorzi, M. Internet of Things for smart cities, in IEEE Internet of Things Journal, vol. 1, no. 1, Feb. 2014, pp. 22-32.
2. Yang, G.; Xie, L.; Mantysalo, M.; et al. A Health-IoT platform based on the integration of intelligent packaging, unobtrusive bio-sensor, and intelligent medicine box, in IEEE Transactions on Industrial Informatics, vol. 10, no. 4, Nov. 2014, pp. 2180-2191.
3. Hafidh, B.; Al Osman, H.; Arteaga-Falconi, J. S.; Dong, H.; Saddik, A. E. SITE: The simple internet of things enabler for smart homes, in IEEE Access, vol. 5, 2017, pp. 2034-2049.
4. Perera, C.; Liu, C. H.; Jayawardena, S. The emerging internet of things marketplace from an industrial perspective: a survey, in IEEE Transactions on Emerging Topics in Computing, vol. 3, no. 4, Dec. 2015, pp. 585-598.
5. Li, F.; Du, Y. From AlphaGo to power system AI: what engineers can learn from solving the most complex board game, in IEEE Power and Energy Magazine, vol. 16, no. 2, March-April 2018, pp. 76-84.
6. Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; Berg, A. C.; Fei-Fei, L. ImageNet large scale visual recognition challenge, International Journal of Computer Vision (IJCV), vol. 115, no. 3, 2015, pp. 211-252.
7. Karpathy, A.; Fei-Fei, L. Deep visual-semantic alignments for generating image descriptions, in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 4, April 2017, pp. 664-676.
8. Mulfari, D.; Longo Minnolo, A.; Puliafito, A. Building TensorFlow applications in smart city scenarios, 2017 IEEE International Conference on Smart Computing (SMARTCOMP), Hong Kong, 2017, pp. 1-5.
9. Lewis, L. L.; Kim, Y. A.; Bey, J. A. Teaching practices and strategies to involve inner-city parents at home and in the school, Journal of Teaching and Teacher Education, vol. 27, no. 1, 2015, pp. 221-234.
10. Grlaš, V.; Marinović, M. Ecological education in preschool education through visual communication, The 33rd International Convention MIPRO, Opatija, 2010, pp. 959-963.
11. Drigas, A. S.; Kokkalia, G.; Economou, A. Mobile learning for preschool education, International Journal of Interactive Mobile Technologies (iJIM), vol. 10, Jan. 2016, p. 67.
12. Beschorner, B.; Hutchison, A. iPads as a literacy teaching tool in early childhood, International Journal of Education in Mathematics, Science and Technology, vol. 1, Jan. 2013, pp. 16-24.
13. Causo, A.; Win, P. Z.; Guo, P. S.; Chen, I. Deploying social robots as teaching aid in pre-school K2 classes: A proof-of-concept study, 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, 2017, pp. 4264-4269.
14. Culjak, I.; Abram, D.; Pribanic, T.; Dzapo, H.; Cifrek, M. A brief introduction to OpenCV, 2012 Proceedings of the 35th International Convention MIPRO, Opatija, 2012, pp. 1725-1730.
15. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the inception architecture for computer vision, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, 2016, pp. 2818-2826.
Access Rights
  • The author agrees to grant in-library readers royalty-free permission to reproduce the print copy for academic purposes, publicly available from 2024-06-21.
  • The author agrees to authorize the browse/print electronic full-text service, publicly available from 2024-06-21.

