The Role of Faster R-CNN Algorithm in the Internet of Things to Detect Mask Wearing: The Endemic Preparations
Abstract
Faster R-CNN is the result of continuous algorithm development, starting from CNN, then R-CNN, and finally Faster R-CNN. This development is needed to test whether the heuristic algorithm performs optimally. Broadly speaking, Faster R-CNN belongs to the class of algorithms that solve neural network and machine learning problems for detecting moving objects. One such moving object in the current phenomenon is mask wearing, as various countries around the world have declared an endemic phase after the COVID-19 pandemic. A detection tool has been prepared and tested at a mandatory-mask door, i.e., a door that opens only for mask wearers. In this paper, the Faster R-CNN algorithm is applied to detect masks and is deployed on Internet of Things (IoT) devices to automatically open doors for users wearing standard masks. The results show that detection of the moving mask object reaches 100% accuracy at a distance of 0.5 to 1 meter and 95% at a distance of 1.5 to 2 meters, so the detection signal can be sent to the IoT device when the mask wearer is within 1 meter of the automatic door.
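To make the described pipeline concrete, the sketch below shows one way such a system could be wired together. It is a minimal illustration, not the authors' implementation: it assumes a torchvision Faster R-CNN fine-tuned for mask/no-mask classes and an MQTT link to the door controller, and the weights file name, class indices, broker address, and topic are all hypothetical.

```python
# Minimal sketch of the abstract's pipeline: a Faster R-CNN detector checks whether
# an approaching person wears a mask, and an IoT signal opens the door.
# Model weights path, class indices, broker address and topic are assumptions.
import torch
import torchvision
from torchvision.transforms import functional as F
import cv2
import paho.mqtt.publish as publish  # assumed MQTT transport to the door controller

NUM_CLASSES = 3          # background, "with_mask", "without_mask" (assumed label set)
MASK_CLASS_ID = 1        # assumed index of the "with_mask" class
SCORE_THRESHOLD = 0.8
BROKER = "192.168.1.50"  # hypothetical IoT gateway address
TOPIC = "door/open"      # hypothetical topic the door controller subscribes to

def load_model(weights_path="mask_frcnn.pth"):
    # Standard torchvision Faster R-CNN with a replaced box-predictor head.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights=None)
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = (
        torchvision.models.detection.faster_rcnn.FastRCNNPredictor(in_features, NUM_CLASSES)
    )
    model.load_state_dict(torch.load(weights_path, map_location="cpu"))
    model.eval()
    return model

def frame_has_mask(model, frame_bgr):
    # Convert the OpenCV BGR frame to a tensor and run one forward pass.
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    tensor = F.to_tensor(rgb)
    with torch.no_grad():
        output = model([tensor])[0]
    for label, score in zip(output["labels"], output["scores"]):
        if label.item() == MASK_CLASS_ID and score.item() >= SCORE_THRESHOLD:
            return True
    return False

if __name__ == "__main__":
    model = load_model()
    cap = cv2.VideoCapture(0)           # camera mounted at the door
    ok, frame = cap.read()
    if ok and frame_has_mask(model, frame):
        # Tell the IoT door controller to open; payload format is an assumption.
        publish.single(TOPIC, payload="OPEN", hostname=BROKER)
    cap.release()
```

In a deployment like the one described, the camera would be positioned so that detections fire when the person is about 1 meter from the door, the range where the paper reports the detector is most reliable.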
License
Copyright (c) 2023 International Journal of Electronics and Telecommunications
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.