ANALISIS PERBANDINGAN HASIL KLASIFIKASI JENIS PENYAKIT TANAMAN TOMAT MENGGUNAKAN ARSITEKTUR MOBILENET, DENSENET121, DAN XCEPTION

Authors

  • Kuwat Setiyanto, Universitas Gunadarma
  • Michael Bolang, Universitas Gunadarma

DOI:

https://doi.org/10.56127/jts.v3i3.1898

Keywords:

CNN, DenseNet121, Image Classification, Machine Learning, MobileNet, TensorFlow, Transfer Learning, Xception

Abstract

Machine learning can be applied to a wide range of tasks, including image classification. Plant disease classification is essential and provides significant support to the agricultural sector in the modern era. With an application capable of classifying crop diseases, farmers can accurately identify the diseases affecting their harvest and address them more efficiently and effectively than with traditional methods, which can be far more time-consuming. This research aims to determine the best-performing of three convolutional neural network architectures implemented in TensorFlow, namely MobileNet, DenseNet121, and Xception, for classifying 9 types of tomato plant disease plus 1 healthy tomato plant class. The study concludes that DenseNet121 is the best architecture for this classification task. During testing, the DenseNet121 model achieved accuracy, precision, recall, and F1-score values of approximately 0.987 (98.7%). Xception ranked second, with all four metrics at around 0.986 (98.6%), while MobileNet ranked last, with all four metrics at approximately 0.973 (97.3%).
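
As an illustration of the workflow described in the abstract, the following is a minimal, hypothetical sketch of transfer-learning classification with one of the three architectures (DenseNet121) in TensorFlow/Keras, evaluated with accuracy, precision, recall, and F1-score. The dataset paths, image size, and training hyperparameters are assumptions for illustration only and are not taken from the paper.

# Hypothetical sketch (not the authors' exact code): transfer learning with a
# DenseNet121 backbone for 9 tomato disease classes + 1 healthy class.
import numpy as np
import tensorflow as tf
from sklearn.metrics import classification_report

NUM_CLASSES = 10          # 9 diseases + 1 healthy class
IMG_SIZE = (224, 224)     # assumed input resolution
BATCH_SIZE = 32           # assumed batch size

# Assumed layout: one sub-folder per class, e.g. data/train/<class_name>/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=IMG_SIZE, batch_size=BATCH_SIZE)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data/val", image_size=IMG_SIZE, batch_size=BATCH_SIZE, shuffle=False)

# Apply the DenseNet preprocessing expected by the pre-trained weights.
preprocess = tf.keras.applications.densenet.preprocess_input
train_ds = train_ds.map(lambda x, y: (preprocess(x), y))
val_ds = val_ds.map(lambda x, y: (preprocess(x), y))

# Frozen ImageNet backbone plus a small classification head.
base = tf.keras.applications.DenseNet121(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),  # pool feature maps to a vector
    tf.keras.layers.Dropout(0.2),              # light regularization
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"])

model.fit(train_ds, validation_data=val_ds, epochs=5)

# Per-class precision, recall, and F1-score on the validation split.
y_true = np.concatenate([y.numpy() for _, y in val_ds])
y_pred = np.argmax(model.predict(val_ds), axis=1)
print(classification_report(y_true, y_pred, digits=3))

Swapping in MobileNet or Xception would only require replacing the backbone and its matching preprocess_input function (tf.keras.applications.MobileNet or tf.keras.applications.Xception), which allows the three architectures to be compared under otherwise identical settings.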

References

Bishop, C. M. (2006). Pattern Recognition and Machine Learning. Springer.

Brownlee, J. (2016). Machine Learning Mastery with Python: Understand Your Data, Create Accurate Models, and Work Projects End-to-End. Machine Learning Mastery.

Chollet, F. (2017). Deep Learning with Python. Manning Publications.

McKinney, W. (2018). Python for Data Analysis: Data Wrangling with Pandas, NumPy, and IPython (2nd ed.). O'Reilly Media.

Müller, A. C., & Guido, S. (2016). Introduction to Machine Learning with Python: A Guide for Data Scientists. O'Reilly Media.

Murphy, K. P. (2012). Machine Learning: A Probabilistic Perspective. MIT Press.

Russell, S., & Norvig, P. (2021). Artificial Intelligence: A Modern Approach (4th ed.). Pearson.

Raschka, S., & Mirjalili, V. (2019). Python Machine Learning: Machine Learning and Deep Learning with Python, scikit-learn, and TensorFlow 2. Packt Publishing.

Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep Learning. MIT Press. https://www.deeplearningbook.org/

Schmidhuber, J. (2015). Deep Learning in Neural Networks: An Overview. Neural Networks, 61, 85-117.

Shorten, C., & Khoshgoftaar, T. M. (2019). A Survey on Image Data Augmentation for Deep Learning. Journal of Big Data, 6(1), 60.

Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., & Salakhutdinov, R. (2014). Dropout: A Simple Way to Prevent Neural Networks from Overfitting. Journal of Machine Learning Research, 15, 1929-1958.

Chollet, F. (2017). Xception: Deep Learning with Depthwise Separable Convolutions. arXiv preprint arXiv:1610.02357. https://arxiv.org/abs/1610.02357

Gal, Y., & Ghahramani, Z. (2016). Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning. arXiv preprint arXiv:1506.02142. https://arxiv.org/abs/1506.02142

Howard, A. G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., Andreetto, M., & Adam, H. (2017). MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv preprint arXiv:1704.04861. https://arxiv.org/abs/1704.04861

Huang, G., Liu, Z., van der Maaten, L., & Weinberger, K. Q. (2017). Densely Connected Convolutional Networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). https://ieeexplore.ieee.org/document/8099726

Ioffe, S., & Szegedy, C. (2015). Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. arXiv preprint arXiv:1502.03167. https://arxiv.org/abs/1502.03167

Kingma, D. P., & Ba, J. (2014). Adam: A Method for Stochastic Optimization. arXiv preprint arXiv:1412.6980. https://arxiv.org/abs/1412.6980

Pan, S. J., & Yang, Q. (2010). A Survey on Transfer Learning. IEEE Transactions on Knowledge and Data Engineering, 22(10), 1345-1359. https://ieeexplore.ieee.org/document/5288526

Hunter, J. D. (2007). Matplotlib: A 2D Graphics Environment. Computing in Science & Engineering, 9(3), 90-95. https://ieeexplore.ieee.org/document/4160265

Waskom, M. L. (2021). seaborn: statistical data visualization. Journal of Open Source Software, 6(60), 3021. https://doi.org/10.21105/joss.03021

LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep Learning. Nature, 521(7553), 436-444. https://www.nature.com/articles/nature14539

Kaggle. (2024). Datasets. https://www.kaggle.com/datasets

Keras. (2024). GlobalAveragePooling2D Layer. https://keras.io/api/layers/pooling_layers/global_average_pooling2d/

NumPy. (2024). NumPy Documentation. https://numpy.org/doc/stable/

Pandas. (2024). Pandas Documentation. https://pandas.pydata.org/pandas-docs/stable/

Python Software Foundation. (2024). Python Documentation. https://docs.python.org/3/

TensorFlow. (2024). Image Augmentation. https://www.tensorflow.org/tutorials/images/data_augmentation

Published

2024-11-03

How to Cite

Kuwat Setiyanto, & Michael Bolang. (2024). ANALISIS PERBANDINGAN HASIL KLASIFIKASI JENIS PENYAKIT TANAMAN TOMAT MENGGUNAKAN ARSITEKTUR MOBILENET, DENSENET121, DAN XCEPTION. Jurnal Teknik Dan Science, 3(3), 56–69. https://doi.org/10.56127/jts.v3i3.1898
