Conditional GAN Approach for Image to Image Translation on Potato Leaf

Authors

  • Radius Tanone, Chaoyang University of Technology

DOI:

https://doi.org/10.24246/ijiteb.612023.17-20

Keywords:

Image Colorization, Conditional GAN, Image-to-Image Translation, Pix2Pix, Potato Leaf

Abstract

Images captured with a camera often suffer from poor quality; in some cases the result is a black-and-white (grayscale) image, which interferes with further processing. This problem also arises in agriculture, for example when a potato leaf image comes out grayscale, even though leaf images from farmland are needed for image processing that supports the planting process. A method is therefore needed to translate a grayscale image into one whose colors resemble those of the actual leaf. Deep learning has developed rapidly, and one of its models, the Conditional GAN, can perform image-to-image translation. Building on this capability, this paper implements the Pix2Pix model, which is based on the Conditional GAN, to translate black-and-white images into color images. The experiment produces images whose colors match the original images with good quality. These results suggest that poorly captured (grayscale) images can be remedied through image translation with deep learning.
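For readers unfamiliar with the Pix2Pix objective referenced in the abstract: it combines an adversarial (Conditional GAN) term with an L1 reconstruction term weighted by λ (λ = 100 in the original Pix2Pix paper). The sketch below is a minimal, illustrative NumPy rendering of those two loss functions; the function names and toy shapes are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def softplus(x):
    # Numerically stable log(1 + exp(x)), used for sigmoid cross-entropy.
    return np.logaddexp(0.0, x)

def generator_loss(disc_fake_logits, fake_rgb, real_rgb, lam=100.0):
    """Pix2Pix generator objective: adversarial term + lambda * L1 term."""
    # Sigmoid cross-entropy with target 1: -log(sigmoid(x)) == softplus(-x).
    adv = np.mean(softplus(-disc_fake_logits))
    # L1 distance between the colorized output and the ground-truth color image.
    l1 = np.mean(np.abs(real_rgb - fake_rgb))
    return adv + lam * l1

def discriminator_loss(real_logits, fake_logits):
    """Discriminator objective: push real patches toward 1, fakes toward 0."""
    real_term = np.mean(softplus(-real_logits))  # target 1
    fake_term = np.mean(softplus(fake_logits))   # target 0
    return real_term + fake_term

# Toy usage: a "perfect" generator (fake == real) and a confident discriminator
# should both yield near-zero losses.
real = np.random.rand(2, 8, 8, 3)   # batch of ground-truth color leaf images
fake = real.copy()                  # pretend the generator reproduced them exactly
g = generator_loss(np.full((2, 1, 1, 1), 8.0), fake, real)
d = discriminator_loss(np.full((2, 1, 1, 1), 8.0), np.full((2, 1, 1, 1), -8.0))
print(float(g), float(d))
```

The L1 term is what pushes the generated colors toward the ground-truth leaf colors, while the adversarial term discourages the blurry averages that an L1-only model tends to produce.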





Published

2023-11-30