Synthetic data generation methods for training neural networks to segment plant nitrogen status levels in unmanned aerial vehicle imagery of an agricultural field
DOI: https://doi.org/10.21638/11701/spbu10.2024.103
Abstract
This work addresses automating the construction of image masks for large agricultural objects in precision farming tasks, used to train neural network methods that assess plant nutrient supply from georeferenced imagery. The direction is highly relevant because it automates and replaces manual data annotation, substantially reducing the resources required to prepare a training set. Four new methods are proposed for generating synthetic data to train neural networks that segment unmanned aerial vehicle (UAV) images of an agricultural field by the level of plant nitrogen supply. In particular, synthetic data generation algorithms based on constructing rows, parabolas, and spots are described. An experiment was conducted to test and evaluate the quality of these algorithms on eight modern image segmentation methods: two classical machine learning methods (Random Forest and XGBoost), four convolutional neural network methods based on the U-Net architecture, and two transformers (TransUnet and UnetR). The experiment showed that the two spot-based algorithms give the best accuracy for training convolutional neural networks and transformers, 98–100 %. On the generated synthetic data, the classical machine learning methods produced very low values on all quality metrics, 27–44 %.
Keywords:
nitrogen level segmentation, deep learning, machine learning, synthetic data generation, UAV imagery, remote sensing data annotation, smart agriculture
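The abstract only names the row-, parabola-, and spot-based generation algorithms without describing them in detail. As a rough illustration of the general idea behind a spot-based generator, the minimal Python sketch below draws random elliptical spots of hypothetical nitrogen-supply classes onto a blank segmentation mask. The function name, parameters, and class encoding are assumptions made for illustration only and do not reproduce the authors' algorithms.

```python
import numpy as np

def generate_spot_mask(height=256, width=256, n_spots=5, n_classes=3, seed=None):
    """Generate a synthetic per-pixel class mask with random elliptical spots.

    Each spot marks a zone of a hypothetical nitrogen-supply class (1..n_classes-1);
    the background keeps class 0. Illustrative sketch only, not the paper's method.
    """
    rng = np.random.default_rng(seed)
    mask = np.zeros((height, width), dtype=np.uint8)
    yy, xx = np.mgrid[0:height, 0:width]

    for _ in range(n_spots):
        cy, cx = rng.uniform(0, height), rng.uniform(0, width)            # spot centre
        ry, rx = rng.uniform(10, height / 4), rng.uniform(10, width / 4)  # semi-axes
        cls = rng.integers(1, n_classes)                                  # class label
        inside = ((yy - cy) / ry) ** 2 + ((xx - cx) / rx) ** 2 <= 1.0
        mask[inside] = cls

    return mask

if __name__ == "__main__":
    m = generate_spot_mask(seed=42)
    print(m.shape, np.unique(m))  # (256, 256) and the class labels present
```

In a full pipeline such masks would presumably be paired with matching synthetic or real UAV image patches to form the training set for the segmentation models listed in the abstract.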
License
Articles of the journal "Vestnik of Saint Petersburg University. Applied Mathematics. Computer Science. Control Processes" are published in open access and distributed under the terms of the License Agreement with Saint Petersburg State University, which grants authors, free of charge, unrestricted distribution and self-archiving.