<?xml version="1.0"?>
<!DOCTYPE article
PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.4 20190208//EN"
       "JATS-journalpublishing1.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" article-type="research-article" dtd-version="1.4" xml:lang="en">
 <front>
  <journal-meta>
   <journal-id journal-id-type="publisher-id">Spatial Data: Science, Research and Technology</journal-id>
   <journal-title-group>
    <journal-title xml:lang="en">Spatial Data: Science, Research and Technology</journal-title>
    <trans-title-group xml:lang="ru">
     <trans-title>Пространственные данные: наука и технологии</trans-title>
    </trans-title-group>
   </journal-title-group>
   <issn publication-format="online">2782-6678</issn>
  </journal-meta>
  <article-meta>
   <article-id pub-id-type="publisher-id">118334</article-id>
   <article-id pub-id-type="doi">10.30533/scidata-2025-16-12</article-id>
   <article-categories>
    <subj-group subj-group-type="toc-heading" xml:lang="ru">
     <subject>Геоинформатика, картография</subject>
    </subj-group>
    <subj-group subj-group-type="toc-heading" xml:lang="en">
     <subject>Geoinformatics, cartography</subject>
    </subj-group>
   </article-categories>
   <title-group>
     <article-title xml:lang="en">The Impact of High and Very High Spatial Resolution Remote Sensing Image Dataset Composition on the Training and Accuracy of Geofield Semantic Segmentation Neural Networks: The Example of Recognizing Different Earth Surface Classes</article-title>
    <trans-title-group xml:lang="ru">
     <trans-title>Влияние состава выборок аэрокосмических изображений ДЗЗ высокого и сверхвысокого пространственного разрешения на обучение и точность нейронных сетей при семантической сегментации геополей на примере распознавания различных классов земной поверхности</trans-title>
    </trans-title-group>
   </title-group>
   <contrib-group content-type="authors">
    <contrib contrib-type="author">
     <name-alternatives>
      <name xml:lang="ru">
       <surname>Бирюков</surname>
       <given-names>Никита Андреевич</given-names>
      </name>
      <name xml:lang="en">
       <surname>Biryukov</surname>
       <given-names>Nikita Andreevich</given-names>
      </name>
     </name-alternatives>
     <xref ref-type="aff" rid="aff-1"/>
    </contrib>
   </contrib-group>
   <aff-alternatives id="aff-1">
    <aff>
     <institution xml:lang="ru">Московский государственный университет геодезии и картографии</institution>
    </aff>
    <aff>
     <institution xml:lang="en">Moscow State University of Geodesy and Cartography</institution>
    </aff>
   </aff-alternatives>
   <pub-date publication-format="print" date-type="pub" iso-8601-date="2025-08-29T00:00:00+03:00">
    <day>29</day>
    <month>08</month>
    <year>2025</year>
   </pub-date>
   <pub-date publication-format="electronic" date-type="pub" iso-8601-date="2025-08-29T00:00:00+03:00">
    <day>29</day>
    <month>08</month>
    <year>2025</year>
   </pub-date>
   <volume>16</volume>
   <issue>2</issue>
   <fpage>30</fpage>
   <lpage>57</lpage>
   <history>
    <date date-type="received" iso-8601-date="2025-06-14T00:00:00+03:00">
     <day>14</day>
     <month>06</month>
     <year>2025</year>
    </date>
    <date date-type="accepted" iso-8601-date="2025-08-22T00:00:00+03:00">
     <day>22</day>
     <month>08</month>
     <year>2025</year>
    </date>
   </history>
   <self-uri xlink:href="https://miigaik.editorum.ru/en/nauka/article/118334/view">https://miigaik.editorum.ru/en/nauka/article/118334/view</self-uri>
   <abstract xml:lang="ru">
     <p>Выборки из аэрокосмических изображений и масок, используемые при решении задач распознавания различных классов земной поверхности, могут оказывать существенное влияние на обучаемость моделей нейронных сетей и получаемые в дальнейшем с их помощью результаты распознавания. Состав выборок данных в большинстве случаев рассматривается не относительно самих выборок, а с точки зрения обработки данных нейронными сетями в целом в конкретной задаче. В контексте семантической сегментации геополей сформулированы общие для задач семантической сегментации объектов на аэрокосмических изображениях проблемы: разные яркостные характеристики снимков, тени, эквивалентность яркостных характеристик объектов целевого класса и других объектов сцен, некорректная разметка, граничные случаи дисбаланса классов. Все перечисленное рассматривается как проблемы представления исходного множества геополей в выборках данных. В результате эксперимента с нейронными сетями U-Net, STT и MF-CNN определено, что включаемые в выборки граничные случаи дисбаланса классов и применение снимков с разрешением, при котором дисбаланс классов выше, чем при использовании частей снимков, существенно снижают обучаемость нейронных сетей и точность распознавания, а отбор данных на основе удаления граничных случаев дисбаланса классов при предобработке позволяет как повысить точность распознавания, так и снизить необходимые для обучения моделей временные затраты.</p>
   </abstract>
   <trans-abstract xml:lang="en">
     <p>The composition of the remote sensing image and mask datasets used for recognizing different Earth surface classes can significantly affect the learnability of neural network models and the semantic segmentation results obtained after training. In most cases, dataset composition is examined not with respect to the datasets themselves, but from the standpoint of neural network data processing in general within a specific remote sensing segmentation task. In the context of geofield semantic segmentation, problems common to object semantic segmentation on aerial and satellite imagery are formulated: images with differing spectral (brightness) characteristics, shadows, objects of other classes whose spectral signatures match those of the target class, incorrect annotation, and borderline cases of class imbalance. All of these are treated as problems of representing the original set of geofields in the datasets. An experiment on semantic segmentation of different Earth surface classes with the U-Net, STT, and MF-CNN neural networks showed that borderline cases of class imbalance included in the datasets, as well as the use of images at a resolution at which class imbalance is higher than when their crops are used, substantially reduce the learnability and recognition accuracy of neural networks, while data selection based on removing borderline cases of class imbalance during preprocessing both increases recognition accuracy and reduces model training time.</p>
   </trans-abstract>
   <kwd-group xml:lang="ru">
    <kwd>геополе</kwd>
    <kwd>множество геополей</kwd>
    <kwd>семантическая сегментация</kwd>
    <kwd>состав выборок данных</kwd>
    <kwd>дисбаланс классов</kwd>
    <kwd>нейронная сеть</kwd>
    <kwd>точность распознавания</kwd>
   </kwd-group>
   <kwd-group xml:lang="en">
    <kwd>geofield</kwd>
    <kwd>geofields set</kwd>
    <kwd>semantic segmentation</kwd>
    <kwd>dataset composition</kwd>
    <kwd>class imbalance</kwd>
    <kwd>neural network</kwd>
    <kwd>recognition accuracy</kwd>
   </kwd-group>
  </article-meta>
 </front>
 <body>
  <p></p>
 </body>
 <back>
  <ref-list>
   <ref id="B1">
    <label>1.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Xu R., Mao R., Zhuang Z., et al. Building Extraction from Remote Sensing Images Based on Multi-Scale Attention Gate and Enhanced Positional Information // PeerJ Computer Science. 2025. Vol. 11. P. e2826. DOI:10.7717/peerj-cs.2826. https://doi.org/10.7717/peerj-cs.2826</mixed-citation>
      <mixed-citation xml:lang="en">Xu R., Mao R., Zhuang Z., et al. Building Extraction from Remote Sensing Images Based on Multi-Scale Attention Gate and Enhanced Positional Information // PeerJ Computer Science. 2025. Vol. 11. P. e2826. DOI:10.7717/peerj-cs.2826. https://doi.org/10.7717/peerj-cs.2826</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B2">
    <label>2.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Chen Y., Xie Y., Yao W., et al. U-MGA: a Multi-Module Unet Optimized with Multi-Scale Global Attention Mechanisms for Fine-Grained Segmentation of Cultivated Areas // Remote Sensing. 2025. Vol. 17. Iss. 5. P. 760. DOI:10.3390/rs17050760. https://doi.org/10.3390/rs17050760</mixed-citation>
      <mixed-citation xml:lang="en">Chen Y., Xie Y., Yao W., et al. U-MGA: a Multi-Module Unet Optimized with Multi-Scale Global Attention Mechanisms for Fine-Grained Segmentation of Cultivated Areas // Remote Sensing. 2025. Vol. 17. Iss. 5. P. 760. DOI:10.3390/rs17050760. https://doi.org/10.3390/rs17050760</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B3">
    <label>3.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Cai J., Tao L., Li Y. CM-UNet++: A Multi-Level Information Optimized Network for Urban Water Body Extraction from High-Resolution Remote Sensing Imagery // Remote Sensing. 2025. Vol. 17. Iss. 6. P. 980. DOI:10.3390/rs17060980. https://doi.org/10.3390/rs17060980</mixed-citation>
      <mixed-citation xml:lang="en">Cai J., Tao L., Li Y. CM-UNet++: A Multi-Level Information Optimized Network for Urban Water Body Extraction from High-Resolution Remote Sensing Imagery // Remote Sensing. 2025. Vol. 17. Iss. 6. P. 980. DOI:10.3390/rs17060980. https://doi.org/10.3390/rs17060980</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B4">
    <label>4.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Gui L., Gu X., Huang F., et al. Road Extraction from Remote Sensing Images Using a Skip-Connected Parallel CNN-Transformer Encoder-Decoder Model // Applied Sciences. 2025. Vol. 15. Iss. 3. P. 1427. DOI:10.3390/app15031427. https://doi.org/10.3390/app15031427</mixed-citation>
      <mixed-citation xml:lang="en">Gui L., Gu X., Huang F., et al. Road Extraction from Remote Sensing Images Using a Skip-Connected Parallel CNN-Transformer Encoder-Decoder Model // Applied Sciences. 2025. Vol. 15. Iss. 3. P. 1427. DOI:10.3390/app15031427. https://doi.org/10.3390/app15031427</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B5">
    <label>5.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Liu Y., Duan Y., Zhang X., et al. FEPA-Net: A Building Extraction Network Based on Fusing the Feature Extraction and Position Attention Module // Applied Sciences. 2025. Vol. 15. Iss. 8. P. 4432. DOI:10.3390/app15084432. https://doi.org/10.3390/app15084432</mixed-citation>
      <mixed-citation xml:lang="en">Liu Y., Duan Y., Zhang X., et al. FEPA-Net: A Building Extraction Network Based on Fusing the Feature Extraction and Position Attention Module // Applied Sciences. 2025. Vol. 15. Iss. 8. P. 4432. DOI:10.3390/app15084432. https://doi.org/10.3390/app15084432</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B6">
    <label>6.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Zhu B., Yu D., Xiao X., et al. AP-Pointrend: An Improved Network for Building Extraction via High-Resolution Remote Sensing Images // Remote Sensing. 2025. Vol. 17. Iss. 9. P. 1481. DOI:10.3390/rs17091481. https://doi.org/10.3390/rs17091481</mixed-citation>
      <mixed-citation xml:lang="en">Zhu B., Yu D., Xiao X., et al. AP-Pointrend: An Improved Network for Building Extraction via High-Resolution Remote Sensing Images // Remote Sensing. 2025. Vol. 17. Iss. 9. P. 1481. DOI:10.3390/rs17091481. https://doi.org/10.3390/rs17091481</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B7">
    <label>7.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Li X., Yang S., Meng F., et al. LCMorph: Exploiting Frequency Cues and Morphological Perception for Low-Contrast Road Extraction in Remote Sensing Images // Remote Sensing. 2025. Vol. 17. Iss. 2. P. 257. DOI:10.3390/rs17020257. https://doi.org/10.3390/rs17020257</mixed-citation>
      <mixed-citation xml:lang="en">Li X., Yang S., Meng F., et al. LCMorph: Exploiting Frequency Cues and Morphological Perception for Low-Contrast Road Extraction in Remote Sensing Images // Remote Sensing. 2025. Vol. 17. Iss. 2. P. 257. DOI:10.3390/rs17020257. https://doi.org/10.3390/rs17020257</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B8">
    <label>8.</label>
    <citation-alternatives>
      <mixed-citation xml:lang="ru">Weng Z., Li Q., Zheng Z., et al. SCR-Net: A Dual-Channel Water Body Extraction Model Based on Multi-Spectral Remote Sensing Imagery – A Case Study of Daihai Lake, China // Sensors. 2025. Vol. 25. Iss. 3. P. 763. DOI:10.3390/s25030763. https://doi.org/10.3390/s25030763</mixed-citation>
      <mixed-citation xml:lang="en">Weng Z., Li Q., Zheng Z., et al. SCR-Net: A Dual-Channel Water Body Extraction Model Based on Multi-Spectral Remote Sensing Imagery – A Case Study of Daihai Lake, China // Sensors. 2025. Vol. 25. Iss. 3. P. 763. DOI:10.3390/s25030763. https://doi.org/10.3390/s25030763</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B9">
    <label>9.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Wu Q., Chen M., Shi H., et al. Algorithm for Detecting Trees Affected by Pine Wilt Disease in Complex Scenes Based on CNN-Transformer // Forests. 2025. Vol. 16. Iss. 4. P. 596. DOI:10.3390/f16040596. https://doi.org/10.3390/f16040596</mixed-citation>
      <mixed-citation xml:lang="en">Wu Q., Chen M., Shi H., et al. Algorithm for Detecting Trees Affected by Pine Wilt Disease in Complex Scenes Based on CNN-Transformer // Forests. 2025. Vol. 16. Iss. 4. P. 596. DOI:10.3390/f16040596. https://doi.org/10.3390/f16040596</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B10">
    <label>10.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Wang L., Gao Y., Liu Y., et al. Monitoring Pine Shoot Beetle Damage Using UAV Imagery and Deep Learning Semantic Segmentation Under Different Forest Backgrounds // Forests. 2025. Vol. 16. Iss. 4. P. 668. DOI:10.3390/f16040668. https://doi.org/10.3390/f16040668</mixed-citation>
      <mixed-citation xml:lang="en">Wang L., Gao Y., Liu Y., et al. Monitoring Pine Shoot Beetle Damage Using UAV Imagery and Deep Learning Semantic Segmentation Under Different Forest Backgrounds // Forests. 2025. Vol. 16. Iss. 4. P. 668. DOI:10.3390/f16040668. https://doi.org/10.3390/f16040668</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B11">
    <label>11.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Бирюков Н.А., Майоров А.А., Лапчинская М.П. Семантическая сегментация геополей с использованием нейронных сетей на примере проблематики выделения зданий на космо- и аэрофотоснимках // Известия вузов «Геодезия и аэрофотосъемка». 2024. Т. 68, № 1. С. 44–61. DOI:10.30533/GiA-2024-004. https://miigaik.ru/journal/archive/2024/2024_68_1_RU/GiA-2024-004.pdf</mixed-citation>
     <mixed-citation xml:lang="en">Biryukov N.A., Mayorov A.A., Lapchinskaya M.P. Semanticheskaya segmentaciya geopoley s ispol'zovaniem neyronnyh setey na primere problematiki vydeleniya zdaniy na kosmo- i aerofotosnimkah // Izvestiya vuzov «Geodeziya i aerofotos'emka». 2024. T. 68, № 1. S. 44–61. DOI:10.30533/GiA-2024-004. https://miigaik.ru/journal/archive/2024/2024_68_1_RU/GiA-2024-004.pdf</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B12">
    <label>12.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Li J., He W., Cao W., et al. UANet: An Uncertainty-Aware Network for Building Extraction from Remote Sensing Images // IEEE Transactions on Geoscience and Remote Sensing. 2024. Vol. 62. P. 5608513. DOI:10.1109/TGRS.2024.3361211. https://ieeexplore.ieee.org/document/10418227</mixed-citation>
     <mixed-citation xml:lang="en">Li J., He W., Cao W., et al. UANet: An Uncertainty-Aware Network for Building Extraction from Remote Sensing Images // IEEE Transactions on Geoscience and Remote Sensing. 2024. Vol. 62. P. 5608513. DOI:10.1109/TGRS.2024.3361211. https://ieeexplore.ieee.org/document/10418227</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B13">
    <label>13.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Han R., Fan X., Liu J. EUNet: Edge-Unet for Accurate Building Extraction and Edge Emphasis in Gaofen-7 Images // Remote Sensing. 2024. Vol. 16. Iss. 13. P. 2397. DOI:10.3390/rs16132397. https://doi.org/10.3390/rs16132397</mixed-citation>
     <mixed-citation xml:lang="en">Han R., Fan X., Liu J. EUNet: Edge-Unet for Accurate Building Extraction and Edge Emphasis in Gaofen-7 Images // Remote Sensing. 2024. Vol. 16. Iss. 13. P. 2397. DOI:10.3390/rs16132397. https://doi.org/10.3390/rs16132397</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B14">
    <label>14.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Fenglei W., Xin G., Zongze Z., et al. A Boundary-Enhanced Semantic Segmentation Model for Buildings // IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing. 2025. Vol. 18. P. 5733–5748. DOI:10.1109/JSTARS.2025.3529456. https://ieeexplore.ieee.org/document/10840290</mixed-citation>
     <mixed-citation xml:lang="en">Fenglei W., Xin G., Zongze Z., et al. A Boundary-Enhanced Semantic Segmentation Model for Buildings // IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing. 2025. Vol. 18. P. 5733–5748. DOI:10.1109/JSTARS.2025.3529456. https://ieeexplore.ieee.org/document/10840290</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B15">
    <label>15.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Chen P., Huang H., Ye F., et al. A Benchmark Gaofen-7 Dataset for Building Extraction from Satellite Images // Scientific Data. 2024. Vol. 11. P. 187. DOI:10.1038/s41597-024-03009-5. https://doi.org/10.1038/s41597-024-03009-5</mixed-citation>
     <mixed-citation xml:lang="en">Chen P., Huang H., Ye F., et al. A Benchmark Gaofen-7 Dataset for Building Extraction from Satellite Images // Scientific Data. 2024. Vol. 11. P. 187. DOI:10.1038/s41597-024-03009-5. https://doi.org/10.1038/s41597-024-03009-5</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B16">
    <label>16.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Li S., Bao T., Liu H., et al. Multilevel Feature Aggregated Network with Instance Contrastive Learning Constraint for Building Extraction // Remote Sensing. 2023. Vol. 15. Iss. 10. P. 2585. DOI:10.3390/rs15102585. https://doi.org/10.3390/rs15102585</mixed-citation>
     <mixed-citation xml:lang="en">Li S., Bao T., Liu H., et al. Multilevel Feature Aggregated Network with Instance Contrastive Learning Constraint for Building Extraction // Remote Sensing. 2023. Vol. 15. Iss. 10. P. 2585. DOI:10.3390/rs15102585. https://doi.org/10.3390/rs15102585</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B17">
    <label>17.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Lyu X., Jiang W., Li X., et al. MSAFNet: Multiscale Successive Attention Fusion Network for Water Body Extraction of Remote Sensing Images // Remote Sensing. 2023. Vol. 15. Iss. 12. P. 3121. DOI:10.3390/rs15123121. https://doi.org/10.3390/rs15123121</mixed-citation>
     <mixed-citation xml:lang="en">Lyu X., Jiang W., Li X., et al. MSAFNet: Multiscale Successive Attention Fusion Network for Water Body Extraction of Remote Sensing Images // Remote Sensing. 2023. Vol. 15. Iss. 12. P. 3121. DOI:10.3390/rs15123121. https://doi.org/10.3390/rs15123121</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B18">
    <label>18.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Li M., Hong L., Guo J., et al. Automated Extraction of Lake Water Bodies in Complex Geographical Environments by Fusing Sentinel-1/2 Data // Water. 2022. Vol. 14. Iss. 1. P. 30. DOI:10.3390/w14010030. https://doi.org/10.3390/w14010030</mixed-citation>
     <mixed-citation xml:lang="en">Li M., Hong L., Guo J., et al. Automated Extraction of Lake Water Bodies in Complex Geographical Environments by Fusing Sentinel-1/2 Data // Water. 2022. Vol. 14. Iss. 1. P. 30. DOI:10.3390/w14010030. https://doi.org/10.3390/w14010030</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B19">
    <label>19.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Wang Y., Li S., Lin Y., et al. Lightweight Deep Neural Network Method for Water Body Extraction from High-Resolution Remote Sensing Images with Multisensors // Sensors. 2021. Vol. 21. Iss. 21. 7397. DOI:10.3390/s21217397. https://doi.org/10.3390/s21217397</mixed-citation>
     <mixed-citation xml:lang="en">Wang Y., Li S., Lin Y., et al. Lightweight Deep Neural Network Method for Water Body Extraction from High-Resolution Remote Sensing Images with Multisensors // Sensors. 2021. Vol. 21. Iss. 21. 7397. DOI:10.3390/s21217397. https://doi.org/10.3390/s21217397</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B20">
    <label>20.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Wang B., Chen Z., Wu L., et al. SADA-Net: A Shape Feature Optimization and Multiscale Context Information-Based Water Body Extraction Method for High-Resolution Remote Sensing Images // IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing. 2022. Vol. 15. P. 1744–1759. DOI:10.1109/JSTARS.2022.3146275. https://ieeexplore.ieee.org/document/9695297</mixed-citation>
     <mixed-citation xml:lang="en">Wang B., Chen Z., Wu L., et al. SADA-Net: A Shape Feature Optimization and Multiscale Context Information-Based Water Body Extraction Method for High-Resolution Remote Sensing Images // IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing. 2022. Vol. 15. P. 1744–1759. DOI:10.1109/JSTARS.2022.3146275. https://ieeexplore.ieee.org/document/9695297</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B21">
    <label>21.</label>
    <citation-alternatives>
      <mixed-citation xml:lang="ru">Weng Y., Li Z., Tang G., et al. OCNet-Based Water Body Extraction from Remote Sensing Images // Water. 2023. Vol. 15. Iss. 20. P. 3557. DOI:10.3390/w15203557. https://doi.org/10.3390/w15203557</mixed-citation>
      <mixed-citation xml:lang="en">Weng Y., Li Z., Tang G., et al. OCNet-Based Water Body Extraction from Remote Sensing Images // Water. 2023. Vol. 15. Iss. 20. P. 3557. DOI:10.3390/w15203557. https://doi.org/10.3390/w15203557</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B22">
    <label>22.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Yu J., Cai Y., Lyu X., et al. Boundary-Guided Semantic Context Network for Water Body Extraction from Remote Sensing Images // Remote Sensing. 2023. Vol. 15. Iss. 17. P. 4325. DOI:10.3390/rs15174325. https://doi.org/10.3390/rs15174325</mixed-citation>
     <mixed-citation xml:lang="en">Yu J., Cai Y., Lyu X., et al. Boundary-Guided Semantic Context Network for Water Body Extraction from Remote Sensing Images // Remote Sensing. 2023. Vol. 15. Iss. 17. P. 4325. DOI:10.3390/rs15174325. https://doi.org/10.3390/rs15174325</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B23">
    <label>23.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Li M., Wu P., Wang B., et al. A Deep Learning Method of Water Body Extraction from High Resolution Remote Sensing Images with Multisensors // IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing. 2021. Vol. 14. P. 3120–3132. DOI:10.1109/JSTARS.2021.3060769. https://ieeexplore.ieee.org/document/9360447</mixed-citation>
     <mixed-citation xml:lang="en">Li M., Wu P., Wang B., et al. A Deep Learning Method of Water Body Extraction from High Resolution Remote Sensing Images with Multisensors // IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing. 2021. Vol. 14. P. 3120–3132. DOI:10.1109/JSTARS.2021.3060769. https://ieeexplore.ieee.org/document/9360447</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B24">
    <label>24.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Zhao Z., Yang J., Wang M., et al. The PCA-NDWI Urban Water Extraction Model Based on Hyperspectral Remote Sensing // Water. 2024. Vol. 16. Iss. 7. P. 963. DOI:10.3390/w16070963. https://doi.org/10.3390/w16070963</mixed-citation>
     <mixed-citation xml:lang="en">Zhao Z., Yang J., Wang M., et al. The PCA-NDWI Urban Water Extraction Model Based on Hyperspectral Remote Sensing // Water. 2024. Vol. 16. Iss. 7. P. 963. DOI:10.3390/w16070963. https://doi.org/10.3390/w16070963</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B25">
    <label>25.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Sandum H.N., Ørka H.O., Tomic O., et al. Semantic Segmentation of Forest Stands Using Deep Learning // Preprint arXiv.org, 2025. [Электронный ресурс]. Режим доступа: https://arxiv.org/pdf/2504.02471 (дата обращения: 09.06.2025).</mixed-citation>
     <mixed-citation xml:lang="en">Sandum H.N., Ørka H.O., Tomic O., et al. Semantic Segmentation of Forest Stands Using Deep Learning // Preprint arXiv.org, 2025. [Elektronnyy resurs]. Rezhim dostupa: https://arxiv.org/pdf/2504.02471 (data obrascheniya: 09.06.2025).</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B26">
    <label>26.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Lin N., Quan H., He J., et al. Urban Vegetation Extraction from High-Resolution Remote Sensing Imagery on SD-Unet and Vegetation Spectral Features // Remote Sensing. 2023. Vol. 15. Iss. 18. P. 4488. DOI:10.3390/rs15184488. https://doi.org/10.3390/rs15184488</mixed-citation>
     <mixed-citation xml:lang="en">Lin N., Quan H., He J., et al. Urban Vegetation Extraction from High-Resolution Remote Sensing Imagery on SD-Unet and Vegetation Spectral Features // Remote Sensing. 2023. Vol. 15. Iss. 18. P. 4488. DOI:10.3390/rs15184488. https://doi.org/10.3390/rs15184488</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B27">
    <label>27.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Chen P., Li X., Peng Y., et al. WSSGCN: Hyperspectral Forest Image Classification via Watershed Superpixel Segmentation and Sparse Graph Convolutional Networks // Forests. 2025. Vol. 16. Iss. 5. P. 827. DOI:10.3390/f16050827. https://doi.org/10.3390/f16050827</mixed-citation>
     <mixed-citation xml:lang="en">Chen P., Li X., Peng Y., et al. WSSGCN: Hyperspectral Forest Image Classification via Watershed Superpixel Segmentation and Sparse Graph Convolutional Networks // Forests. 2025. Vol. 16. Iss. 5. P. 827. DOI:10.3390/f16050827. https://doi.org/10.3390/f16050827</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B28">
    <label>28.</label>
    <citation-alternatives>
      <mixed-citation xml:lang="ru">Chen S., Zhang M., Lei F. Mapping Vegetation Types by Different Fully Convolutional Neural Network Structures with Inadequate Training Labels in Complex Landscape Urban Areas // Forests. 2023. Vol. 14. Iss. 9. P. 1788. DOI:10.3390/f14091788. https://doi.org/10.3390/f14091788</mixed-citation>
      <mixed-citation xml:lang="en">Chen S., Zhang M., Lei F. Mapping Vegetation Types by Different Fully Convolutional Neural Network Structures with Inadequate Training Labels in Complex Landscape Urban Areas // Forests. 2023. Vol. 14. Iss. 9. P. 1788. DOI:10.3390/f14091788. https://doi.org/10.3390/f14091788</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B29">
    <label>29.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Li Y., Min S., Song B., et al. Multisource High-Resolution Remote Sensing Image Vegetation Extraction with Comprehensive Multifeature Perception // Remote Sensing. 2024. Vol. 16. Iss. 4. P. 712. DOI:10.3390/rs16040712. https://doi.org/10.3390/rs16040712</mixed-citation>
     <mixed-citation xml:lang="en">Li Y., Min S., Song B., et al. Multisource High-Resolution Remote Sensing Image Vegetation Extraction with Comprehensive Multifeature Perception // Remote Sensing. 2024. Vol. 16. Iss. 4. P. 712. DOI:10.3390/rs16040712. https://doi.org/10.3390/rs16040712</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B30">
    <label>30.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Wang B., Yao Y. Mountain Vegetation Classification Method Based on Multi-Channel Semantic Segmentation Model // Remote Sensing. 2024. Vol. 16. Iss. 2. P. 256. DOI:10.3390/rs16020256. https://doi.org/10.3390/rs16020256</mixed-citation>
     <mixed-citation xml:lang="en">Wang B., Yao Y. Mountain Vegetation Classification Method Based on Multi-Channel Semantic Segmentation Model // Remote Sensing. 2024. Vol. 16. Iss. 2. P. 256. DOI:10.3390/rs16020256. https://doi.org/10.3390/rs16020256</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B31">
    <label>31.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Tao J., Chen Z., Sun Z., et al. Seg-Road: A Segmentation Network for Road Extraction Based on Transformer and CNN with Connectivity Structures // Remote Sensing. 2023. Vol. 15. Iss. 6. P. 1602. DOI:10.3390/rs15061602. https://doi.org/10.3390/rs15061602</mixed-citation>
     <mixed-citation xml:lang="en">Tao J., Chen Z., Sun Z., et al. Seg-Road: A Segmentation Network for Road Extraction Based on Transformer and CNN with Connectivity Structures // Remote Sensing. 2023. Vol. 15. Iss. 6. P. 1602. DOI:10.3390/rs15061602. https://doi.org/10.3390/rs15061602</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B32">
    <label>32.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Lin S., Yao X., Liu X., et al. MS-AGAN: Road Extraction via Multi-Scale Information Fusion and Asymmetric Generative Adversarial Networks from High-Resolution Remote Sensing Images under Complex Backgrounds // Remote Sensing. 2023. Vol. 15. Iss. 13. P. 3367. DOI:10.3390/rs15133367. https://doi.org/10.3390/rs15133367</mixed-citation>
     <mixed-citation xml:lang="en">Lin S., Yao X., Liu X., et al. MS-AGAN: Road Extraction via Multi-Scale Information Fusion and Asymmetric Generative Adversarial Networks from High-Resolution Remote Sensing Images under Complex Backgrounds // Remote Sensing. 2023. Vol. 15. Iss. 13. P. 3367. DOI:10.3390/rs15133367. https://doi.org/10.3390/rs15133367</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B33">
    <label>33.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Zhong B., Dan H., Liu M., et al. FERDNet: High-Resolution Remote Sensing Road Extraction Network Based on Feature Enhancement of Road Directionality // Remote Sensing. 2025. Vol. 17. Iss. 3. P. 376. DOI:10.3390/rs17030376. https://doi.org/10.3390/rs17030376</mixed-citation>
     <mixed-citation xml:lang="en">Zhong B., Dan H., Liu M., et al. FERDNet: High-Resolution Remote Sensing Road Extraction Network Based on Feature Enhancement of Road Directionality // Remote Sensing. 2025. Vol. 17. Iss. 3. P. 376. DOI:10.3390/rs17030376. https://doi.org/10.3390/rs17030376</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B34">
    <label>34.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Chen J., Yang L., Wang H., et al. Road Extraction from High-Resolution Remote Sensing Images via Local and Global Context Reasoning // Remote Sensing. 2023. Vol. 15. Iss. 17. P. 4177. DOI:10.3390/rs15174177. https://doi.org/10.3390/rs15174177</mixed-citation>
     <mixed-citation xml:lang="en">Chen J., Yang L., Wang H., et al. Road Extraction from High-Resolution Remote Sensing Images via Local and Global Context Reasoning // Remote Sensing. 2023. Vol. 15. Iss. 17. P. 4177. DOI:10.3390/rs15174177. https://doi.org/10.3390/rs15174177</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B35">
    <label>35.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Mahara A., Khan M.R.K., Deng L., et al. Automated Road Extraction from Satellite Imagery Integrating Dense Depthwise Dilated Separable Spatial Pyramid Pooling with DeepLabV3+ // Applied Sciences. 2025. Vol. 15. Iss. 3. P. 1027. DOI:10.3390/app15031027. https://doi.org/10.3390/app15031027</mixed-citation>
     <mixed-citation xml:lang="en">Mahara A., Khan M.R.K., Deng L., et al. Automated Road Extraction from Satellite Imagery Integrating Dense Depthwise Dilated Separable Spatial Pyramid Pooling with DeepLabV3+ // Applied Sciences. 2025. Vol. 15. Iss. 3. P. 1027. DOI:10.3390/app15031027. https://doi.org/10.3390/app15031027</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B36">
    <label>36.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Li B., Tang X., Xiao R., et al. Dual Convolutional Network Based on Hypergraph and Multilevel Feature Fusion for Road Extraction from High-Resolution Remote Sensing Images // International Journal of Digital Earth. 2024. Vol. 17. No. 1. P. 2303354. DOI:10.1080/17538947.2024.2303354. https://doi.org/10.1080/17538947.2024.2303354</mixed-citation>
     <mixed-citation xml:lang="en">Li B., Tang X., Xiao R., et al. Dual Convolutional Network Based on Hypergraph and Multilevel Feature Fusion for Road Extraction from High-Resolution Remote Sensing Images // International Journal of Digital Earth. 2024. Vol. 17. No. 1. P. 2303354. DOI:10.1080/17538947.2024.2303354. https://doi.org/10.1080/17538947.2024.2303354</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B37">
    <label>37.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Zhao S., Feng Z., Chen L., et al. DANet: A Semantic Segmentation Network for Remote Sensing of Roads Based on Dual-ASPP Structure // Electronics. 2023. Vol. 12. Iss. 15. P. 3243. DOI:10.3390/electronics12153243. https://doi.org/10.3390/electronics12153243</mixed-citation>
     <mixed-citation xml:lang="en">Zhao S., Feng Z., Chen L., et al. DANet: A Semantic Segmentation Network for Remote Sensing of Roads Based on Dual-ASPP Structure // Electronics. 2023. Vol. 12. Iss. 15. P. 3243. DOI:10.3390/electronics12153243. https://doi.org/10.3390/electronics12153243</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B38">
    <label>38.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Zhao L., Zhang J., Meng X., et al. Road Extraction Method of Remote Sensing Image Based on Deformable Attention Transformer // Symmetry. 2024. Vol. 16. Iss. 4. P. 468. DOI:10.3390/sym16040468. https://doi.org/10.3390/sym16040468</mixed-citation>
     <mixed-citation xml:lang="en">Zhao L., Zhang J., Meng X., et al. Road Extraction Method of Remote Sensing Image Based on Deformable Attention Transformer // Symmetry. 2024. Vol. 16. Iss. 4. P. 468. DOI:10.3390/sym16040468. https://doi.org/10.3390/sym16040468</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B39">
    <label>39.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Zhang Y., Zhang L., Wang Y., et al. AGF-Net: Adaptive Global Feature Fusion Network for Road Extraction from Remote-Sensing Images // Complex &amp; Intelligent Systems. 2024. Vol. 10. P. 4311–4328. DOI:10.1007/s40747-024-01364-9. https://doi.org/10.1007/s40747-024-01364-9</mixed-citation>
     <mixed-citation xml:lang="en">Zhang Y., Zhang L., Wang Y., et al. AGF-Net: Adaptive Global Feature Fusion Network for Road Extraction from Remote-Sensing Images // Complex &amp; Intelligent Systems. 2024. Vol. 10. P. 4311–4328. DOI:10.1007/s40747-024-01364-9. https://doi.org/10.1007/s40747-024-01364-9</mixed-citation>
    </citation-alternatives>
   </ref>
   <ref id="B40">
    <label>40.</label>
    <citation-alternatives>
     <mixed-citation xml:lang="ru">Ma D., Jiang L., Li J., et al. Water Index and Swin Transformer (WISTE) for Water Body Extraction from Multispectral Remote Sensing Images // GIScience &amp; Remote Sensing. 2023. Vol. 60. No. 1. P. 2251704. DOI:10.1080/15481603.2023.2251704. https://doi.org/10.1080/15481603.2023.2251704</mixed-citation>
     <mixed-citation xml:lang="en">Ma D., Jiang L., Li J., et al. Water Index and Swin Transformer (WISTE) for Water Body Extraction from Multispectral Remote Sensing Images // GIScience &amp; Remote Sensing. 2023. Vol. 60. No. 1. P. 2251704. DOI:10.1080/15481603.2023.2251704. https://doi.org/10.1080/15481603.2023.2251704</mixed-citation>
    </citation-alternatives>
   </ref>
  </ref-list>
 </back>
</article>
