The use of generative artificial intelligence to improve the quality of aerial photography results during complex cadastral works
Abstract and keywords
Abstract:
Images obtained during aerial photography with unmanned aerial vehicles (UAVs) often suffer, for various reasons, from low resolution, noise, blur, artifacts, and distortions. These defects complicate the interpretation of real estate objects, reduce the accuracy of determining their boundaries and areas, and thereby increase the labor costs of cadastral work. Obtaining accurate results therefore requires high-quality source data. This study investigates the use of generative artificial intelligence to improve the quality of aerial images for real estate cadastre tasks. The article presents the results of applying a machine learning approach based on generative adversarial networks. The study was carried out on materials obtained with a UAV during complex cadastral works. The original aerial photographs were processed with the modified generative adversarial network Real-ESRGAN. Photogrammetric processing of the enhanced aerial photographs was then performed, and an orthophotomap and a three-dimensional terrain model were created. The processed images and the orthophotomap derived from them are analyzed. The relevance of this technology stems from the need to provide the real estate cadastre with high-quality source data.
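To illustrate the enhancement step, the sketch below shows how a single UAV frame could be upscaled with the publicly available Real-ESRGAN implementation (the realesrgan Python package with its pre-trained RealESRGAN_x4plus weights). It is a minimal example of the general technique under stated assumptions, not the modified network described in the article; the file names, weight path, and tile settings are illustrative.

```python
# Minimal Real-ESRGAN inference sketch (assumes the open-source
# xinntao/Real-ESRGAN package and its pre-trained x4 weights;
# the article applies a modified version of this network).
import cv2
from basicsr.archs.rrdbnet_arch import RRDBNet
from realesrgan import RealESRGANer

# Standard RRDB backbone configuration used by the x4plus model.
model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64,
                num_block=23, num_grow_ch=32, scale=4)

upsampler = RealESRGANer(
    scale=4,
    model_path='weights/RealESRGAN_x4plus.pth',  # illustrative path
    model=model,
    tile=512,        # process large UAV frames in tiles to limit GPU memory use
    tile_pad=10,
    pre_pad=0,
    half=True)       # fp16 inference; set to False when running on CPU

# Hypothetical input/output file names for a single aerial frame.
img = cv2.imread('uav_frame.jpg', cv2.IMREAD_COLOR)
enhanced, _ = upsampler.enhance(img, outscale=4)
cv2.imwrite('uav_frame_x4.jpg', enhanced)
```

Frames enhanced in this way can then be passed to standard photogrammetric software to build the orthophotomap and the three-dimensional terrain model discussed in the abstract.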

Keywords:
aerial photography, unmanned aerial vehicles, accuracy, generative network, orthophotomap, complex cadastral works