Our highly skilled Research & Development team is the engine of our company: it works at the intersection of technology and innovation to deliver fresh solutions to global problems. In this post, we would like to start introducing you to the team's expertise.
Modern technology gives us powerful tools for improving agricultural management and optimizing performance. Remote sensing, such as satellite imagery processing, is a good example.
The importance of land-use statistics is undeniable in modern farming and agribusiness. Accurate, up-to-date mapping of land usage requires precise land classification, which is difficult without reliable detection of field boundaries.
Existing maps are usually either derived from historic administrative maps or developed manually from observational data. The former are not very accurate; the latter involve a large amount of manual labor that scales with the number of maps to be created or updated. Neither copes well with changes over time. Under such circumstances, yield predictions become inaccurate and may lead to shortages or disrupt particular commodity markets.
Cropland protection is yet another area of agricultural management that benefits greatly from such maps. Given the growing consumption trends common to both developed and developing countries, cropland protection helps secure yields and preserve soil fertility.
Thus, the crop field boundary detection problem is relevant in multiple areas and, despite the large number of existing solutions, is still under active development. Let's focus on some of these solutions and discuss the main advantages and challenges you may face during implementation.
Plot boundary detection with a classical computer vision approach
From the standpoint of classic approaches, this problem can be tackled by raster analysis. The paper Boundary Delineation of Agricultural Fields in Multitemporal Satellite Imagery discusses one such approach in depth. The researchers' objective was to develop an algorithm for detecting the boundaries of New Zealand farm fields from satellite images.
The process starts with selecting images from seven sequential dates; for each date, the red, near-infrared (NIR), and short-wave infrared (SWIR) bands are used. The algorithm computes, for each band, the standard deviation within a small radial window (with a radius of 5 pixels, as the original paper suggests). The resulting maps are averaged and then convolved with directional filters to find potential lines, yielding one image per direction filter (16 filters in the paper); Figure 1 shows a few of them. The resulting images are thresholded and combined with an element-wise OR operation per pixel. During the vectorization step, lines are thinned and smoothed, short lines are extended, and those that do not form a closed contour are filtered out. The final result is a map of boundaries.
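The pipeline above can be sketched roughly as follows. This is a minimal illustration, not the authors' code: the window is square rather than radial, the directional kernels are simple rasterized lines, and the threshold value is arbitrary.

```python
import numpy as np
from scipy import ndimage

def local_std(band, radius=5):
    """Standard deviation in a sliding window (a square
    approximation of the paper's radial window)."""
    size = 2 * radius + 1
    mean = ndimage.uniform_filter(band, size)
    mean_sq = ndimage.uniform_filter(band ** 2, size)
    return np.sqrt(np.clip(mean_sq - mean ** 2, 0, None))

def directional_kernels(n=16, length=7):
    """Normalized line kernels at n orientations, a simple
    stand-in for the paper's directional filters."""
    kernels = []
    for i in range(n):
        theta = np.pi * i / n
        k = np.zeros((length, length))
        c = length // 2
        for t in np.linspace(-c, c, 4 * length):
            row = int(round(c + t * np.sin(theta)))
            col = int(round(c + t * np.cos(theta)))
            if 0 <= row < length and 0 <= col < length:
                k[row, col] = 1.0
        kernels.append(k / k.sum())
    return kernels

def boundary_mask(bands, radius=5, threshold=0.1):
    """Per-band local std -> average -> directional filtering ->
    threshold -> element-wise OR across all directions."""
    std_maps = [local_std(b, radius) for b in bands]
    avg = np.mean(std_maps, axis=0)
    mask = np.zeros(avg.shape, dtype=bool)
    for kernel in directional_kernels():
        mask |= ndimage.convolve(avg, kernel) > threshold
    return mask

# Toy example: two "fields" of different brightness share a vertical edge.
img = np.zeros((64, 64))
img[:, 32:] = 1.0
mask = boundary_mask([img])
```

On this toy input, the mask fires along the shared edge and stays empty inside the homogeneous regions; the vectorization step is omitted.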
The accuracy assessment, however, concludes that this method is applicable where an error of up to 20 m is good enough. A stricter evaluation gives an accuracy of only 59%.
When we tried to implement this logic in our own research, we discovered that the setup is quite brittle. Several different thresholds had to be applied, and threshold values that work for images taken in spring do not produce good results for images taken in autumn.
Credit: North, Heather & Pairman, David & Belliss, S.E.. (2018). Boundary Delineation of Agricultural Fields in Multitemporal Satellite Imagery. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing. PP. 1–15. 10.1109/JSTARS.2018.2884513.
Classic machine learning algorithms for boundary detection
Classic computer vision approaches alone do not produce great results for a task like ours; deep learning approaches, on the other hand, require huge amounts of labeled data. Is there a way to achieve reasonable accuracy with small datasets?
Classic machine learning algorithms usually work best with limited datasets. One such pipeline is proposed in the paper A machine learning approach for agricultural parcel delineation through agglomerative segmentation. The research group used images derived from calculated indices: the normalized difference vegetation index (NDVI), the normalized difference water index (NDWI), and the spectral shape index (SSI). These images allowed them to derive superpixels (local clusters of pixels), producing additional feature maps for the next step. From there, the researchers trained a RUSBoost classifier to perform binary per-pixel classification.
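The spectral indices this pipeline starts from are straightforward to compute from band reflectances. Below is a minimal sketch of NDVI and NDWI with made-up toy reflectances; SSI is omitted, since its exact formulation should be taken from the paper.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized difference vegetation index: high for vegetation."""
    return (nir - red) / (nir + red + eps)

def ndwi(green, nir, eps=1e-9):
    """Normalized difference water index (McFeeters form): high for water."""
    return (green - nir) / (green + nir + eps)

# Toy reflectances: pixel 0 is vegetation (high NIR),
# pixel 1 is water (high green, low NIR).
red   = np.array([0.05, 0.10])
green = np.array([0.08, 0.30])
nir   = np.array([0.50, 0.02])

v = ndvi(nir, red)
w = ndwi(green, nir)
```

Stacking such index maps (plus superpixel statistics) gives the feature set a per-pixel classifier can be trained on.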
This technique allowed the team to achieve 92% accuracy, which is impressive given the small amount of data and the relatively simple data preprocessing and model training procedure.
Credit: A. García-Pedrero, C. Gonzalo-Martín & M. Lillo-Saavedra (2017) A machine learning approach for agricultural parcel delineation through agglomerative segmentation, International Journal of Remote Sensing, 38:7, 1809–1819.
Graph-based contour finding
Contour finding is yet another technique that can be applied to the plot boundary detection task. The idea is proposed in the paper Extracting Agricultural Fields from Remote Sensing Imagery Using Graph-Based Growing Contours.
Active contour finding, also called "snakes", is not new in computer vision; however, as the authors discuss, it has some common drawbacks, such as skipping sharp corners, which can be addressed at the cost of extra algorithmic complexity. The team of researchers from Kiel University picked as their study area the state of Schleswig-Holstein, which is dominated by agricultural land use. To tackle the task, a series of transformations was performed before the actual contour finding: bilinear filtering, a color-space transformation between RGB and YUV, gradient computation, and estimation of local anisotropy. The preprocessed images thus consist of informative feature maps that enable active contour finding, a procedure based on the notions of internal and external energies. While the gradient image provides rich information for the former, the sub-pixel transformation does so for the latter. The approach to active contour finding is altered, however. First, seed points are placed at the most likely locations based on descriptors derived from the preprocessing steps. Next, a weighted graph is built, with the seed as the first node. Such a graph would grow in a circle, but weight assignment and recalculation allow pruning of vertices and edges whose weights are too small. The obtained contours are then transformed into polygons whose boundaries are extracted.
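Local anisotropy, one of the descriptors mentioned above, can be estimated from the structure tensor. The sketch below is our own simplified take on that preprocessing step, not the paper's implementation.

```python
import numpy as np
from scipy import ndimage

def structure_tensor_anisotropy(img, sigma=2.0):
    """Local gradient anisotropy (l1 - l2) / (l1 + l2), where
    l1 >= l2 are the eigenvalues of the smoothed structure tensor.
    Close to 1 along line-like structures such as field boundaries,
    close to 0 in flat or isotropic regions."""
    gx = ndimage.sobel(img, axis=1)
    gy = ndimage.sobel(img, axis=0)
    jxx = ndimage.gaussian_filter(gx * gx, sigma)
    jxy = ndimage.gaussian_filter(gx * gy, sigma)
    jyy = ndimage.gaussian_filter(gy * gy, sigma)
    # Closed-form eigenvalues of the 2x2 symmetric tensor.
    trace = jxx + jyy
    diff = np.sqrt((jxx - jyy) ** 2 + 4 * jxy ** 2)
    l1 = (trace + diff) / 2
    l2 = (trace - diff) / 2
    return (l1 - l2) / (trace + 1e-12)

# Toy image: a single straight edge produces strongly anisotropic gradients.
img = np.zeros((32, 32))
img[:, 16:] = 1.0
aniso = structure_tensor_anisotropy(img)
```

A map like this is a natural place to seed contours: pixels with high anisotropy are likely to lie on a boundary.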
Evaluated against existing land-use maps, the approach detected 99% of the total acreage while missing less than 9% of the field count. However, the authors point out that the boundaries of fields close to urban areas were often extracted incorrectly. As a possible post-processing step, they suggest filtering out urban structures.
Convolutional Neural Network approach
Deep neural networks help to address the shortcomings of classical computer vision approaches and can discover more complex patterns. Another research team approached the boundary detection task using the state-of-the-art ResUNet architecture in the paper Deep learning on edge: extracting field boundaries from satellite images with a convolutional neural network. The main area of interest was 120,000 square kilometers of South Africa's "maize quadrangle". The authors used regular Sentinel-2 imagery, specifically the green, blue, red, and near-infrared bands. Not much preprocessing was done on the images, but the labeling part was crucial: aiming to combine both region-based and edge-based detection, the researchers labeled field boundaries as one class and the entire field area as another. The results turned out to be quite decent, hitting 90% overall accuracy.
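The two-class labeling scheme can be reproduced from an ordinary field-extent mask. In this sketch we derive the boundary class by morphological erosion; the boundary width is our own choice, not a value from the paper.

```python
import numpy as np
from scipy import ndimage

def make_targets(field_mask, boundary_width=2):
    """Turn a binary field-extent mask into two training targets:
    the field interior and its boundary ring. The boundary is
    derived here by eroding the mask and taking the difference."""
    interior = ndimage.binary_erosion(field_mask,
                                      iterations=boundary_width)
    boundary = field_mask & ~interior
    return interior.astype(np.uint8), boundary.astype(np.uint8)

# Toy label: one square field.
field = np.zeros((20, 20), dtype=bool)
field[4:16, 4:16] = True
interior, boundary = make_targets(field)
```

A segmentation network can then be trained against both targets at once, combining region-based and edge-based supervision.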
We tried this architecture ourselves, but with a single class: boundaries. For a benchmark, we also used a few other architectures, namely the original UNet and UNet++. ResUNet performed nearly the same as UNet; UNet++, however, performed better than the other two. One more thing may also matter: since boundaries are usually visible to the human eye, we supposed it would be easy for an algorithm to discover this pattern too, so we used only the regular red, green, and blue bands, omitting near-infrared. Adding it back might be the next step in our pursuits.
Credit: François Waldner, Foivos I. Diakogiannis, Deep learning on edge: Extracting field boundaries from satellite images with a convolutional neural network, Remote Sensing of Environment, Volume 245, 2020,111741, ISSN 0034–4257.
Boundary detection with convolutional neural networks for other objects
A research team from the Institute of Geospatial Engineering and Geodesy, Poland, approached boundary detection with an instance segmentation approach. As their study area, they chose a part of Warsaw and marked objects with class labels. Mask R-CNN was chosen as the main architecture, supplemented with another "mini-network" that points out where to look for the pattern of interest: a Region Proposal Network (RPN), widely applied in object detection tasks of various kinds. The researchers showed that their enhanced Faster Edge Region CNN (FER-CNN) not only successfully classifies buildings in satellite images but also estimates boundaries significantly better. What is the innovation here? In contrast to the original RPN, the proposed FER-CNN predicts regions at several resolutions of the feature map: one feature map can be reduced 2, 4, or 8 times, region proposals are made at each scale, and the regions on which multiple scales agree are selected. Compared to conventional contour-finding algorithms, the results show astonishing precision. Besides high accuracy, the authors report fewer mistakes caused by occlusions and shadows, which suggests the method is quite robust. The paper Detection, Classification and Boundary Regularization of Buildings in Satellite Imagery Using Faster Edge Region Convolutional Neural Networks describes all the details in depth.
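The multi-scale agreement idea can be sketched as a simple filter over box proposals. The IoU threshold and the matching rule below are our simplification for illustration, not the paper's exact procedure.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def multiscale_agreement(fine, coarser_scales, iou_thresh=0.5):
    """Keep a fine-scale proposal only if every coarser scale has a
    proposal overlapping it above the IoU threshold."""
    kept = []
    for box in fine:
        if all(any(iou(box, other) >= iou_thresh for other in scale)
               for scale in coarser_scales):
            kept.append(box)
    return kept

# Toy proposals; boxes from reduced maps are assumed to be already
# rescaled back to full-image coordinates.
fine = [(0, 0, 10, 10), (50, 50, 60, 60)]
half = [(1, 1, 10, 10)]
quarter = [(0, 0, 11, 11)]
kept = multiscale_agreement(fine, [half, quarter])
```

Here only the first box survives, because the second has no counterpart at the coarser scales.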
Although buildings in a city are not the same kind of objects as fields in open space, the idea of regularizing boundaries might be viable for our task as well. The same labeling could be applied to fields as a single class, or we could combine two tasks at once, field delineation and classification, by defining several field classes. It is worth noting that the second option would require much more time for labeling.
Credit: Reda, K.; Kedzierski, M. Detection, Classification and Boundary Regularization of Buildings in Satellite Imagery Using Faster Edge Region Convolutional Neural Networks. Remote Sens. 2020, 12, 2240.
Current challenges and how we might approach them
Despite dramatic advances in field boundary detection, there are still issues to address to improve accuracy.
First of all, different locations may have peculiarities that make it harder to develop a "one size fits all" approach. As an example encountered in our research, Ukrainian fields are much larger than those in many other locations; comparing local fields to those in South Africa, the difference is striking.
One way to solve this might be to train separate models for different locations, or simply to extend the dataset with images from other locations. Either way, this raises another problem: labeled data. Collecting images is not time- and resource-intensive; accurate labeling is.
Seasonal changes and the type of crop grown in a field can also pose a challenge for detection. Even to the naked eye, closely situated fields of mature wheat, for instance, may appear as one large field despite being two parcels. This issue can make a solution less scalable on unseen data. Numerous composite layers can be derived from satellite imagery, combining both visible and non-visible spectra, and some of them might provide meaningful insights. The near-infrared band, for instance, is widely used for such tasks, but there is little literature reporting on the performance of other composites and combinations.
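Screening such combinations systematically is easy to prototype: for instance, every pairwise normalized-difference composite can be generated from a set of named bands. This is a toy sketch with made-up reflectance values, not an exhaustive treatment of composites.

```python
import numpy as np
from itertools import combinations

def normalized_difference(a, b, eps=1e-9):
    """Generic normalized-difference composite of two bands."""
    return (a - b) / (a + b + eps)

def all_nd_composites(bands):
    """Every pairwise normalized-difference composite from a dict
    of named bands -- a simple way to screen combinations beyond
    the usual NDVI."""
    return {f"nd_{x}_{y}": normalized_difference(bands[x], bands[y])
            for x, y in combinations(bands, 2)}

bands = {
    "red":   np.array([0.10, 0.20]),
    "green": np.array([0.15, 0.25]),
    "nir":   np.array([0.40, 0.05]),
}
composites = all_nd_composites(bands)
```

Each composite can then be fed to a model as an extra channel and evaluated for its contribution.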
We also had the idea of detecting roads, connecting them in a graph-based manner as suggested in the paper Hierarchical graph-based segmentation for extracting road networks from high-resolution satellite images, and deriving field boundaries as a byproduct. This approach did not yield great results: roads can be found everywhere, which is not the case for fields, and not all fields and their boundaries follow straightforward patterns the way regular roads do.
Remote sensing technologies allow humanity to automate and scale tasks that were previously unimaginable. Field boundary detection is one of them, and it matters to several domains. In the past, people tried to automate it with classical computer vision techniques; today, with computational resources cheaper than the mistakes those algorithms make, deep learning approaches are more successful. Modern neural networks, complex as they are, yield far better results than their predecessors.